Hwiyeon Yoo

I am a machine learning researcher at the Boeing Korea Engineering and Technology Center (BKETC). At BKETC, I have been developing vision-based AI models for aircraft manufacturing, such as embodied vision-language models, document understanding, and anomaly detection.

I received my PhD in robotics perception from the Robot Learning Laboratory at Seoul National University (SNU), Korea, in 2024, under the supervision of Prof. Songhwai Oh. I received my BS in Electrical and Computer Engineering from SNU in 2017.

My research focuses on vision-based robot learning and semantic perception. My interests also include embodied navigation AI, multi-modal semantic perception, robotics, VLMs for embodied systems, document understanding, and anomaly detection.

Email  /  CV  /  Google Scholar  /  GitHub  /  LinkedIn

profile photo

Research

Commonsense-Aware Object Value Graph for Object Goal Navigation
Hwiyeon Yoo, Yunho Choi, Jeongho Park, and Songhwai Oh.
IEEE Robotics and Automation Letters (RA-L), 2024
40th Anniversary of the IEEE Conference on Robotics and Automation (ICRA@40), 2024  
paper

Local Selective Vision Transformer for Depth Estimation Using a Compound Eye Camera
Wooseok Oh, Hwiyeon Yoo, Timothy Ha, and Songhwai Oh.
Pattern Recognition Letters, 2023. 
paper

Topological Semantic Graph Memory for Image-Goal Navigation
Nuri Kim, Obin Kwon, Hwiyeon Yoo, Yunho Choi, Jeongho Park, and Songhwai Oh.
Conference on Robot Learning (CoRL), 2022. Oral presentation  
project page / paper / code / video

Visual Graph Memory with Unsupervised Representation for Visual Navigation
Obin Kwon, Nuri Kim*, Yunho Choi*, Hwiyeon Yoo*, Jeongho Park*, and Songhwai Oh. (* equal contribution)
International Conference on Computer Vision (ICCV), 2021.  
project page / paper / code / video

Actualization of Deep Ego-motion Classification on Miniaturized Octagonal Compound Eye Camera
Hwiyeon Yoo, Jungho Yi, Jong Mo Seo, and Songhwai Oh.
International Conference on Control, Automation and Systems (ICCAS), 2021. Best Poster Paper Award Winner 
paper

Vision-Based 3D Reconstruction Using a Compound Eye Camera
Wooseok Oh, Hwiyeon Yoo, Timothy Ha, and Songhwai Oh.
International Conference on Control, Automation and Systems (ICCAS), 2021. 
paper

Localizability-based Topological Local Object Occupancy Map for Homing Navigation
Hwiyeon Yoo and Songhwai Oh.
International Conference on Ubiquitous Robots (UR), 2021.  
paper

Path-Following Navigation Network Using Sparse Visual Memory
Hwiyeon Yoo, Nuri Kim, Jeongho Park, and Songhwai Oh.
International Conference on Control, Automation and Systems (ICCAS), 2020.  
paper

Deep Ego-Motion Classifiers for Compound Eye Cameras
Hwiyeon Yoo, Geonho Cha, and Songhwai Oh.
Sensors, vol. 19, no. 23, Dec. 2019.  
paper

Unsupervised Holistic Image Generation from Key Local Patches
Donghoon Lee, Sangdoo Yun, Sungjoon Choi, Hwiyeon Yoo, Ming-Hsuan Yang, and Songhwai Oh.
European Conference on Computer Vision (ECCV), 2018. 
paper / code

Text2Action: Generative Adversarial Synthesis from Language to Action
Hyemin Ahn, Timothy Ha*, Yunho Choi, Hwiyeon Yoo*, and Songhwai Oh. (* equal contribution)
IEEE International Conference on Robotics and Automation (ICRA), 2018.  
paper / code / video

Estimating Objectness Using a Compound Eye Camera
Hwiyeon Yoo, Donghoon Lee, Geonho Cha, and Songhwai Oh.
International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI), 2017.  
paper

Light-Weight Semantic Segmentation for Compound Images
Geonho Cha, Hwiyeon Yoo, Donghoon Lee, and Songhwai Oh.
International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI), 2017.  
paper

Projects

General-Purpose Deep Reinforcement Learning Using Metaverse for Real World Applications
National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT (MSIT), 2023-
  • Implementation of a vision-based object goal navigation algorithm for embodied agents in real-world robot navigation.
AI Technology for Guidance of a Mobile Robot to its Goal with Uncertain Maps in Indoor/Outdoor Environments
Ministry of Science and ICT (MSIT), 2019-2023
  • Development of a vision-based path following navigation algorithm for embodied mobile robots with sparse implicit memory.
  • Development of a vision-based path following and homing navigation algorithm for embodied mobile robots that builds a semantic map.
  • Development of a vision-based object goal navigation algorithm for embodied mobile robots in unknown environments using semantic graph memory.
BioMimetic Robot Research Center - Biomimetic Recognition Technology
Defense Acquisition Program Administration and Agency for Defense Development (ADD), 2016-2021
  • Development of an insect-like compound eye camera prototype.
  • Development of lightweight computer vision models for the compound eye camera: objectness estimation, semantic segmentation, ego-motion estimation, depth estimation, and 3D environment reconstruction.
Realistic 4D Reconstruction of Dynamic Objects
Ministry of Science and ICT (MSIT), 2017-2019
  • Development of a 3D point cloud matching algorithm.
  • Development of a 3D human motion reconstruction algorithm using human part segmentation and tracking.

This webpage uses Jon Barron's template.