I study methods at the interface of machine learning and control, including reinforcement learning, imitation learning, model predictive control, and dynamic mode decomposition. I am particularly interested in approaches that build on mathematical structure and optimization principles, with the aim of developing interpretable, theoretically grounded, and efficient methods for decision-making in partially observable, real-world systems.
I am a Ph.D. student in Computer Science at Cornell University, working with Sarah Dean. Before joining Cornell, I was a research scientist at the Korea Institute of Science and Technology (KIST). I received my M.S. from Korea University and my B.S. from the University of Seoul.
Education
- Ph.D. Student in Computer Science, Cornell University, Aug. 2024 - Present
  - Advisor: Sarah Dean
- M.S. in Electrical and Computer Engineering (Control, Robotics and Systems), Korea University, Mar. 2021 - Aug. 2023 (GPA: 4.39/4.5)
  - Research Scholarship from Hyundai Motor Group, Mar. 2021 - Dec. 2022
- B.S. in Electrical and Computer Engineering, University of Seoul, Mar. 2017 - Feb. 2021 (GPA: 4.0/4.5)
  - Research Scholarship from Hyundai Motor Group, Sep. 2019 - Dec. 2020
Publications

Distilling Realizable Students from Unrealizable Teachers
- Policy distillation in an asymmetric imitation learning setting.
- Proposes two new IL/RL algorithms robust to state aliasing.
- Submitted to IROS 2025.

Subspace-wise Hybrid RL for Articulated Object Manipulation
- Develops skills for operating industrial equipment (e.g., valves, switches, gear levers) with a manipulator.
- Learns these skills via reinforcement learning while minimizing reliance on human-engineered features.

Whole-Body Motion Planning of a Dual-Arm Mobile Manipulator for Compensating Door Reaction Force
- Addresses the challenges of door-traversal motion planning.
- Presents a unified framework for door traversal with a dual-arm mobile manipulator, covering approaching, opening, passing through, and closing the door.
- Formulates optimal contact-point planning as an RL decision-making problem.
- Tackles the skewed sub-goal distribution encountered by the goal-conditioned RL controller.
- Enables adaptive sub-goal planning and efficient reward learning via MPC-synchronized rewards.
Reinforcement Learning for Autonomous Vehicle Using MPC in Highway Situation
- Addresses the challenge of reward shaping for continuous RL controllers by using an MPC reference.
- Implements DDPG.
- Paper published at ICEIC 2022 (oral).
Honors and Awards
- May 2024, Student Travel Grant, ICRA 2024 (MOMA.v2 Workshop).
- Spring 2018, Scholarship for Excellent Achievement, University of Seoul.
- Sep. 2018 - Dec. 2022, Full Scholarship for Selected Research Student, Hyundai Motor Company.
- May 2022, 10th F1TENTH Autonomous Racing Grand Prix, 3rd Place, ICRA 2022.
- Jul. 2018, 2018 Intelligent Model Car Competition, 3rd Place, Hanyang University.
- Jul. 2017, 14th Microrobot Competition, Special Award for Women Engineers, Dankook University.
Work Experience
- 2025, Reviewer, IROS 2025.
- Jan. 2023 - 2024, Research Scientist, Korea Institute of Science and Technology (KIST), Republic of Korea.
- Jul. 2019, Hyundai Motor Group, Republic of Korea.