Contributed to the development of a three-stage agentic AI workflow (compile, implement, and execute) supporting structured workflow automation.
Assisted in building and testing modules that transform high-level workflow plans into atomic functions and execute them with LLM-powered agents.
Research Intern
KAIST MLAI Lab
Weight Generation for Large Language Models
Conducted a comprehensive literature survey on generative models for weight generation and alternative approaches to weight-space learning in neural networks.
Ran experiments from existing codebases and analyzed the results; assisted in debugging and reproducing key experiments to validate methodologies.
Co-authored a paper (under review at ICLR 2026): Merging Language Models in Latent Space, proposing a VAE-based framework for merging heterogeneous large language models in a shared latent space.
Investigated weight distribution properties (kurtosis, compressibility) and contributed to the design and implementation of latent-space fusion experiments.
AI Researcher
DeepAuto.ai
LLM Agent on Hyperparameter Optimization
Conducted a literature review on state-of-the-art hyperparameter optimization techniques for large language models (LLMs).
Analyzed existing codebases and replicated experiments to understand optimization workflows.
Implemented and tested existing optimization methods to assess their impact on model performance.
Research Intern
KAIST MLAI Lab
Hyperparameter Optimization
Developed and implemented baseline optimization algorithms, including Bayesian Optimization with Hyperband (BOHB), Differential Evolution with Hyperband (DEHB), and Few-Shot Bayesian Optimization (FSBO), to efficiently optimize complex black-box functions.
Evaluated these algorithms on benchmark problems and real-world applications, demonstrating their effectiveness in sample-efficient hyperparameter tuning.
Co-authored a paper (NeurIPS 2025): Cost-Sensitive Freeze-Thaw Bayesian Optimization for Efficient Hyperparameter Tuning, introducing a novel cost-aware strategy to improve resource allocation in freeze-thaw Bayesian optimization.
AWS AI & ML Scholarship Recipient
Udacity
AI Programming with Python
Participated in the AWS DeepRacer Student League and received the AWS AI & ML Scholarship.
Completed a collaborative virtual course covering programming tools and techniques fundamental to machine learning, with support from Udacity instructors in weekly group sessions.
Project 1: Use a Pre-trained Image Classifier to Identify Dog Breeds.
Project 2: Create an Image Classifier.
Technical Consulting Virtual Intern
SAP (via Forage)
Participated in SAP's virtual experience program through Forage.
Completed practical task modules in data assembly, data analysis, and presentation of results.
Education
M.S. (incoming)
KAIST MLAI Lab
Incoming M.S. student at KAIST’s Machine Learning and Artificial Intelligence (MLAI) Lab, advised by Prof. Sung Ju Hwang.
My graduate research will focus on large language models, generative modeling, weight-space learning, and agentic workflows for scalable foundation models.
B.S. Computer Science and Engineering & Mathematics
Korea University
B.S. in Computer Science and Engineering with a double major in Mathematics at Korea University.
Exchange Program, Department of Mathematics
The Hong Kong University of Science and Technology (HKUST)
Exchange student at HKUST School of Science (Mathematics), where I broadened my academic perspective in applied mathematics, optimization, and machine learning.
Completed advanced coursework in statistics, stochastic processes, regression analysis, time series, and machine learning — providing a rigorous theoretical foundation for research on LLMs and generative modeling.