Jaehyun Jeong

CTO & Co-founder in Daejeon, Korea

jj.for.jaehyun@gmail.com

About

Bachelor of Science in Applied Mathematics, Kongju National University (Feb 14, 2025)

Yamaguchi University (Faculty of Science): non-degree visiting student, April 2023–March 2024; completed 33 credits in analysis, algebra, computer science, statistics, modeling, and simulation.

Interests include reinforcement learning (RL) and AI, grounded in fundamentals of Markov decision processes, optimization, and programming.

Selected coursework: Deep Learning; Advanced Calculus I–II; Applied Linear Algebra I–II; Probability and Statistics; Data Structures and Algorithms; Programming for Data Science; Artificial Intelligence and Mathematics; R Programming.

Experience with: RL Algorithms (PPO, DQN, A2C, REINFORCE), Planning/Control (MPC), Time Series Forecasting, Image Classification, NLP (fundamentals through Transformers)

Work Experience

GlucoUS

Gongju-si, Chungcheongnam-do, Republic of Korea

CTO & Co-founder

Jul. 2025 - Present

  • Designed the MySQL schema and REST APIs; set up CI/CD with automated deployments, health checks, and monitoring.
  • Prototyped a simple recommendation and glucose-forecast model using minimal inputs; explored safe personalization under sparse data.
  • Selected for the Asan Nanum Foundation “Doers” Program — Pre-Startup Track (Jeong Joo-Young Startup Competition).

AI Playground

Yamaguchi, Japan

AI Tech Lead

May 2023 - Mar. 2024

  • Led development of ML learning modules and reference implementations (k-NN, logistic regression, decision trees), built from scratch with tests and documentation.

  • Produced animated explainers with Manim (the 3Blue1Brown math animation library); set code-review standards for reproducibility.

  • Supported curriculum design, demo pipelines, and contributor workflows.

Kongju National University

Gongju-si, Chungcheongnam-do, Republic of Korea

Undergraduate Researcher, AI-integrated Physics Lab

Sep. 2024 - Feb. 2025

  • Participated in the KIAS workshop titled “Machine learning integrated Fourier transform for enhanced signal analysis,” exploring ML-driven spectral methods.

  • Studied and conducted experiments on the double descent phenomenon, analyzing weight dynamics across model-capacity and sample-size regimes.

Education

Kongju National University

Gongju-si, Chungcheongnam-do, Republic of Korea

Applied Mathematics

Mar. 2019 - Feb. 2025

  • Bachelor of Science in Applied Mathematics; Graduation: February 14, 2025; 130 credits; GPA: 3.81/4.5 (93%).

  • Core coursework: Advanced Calculus I/II, Applied Linear Algebra I/II, Applied Differential Equations, Probability Theory, Probability and Statistics, Discrete Mathematics, Artificial Intelligence and Mathematics, Deep Learning, R Programming, Data Structures and Algorithms, Modeling and Simulation.

Yamaguchi University

Yamaguchi, Japan

Visiting Student (Non-degree)

Apr. 2023 - Mar. 2024

  • Faculty of Science; non-degree visiting program; completed 33 credits.

  • Studied analysis, algebra, probability and statistics, data structures, programming for data science (including a reinforcement learning lecture), modeling and simulation, and graph theory.

Skills

Programming Languages

  • Python (NumPy, Pandas, SciPy) for data analysis and prototyping
  • C++ for performance-focused tasks
  • C for basic systems programming
  • R (tidyverse) for statistics and plots

Machine Learning & Deep Learning Frameworks

  • PyTorch

  • TensorFlow/Keras

  • scikit-learn

  • PyTorch Lightning

Data Science & Tools

  • Jupyter Notebook

  • RapidMiner, Tableau

  • MATLAB

  • RStudio

  • Relevant coursework: Data Structures and Algorithms; Programming for Data Science; Applied Data Science Practice; Seminar in Multimedia Processing; Computer Programming II; Deep Learning

Version Control & Collaboration

  • Git

  • Notion

  • Asana

Reinforcement Learning

  • Familiar with PPO, SAC, A2C, and DQN

  • Understand MDPs, policy/value methods, reward design, and exploration basics

  • Used PyTorch, Gym/Gymnasium
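
As a concrete illustration of the MDP and value-method basics listed above, here is a minimal tabular Q-learning sketch on a toy two-state MDP (the environment and all constants are hypothetical, chosen purely for illustration; real work would use a Gym/Gymnasium environment):

```python
import random

# Toy two-state MDP (hypothetical): actions are 0 = "stay", 1 = "move".
# Taking "move" in state 1 pays reward 1 and returns the agent to state 0.
def step(state, action):
    if action == 1:
        return 1 - state, (1.0 if state == 1 else 0.0)
    return state, 0.0

def q_learning(steps=2000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = [[0.0, 0.0], [0.0, 0.0]]  # Q[state][action]
    state = 0
    for _ in range(steps):
        # epsilon-greedy exploration
        if rng.random() < eps:
            action = rng.randrange(2)
        else:
            action = 0 if Q[state][0] >= Q[state][1] else 1
        next_state, reward = step(state, action)
        # off-policy TD update toward the Bellman optimality target
        target = reward + gamma * max(Q[next_state])
        Q[state][action] += alpha * (target - Q[state][action])
        state = next_state
    return Q
```

The learned greedy policy alternates "move" from both states, collecting the reward every other step; swapping in Gymnasium's `env.step` interface is a small change.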

AI Experimentation & MLOps

  • Track experiments with Weights & Biases, MLflow, and TensorBoard
  • Hyperparameter tuning with Optuna
  • Export and run models with ONNX and TorchScript

Mathematics & Statistics

  • Calculus

  • Linear Algebra

  • Convex Optimization

  • Statistics

  • Probability Theory

  • Graph Theory

  • Multivariate Analysis

  • Modeling and Simulation

  • Group and Ring Theory

Open Source

GRLL: General Reinforcement Learning Library

Independent

  • Engineered and modularized core deep reinforcement learning algorithms (DQN, Averaged-DQN, A2C, REINFORCE) in PyTorch, designing extensible value-based and policy-gradient architectures.

  • Benchmarked GRLL against Stable-Baselines3 (SB3) on representative RL tasks; observed faster training and a simpler implementation in those experiments.

  • Created OpenAI Gym–compatible environments and wrappers that enable vectorized rollouts, batched sampling, and reward shaping for controlled experimentation.

  • Enabled real-time diagnostics and performance monitoring with TensorBoard.

May 2022 - Nov. 2022

Reinforcement Learning Lecture Development

Kongju National University

  • Supported lecture materials covering MDPs, Bellman equations, value iteration, policy gradients, and actor–critic basics.

  • Prepared Conda environments and PyTorch training scripts for hands-on exercises with reproducible settings.

  • Created simple plots for learning curves and policy evaluation to aid understanding.

Mar. 2022 - Jul. 2022
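
The Bellman-equation and value-iteration material above can be summarized in a short sketch (the two-state transition table is hypothetical, chosen only so the fixed point is easy to check by hand):

```python
# Value iteration on a tiny hypothetical MDP.
# P[s][a] = list of (prob, next_state, reward) transitions.
P = {
    0: {0: [(1.0, 0, 0.0)], 1: [(1.0, 1, 0.0)]},
    1: {0: [(1.0, 1, 0.0)], 1: [(1.0, 0, 1.0)]},
}

def value_iteration(P, gamma=0.9, tol=1e-8):
    V = {s: 0.0 for s in P}
    while True:
        delta = 0.0
        for s in P:
            # Bellman optimality backup:
            # V(s) = max_a sum_s' p(s'|s,a) [r + gamma * V(s')]
            best = max(
                sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
                for a in P[s]
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V
```

With gamma = 0.9 this converges to V(1) = 1 / (1 - 0.9^2) ≈ 5.26 and V(0) = 0.9 * V(1), matching the analytic fixed point of the Bellman optimality equations.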

Projects

Data Science Creator Camp: Image Classification (2nd Place)

DCC

Oct. 2022 - Nov. 2022

  • Completed a one-month intensive on the end-to-end deep learning workflow, from data analysis and cleaning to preprocessing, model training, and evaluation.

  • Implemented ResNet and VGG variants and set up preprocessing (normalization, resizing) with basic augmentation.

  • Built an enhanced VGG-based architecture, improving accuracy.

  • Built a PyTorch training and evaluation pipeline with adjustable hyperparameters and run tracking.

Agricultural Product Price Prediction Competition (15th of 550)

galmingalmin

Oct. 2024 - Present

  • Compared XGBoost, CatBoost, and regularized linear models; selected the best-performing approach on validation.

  • Set up a time-series workflow with rolling cross-validation and leakage-aware feature engineering.
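
The leakage-aware rolling validation described above can be sketched as a generic expanding-window splitter (fold counts and sizes here are illustrative, not the competition configuration):

```python
def rolling_splits(n_samples, n_folds=3, test_size=2):
    """Expanding-window splits for time series: each fold trains on all
    observations strictly before its test window, so no future data
    leaks into training."""
    splits = []
    for k in range(n_folds):
        test_end = n_samples - (n_folds - 1 - k) * test_size
        test_start = test_end - test_size
        train_idx = list(range(0, test_start))
        test_idx = list(range(test_start, test_end))
        splits.append((train_idx, test_idx))
    return splits
```

For example, `rolling_splits(10, n_folds=3, test_size=2)` trains on indices 0-3, 0-5, and 0-7 while testing on 4-5, 6-7, and 8-9 respectively; every training index precedes every test index in its fold.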

2nd Anomalous Diffusion (AnDi) Challenge (12th Place)

KNU-ON

Dec. 2023 - Jul. 2024

  • Built a sequence model combining a 1D CNN, BiLSTM, attention, and an MLP head, using attention-based sequence learning to capture temporal patterns in trajectory data.

Contacts

GitHub

Jaehyun-Jeong

Email

jj.for.jaehyun@gmail.com

Website

https://jaehyun-jeong.github.io/