I am an Assistant Professor in the Department of Computer Science and Engineering at Oakland University, based in Rochester, MI. My research focuses on machine learning theory, distributed optimization, and graph sampling. In particular, I design efficient algorithms grounded in applied probability and Markov chain theory, pushing the boundaries of how quickly and effectively learning tasks can be performed over networks.

Prospective Students: I am recruiting PhD students to start in Winter / Fall 2026 with expertise in applied probability, optimization, machine-learning algorithms, or Markov chain Monte Carlo. Please email me with your CV and a brief description of your research interests.

News
  • Aug 2025 Joined Oakland University as Assistant Professor in the Department of Computer Science and Engineering.
  • Jul 2025 Attended ICML 2025 in Vancouver and gave an oral presentation on our paper "History-Driven Target for Nonlinear MCMC".

Selected Publications

2025
  • History-Driven Target for Nonlinear MCMC
    Jie Hu, Yi-Ting Ma, Do Young Eun
    International Conference on Machine Learning (ICML), Vancouver, Canada, 2025.
    Oral PDF
    Abstract

    We propose a history-driven target (HDT) framework in MCMC to improve any random walk algorithm on discrete state spaces, such as general undirected graphs, for efficient sampling from a target distribution. Recent innovations like the self-repellent random walk (SRRW) achieve near-zero variance by prioritizing under-sampled states through transition-kernel modifications, but they incur high computational overhead and require time-reversibility. HDT instead replaces the target distribution itself with a history-dependent one, preserving a lightweight implementation, remaining compatible with both reversible and non-reversible MCMC samplers, and retaining unbiased samples with near-zero variance.
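
    The history-driven idea can be illustrated with a toy Metropolis-Hastings sampler whose target is reweighted by the visit history. This is a minimal sketch only: the specific reweighting pi_x(i) ∝ pi(i) * (x_i / pi(i))^(-alpha), with x the smoothed empirical distribution, is an illustrative assumption modeled on the SRRW feedback rather than the paper's exact HDT construction, and `hdt_mh_sampler` is a hypothetical name.

    ```python
    import numpy as np

    def hdt_mh_sampler(pi, n_steps, alpha=1.0, seed=0):
        """Metropolis-Hastings on {0, ..., d-1} with a history-reweighted target.

        ASSUMPTION: the history-dependent target pi_x(i) ∝ pi(i) * (x_i / pi(i))**(-alpha)
        is an illustrative stand-in, not the paper's exact HDT construction.
        """
        rng = np.random.default_rng(seed)
        d = len(pi)
        counts = np.ones(d)            # smoothed visit counts (avoid zeros)
        state = 0
        samples = np.empty(n_steps, dtype=int)
        for t in range(n_steps):
            x = counts / counts.sum()  # empirical distribution of visits so far
            # history-driven target, evaluated up to a normalizing constant:
            # under-visited states (x_i < pi_i) get boosted weight
            w = pi * (x / pi) ** (-alpha)
            # symmetric random-walk proposal on a ring of d states
            prop = (state + rng.choice([-1, 1])) % d
            if rng.random() < min(1.0, w[prop] / w[state]):
                state = prop
            counts[state] += 1
            samples[t] = state
        return samples
    ```

    Because the reweighting boosts under-sampled states, the empirical visit frequencies are actively steered toward pi while the per-step cost stays that of plain Metropolis-Hastings.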

2024
  • Jie Hu, Vishwaraj Doshi, Do Young Eun
    International Conference on Learning Representations (ICLR), Vienna, Austria, 2024.
    Oral PDF
    Abstract

    We replace the standard linear Markovian token in random-walk-based distributed SGD with a self-repellent random walk (SRRW), a nonlinear Markov chain that is less likely to revisit recently visited states. Our SA-SRRW algorithm achieves almost-sure convergence and a central limit theorem (CLT), with asymptotic covariance that is always smaller than that of the algorithm driven by the base Markov chain and that decreases at rate O(1/alpha^2), amplifying SRRW's benefit in the stochastic optimization context.
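
    A sketch of the token-based setup, under simplifying assumptions: each node i holds a scalar a_i and a quadratic local loss (theta - a_i)^2 / 2, the base chain P is the simple random walk, and the token moves via the SRRW kernel K_x(i, j) ∝ P(i, j) * (x_j / mu_j)^(-alpha) with mu the stationary distribution of P. The function name `srrw_token_sgd` is hypothetical; on a regular graph mu is uniform, so theta converges to mean(a).

    ```python
    import numpy as np

    def srrw_token_sgd(adj, a, n_steps, alpha=2.0, seed=0):
        """Token-based distributed SGD, token driven by an SRRW (sketch).

        ASSUMPTIONS: quadratic local losses (theta - a_i)^2 / 2, simple-random-walk
        base chain; long-run visit frequencies follow mu (degree-proportional), so
        on a regular graph the iterate converges to mean(a).
        """
        rng = np.random.default_rng(seed)
        n = len(a)
        deg = adj.sum(axis=1)
        P = adj / deg[:, None]          # simple-random-walk base chain
        mu = deg / deg.sum()            # its stationary distribution
        counts = np.ones(n)             # smoothed visit counts
        node, theta = 0, 0.0
        for t in range(1, n_steps + 1):
            x = counts / counts.sum()
            # self-repellent transition: downweight over-visited neighbors
            w = P[node] * (x / mu) ** (-alpha)
            node = rng.choice(n, p=w / w.sum())
            counts[node] += 1
            gamma = 1.0 / t                       # decreasing step size
            theta -= gamma * (theta - a[node])    # local stochastic gradient step
        return theta
    ```

    With gamma = 1/t the iterate is the running average of the visited a_i, so faster mixing of the token toward mu translates directly into lower variance of the SGD iterate, which is the effect the CLT quantifies.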

2023
  • Vishwaraj Doshi, Jie Hu, Do Young Eun
    International Conference on Machine Learning (ICML), Honolulu, HI, 2023.
    🏆 Outstanding Paper Award PDF
    Abstract

    We design a self-repellent random walk (SRRW), parameterized by alpha > 0, that is less likely to transition to highly visited nodes. We prove almost-sure convergence of the empirical distribution to the target stationary distribution, and a CLT showing that stronger repellence (larger alpha) yields strictly smaller asymptotic covariance in the Loewner order. For SRRW-driven MCMC, the asymptotic sampling variance decreases as O(1/alpha), vanishing as alpha grows.
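
    The kernel described above can be sketched directly: at each step the walk reweights its base-chain transition probabilities by (x_j / mu_j)^(-alpha), repelling it from over-visited nodes. A minimal sketch, assuming the simple random walk as base chain P (so mu is degree-proportional) and smoothed visit counts; `srrw_walk` is a hypothetical name.

    ```python
    import numpy as np

    def srrw_walk(adj, n_steps, alpha=5.0, seed=0):
        """Self-repellent random walk on an undirected graph (sketch).

        Transition weights K_x(i, j) ∝ P(i, j) * (x_j / mu_j)**(-alpha),
        where P is the simple random walk, mu its stationary
        (degree-proportional) distribution, and x the smoothed
        empirical distribution of visits so far.
        """
        rng = np.random.default_rng(seed)
        n = adj.shape[0]
        deg = adj.sum(axis=1)
        P = adj / deg[:, None]          # base chain: simple random walk
        mu = deg / deg.sum()            # its stationary distribution
        counts = np.ones(n)             # smoothed visit counts
        node = 0
        for _ in range(n_steps):
            x = counts / counts.sum()
            w = P[node] * (x / mu) ** (-alpha)   # repel from over-visited nodes
            node = rng.choice(n, p=w / w.sum())
            counts[node] += 1
        return (counts - 1) / n_steps            # empirical visit distribution
    ```

    Increasing alpha strengthens the repellence, so the empirical visit distribution tracks mu more and more tightly, which is the O(1/alpha) variance decrease the CLT makes precise.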

Teaching