I am an Assistant Professor in the Department of Computer Science and Engineering at Oakland University in Rochester, MI. My research focuses on machine learning theory, distributed optimization, and graph sampling. In particular, I design efficient algorithms grounded in applied probability and Markov chain theory, pushing the boundaries of how quickly and effectively learning tasks can be performed over networks.

  • Apr 2026 Our paper Score-Repellent Monte Carlo: Toward Efficient Non-Markovian Sampler with Constant Memory in General State Spaces was accepted at ICML 2026 as a Spotlight!
  • Feb 2026 Awarded the 2026 URC Faculty Research Fellowship ($10K) from Oakland University to support research on efficient sampling and distributed learning.
  • Oct 2025 Received the AI Mini Integration Teaching Grant ($1K) from Oakland University to integrate AI tools into undergraduate coursework.

Selected Publications

  • Score-Repellent Monte Carlo: Toward Efficient Non-Markovian Sampler with Constant Memory in General State Spaces
    Jie Hu, Lingyun Chen, Geeho Kim, Jinyoung Choi, Bohyung Han, Do Young Eun
    International Conference on Machine Learning (ICML), 2026.
    Spotlight PDF
    Abstract

    We propose a score-repellent Monte Carlo sampler that is non-Markovian yet requires only $O(1)$ memory, generalizing self-repellent ideas from discrete graphs to general state spaces. The sampler shapes its proposal distribution using a running score estimate, achieving asymptotically minimal sampling variance without storing the full history.
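The constant-memory, non-Markovian flavor can be illustrated (very loosely) with the classical adaptive random-walk Metropolis idea: the proposal is shaped by $O(1)$ running statistics of the chain's past rather than by the stored history itself. The sketch below uses a hypothetical 1-D standard-normal target; it is a generic adaptive-MCMC toy, not the score-repellent method of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def logp(z):
    # log-density (up to a constant) of the toy target N(0, 1)
    return -0.5 * z * z

T = 20000
z = 0.0
mean, var = 0.0, 1.0   # O(1) running statistics -- no stored history
running_avg = 0.0      # running average of the generated samples

for t in range(1, T + 1):
    scale = 2.38 * np.sqrt(var) + 1e-3       # proposal scale adapted online
    prop = z + scale * rng.standard_normal()
    if np.log(rng.random()) < logp(prop) - logp(z):
        z = prop
    # constant-memory updates: only two scalars summarize the full history
    eta = 1.0 / t
    mean += eta * (z - mean)
    var += eta * ((z - mean) ** 2 - var)
    running_avg += (z - running_avg) / t
```

Although every proposal depends on the entire trajectory through `mean` and `var`, the memory footprint stays constant, which is the property the abstract emphasizes.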

  • Jie Hu, Yi-Ting Ma, Do Young Eun
    International Conference on Machine Learning (ICML), Vancouver, Canada, 2025.
    Oral PDF
    Abstract

    We propose a history-driven target (HDT) framework in MCMC to improve any random walk algorithm on discrete state spaces, such as general undirected graphs, for efficient sampling from a target distribution $\boldsymbol{\mu}$. Recent innovations like the self-repellent random walk (SRRW) achieve near-zero variance by prioritizing under-sampled states through transition-kernel modifications, but suffer high computational overhead and require time-reversibility. HDT instead introduces a history-dependent target $\boldsymbol{\pi}[\mathbf{x}]$ to replace the original target $\boldsymbol{\mu}$, where $\mathbf{x}$ is the empirical measure of past visits. This preserves a lightweight implementation, achieves compatibility with both reversible and non-reversible MCMC samplers, and retains unbiased samples with near-zero variance.
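The HDT idea can be sketched on a toy discrete space: keep a plain Metropolis–Hastings sampler but aim it at a history-dependent target that down-weights over-visited states, here taken as $\pi[\mathbf{x}]_i \propto \mu_i (x_i/\mu_i)^{-\alpha}$. That functional form, the 4-state cycle, and all parameters are assumptions for illustration, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(2)

mu = np.array([0.1, 0.2, 0.3, 0.4])   # toy target distribution on 4 states
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}  # 4-cycle graph
alpha, T = 1.0, 50000

counts = np.ones(4)   # visit counts, initialized at 1 to avoid 0/0
state = 0
for _ in range(T):
    x = counts / counts.sum()
    pi = mu * (x / mu) ** (-alpha)    # history-dependent target (assumed form)
    pi /= pi.sum()
    prop = rng.choice(neighbors[state])       # symmetric proposal on the cycle
    if rng.random() < min(1.0, pi[prop] / pi[state]):
        state = prop
    counts[state] += 1                # MH sample (repeats count on rejection)

x_final = counts / counts.sum()
```

Note the sampler itself is untouched; only its target changes. At the fixed point $\mathbf{x} = \pi[\mathbf{x}]$ one gets $x_i^{1+\alpha} \propto \mu_i^{1+\alpha}$, i.e. the empirical measure still converges to $\boldsymbol{\mu}$.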

  • Accelerating Distributed Stochastic Optimization via Self-Repellent Random Walks
    Jie Hu, Vishwaraj Doshi, Do Young Eun
    International Conference on Learning Representations (ICLR), Vienna, Austria, 2024.
    Oral PDF
    Abstract

    We replace the standard linear Markovian token in random-walk-based distributed SGD with a self-repellent random walk (SRRW), a nonlinear Markov chain parameterized by $\alpha > 0$ that is less likely to revisit recently visited states. Our SA-SRRW algorithm achieves almost-sure convergence and a central limit theorem, with asymptotic covariance that is always smaller than that of the algorithm driven by the base Markov chain and that decreases at rate $O(1/\alpha^2)$, amplifying SRRW's benefit in the stochastic optimization context.
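A token-passing toy sketch of this setup: a token performs a self-repellent walk over four agents on a cycle, and the agent currently holding the token applies one local SGD step. The graph, the local quadratic losses $f_i(\theta) = \tfrac{1}{2}(\theta - a_i)^2$, and the step-size schedule are hypothetical choices for illustration, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Communication graph: base token chain is a simple random walk on a
# 4-cycle, whose stationary distribution mu is uniform over the agents.
P = np.array([
    [0.0, 0.5, 0.0, 0.5],
    [0.5, 0.0, 0.5, 0.0],
    [0.0, 0.5, 0.0, 0.5],
    [0.5, 0.0, 0.5, 0.0],
])
mu = np.full(4, 0.25)
a = np.array([1.0, 2.0, 3.0, 4.0])  # hypothetical local data; optimum = mean(a)
alpha, T = 2.0, 30000

counts = np.ones(4)   # token visit counts
state, theta = 0, 0.0
for t in range(1, T + 1):
    # SRRW token move: repel from agents the token has over-visited
    x = counts / counts.sum()
    w = P[state] * (x / mu) ** (-alpha)
    w /= w.sum()
    state = rng.choice(4, p=w)
    counts[state] += 1
    # local SGD step at the agent holding the token
    gamma = 1.0 / t ** 0.7                # diminishing step size
    theta -= gamma * (theta - a[state])   # gradient of 0.5 * (theta - a_i)^2
```

Because the SRRW equalizes visit frequencies faster than the base walk, the iterate tracks the uniformly weighted optimum $\bar{a} = 2.5$ with reduced sampling noise.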

  • Self-Repellent Random Walks on General Graphs – Achieving Minimal Sampling Variance via Nonlinear Markov Chains
    Vishwaraj Doshi, Jie Hu, Do Young Eun
    International Conference on Machine Learning (ICML), Honolulu, HI, 2023.
    🏆 Outstanding Paper Award PDF
    Abstract

    We design a self-repellent random walk (SRRW) parameterized by $\alpha > 0$ that is less likely to transition to highly visited nodes. We prove almost-sure convergence of the empirical distribution to the target stationary distribution and a central limit theorem showing that stronger repellence (larger $\alpha$) yields strictly smaller asymptotic covariance in the Loewner order. For SRRW-driven MCMC, the asymptotic sampling variance decreases as $O(1/\alpha)$, vanishing as $\alpha \to \infty$.
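The SRRW mechanism can be sketched on a toy graph: each candidate transition of a base chain $P$ is reweighted by $(x_j/\mu_j)^{-\alpha}$, where $\mathbf{x}$ is the running empirical visit distribution, so frequently visited nodes repel the walker. The 4-cycle, uniform target, and parameters below are illustrative choices, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Base chain: simple random walk on a 4-cycle; its stationary
# distribution is uniform, which we take as the target mu.
P = np.array([
    [0.0, 0.5, 0.0, 0.5],
    [0.5, 0.0, 0.5, 0.0],
    [0.0, 0.5, 0.0, 0.5],
    [0.5, 0.0, 0.5, 0.0],
])
mu = np.full(4, 0.25)
alpha = 2.0           # repellence strength
T = 20000

counts = np.ones(4)   # visit counts, initialized at 1 to avoid 0/0
state = 0
for _ in range(T):
    x = counts / counts.sum()             # empirical visit distribution
    w = P[state] * (x / mu) ** (-alpha)   # self-repellent reweighting
    w /= w.sum()                          # renormalize into a distribution
    state = rng.choice(4, p=w)
    counts[state] += 1

x_final = counts / counts.sum()
```

Larger $\alpha$ pushes the empirical distribution `x_final` toward $\boldsymbol{\mu}$ more aggressively, which is the variance-reduction effect the abstract describes.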

Teaching

  • CSI 2560 Computational Linear Algebra Winter 2026
  • CSI 2470 Introduction to Computer Networks Fall 2025