Welcome to my website

Hi, I am Yu-Han. I have been a PhD student at LPSM, Sorbonne University, since September 2024, supervised by Gérard Biau, Claire Boyer and Pierre Marion. I am also part of the co-advised PhD student program at Google DeepMind, where I am advised by Quentin Berthet and Romuald Elie. My research interests include diffusion models, regularization, and latent representation learning.

Brief Curriculum Vitae

  • 2024-present: Sorbonne University and Google DeepMind, co-advised PhD student
  • 2020-2024: École Normale Supérieure
  • 2022-2023: Université Paris-Saclay, Master's degree (Mathematics of Randomness)
  • 2018-2020: Lycée Louis-le-Grand, Preparatory classes

Publications

  • Wu, Y. H., Berthet, Q., Biau, G., Boyer, C., Elie, R., & Marion, P. (2025). Optimal Stopping in Latent Diffusion Models. arXiv preprint arXiv:2510.08409.
  • Wu, Y. H., Marion, P., Biau, G., & Boyer, C. (2025). Taking a Big Step: Large Learning Rates in Denoising Score Matching Prevent Memorization. Proceedings of Thirty Eighth Conference on Learning Theory, PMLR 291:5718-5756.
  • Marion, P.*, Wu, Y. H.*, Sander, M. E., & Biau, G. (2024). Implicit regularization of deep residual networks towards neural ODEs. The Twelfth International Conference on Learning Representations.
  • Conlon, D., & Wu, Y. H. (2023). More on lines in Euclidean Ramsey theory. Comptes Rendus. Mathématique, 361(G5), 897-901.

Internships

  • 2024: Sorbonne University, Research Internship
  • 2023: Owkin, Research Internship
  • 2023: Sorbonne University, Research Internship
  • 2022: Caltech, Research Internship