Yuta Oshima

I’m a first-year PhD student at The University of Tokyo, advised by Professor Yutaka Matsuo.

My research interests lie in understanding and controlling video generation models. My ultimate goal is to achieve interactive, highly controllable generation that fully leverages video models’ priors on world dynamics.

selected publications

  1. Preprint
    MultiBanana: A Challenging Benchmark for Multi-Reference Text-to-Image Generation
    Yuta Oshima, Daiki Miyake, Kohsei Matsutani, Yusuke Iwasawa, Masahiro Suzuki, Yutaka Matsuo, and Hiroki Furuta
    2025
  2. NeurIPS
    Inference-Time Text-to-Video Alignment with Diffusion Latent Beam Search
    Yuta Oshima, Masahiro Suzuki, Yutaka Matsuo, and Hiroki Furuta
    In Advances in Neural Information Processing Systems (NeurIPS), 2025
  3. NeurIPS
    ADOPT: Modified Adam Can Converge with Any β₂ with the Optimal Rate
    Shohei Taniguchi, Keno Harada, Gouki Minegishi, Yuta Oshima, Seong Cheol Jeong, Go Nagahara, Tomoshi Iiyama, Masahiro Suzuki, Yusuke Iwasawa, and Yutaka Matsuo
    In Advances in Neural Information Processing Systems (NeurIPS), 2024
  4. ICLR WS
    SSM Meets Video Diffusion Models: Efficient Video Generation with Structured State Spaces
    Yuta Oshima, Shohei Taniguchi, Masahiro Suzuki, and Yutaka Matsuo
    In 5th Workshop on Practical ML for Limited/Low Resource Settings (ICLR Workshop), 2024