I work on machine learning systems for world simulation at OpenAI. I am currently on leave from my BS at Stanford University, where I studied computer science. In the past, I worked on physical AI at NVIDIA on the Cosmos team and diffusion transformer acceleration with the Hazy Research group. I'm deeply grateful to the wonderful mentors who have shaped my path, including Dan Fu, Ethan He, and Jan-Philipp Fränken.
Contributor
Sora 2 is a video and audio generation model with advanced world-simulation capabilities: it accurately models physical dynamics and maintains temporal consistency across shots.
Leo Gao, Achyuta Rajaram, Jacob Coxon, Soham V. Govande, Bowen Baker, Dan Mossing
Demonstrates that constraining language models to use sparse weight connections produces disentangled circuits where individual behaviors can be isolated and understood.
Austin Silveria, Soham V. Govande, Dan Fu
Introduces hardware-aware dynamic sparsity patterns and optimized attention + GEMM CUDA kernels that selectively recompute rapidly-changing activations, accelerating diffusion transformers by up to 3.7x without retraining. More links: GitHub, arXiv, spotlight at ES-FoMo @ ICML 2025.
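The core idea of selective recomputation can be illustrated with a minimal sketch: cache each block's activations across diffusion steps and recompute a block only when its input has drifted beyond a threshold. All names here (`selective_recompute`, the dict-based cache, the norm-based change test) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def selective_recompute(cache, new_inputs, compute_block, threshold):
    """Sketch of activation caching across diffusion steps (hypothetical API).

    cache: dict block_id -> (cached_input, cached_output)
    new_inputs: dict block_id -> activation array for the current step
    Recompute a block only when its input differs from the cached input
    by more than `threshold`; otherwise reuse the cached output.
    """
    outputs = {}
    for bid, x in new_inputs.items():
        prev_x, prev_y = cache.get(bid, (None, None))
        if prev_x is not None and np.linalg.norm(x - prev_x) <= threshold:
            outputs[bid] = prev_y      # slowly-changing: reuse cached activation
        else:
            y = compute_block(x)       # rapidly-changing: recompute
            outputs[bid] = y
            cache[bid] = (x, y)
    return outputs
```

In the real system the savings come from fusing this decision into attention and GEMM kernels so skipped work never launches; this sketch only shows the caching logic.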