About Me
I am a Machine Learning Engineer with a Master's in Computer Science from Columbia University. My work focuses on 3D content generation, deep generative models, and avatar synthesis using diffusion models and multimodal learning. Currently, I'm part of the Avatars R&D team at Genies Inc., where I develop generative AI models and production-grade pipelines for creating expressive, controllable 3D avatars. This work spans diffusion-based image synthesis, view-consistent generation, mesh reconstruction, and multimodal conditioning, all aimed at bringing personalized digital humans to life at scale.
Prior to Genies, I spent several years at Unity Technologies working on applied ML research and computer vision problems such as human pose estimation, saliency detection, and synthetic data generation. I have also contributed to open-source perception pipelines and published work on synthetic dataset generation for human-centric vision tasks (Unity Perception, PSP-HDRI+).
My broader research interests include generative AI, computer vision, LLMs, and efficient learning for real-time systems. I'm passionate about building tools that bridge the physical and virtual worlds and enable scalable, AI-powered creation of digital humans and experiences.