Hi, I'm Long-Nhat (or you can call me Nhat).
I'm a Ph.D. student at MBZUAI in the UAE, where I work under the supervision of Dr. Hao Li.
My research focuses on real-time generative models for human portraits.
A 3D-aware one-shot head reenactment method that generates expressive facial animations from any input video and a single 2D portrait, requiring no calibration or fine-tuning. Our real-time solution ensures view consistency and identity preservation, demonstrated in monocular video and VR telepresence systems.
A high-fidelity 3D-aware one-shot head reenactment technique. Our method transfers the expression of a driving video to a source portrait and produces view-consistent renderings for holographic displays.