Meet the 8 Asian AI Aces on Meta’s New “Superintelligence” Squad

Mark Zuckerberg is doubling down on Meta’s pursuit of superintelligence, committing significant resources and strategic focus to build the next generation of AI.
With ambitious investments in infrastructure, research, and top-tier talent, Zuckerberg aims to position Meta at the forefront of advanced AI capabilities and responsible development.
Recently, Scale AI founder Alexandr Wang joined Meta's Superintelligence team, bringing extensive experience in large-scale model optimisation and AI safety.
Meta’s ambitious Superintelligence Lab has quietly assembled a dream team of AI talent, and among them are eight brilliant Asian minds leading the charge toward artificial superintelligence.
From multimodal breakthroughs to AI safety, each brings a unique vision and experience from industry giants like OpenAI, Google DeepMind, and Waymo.
- Trapit Bansal, formerly at Google and Facebook AI Research, is widely respected for his work in multimodal learning. He arrives at Meta from OpenAI, where he was a key contributor to the company’s first AI reasoning model, o1, helping lay the groundwork for its logical and problem-solving capabilities. His arrival is expected to significantly strengthen Meta’s Superintelligence Lab as it races to build a next-generation reasoning model to rival OpenAI’s o3 and DeepSeek’s R1. As of now, Meta has yet to release a dedicated reasoning model to the public.

- Jiahui Yu joined from DeepMind, where he focused on teaching machines how to reason logically. His models can follow multi-step arguments and solve complex problems—essential skills for any AI hoping to reach human-level intelligence. He’s expected to bring this cognitive depth to Meta’s evolving language models.

- Shuchao Bi, once a key multimodal architect at OpenAI, specialises in creating systems that can handle both text and images simultaneously. His work enables models to generate more contextual and accurate outputs—critical for next-gen digital assistants and content tools.

- Huiwen Chang was also poached from OpenAI by Meta. Chang played a key role in developing GPT-4o’s image generation capabilities and brings deep expertise in generative AI. During her time at Google Research, she created the MaskGIT and Muse architectures—both now considered foundational models in the evolution of text-to-image synthesis.

- Ji Lin is another OpenAI veteran, known for developing synthetic datasets that train AI without compromising privacy. He was instrumental in developing a range of influential models, including o3/o4-mini, GPT-4o, GPT-4.1, GPT-4.5, 4o-ImageGen, and the Operator reasoning stack. His work focused on advancing reasoning capabilities and refining model architectures, helping these systems tackle complex tasks with greater accuracy and efficiency.

- Hongyu Ren worked on OpenAI’s o3 and o4 models, focusing on post-training refinement. Instead of starting from scratch, Ren made small models smarter through intelligent fine-tuning, making them nearly as capable as their massive counterparts.

- Shengjia Zhao helped build the very core of GPT-4’s architecture. With deep expertise in large-scale transformer models, Zhao is now working with Meta to push the boundaries of model performance and design, balancing speed, size, and smarts.

- Pei Sun contributed to post-training, coding, and reasoning efforts for the Gemini project at Google DeepMind. Before that, he led the development of the last two generations of perception models at Waymo, showcasing his expertise in AI for autonomous driving. He now focuses on strengthening reasoning and real-world utility in advanced AI systems.