Luo Fuli, head of Xiaomi’s MiMo large model team, made her Xiaomi debut this morning at the 2025 “Human × Car × Home” Ecosystem Partner Conference, where she officially released and open-sourced the new MiMo-V2-Flash Mixture-of-Experts (MoE) model.
Luo said the model demonstrates exceptional base-model potential and ranks among the top two open-source models worldwide on leading evaluation benchmarks. It offers low-cost, high-speed inference, running roughly three times faster than DeepSeek V3.2 while remaining cheaper to operate.
Dubbed a “post-95 AI prodigy,” Luo previously worked at Alibaba’s DAMO Academy, then held positions at quantitative fund High-Flyer and at DeepSeek, where she was a key developer of DeepSeek-V2. She has led Xiaomi’s MiMo large model team since November 2025.