Ant Group’s Technology Research Institute today announced the launch of the LLaDA2.0 series of discrete diffusion large language models (dLLMs) and released an accompanying technical report.
The LLaDA2.0 series includes two versions built on a Mixture-of-Experts (MoE) architecture, a 16B model (mini) and a 100B model (flash), marking the first time a diffusion-based language model has reached the 100-billion-parameter scale.

