YuanLab Open-Sources Yuan3.0 Ultra, Joins Top 3 Trillion-Parameter Open Multimodal Models Globally

2026-03-06

YuanLab.ai has open-sourced Yuan3.0 Ultra, one of only three trillion-parameter open multimodal foundation models worldwide. It uses a unified multimodal architecture built on a 103-layer Mixture-of-Experts (MoE) language backbone with 1.01T total parameters, of which 68.8B are active per token. Its Localized Filtering Attention (LFA) improves semantic modeling while cutting pre-training compute by 49%. The model accepts and generates text, image, audio, and video, targeting content creation, intelligent interaction, and research.
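To make the parameter figures concrete: in an MoE layer, a router activates only a few experts per token, so the active parameter count is a small fraction of the total (here 68.8B of 1.01T, roughly 7%). The Python sketch below illustrates this idea with invented expert counts and sizes; none of these toy numbers describe Yuan3.0 Ultra's actual configuration.

```python
import math
import random

def softmax(xs):
    """Numerically stable softmax over a list of router logits."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def route(logits, top_k):
    """Return the indices of the top_k experts selected for one token."""
    ranked = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)
    return sorted(ranked[:top_k])

# Toy MoE layer (hypothetical sizes, chosen only for illustration):
NUM_EXPERTS, TOP_K = 16, 2
EXPERT_PARAMS = 1_000_000   # parameters per expert FFN
SHARED_PARAMS = 2_000_000   # attention, embeddings, etc., used by every token

logits = [random.gauss(0, 1) for _ in range(NUM_EXPERTS)]
probs = softmax(logits)
chosen = route(logits, TOP_K)

total = SHARED_PARAMS + NUM_EXPERTS * EXPERT_PARAMS
active = SHARED_PARAMS + TOP_K * EXPERT_PARAMS
print(f"experts selected for this token: {chosen}")
print(f"active/total parameters: {active:,}/{total:,} = {active / total:.0%}")
```

With these toy sizes, only 4M of 18M parameters (about 22%) touch any given token; scaling the same routing idea to a trillion-parameter model is what keeps per-token compute near the 68.8B active figure rather than the 1.01T total.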

Keywords: Yuan3.0 Ultra, open-source, trillion parameters, multimodal LLM, MoE architecture
