Huawei claims better AI training method than DeepSeek using own chips

Researchers working on Pangu, Huawei Technologies' large language model (LLM), claim to have improved on DeepSeek's original approach to training artificial intelligence (AI) by leveraging the US-sanctioned company's proprietary hardware.

A paper published last week by Huawei's Pangu team, which comprises 22 core contributors and 56 additional researchers, introduces the concept of Mixture of Grouped Experts (MoGE), an upgraded version of the Mixture of Experts (MoE) technique that…
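The article describes MoGE as grouping a model's experts so that activation is balanced across groups (and hence across the devices hosting them), in contrast to plain top-k MoE routing, which can concentrate activated experts on a few devices. As a minimal illustrative sketch of that idea only (the function names and routing details below are assumptions, not the paper's actual algorithm):

```python
def moge_route(scores, n_groups, k_per_group):
    """Grouped top-k routing sketch: experts are split into equal groups,
    and the router picks the top-k experts *within each group*, so every
    group activates the same number of experts (balanced device load)."""
    n_experts = len(scores)          # scores: one router logit per expert
    group_size = n_experts // n_groups
    chosen = []
    for g in range(n_groups):
        start = g * group_size
        group = range(start, start + group_size)
        # top-k inside this group only
        chosen += sorted(group, key=lambda i: scores[i], reverse=True)[:k_per_group]
    return sorted(chosen)

def moe_route(scores, k):
    """Plain top-k MoE routing: the k highest-scoring experts overall,
    which may all sit in one group (i.e. on one device)."""
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return sorted(ranked[:k])
```

With a score vector skewed toward the first group, plain top-k picks both experts from group 0, while the grouped variant takes one expert from each group, spreading the compute.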
