Allegations have surfaced that Huawei ripped off Alibaba’s Qwen AI model to build its Pangu model. The company has since denied all of those allegations.
As open models become more popular, so do concerns over proper attribution, training transparency, and compliance with licensing terms. Those concerns have now put Huawei under scrutiny over whether it independently developed its AI model.
Huawei has strongly denied claims that a version of its artificial intelligence large language model, Pangu Pro MoE, copied elements from Alibaba’s Qwen 2.5-14B.
The company’s AI research division, Noah’s Ark Lab, released a statement over the weekend to deny the allegations brought to light in a paper published by an entity called HonestAGI.
HonestAGI posted a technical report on GitHub on Friday, alleging that Huawei’s Pangu Pro MoE, a Mixture of Experts (MoE) version of its Pangu Pro model, shows an “extraordinary correlation” with Alibaba’s Qwen 2.5-14B, a smaller member of the Qwen 2.5 model family launched in May 2024.
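The article does not detail how HonestAGI measured that correlation, but fingerprint-style analyses of this kind often compare per-layer weight statistics between two checkpoints. The sketch below is purely illustrative of that general approach, not HonestAGI’s actual method; the Qwen model ID is a public Hugging Face checkpoint, while the Pangu path is a placeholder, since the model was released on GitCode rather than a confirmed Hugging Face repository.

```python
# Illustrative only: not HonestAGI's methodology, just a generic example of
# comparing per-layer weight statistics ("fingerprints") between two LLMs.
import numpy as np
import torch
from transformers import AutoModelForCausalLM

def layer_weight_stddevs(model_id: str, keyword: str = "q_proj") -> list[float]:
    """Collect the standard deviation of every 2-D weight tensor whose
    parameter name contains `keyword`, in layer order."""
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.float16, low_cpu_mem_usage=True
    )
    stds = [
        param.detach().float().std().item()
        for name, param in model.named_parameters()
        if keyword in name and param.dim() == 2
    ]
    del model
    return stds

qwen_stats = layer_weight_stddevs("Qwen/Qwen2.5-14B")        # public checkpoint
pangu_stats = layer_weight_stddevs("path/to/pangu-pro-moe")  # placeholder path

# Pearson correlation of the two per-layer profiles: values near 1.0 would
# indicate unusually similar weight structure; low values would not.
n = min(len(qwen_stats), len(pangu_stats))
r = np.corrcoef(qwen_stats[:n], pangu_stats[:n])[0, 1]
print(f"Layer-wise std-dev correlation: {r:.3f}")
```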
The HonestAGI report claimed the similarities were significant enough to suggest that Huawei did not train its model entirely from scratch, accusing the company of “upcycling” another manufacturer’s model. Doing so without proper attribution or licensing could constitute copyright infringement. The paper further alleged fabrication in Huawei’s technical documentation and misrepresentation of the resources invested in training the model.
In response, Noah’s Ark Lab firmly rejected these claims, stating, “Pangu Pro MoE is not based on incremental training of other manufacturers’ models.” The lab emphasized that the model was “independently developed and trained” and highlighted innovations in architecture and technical design.
The lab pointed out that Pangu Pro MoE is the first large-scale model trained entirely on Huawei’s proprietary Ascend AI chips. It also insisted that its team strictly followed open-source licensing rules when incorporating third-party components, although it did not specify which open-source models, if any, were used as references.
As of the time of writing, Alibaba has not commented on the situation, and HonestAGI has not provided further information.
Chinese tech companies are currently vying for dominance in the generative AI space. Bolstered by government backing and strong investor interest, major players in the industry are in a race to roll out more efficient, powerful, and accessible AI models that can rival global leaders like OpenAI and Google DeepMind.
Huawei was among the first Chinese companies to enter the large language model (LLM) field when it debuted the original Pangu model in 2021. However, the company’s momentum has since slowed compared to competitors like Alibaba, Baidu, and DeepSeek.
In late June, Huawei attempted to reassert itself in the industry by open-sourcing its Pangu Pro MoE models on the Chinese developer platform GitCode. The goal was to attract more developers and promote wider use of its technology by offering free and open access.
The company’s strategy is similar to the one adopted by other Chinese firms following the release of DeepSeek’s open-source R1 model earlier this year.
Alibaba’s Qwen series is regarded as more consumer-oriented. The Qwen 2.5 family, which includes the 14-billion-parameter model at the center of the controversy, is designed for flexible deployment across devices like PCs and smartphones. It also supports chatbot services similar to ChatGPT, making it more immediately visible to the public and end users.
Huawei’s Pangu models, on the other hand, are reportedly geared toward enterprise and government applications, including sectors like finance and manufacturing.
While disputes like the one started by HonestAGI add to international scrutiny of Chinese-made AI models, they also feed the “involution” narrative, the sense of grinding, zero-sum internal competition, festering at home in China’s tech industry.