Chinese social media platform RedNote, known domestically as Xiaohongshu, released its first open-source large language model (LLM) last Friday. The new model, dubbed “dots.llm1,” contains 142 billion parameters in total, but reportedly activates only 14 billion of them for each response.
According to the South China Morning Post, this architecture could help the LLM balance performance with cost-efficiency, rivaling competitors like OpenAI’s ChatGPT while reducing the expense of both training and inference.
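Activating only a fraction of a model's parameters per response is the hallmark of sparse mixture-of-experts (MoE) routing: a small router picks a few "expert" sub-networks for each token, so most weights sit idle on any given forward pass. The toy sketch below illustrates the general idea at miniature scale; all sizes and names are illustrative assumptions, not details of dots.llm1 itself.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 8   # hypothetical expert count (toy scale)
TOP_K = 2         # experts actually run per token
D_MODEL = 16      # hidden size (toy scale)

# Router: scores every expert for a given token representation.
router_w = rng.standard_normal((D_MODEL, NUM_EXPERTS))
# Each expert: a plain linear map standing in for a feed-forward block.
experts = [rng.standard_normal((D_MODEL, D_MODEL)) for _ in range(NUM_EXPERTS)]

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route token vector x to its top-k experts and mix their outputs."""
    logits = x @ router_w
    top = np.argsort(logits)[-TOP_K:]        # indices of the k highest-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                 # softmax over the selected experts only
    # Only TOP_K of NUM_EXPERTS experts execute, so most parameters stay idle,
    # which is how a large total parameter count can coexist with cheap inference.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(D_MODEL)
out = moe_forward(token)
print(out.shape)  # (16,)
```

In a model like dots.llm1, the same principle would mean that roughly 14 of 142 billion parameters participate in each response, keeping per-token compute close to that of a much smaller dense model.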
RedNote’s internal Humane Intelligence Lab, or “hi lab,” which evolved from the company’s previous artificial intelligence team, developed the LLM. RedNote said its model outperforms other open-source systems in Chinese language understanding, surpassing Alibaba’s Qwen2.5-72B-Instruct and DeepSeek-V3.
RedNote issued a statement explaining the standards behind the training of its LLM. The company asserted that, unlike some other models on the market, dots.llm1 used no synthetic data during pretraining. Developers said the model was trained on 11.2 trillion tokens of non-synthetic data, an approach RedNote says is essential to achieving higher fidelity and more reliable results.
The company has also begun trialing an AI research assistant called Diandian on its platform. Diandian, launched via a dialogue box within the app, features a “deep research” function and is powered by one of RedNote’s in-house models. Still, the company has yet to confirm if this assistant is based on dots.llm1.
RedNote’s open-source AI announcement came just a day prior to the company’s opening of a new office in Hong Kong, its first outside of mainland China. The new location is situated in Times Square, a commercial area in Causeway Bay.
“RedNote’s presence will improve the interactions between local content creators, brands and organisations, and promote East-meets-West cultural exchanges and content marketing development among Hong Kong, the Mainland and the global markets,” InvestHK’s Director-General of Investment Promotion Alpha Lau told reporters during a press conference last Saturday.
RedNote, headquartered in Shanghai, is one of China’s most widely used social media platforms, with 300 million monthly active users. Per company officials, the expansion is part of plans to increase RedNote’s overseas reach, in preparation for a potential TikTok ban in the United States.
RedNote joins a growing list of Chinese firms that have moved toward open-sourcing their large language models. More companies are trying to mirror the success of low-cost, high-performance models like those released by the startup DeepSeek.
Earlier this year, DeepSeek launched its open-source R1 model, which topped downloads on several app stores for delivering strong results at a fraction of the cost associated with Western LLMs.
Tech giants Alibaba, Tencent, and ByteDance have made significant investments in AI infrastructure. Alibaba, for instance, has released several new LLMs as part of its Qwen series, including the latest Qwen3 Embedding models, which support more than 100 languages and handle both code and text retrieval.
Alibaba said the Qwen3 models have improved efficiency and performance in embedding and reranking systems. Speaking earlier this year, Wang Jian, founder of Alibaba Cloud, claimed that the progress of large language models is exceeding expectations and will continue to do so.
Wang mentioned startups like DeepSeek as examples of how young innovators solve problems with creative approaches.
According to Wang, Alibaba’s ZEROSEARCH demonstrates how innovation can significantly lower development costs. ZEROSEARCH, showcased in May, is designed to simulate search engine behavior during training without making actual API calls. The company claims this can reduce training costs by up to 90%.
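The idea of simulating a search engine during training can be sketched simply: instead of paying for live API calls, a stand-in retrieval function returns canned or model-generated documents so the system can still practice a query-retrieve-answer loop. The snippet below is a minimal illustration of that concept; every name and the dictionary-backed index are hypothetical assumptions, not Alibaba's actual ZEROSEARCH implementation (which reportedly uses an LLM to simulate the search engine).

```python
# Hypothetical sketch: a zero-cost stand-in for a live search API.
# A small in-memory "index" replaces real network calls during training.

FAKE_INDEX = {
    "moe architecture": [
        "Mixture-of-experts models activate a subset of parameters per token.",
        "Sparse routing reduces inference cost versus dense models.",
    ],
    "open source llm": [
        "Several Chinese firms have released open-weight language models.",
    ],
}

def simulated_search(query: str, k: int = 2) -> list[str]:
    """Return up to k 'documents' for a query without any network call."""
    for key, docs in FAKE_INDEX.items():
        if key in query.lower():
            return docs[:k]
    return ["No relevant document found."]  # graceful fallback

def training_step(query: str) -> str:
    docs = simulated_search(query)
    # In a real pipeline the model would be trained against these documents;
    # here we simply concatenate them as mock retrieval context.
    return " ".join(docs)

print(training_step("What is a MoE architecture?"))
```

Because every retrieval is served locally, a training run can issue millions of simulated queries at no API cost, which is the mechanism behind the claimed savings of up to 90%.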