Amid the rising costs of developing and maintaining AI and a limited supply of hardware, DeepSeek has presented a new plan for developing and scaling artificial intelligence (AI).
The China-based start-up believes it can build significantly better AI models without necessarily adding more chips and thereby increasing power consumption. Although the proposed mHC concept has garnered significant attention from researchers in the field, it is generally considered to still be at an early stage.
Further research will be required to determine how well the approach holds up when building larger AI systems. A technical paper detailing the mHC concept was released last week, co-authored by Liang Wenfeng, DeepSeek’s founder and CEO.
One of the main components of the work is a re-evaluation of how information is transferred between the layers of a multi-layered neural network.
In a residual network (ResNet), each layer adds its processed output back to the information it receives, creating a ‘residual’ shortcut that lets information flow directly through the network as well as being transformed at each layer. Developed by Microsoft Research’s Kaiming He and colleagues approximately ten years ago, ResNets provided the foundation for a number of today’s most advanced AI systems.
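As a rough illustration of the residual idea, here is a minimal PyTorch sketch; the dimensions, layer choices, and names are arbitrary assumptions for demonstration, not code from any of the papers discussed:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """One residual block: the layer's output is added back to its input,
    so information also flows straight through via the identity path."""

    def __init__(self, dim: int):
        super().__init__()
        self.transform = nn.Sequential(
            nn.Linear(dim, dim),
            nn.ReLU(),
            nn.Linear(dim, dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Skip connection: x + F(x). Gradients can flow through the
        # identity term, which is what makes very deep stacks trainable.
        return x + self.transform(x)

# Stacking many blocks gives a deep network with a single residual "lane".
model = nn.Sequential(*[ResidualBlock(64) for _ in range(8)])
out = model(torch.randn(2, 64))  # shape: (2, 64)
```

The key design point is the identity path: because each block only adds a correction to what it received, very deep stacks remain trainable.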
DeepSeek’s concept builds on Hyper-Connections, an idea ByteDance introduced in 2024. Hyper-Connections allow information to travel along multiple routes through a network, rather than just one main path, which can speed up learning and enrich what the network learns.
However, while they can be beneficial, they can also cause training problems, with models becoming unstable during training or failing outright.
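To give a flavour of the multi-path idea, the following is a hedged sketch of a hyper-connection-style block. It illustrates the general mechanism only; the class name, the parameterisation of the read/write/mix weights, and the number of streams are all assumptions, not ByteDance’s or DeepSeek’s actual implementation:

```python
import torch
import torch.nn as nn

class HyperConnectionBlock(nn.Module):
    """Sketch of a hyper-connection-style block: instead of one residual
    stream, the network keeps several parallel streams and mixes them
    with learnable weights. All details here are illustrative."""

    def __init__(self, dim: int, n_streams: int = 4):
        super().__init__()
        self.transform = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim)
        )
        # Learnable connections: how streams combine into the layer input,
        # how the output is written back, and how streams remix each other.
        self.read = nn.Parameter(torch.full((n_streams,), 1.0 / n_streams))
        self.write = nn.Parameter(torch.ones(n_streams))
        self.mix = nn.Parameter(torch.eye(n_streams))

    def forward(self, streams: torch.Tensor) -> torch.Tensor:
        # streams: (n_streams, batch, dim)
        layer_in = torch.einsum("s,sbd->bd", self.read, streams)
        layer_out = self.transform(layer_in)
        # Each stream becomes a remix of the existing streams plus a
        # weighted copy of the new output -- the "multiple lanes".
        mixed = torch.einsum("st,tbd->sbd", self.mix, streams)
        return mixed + self.write[:, None, None] * layer_out[None]

block = HyperConnectionBlock(dim=64, n_streams=4)
streams = torch.randn(4, 2, 64)
out = block(streams)  # same shape: (4, 2, 64)
```

The extra learnable mixing weights are precisely where the flexibility, and the reported instability, comes from: with no constraints on how the lanes interact, signals can amplify or interfere as they propagate.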
According to Song Linqi of City University of Hong Kong, DeepSeek’s research is a progression of an existing idea, in line with how DeepSeek tends to build on other companies’ work rather than inventing something from the ground up.
Song compared ResNet to a one-lane expressway and Hyper-Connections to a multi-lane expressway, but cautioned that multiple lanes without proper rules may lead to more collisions.
Professor Guo Song of the Hong Kong University of Science and Technology believes the paper may signal a shift in how AI research is done. Instead of continuing to make small modifications to existing model designs, he feels research may move towards developing new models from theoretical constructs.
While there is excitement over the milestone mHC represents for deep learning, experts have stressed that the research is still preliminary. DeepSeek’s experiments used only four parallel paths, on models of up to 27 billion parameters.
“The experiments validated models up to 27 billion parameters, but how would it perform on today’s frontier models that are an order of magnitude larger?” Guo asked.
Today’s leading AI models are far larger, typically with hundreds of billions of parameters, compared with the roughly 30 billion that was standard just a few years ago.
Guo added that no one can yet conclude whether mHC will work at the frontier of AI technology. He also noted that the infrastructure mHC requires may be too demanding for smaller research institutions and for companies deploying models on mobile devices.
According to Cryptopolitan, DeepSeek rose to prominence with the release of its DeepSeek V3 large language model, followed by its DeepSeek-R1 reasoning model only a couple of weeks later.
In benchmark tests, both models matched or exceeded the results of their competitors despite being trained on only a fraction of the training data used by competing language models.