

Market Analysis
Image Credit: Reuters
The rise of DeepSeek's artificial intelligence (AI) models is giving Chinese chipmakers, including Huawei, a better chance to compete domestically against more powerful U.S. processors.
For years, Huawei and other Chinese chipmakers have struggled to match Nvidia in developing high-end chips for training AI models, the process in which data is fed to algorithms so they learn to make more accurate predictions. DeepSeek's models, however, focus on "inference," the phase in which a trained model draws conclusions from new input, and they prioritize computational efficiency over raw processing power.
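The training-versus-inference distinction can be made concrete with a toy sketch in pure Python (all names and the one-weight "model" here are illustrative, not anything DeepSeek or Huawei actually runs): training repeatedly adjusts parameters over many passes through data, which is compute-hungry, while inference is a single cheap application of the finished parameters.

```python
# Toy illustration of training vs. inference.
# "Model": a single weight w in y = w * x; all names are hypothetical.

def train(samples, epochs=100, lr=0.01):
    """Training: repeatedly adjust w to reduce error on (x, y) samples.
    Many passes over the data -- this is the compute-intensive phase."""
    w = 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = w * x
            grad = 2 * (pred - y) * x   # gradient of squared error w.r.t. w
            w -= lr * grad              # update the parameter
    return w

def infer(w, x):
    """Inference: apply the already-trained weight once; no updates."""
    return w * x

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # samples of y = 2x
w = train(data)
print(round(infer(w, 4.0), 2))  # converges near 8.0
```

The asymmetry is the point of the article: inference needs far less hardware muscle per query than training, which is why less powerful chips can still serve it competitively.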
Analysts believe this focus on efficiency could help bridge the gap between Chinese-made AI processors and their more powerful U.S. counterparts. Huawei and other Chinese companies, such as Hygon, Tencent-backed EnFlame, Tsingmicro, and Moore Threads, have recently indicated that their products will support DeepSeek models, though few details have been disclosed.
Huawei declined to comment, while Moore Threads, Hygon, EnFlame, and Tsingmicro did not respond to Reuters’ requests for further information.
Industry experts predict that DeepSeek's open-source nature and low fees will encourage AI adoption and the development of practical applications, helping Chinese companies overcome U.S. export restrictions on their most advanced chips. Even before DeepSeek gained attention, Huawei’s Ascend 910B was already considered better suited for less computationally demanding “inference” tasks, such as running chatbots or making predictions.
In China, numerous companies, from automakers to telecom providers, have announced plans to integrate DeepSeek's models into their products and operations.
“This development aligns with the capabilities of Chinese AI chip vendors,” said Lian Jye Su, chief analyst at Omdia. “Chinese AI chips have struggled to compete with Nvidia’s GPUs in AI training, but inference tasks are more forgiving and require local, industry-specific knowledge.”
Despite these advancements, Nvidia still dominates the market. Bernstein analyst Lin Qingyuan noted that while Chinese AI chips are cost-effective for inference, this advantage is mainly limited to the Chinese market, as Nvidia's chips are still superior, even for inference tasks.
While U.S. export restrictions bar Nvidia's most advanced AI training chips from China, the company may still sell less powerful chips there, and those remain adequate for inference tasks.
Nvidia also highlighted in a recent blog post that inference-time compute is emerging as a new scaling law, arguing that its chips remain crucial for making DeepSeek and other "reasoning" models more effective.
Beyond raw computing power, Nvidia's CUDA platform, which allows developers to use Nvidia GPUs for general-purpose computing, has become a key part of its market dominance. Many Chinese AI chipmakers have not directly challenged CUDA but instead claimed their chips are compatible with it.
Huawei has been the most proactive in trying to break free from Nvidia by offering an alternative called Compute Architecture for Neural Networks (CANN), but experts suggest it faces challenges in convincing developers to abandon CUDA.
“Chinese AI chip firms still lag in software performance,” said Omdia’s Su. “CUDA offers a comprehensive library and diverse software capabilities, which require significant long-term investment.”
Paraphrased from Reuters; all rights reserved by the original author.