AI & AI PROMPT SHARING FORUM

AI Discussion => AI News => DeepSeek => Topic started by: Admin on Feb 01, 2025, 12:51 PM

Title: Nvidia says its new GPUs are the fastest for DeepSeek AI
Post by: Admin on Feb 01, 2025, 12:51 PM
Nvidia is praising the performance of DeepSeek's open-source AI models on its new RTX 50-series GPUs, claiming they can run them faster than anything else on the PC market. But that might not be the most important point.

(https://assets.bizclikmedia.net/1200/28c5d44ebf7a24ed4b804f4525cbfcfd:87248c14eb8f3baa703d541a11f5e653/dh1l4415-hdr-20220527-r5.jpg)

This week, Nvidia suffered the biggest one-day loss in market value ever recorded for a U.S. company, and much of that is being attributed to DeepSeek. The Chinese company's new R1 model reportedly matches the performance of OpenAI's models without relying on Nvidia's most powerful hardware, which is how DeepSeek was able to train it at a much lower cost. That suggests Nvidia's top-end chips may not be as essential to AI progress as assumed, which could hurt the company's future prospects.

That said, DeepSeek did use Nvidia GPUs, just not the top-end ones: it trained on the less powerful H800, a chip Nvidia designed to comply with U.S. export rules for China. Nvidia's latest blog post highlights how its new RTX 50-series GPUs can run R1 models locally, claiming they offer maximum inference performance on PCs.
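
For anyone curious to try this on their own RTX card, here is a minimal Python sketch of querying a locally hosted, distilled R1 model through an OpenAI-compatible endpoint. The server URL, port, and model tag below are assumptions based on Ollama's defaults; swap them for whatever local runtime and model you actually run.

# A minimal sketch of querying a locally hosted DeepSeek-R1 distilled model
# through an OpenAI-compatible endpoint. Assumes a local server such as Ollama
# is already running on port 11434 and has pulled a "deepseek-r1" model tag;
# adjust the URL and model name for the runtime you actually use.
import requests

LOCAL_ENDPOINT = "http://localhost:11434/v1/chat/completions"  # assumed Ollama default
MODEL = "deepseek-r1:8b"  # assumed distilled-R1 tag; pick one that fits your GPU's VRAM

def ask(prompt: str) -> str:
    """Send a single chat turn to the local model and return its reply text."""
    response = requests.post(
        LOCAL_ENDPOINT,
        json={
            "model": MODEL,
            "messages": [{"role": "user", "content": prompt}],
            "stream": False,
        },
        timeout=300,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask("Summarize why mixture-of-experts models can be cheaper to train."))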

But the bigger story is how DeepSeek trained its models in the first place, not how fast they run on consumer GPUs. (And remember, China only gets a cut-down version of the RTX 5090 anyway.)

Other companies are also jumping on the DeepSeek train. R1 is now available on AWS, and Microsoft has added it to its Azure AI Foundry platform and GitHub. At the same time, there are reports that Microsoft and OpenAI are investigating whether DeepSeek used OpenAI's model outputs without permission to train its own systems.