
Seek And Deploy: How DeepSeek Is Reshaping U.S. Tech Industry's Competitive Dynamics

Over the past two months, Chinese AI startup DeepSeek has sent shockwaves through the technology industry with the release of two large language models. Competitive dynamics among U.S. technology issuers may have changed as a result, and downstream industries that use these technologies may rethink their deeper AI strategies.

Net beneficiaries have yet to emerge, though a few indicators of the potential credit impact on U.S. technology issuers have appeared, and S&P Global Ratings examines these in this report. The sector is likely to see disruption, particularly in three key areas--training efficiencies, AI adoption, and competition. But ultimately, we see DeepSeek's groundbreaking technology as a normal and healthy part of the sector's growth story.

Deep Changes Ahead For Tech Industry

DeepSeek challenges long-held assumptions about AI development and could reshape the technology industry's competitive dynamics. It claimed extraordinary cost efficiency in its model development, stating it trained one model for approximately $5.6 million using only 2,000 chips--a fraction of the 16,000 chips typically employed by industry leaders. Instead of using massive infrastructure, DeepSeek uses innovative training techniques.
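As a rough sanity check, the headline number is consistent with simple GPU-hour accounting. The inputs below (roughly 2.8 million GPU-hours at a $2-per-GPU-hour rental rate) are assumptions drawn from DeepSeek's own technical disclosures, not independently verified figures:

```python
# Back-of-envelope reconstruction of DeepSeek's claimed training cost.
# Both inputs are assumptions taken from DeepSeek's disclosures, not audited.
gpu_hours = 2.788e6          # reported H800 GPU-hours for the final training run
rate_per_gpu_hour = 2.00     # assumed market rental rate, USD per GPU-hour

training_cost = gpu_hours * rate_per_gpu_hour
print(f"Estimated training cost: ${training_cost / 1e6:.1f} million")
```

Multiplying the two assumed inputs recovers the roughly $5.6 million headline figure, which is why the figure describes only the final training run, not the research that preceded it.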

Perhaps most impressively, it used NVIDIA's Parallel Thread Execution, an intermediate instruction set, which allowed DeepSeek engineers to make fine-grained optimizations, bypassing NVIDIA's CUDA GPU programming software, widely seen as a key moat for the company. The industry already understood many of these techniques, but DeepSeek implemented them in combination to achieve a highly efficient outcome. We expect the leading model makers to research and adopt these techniques to the extent their capabilities will allow.

The $5.6 million figure likely understates the model's full cost, and DeepSeek acknowledges it excludes earlier-stage research and experiments. Still, Anthropic CEO Dario Amodei characterized DeepSeek's achievements as "an expected point on an ongoing cost reduction curve" in AI development. DeepSeek's claims call into question the conventional wisdom that developing increasingly powerful AI models requires ever more computational power--the so-called scaling law that has driven massive investments in data centers and high-end chips. By claiming comparable performance to leading models at a fraction of the cost, DeepSeek pushes the industry toward a greater focus on training efficiency and away from raw computing power.
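The scaling-law intuition can be made concrete with a standard rule of thumb from the AI research literature: total training compute scales as roughly 6 x parameters x training tokens. The model sizes below are hypothetical illustrations, not figures from this report:

```python
def training_flops(params: float, tokens: float) -> float:
    """Common approximation: total training compute ~= 6 * N * D FLOPs,
    where N is parameter count and D is training tokens."""
    return 6 * params * tokens

# Hypothetical comparison: a 70B-parameter model versus a 7B-parameter model,
# each trained on the same 15 trillion tokens.
large = training_flops(70e9, 15e12)
small = training_flops(7e9, 15e12)
print(f"Large model: {large:.2e} FLOPs; small model: {small:.2e} FLOPs")
print(f"Compute ratio: {large / small:.0f}x")
```

Under this approximation, a 10x larger model needs 10x the compute on the same data, which is the cost curve DeepSeek's efficiency claims push against.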

For U.S. technology issuers, the credit implications aren't cut and dried. The very training efficiency gains that sparked a recent plunge in the share prices of companies like NVIDIA Corp. could accelerate AI adoption, as described by the Jevons Paradox: the principle that making a resource more efficient often increases its total consumption. As AI becomes more cost-effective, we expect broader AI implementation across industries, potentially driving greater aggregate demand for computing resources despite lower per-model requirements.
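The Jevons Paradox can be illustrated with hypothetical numbers: if per-model compute falls 10x but the lower cost draws 25x as many deployments, aggregate demand still grows.

```python
# Hypothetical illustration of the Jevons Paradox for AI compute demand.
# All figures are made up for the arithmetic; none come from this report.
compute_per_model = 100.0    # arbitrary compute units before the efficiency gain
models_deployed = 40

# Efficiency gain: each model now needs 10x less compute...
new_compute_per_model = compute_per_model / 10
# ...but cheaper AI attracts far more adopters (assumed 25x here).
new_models_deployed = models_deployed * 25

before = compute_per_model * models_deployed          # 4,000 units
after = new_compute_per_model * new_models_deployed   # 10,000 units
print(f"Aggregate compute demand: {before:.0f} -> {after:.0f} units")
```

Whether adoption actually grows faster than efficiency improves is the open question; the arithmetic only shows that the two effects can pull aggregate demand in opposite directions.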

If training efficiencies advance far enough, we believe they would drive value away from infrastructure--where it mainly resides today due to the scarcity and importance of GPUs and high-end components. Instead, value would move toward platforms and software, whose participants must currently pay a hefty price for access to that infrastructure, leaving their returns on investment thin.

In a world of falling training costs and expanding AI adoption, infrastructure investment priorities would shift from training to "inference"--deploying trained models to respond to real-world queries. This trend could slow growth in demand for the highest-end GPUs and open opportunities for custom silicon, CPUs, and other less expensive AI accelerators.

DeepSeek's models also increase the prospects for edge AI in personal computers and smartphones. This could catalyze an upgrade cycle for device original equipment manufacturers and open even more use cases.

Finally, DeepSeek's innovations may also lower barriers to entry in AI development by reducing training costs. This opens the field to more players and increases competition.

Role Models And The Role Of Models

One controversial aspect of DeepSeek's methodology is its use of model distillation--a process by which a student model asks a teacher model hundreds of thousands or millions of questions and uses the answers as part of its training data. While DeepSeek has acknowledged using distillation on open-source models from Meta and Alibaba in the past (which the open source community typically supports), OpenAI says it has evidence that DeepSeek also distilled its proprietary models, which would violate its terms of service.
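Mechanically, distillation amounts to querying the teacher at scale and recording its answers as training data for the student. A toy sketch of the data-collection step (the `teacher` function here is a stand-in, not any real model API):

```python
import random

def teacher(prompt: str) -> str:
    """Stand-in for a teacher model's completion endpoint (hypothetical)."""
    return f"answer to: {prompt}"

def build_distillation_set(prompts: list[str], n_samples: int) -> list[dict]:
    """Query the teacher repeatedly and collect (prompt, completion) pairs
    that the student can later train on."""
    dataset = []
    for _ in range(n_samples):
        prompt = random.choice(prompts)
        dataset.append({"prompt": prompt, "completion": teacher(prompt)})
    return dataset

# In practice the student asks hundreds of thousands of questions;
# three samples suffice to show the shape of the resulting data.
data = build_distillation_set(["What is 2+2?", "Define EBITDA."], n_samples=3)
print(data[0])
```

Because the teacher is reached only through its question-answering interface, this is also why distillation is hard to prevent technically: to the teacher, a distilling student looks like any other high-volume user.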

Distillation may be a way for followers to catch up quickly, but we haven't seen much evidence that it can push the frontier forward. It also creates two problems for the industry that we will closely watch: Who will invest to train the next frontier model if it can be distilled a few months later? And can distillation be effectively limited?

Still, the success of DeepSeek's open-source approach may boost the competitiveness of open-source models against proprietary ones. With that boost, the commoditization of foundational models would likely accelerate.

DeepSeek's success also raises questions about the effectiveness of export controls on advanced semiconductors. That DeepSeek achieved these breakthroughs despite limited access to cutting-edge U.S. chips suggests that raw computing power may be less critical than previously thought. This development, combined with China's demonstrated AI competitiveness, suggests that further restrictions might provoke China to accelerate domestic semiconductor development.

Table 1

Potential impacts on key stakeholders

NVIDIA Corp.: NVIDIA will maintain its very strong market position at the cutting edge of compute. While we don't expect any ratings impact, efficiencies in training could slow demand growth in the near term and increase spending on inference. This would increase competition for NVIDIA given lower-cost alternatives, likely leading to some erosion of its nearly 70% S&P Global Ratings-adjusted EBITDA margin. Furthermore, DeepSeek demonstrated that it is possible to use NVIDIA's GPUs without relying on its programming software, one of NVIDIA's key competitive advantages.

Training infrastructure: Companies overindexed to training workloads and NVIDIA GPU allocations may face challenges given the increased focus on training efficiency and the potential shift in priority toward inference infrastructure.

Frontier model makers: Lower training costs shrink barriers to entry for upstart model makers, potentially resulting in the commoditization of foundational models. Preventing distillation becomes increasingly challenging.

U.S. export restrictions: China delivered a competitive model despite U.S. export restrictions. Either training can be done with less powerful chips allowed under current controls, or Chinese companies have found a way to access restricted chips.

Software: Lower costs and the potential commoditization of foundational models could improve the ROI of AI use cases, increasing AI adoption.

Apple Inc.: Like software makers, Apple will benefit from lower training costs and cheaper models. It was the lone big tech player that refrained from significant AI spending. It could also benefit from an upgrade cycle to facilitate edge AI--which combines edge computing with AI to process data locally--and its App Store will be a leading distribution channel for edge AI applications.

Platforms (e.g., Microsoft, Amazon): Platforms have all the pieces enterprises need to plug an array of models into their systems and processes (Microsoft Azure hosts over 1,800 models, including DeepSeek R1). Efficiency advances could help slow capital spending.

China: China has proven that it can catch up in the AI race quickly and innovate around U.S. export controls.

Custom chip and CPU makers: Custom chipmakers have a growing training business because NVIDIA's chips are so expensive; this business could be hurt if NVIDIA becomes more price competitive. However, they could benefit if the training chip mix shifts in favor of non-GPU accelerators. Investment in inference will expand the competitiveness of non-GPU accelerators and possibly CPUs, improving their positions in the space.

Meta and Alphabet: As frontier model makers, Meta and Alphabet will face more competition and an increasing risk of distillation. Meta in particular faces a new open-source competitor. The value of Alphabet's TPU custom accelerator as a differentiator may fall if GPUs get relatively cheaper. But both are massive consumers of AI in their core businesses, which would benefit from greater efficiency and faster innovation.

Leading Indicators: Frontier Modelers, Hyperscalers, Picks And Shovels

For now, we have not taken any rating actions on U.S. technology companies based on the DeepSeek developments. Many of our hypotheses require more time to test, and the magnitude of any change is difficult to judge. Furthermore, many potentially challenged issuers have ample cushion within their ratings to absorb a modestly weaker competitive position. We already have a favorable view of the business profiles of the potential beneficiaries.

That said, indicators we're watching include comments from the leading model makers, spending patterns of the hyperscale data center providers, and guidance from leading AI semiconductor companies--the so-called "picks and shovels" of AI investment. We will also pay close attention to insights from frontier model makers OpenAI, Anthropic, Meta, and Alphabet regarding how the competitive landscape is changing.

Spending plans from hyperscale data center providers Microsoft Corp., Alphabet Inc., Meta Platforms Inc., Amazon.com Inc., and Oracle Corp. will also reveal oncoming shifts in AI. These plans are downstream from the model makers; therefore, if model makers keep their training plans as closely held secrets, their intentions will eventually show up in hyperscale spending.

Guidance from Microsoft's and Meta's recent earnings suggests that their trajectories have not yet changed. Microsoft's fiscal third- and fourth-quarter capital expenditure (capex) guidance is similar to the second quarter's, indicating about 60% growth adjusted for leases (40% on a cash basis) for the fiscal year ending in June. It also said fiscal 2026 capex will increase, but at a slower pace than in fiscal 2025. Furthermore, the investment mix will shift toward short-lived assets such as data center equipment and away from long-lived assets such as land and buildings, which are pushing up fiscal 2025 capex. This means spending on data center equipment will be stronger than headline fiscal 2026 capex growth suggests.

Meta also maintained its capex guidance, which suggests 68% growth at the midpoint of the range. CEO and founder Mark Zuckerberg said, "I continue to think that investing very heavily in capex and infra is going to be a strategic advantage over time. It's possible that we'll learn otherwise at some point, but I just think it's way too early to call that." We think he left open the possibility for some capex consolidation in the future, but that massive infrastructure will continue to provide Meta a competitive advantage.

Other major signs we will monitor are results and guidance from leading AI semiconductor beneficiaries. In order of importance, these are NVIDIA, memory makers (SK Hynix Inc., Micron Technology Inc., and Samsung Electronics Co. Ltd.), and custom chip makers (Broadcom Inc. and Marvell Technology Inc.). These indicators will provide insight into how the mix of hyperscale spending is changing.

Finally, the progression of the $500 billion Stargate Project and OpenAI's reported effort to raise $40 billion at a $340 billion valuation are points to watch. If these efforts are delayed or derailed, it could indicate that the increased focus on training efficiency we expect is taking hold.


This report does not constitute a rating action.

Primary Credit Analyst:Christian Frank, San Francisco + 1 (415) 371 5069;
christian.frank@spglobal.com
Secondary Contact:David T Tsui, CFA, CPA, San Francisco + 1 415-371-5063;
david.tsui@spglobal.com

No content (including ratings, credit-related analyses and data, valuations, model, software, or other application or output therefrom) or any part thereof (Content) may be modified, reverse engineered, reproduced, or distributed in any form by any means, or stored in a database or retrieval system, without the prior written permission of Standard & Poor’s Financial Services LLC or its affiliates (collectively, S&P). The Content shall not be used for any unlawful or unauthorized purposes. S&P and any third-party providers, as well as their directors, officers, shareholders, employees, or agents (collectively S&P Parties) do not guarantee the accuracy, completeness, timeliness, or availability of the Content. S&P Parties are not responsible for any errors or omissions (negligent or otherwise), regardless of the cause, for the results obtained from the use of the Content, or for the security or maintenance of any data input by the user. The Content is provided on an “as is” basis. S&P PARTIES DISCLAIM ANY AND ALL EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, ANY WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE OR USE, FREEDOM FROM BUGS, SOFTWARE ERRORS OR DEFECTS, THAT THE CONTENT’S FUNCTIONING WILL BE UNINTERRUPTED, OR THAT THE CONTENT WILL OPERATE WITH ANY SOFTWARE OR HARDWARE CONFIGURATION. In no event shall S&P Parties be liable to any party for any direct, indirect, incidental, exemplary, compensatory, punitive, special or consequential damages, costs, expenses, legal fees, or losses (including, without limitation, lost income or lost profits and opportunity costs or losses caused by negligence) in connection with any use of the Content even if advised of the possibility of such damages.

Credit-related and other analyses, including ratings, and statements in the Content are statements of opinion as of the date they are expressed and not statements of fact. S&P’s opinions, analyses, and rating acknowledgment decisions (described below) are not recommendations to purchase, hold, or sell any securities or to make any investment decisions, and do not address the suitability of any security. S&P assumes no obligation to update the Content following publication in any form or format. The Content should not be relied on and is not a substitute for the skill, judgment, and experience of the user, its management, employees, advisors, and/or clients when making investment and other business decisions. S&P does not act as a fiduciary or an investment advisor except where registered as such. While S&P has obtained information from sources it believes to be reliable, S&P does not perform an audit and undertakes no duty of due diligence or independent verification of any information it receives. Rating-related publications may be published for a variety of reasons that are not necessarily dependent on action by rating committees, including, but not limited to, the publication of a periodic update on a credit rating and related analyses.

To the extent that regulatory authorities allow a rating agency to acknowledge in one jurisdiction a rating issued in another jurisdiction for certain regulatory purposes, S&P reserves the right to assign, withdraw, or suspend such acknowledgement at any time and in its sole discretion. S&P Parties disclaim any duty whatsoever arising out of the assignment, withdrawal, or suspension of an acknowledgment as well as any liability for any damage alleged to have been suffered on account thereof.

S&P keeps certain activities of its business units separate from each other in order to preserve the independence and objectivity of their respective activities. As a result, certain business units of S&P may have information that is not available to other S&P business units. S&P has established policies and procedures to maintain the confidentiality of certain nonpublic information received in connection with each analytical process.

S&P may receive compensation for its ratings and certain analyses, normally from issuers or underwriters of securities or from obligors. S&P reserves the right to disseminate its opinions and analyses. S&P's public ratings and analyses are made available on its Web sites, www.spglobal.com/ratings (free of charge), and www.ratingsdirect.com (subscription), and may be distributed through other means, including via S&P publications and third-party redistributors. Additional information about our ratings fees is available at www.spglobal.com/usratingsfees.
