NVIDIA’s third-quarter earnings report for this year is anything but bad; if anything, it exceeded expectations. In the fiscal third quarter of 2026, the company reported revenue of $57 billion, a year-over-year increase of 62%, and net income of $31.9 billion, up 65% from the same period last year.
However, Wall Street’s reaction was surprisingly tepid. NVIDIA’s stock rose more than 5% in early trading on Thursday but reversed course, falling $5.88 to close at $180.64, a decline of 3.15% that wiped out approximately $142.9 billion in market value.
Overnight, NVIDIA’s stock plunged, erasing about $115 billion of the AI chipmaker’s market capitalization. Although the stock ultimately closed down 2.6%, it had been down more than 7% at one point during the session, and shares of companies tied to NVIDIA fell in tandem.
For a company that had been riding high and seemed poised to set new records, the cold shoulder from the capital markets was nothing short of a wake-up call. GPUs are selling better than ever, yet the stock keeps falling, leaving those of us on the sidelines to ask:

What’s going on with NVIDIA?
Why the Sharp Drop?
While this round of correction came suddenly, many seemed to have braced themselves for it. After multiple quarters of rapid growth, market expectations for NVIDIA had been built up to extraordinarily high levels, and any negative factors were bound to be magnified. One of these factors was a series of moves by Google, which played a significant role in NVIDIA’s sharp drop on Thursday evening.
According to the Financial Times, Google recently released its latest large language model, Gemini 3, which is considered to have surpassed OpenAI’s ChatGPT. Notably, Google’s model was trained on its own Tensor Processing Units (TPUs) rather than the NVIDIA chips used by systems like OpenAI’s. JonesTrading analyst Mike O’Rourke commented that the release of Gemini 3 “may prove to be a more subtle but more important version of the disruption caused by DeepSeek.”
Later on Monday evening, a report from The Information suggested that Google is actively pitching its TPU chips to potential customers—including Meta—for use in their own data centers, instead of relying on NVIDIA’s chips. Previously, Google’s TPUs were only available for lease through its cloud computing services. Both Meta and OpenAI are among NVIDIA’s largest customers.
On Tuesday, NVIDIA acknowledged the impact of Gemini, stating: “We’re happy for Google — they’ve made tremendous progress in AI, and we’ll continue to provide them with products and services.” The company added: “NVIDIA is a full generation ahead — it’s the only platform that can run all AI models and can operate anywhere computing is done.”
In a media interview, NVIDIA reiterated: “We’re very happy for Google — they’ve made great progress in AI, and we will continue to supply them. We believe our chips outperform ASIC chips, such as Google’s TPU.”
Looking more deeply, NVIDIA’s pullback appears to be the result of several factors converging at once: valuation pressure, shifting expectations for industry momentum, real challenges in AI investment returns, fluctuations in macro liquidity, and cautious commentary from some prominent investors regarding the current AI boom. As the “anchor” of the entire AI investment chain, NVIDIA’s stock naturally bore the brunt.
First is the issue of valuation. Over the past two years, the explosive growth in demand for AI computing power has almost positioned NVIDIA as “irreplaceable,” sending its stock price to historic highs. The market has long been accustomed to a single narrative: as long as AI continues to expand, NVIDIA will keep soaring.
However, as growth rates shift from “extremely high” to merely “high,” inevitable corrections in market expectations follow. Some fund managers pointed out that NVIDIA’s valuation at certain stages had already baked in a multi-year path of perfect growth. When guidance becomes even slightly conservative, the stock is prone to sharp volatility. The recent market correction is a manifestation of this recalibration process.
More critically, the capital return cycle for AI is entering a new phase. While businesses, cloud service providers, and startups continue to expand their computing infrastructure, the actual returns at the application layer have yet to fully materialize. Some investors are beginning to question whether the pace of AI server purchases can be sustained in the second and third waves as it was in the first. Especially given the rapidly rising costs of AI model training and persistently high inference costs, the market is increasingly focused on whether “inputs match outputs.” Even though NVIDIA still holds core supply capabilities, even a slight slowdown in AI demand during its transition from explosive growth to stabilization is enough to trigger emotional shifts in the capital markets.
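The “inputs versus outputs” question is ultimately arithmetic: amortized hardware cost plus operating cost against revenue per hour of useful work. Here is a minimal back-of-the-envelope sketch; all figures are hypothetical round numbers chosen only to illustrate the calculation, not actual market prices.

```python
# Toy break-even check for an AI inference deployment.
# Every number below is hypothetical and for illustration only.

def breakeven_utilization(server_cost, lifetime_years, power_cost_per_hour,
                          revenue_per_busy_hour):
    """Fraction of hours a server must be busy to cover its costs."""
    hours = lifetime_years * 365 * 24
    hourly_capex = server_cost / hours        # amortized hardware cost per hour
    # Revenue during busy hours must cover amortized capex plus power
    # for every hour the server exists, busy or idle.
    return (hourly_capex + power_cost_per_hour) / revenue_per_busy_hour

# Hypothetical: $250k server, 4-year life, $1/hour power, $12 per busy hour.
u = breakeven_utilization(250_000, 4, 1.0, 12.0)
print(f"break-even utilization: {u:.0%}")  # → break-even utilization: 68%
```

Under these made-up assumptions the server must be earning revenue roughly two-thirds of the time just to break even, which is why investors watch utilization and inference pricing so closely.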
At the same time, the cautious stance of well-known investor Michael Burry—the real-life figure portrayed in The Big Short—has also become a significant trigger for shifting market sentiment. Burry has repeatedly expressed concern about “overcrowded trades,” particularly those involving tech giants with high valuations and heavy institutional ownership.
In his view, the current AI boom shares structural similarities with the dot-com bubble of 1999—not because the technology lacks value, but because the market has front-loaded its worth too quickly. He emphasizes that while the long-term prospects of AI are unquestionable, capital markets often reach inflection points precisely when “the story sounds the most compelling.” With NVIDIA’s market cap repeatedly hitting record highs, Burry’s views have resurfaced in social media and research circles, further amplifying investor caution.
Burry’s perspective is not isolated. Analysts from major firms such as Morgan Stanley and Bernstein have also begun warning clients that the AI hardware supply chain may face changes in demand rhythms over the next few quarters. While such changes may not necessarily imply pessimism on a fundamental level, they are enough to trigger volatility in the capital markets.
Even some long-time bulls of NVIDIA admit that the scale of the past two years’ gains has been so massive that “even the best companies can’t keep climbing at the same steep trajectory forever.” These voices are often ignored during periods of rapid market ascent but are seen as “belated warnings” when volatility intensifies.
Meanwhile, the macro environment is another inescapable backdrop. Surprisingly strong U.S. employment data has weakened expectations for Federal Reserve rate cuts, putting pressure on tech stocks overall—because the higher a company’s valuation and the more its cash flows are tied to the future, the more sensitive it is to interest rates. NVIDIA sits squarely in that sensitive zone.
When rising rate expectations are repriced, the tech sector faces systemic selling pressure. Even a fundamentally solid company like NVIDIA can hardly remain unscathed. In this context, market sentiment is more easily amplified, and even minor developments within the industry or the company can become triggers for large-scale profit-taking.
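The rate-sensitivity argument above is plain discounting arithmetic: the further out a cash flow sits, the more its present value falls when the discount rate rises. A small sketch with illustrative numbers:

```python
# Why long-duration cash flows are rate-sensitive: present value of a
# distant cash flow falls much faster with the discount rate than a
# near-term one. Rates and amounts here are purely illustrative.

def present_value(cash_flow, rate, years):
    return cash_flow / (1 + rate) ** years

for years in (2, 10):
    pv_low = present_value(100, 0.03, years)    # 3% discount rate
    pv_high = present_value(100, 0.05, years)   # 5% discount rate
    drop = 1 - pv_high / pv_low
    print(f"{years}-year cash flow loses {drop:.1%} of PV when rates go 3%->5%")
```

A two-percentage-point rate move trims a 2-year cash flow by under 4% but a 10-year cash flow by about 17%, which is why a company valued largely on future AI revenue sits in the sensitive zone.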
From within the industry, certain signs have also sparked market speculation. A number of companies are pushing to develop their own chips to reduce long-term dependence on NVIDIA. While these “in-house alternatives” are unlikely to upend NVIDIA’s dominance in the short term, they remind investors that the AI competitive landscape is shifting from “compute scarcity” to “application competition,” and that hardware monopoly premiums will eventually normalize.
Perhaps the most worrying phenomenon, however, is the “circularity” of investment.
Take the recent $100 billion deal between NVIDIA and OpenAI: NVIDIA is investing in building data centers for OpenAI, which in turn will equip those centers with NVIDIA chips. Some analysts say this structure essentially amounts to NVIDIA subsidizing one of its largest customers, artificially inflating AI demand.
“Put simply, I’m NVIDIA, I want OpenAI to buy more of my chips, so I give them money to do it,” said one analyst. “This kind of thing happens in small-scale deals, but at the scale of tens or hundreds of billions of dollars, it’s highly unusual.” He pointed out that the last time such practices were widespread was during the dot-com bubble.
Even less well-known players have joined the game. CoreWeave, a former crypto mining startup turned data center builder, is riding the AI wave. Various AI companies have chosen CoreWeave to train and run their models.
OpenAI and CoreWeave have struck deals worth tens of billions of dollars: CoreWeave rents out data center compute capacity to OpenAI in exchange for equity, and OpenAI can use that equity to pay for the leased capacity.
At the same time, NVIDIA, which also holds a stake in CoreWeave, has agreed to absorb all of CoreWeave’s unused data center capacity through 2032.
In short, NVIDIA and most of the other major AI players are, in effect, pulling themselves up by stepping on each other’s feet: the higher NVIDIA’s stock climbs, the more concern grows about the bubbles hidden beneath it.
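The circular structure described above can be reduced to a toy ledger: a chipmaker funds a customer, and the customer spends that funding on the chipmaker’s products. The amounts and names below are entirely hypothetical; the point is only that reported revenue can grow even when little net cash enters the loop from outside.

```python
# Toy ledger illustrating circular AI financing. All figures hypothetical.

ledger = {"chipmaker_cash": 100, "customer_cash": 0, "chipmaker_revenue": 0}

def invest(ledger, amount):
    """Chipmaker invests cash in its customer."""
    ledger["chipmaker_cash"] -= amount
    ledger["customer_cash"] += amount

def buy_chips(ledger, amount):
    """Customer spends that cash on the chipmaker's products."""
    ledger["customer_cash"] -= amount
    ledger["chipmaker_cash"] += amount
    ledger["chipmaker_revenue"] += amount

invest(ledger, 100)
buy_chips(ledger, 100)
print(ledger)  # cash ends up where it started, but revenue is up 100
```

Real deals are far more complex (equity stakes, capacity leases, multi-year commitments), but this is the round-trip pattern skeptics have in mind when they compare it to dot-com-era vendor financing.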
Is Jensen Huang Getting Anxious?
Perhaps even more interesting than NVIDIA’s falling stock price is the series of recent remarks by CEO Jensen Huang, which reveal more unease and urgency than in the past.
In the earnings call, Huang stated that sales of Blackwell architecture chips have far exceeded expectations, with all cloud-based GPUs already sold out. This isn’t short-term hype, but a sign of real, surging demand. He also pointed out that compute demand is accelerating across both training and inference, and that this exponential growth is very real. As he put it, the AI ecosystem is expanding rapidly, with a proliferation of new foundational model builders and AI startups, and more industries and countries getting involved—this is a genuine transformation.
During the call, he stated bluntly: “There’s a lot of talk about AI bubbles, but from our perspective, it’s a very different situation.” Huang believes that AI is not only reshaping traditional workloads but also enabling entirely new applications. He remains convinced that NVIDIA has entered a “virtuous cycle,” and emphasized that the company is well-positioned—with its architectural advantages—to cover all phases of AI, from pre-training to post-training and inference. For him, this is not a bubble, but a genuinely sustainable expansion of infrastructure.
Beyond dismissing the bubble narrative, he also shared a grander vision. He has publicly stated that over the next decade, NVIDIA will transform into a global AI infrastructure company, with compute demand growing exponentially. Huang even predicted that by the end of the decade, investment in AI infrastructure will reach $3 to $4 trillion. He stressed that this is not a short-lived technological craze, but the start of a new industrial revolution, a profound transformation in infrastructure construction.
In an internal all-hands meeting, Huang showed considerable anxiety. He admitted the company is in a “no-win” situation: “If we deliver a bad quarter, the market will say this proves the AI bubble; but even if we deliver an outstanding quarter, people will say we’re just fueling the bubble.” This makes it hard to please everyone. He even joked about online memes claiming NVIDIA is “holding up the Earth,” saying, “It sounds a bit exaggerated, but not entirely wrong.” When discussing the volatility of the company’s market cap, he quipped that NVIDIA’s valuation has hit extreme highs but has also shed hundreds of billions of dollars within weeks: “Has anyone ever lost $500 billion in a few weeks?” Though laced with humor, his words betrayed his frustration with the market’s mood swings.
To shore up confidence, he offered highly positive guidance: the company’s outlook for the next quarter is very strong. “Our guidance is very, very good,” Huang emphasized. He also reiterated that the supply-demand logic behind NVIDIA’s shipments is not only robust but structurally supported. He noted that the company currently has massive backlogs and orders stretching far into the future, which he views as a foundation for long-term growth.
These statements expose the multiple pressures Huang is under.
First is the need to maintain market confidence. As the bellwether of the AI industry, NVIDIA’s stock performance directly influences broader market sentiment toward AI. When strong financial results are still met with skepticism about bubbles, he realizes that financial data alone is no longer enough to persuade investors—he must speak out actively to stabilize expectations and rebuild trust.
Second is strategic positioning. NVIDIA is no longer content to be just a chip supplier—it sees itself as a builder of AI infrastructure. This shift implies greater ambition—not just selling products, but leading the construction of the entire computing ecosystem. Creating and meeting the demand for trillions of dollars in future compute capacity is the key challenge Huang now faces.
Finally, there’s geopolitical pressure. Restrictions in the Chinese market and uncertainties around export controls are squeezing NVIDIA’s growth potential. Against this backdrop, Huang must emphasize the company’s global footprint and irreplaceable strategic value to offset potential policy risks.
These anxieties together form the deeper reasons behind Huang’s frequent and impassioned public statements.
Where Is NVIDIA Headed?
That said, NVIDIA is far from being out of options.
During the earnings call, Huang reiterated a core message: the company has completed a crucial transformation over the past 25 years—“from a gaming GPU company to an AI data center infrastructure company.”
This is not new messaging. As early as 2019, Huang began repeatedly telling the world that NVIDIA is no longer your traditional chipmaker.
There’s a reason for this repositioning. Even as more challengers enter the AI training chip space, NVIDIA still controls a complete, hard-to-replicate system that competitors can’t easily match in the near term.
The most prominent advantage lies in networking and interconnectivity. While the raw compute power of chips like the H100 or H200 matters, what truly enables the efficiency of superclusters is NVIDIA’s NVLink, InfiniBand, and Spectrum-X networking, together with the full stack of communication protocols, switches, NICs, and interconnect paths between GPUs.
Tech giants like Google, AWS, and Meta all have their own in-house chips, but none match NVIDIA’s completeness in network interconnectivity. In particular, InfiniBand has become the default option for the vast majority of AI superclusters. It’s no longer just a piece of hardware, but an entire ecosystem, toolchain, and tuning process—making migration costs extremely high.
This is one of NVIDIA’s most defensible strongholds for years to come. GPUs may face increasing substitution, but network connectivity is an industry-wide scarce resource. NVIDIA knows this well, which is why it has been emphasizing the importance of networking growth and even reminding investors that future growth will come more from networks and systems, not just individual GPUs. As long as it controls the core technology of how “multiple GPUs work together,” even if customers bring in other accelerators, they can’t bypass NVIDIA’s position in the training system.
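The collective that all this interconnect hardware exists to accelerate is all-reduce: after it runs, every GPU holds the element-wise sum of every GPU’s gradients. Below is a simplified pure-Python sketch of the classic ring algorithm (one of the algorithms NCCL uses); each “send” here is a list copy, where real systems move these chunks over NVLink or InfiniBand links, which is why link bandwidth gates the whole cluster.

```python
# Simplified simulation of ring all-reduce across n "GPUs".

def ring_all_reduce(bufs):
    """Return per-rank summed buffers and total elements moved over links."""
    n = len(bufs)
    m = len(bufs[0]) // n                 # chunk length (length must divide by n)
    data = [list(b) for b in bufs]
    sent = 0

    # Phase 1 (reduce-scatter): after n-1 steps, rank r holds the fully
    # summed values for chunk (r + 1) % n.
    for step in range(n - 1):
        for r in range(n):
            c, dst = (r - step) % n, (r + 1) % n
            for k in range(c * m, (c + 1) * m):
                data[dst][k] += data[r][k]
            sent += m

    # Phase 2 (all-gather): completed chunks circulate around the ring.
    for step in range(n - 1):
        for r in range(n):
            c, dst = (r + 1 - step) % n, (r + 1) % n
            data[dst][c * m:(c + 1) * m] = data[r][c * m:(c + 1) * m]
            sent += m

    return data, sent

bufs = [[1, 2, 3], [10, 20, 30], [100, 200, 300]]   # 3 "GPUs", 3 gradients each
out, sent = ring_all_reduce(bufs)
print(out[0], sent)   # → [111, 222, 333] 12
```

Each rank sends 2(n-1)/n times its buffer size regardless of cluster size, so the algorithm is bandwidth-bound: faster GPUs don’t help if the links between them can’t keep up, which is the structural reason networking is NVIDIA’s stronghold.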
Another enduring strength is its software ecosystem. CUDA, NCCL, Triton, TensorRT, Omniverse, DGX Cloud… NVIDIA’s true moat has never been just a single chip, but developer habits. Large-scale AI training isn’t as simple as stacking chips on racks. Model partitioning, inter-layer communication, operator optimization, bottleneck analysis, system tuning—all of these become exponentially harder without the CUDA ecosystem.
Although more and more competitors claim “CUDA compatibility” or “easy migration,” compatibility is not the same as substitution. Ecosystem migration is the slowest system-level undertaking. A Meta engineer once publicly said: “If you don’t use NVIDIA, the engineering difficulty multiplies.” That’s not going to change anytime soon.
At the same time, NVIDIA is accelerating its push into higher-level AI infrastructure. It is advancing in areas like AI factories, digital twins, and enterprise-grade generative AI platforms. One of Huang’s most oft-repeated ideas is that AI is not a passing trend, but a new industrial revolution. Every business will have its own AI factory, just as every company once needed its own data center. No matter how models evolve or how many competing chips emerge, enterprise AI deployment will always involve steps like training, fine-tuning, inference, deployment, monitoring, and scheduling—and NVIDIA wants to be involved in all of them.
This explains why NVIDIA has recently launched increasingly dense, large-scale, system-level products—like the Blackwell platform, GB300 superchip, NVL72 full rack systems, and AI supercomputer solutions. NVIDIA isn’t just selling GPUs anymore, but “ready-to-deploy AI factory modules,” which cloud providers, national compute centers, and large model companies can purchase outright. It’s a higher-margin, more controllable, and harder-to-replace market.
In the medium to long term, the growth logic for AI infrastructure remains unchanged—and is becoming even clearer. The number of AI models is increasing, inference workloads are exploding, and countries around the world are building local data centers and local AI production capabilities. Training demand remains strong, but inference demand will be an order of magnitude larger—and inference also requires massive amounts of high-performance chips, networks, memory, and software stacks. NVIDIA is clearly well ahead in laying out for this, whether it’s Transformer engines, inference acceleration, low-power cluster designs, or deep collaboration with major cloud vendors.
NVIDIA’s future direction is not to defeat any one specific chip company, but to continue establishing itself as the underlying operating system of the AI world. As long as the industry continues to expand, as long as companies keep building their own AI factories, digital twin factories, and localized model systems, NVIDIA will remain the most reliable foundational layer.
The Real Threat: Outside the Game
Looking back at the whole narrative, we find an interesting paradox: the greatest threat to NVIDIA doesn’t come from AMD, Intel, or cloud providers’ in-house chips—but from wavering market confidence in the future of AI.
Technologically, NVIDIA still has the capacity to respond. Whether it’s the deep entrenchment of the CUDA ecosystem, the network moat built by InfiniBand, or its full-stack capabilities from chips to systems—these all make it difficult to truly replace in the short term. But the logic of capital markets is different—when investors begin to question the ROI of AI investments, when the “bubble” narrative drowns out growth metrics, even the most stellar earnings reports from NVIDIA can’t escape valuation corrections.