NVIDIA delivered outstanding Q3 FY2026 results, with revenue of $57 billion, up 62% year-over-year and 22% sequentially, driven by record Data Center revenue of $51 billion (+66% YoY). The Blackwell platform ramp is accelerating, with GB300 now contributing two-thirds of Blackwell revenue, while networking revenue surged 162% to $8.2 billion. Management guided Q4 revenue to $65 billion (+/- 2%), reflecting continued momentum in AI infrastructure demand despite geopolitical headwinds in China. The company emphasized three massive platform shifts (accelerated computing, generative AI, and agentic/physical AI), positioning NVIDIA for a projected $3-4 trillion annual AI infrastructure market by the end of the decade.
| Metric | Value | Change |
|---|---|---|
| Total Revenue | $57 Billion | +62% YoY / +22% QoQ |
| Data Center Revenue | $51 Billion | +66% YoY |
| Networking Revenue | $8.2 Billion | +162% YoY |
| Gaming Revenue | $4.3 Billion | +30% YoY |
| Non-GAAP Gross Margin | 73.6% | Increased sequentially |
| Q4 Revenue Guidance | $65 Billion | +14% QoQ (midpoint) |
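The table's growth figures can be cross-checked with a few lines of arithmetic (a quick sketch; the Data Center share of revenue is an illustrative ratio derived from the table, not a figure management quoted):

```python
# Cross-check the sequential growth implied by the Q4 guidance midpoint.
q3_total = 57.0        # Q3 FY2026 total revenue, $B
q4_midpoint = 65.0     # Q4 revenue guidance midpoint, $B

qoq = (q4_midpoint - q3_total) / q3_total
print(f"Implied Q4 QoQ growth: {qoq:.1%}")  # 14.0%, matching the table

# Data Center as a share of total revenue (illustrative, derived from the table).
data_center = 51.0     # Q3 Data Center revenue, $B
print(f"Data Center share of total revenue: {data_center / q3_total:.1%}")  # 89.5%
```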
NVIDIA disclosed unusually deep visibility into future demand, stating it has visibility to $500 billion in cumulative Blackwell and Rubin revenue from the start of the year through the end of calendar 2026. This indicates a secured backlog spanning multiple quarters, reducing execution risk and validating the longevity of the current AI infrastructure cycle. Management noted that this figure is likely to grow as new agreements, such as the recent KSA deal for 400,000-600,000 GPUs, are finalized.
The company is aggressively expanding its ecosystem through strategic investments and deep partnerships with leading AI model builders like OpenAI, Anthropic, and xAI. These partnerships are not merely financial; they involve deep technical co-development to optimize models for CUDA, effectively locking these high-growth workloads onto NVIDIA hardware. For instance, the partnership with Anthropic marks that company's first adoption of NVIDIA architecture, bringing a major new workload onto the platform.
Networking has emerged as a major growth driver, with revenue up 162% year-over-year to $8.2 billion. NVIDIA is successfully positioning its Spectrum-X Ethernet and Quantum-2 InfiniBand solutions as essential components of 'gigawatt-scale AI factories.' The company claims to be the only provider with 'scale up, scale out, and scale across' capabilities, creating a competitive moat that rivals will find difficult to replicate.
Management highlighted 'Agentic AI' and 'Physical AI' as the next significant growth legs beyond generative AI, citing specific examples like Cursor (coding) and Tesla FSD (autonomous driving) driving demand. The introduction of 'Cosmos' world foundation models and the expansion of Omniverse into digital twins for manufacturing suggest NVIDIA is laying the groundwork for the next wave of AI adoption, one that interacts with the physical world.
NVIDIA is executing a 'one-year cadence' for product releases (Blackwell to Rubin), maintaining a performance leadership that translates to superior Total Cost of Ownership (TCO) for customers. The Rubin platform, which pairs the Rubin GPU with the new 'Vera' CPU, is on track for 2026 and promises an 'x-factor' improvement. This rapid innovation cycle forces competitors to constantly play catch-up and keeps customers within the NVIDIA upgrade cycle.
Geopolitical headwinds in China are impacting revenue, with management noting that 'sizable purchase orders never materialized' due to increasingly competitive market conditions and US government restrictions. While NVIDIA is not assuming any data center compute revenue from China in the current guidance, the loss of this market or a shift to domestic Chinese competitors (like Huawei) represents a long-term strategic risk to market share.
Gross margin guidance for fiscal year 2027 suggests potential pressure, as management stated they are 'working to hold gross margins in the mid-seventies' despite rising input costs. While current margins are exceptionally high (73.6%), the admission that 'input costs are on the rise' implies that the pricing power or cost reductions that fueled recent margin expansion may face headwinds in the next fiscal year.
Management acknowledged that the sheer scale of the build-out—requiring gigawatts of power, vast amounts of land, and sophisticated financing—poses execution risks. Jensen Huang admitted that 'none of these things are easy' and that they are all constraints. While NVIDIA is managing these, any failure in the supply chain or power infrastructure could bottleneck the company's ability to meet the massive demand they are forecasting.
The rapid transition from Hopper to Blackwell creates potential for inventory digestion or transition friction. While Hopper still generated $2 billion in Q3, it is now in its 13th quarter since launch. Management noted that the transition to GB300 has been 'seamless,' but such rapid architectural shifts always carry the risk of customer hesitation or unexpected technical hurdles during the ramp of the new Rubin architecture next year.
Overall: Management exhibited extremely high confidence and a visionary demeanor, consistently dismissing concerns about an AI bubble by framing current demand as the early stages of three fundamental platform shifts. Jensen Huang was particularly assertive and articulate, using definitive language to describe NVIDIA's unique position across all phases of AI, while Colette Kress provided precise, data-driven financial updates that reinforced the company's operational strength.
Confidence: HIGH - Management displayed unwavering confidence backed by specific visibility numbers ($500B for Blackwell/Rubin) and strong financial performance. They directly addressed skepticism regarding ROI and supply constraints with detailed explanations of TCO benefits and supply chain mastery.
Guidance Highlights:
- Q4 Revenue: $65 Billion (+/- 2%)
- Q4 Gross Margin: 74.875% (+/- 50 bps)
- FY2027 Gross Margin: Mid-seventies (despite rising input costs)
- China: Not assuming any data center compute revenue
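The stated guidance bands translate into absolute ranges as follows (a minimal arithmetic sketch using only the figures above; the qualitative 'mid-seventies' FY2027 outlook is excluded):

```python
# Convert the guidance bands into absolute low/high ranges.
rev_mid, rev_band = 65.0, 0.02       # Q4 revenue midpoint ($B) and +/- 2% band
gm_mid, gm_band_bps = 74.875, 50     # Q4 gross margin midpoint (%) and +/- 50 bps band

rev_low, rev_high = rev_mid * (1 - rev_band), rev_mid * (1 + rev_band)
gm_low, gm_high = gm_mid - gm_band_bps / 100, gm_mid + gm_band_bps / 100

print(f"Q4 revenue range: ${rev_low:.1f}B to ${rev_high:.1f}B")    # $63.7B to $66.3B
print(f"Q4 gross margin range: {gm_low:.3f}% to {gm_high:.3f}%")   # 74.375% to 75.375%
```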
Hedging & Uncertainty: Management generally used strong, definitive language regarding their technology and market position ('We excel at every phase,' 'The world is undergoing three massive platform shifts'). However, they employed hedging when discussing external factors outside their direct control, such as supply chain constraints and geopolitical issues. Phrases like 'we believe,' 'estimate,' and 'roughly' were used when discussing long-term market sizing ($3-4 trillion) and specific future revenue visibility. Colette Kress used precise ranges for guidance ('plus or minus 2%') but hedged on China revenue assumptions ('we are not assuming any data center compute revenue').
Notable Quotes:
- "The clouds are sold out, and our GPU installed base... is fully utilized." - Colette Kress, CFO
- "From our vantage point, we see something very different [regarding an AI bubble]." - Jensen Huang, CEO
- "The world is undergoing three massive platform shifts at once." - Jensen Huang, CEO
- "We have visibility to a half a trillion dollars in Blackwell and Rubin revenue." - Colette Kress, CFO
- "Performance per watt... translates directly absolutely directly to your revenues." - Jensen Huang, CEO
- "Input costs are on the rise but we are working to hold gross margins in the mid-seventies." - Colette Kress, CFO
Analyst Sentiment: Analysts expressed a mix of enthusiasm and skepticism, probing heavily on the sustainability of the AI capex cycle ('bubble' concerns), the ability of customers to fund massive builds, and the specific mechanics of the Blackwell ramp and margin structure.
Management Responses: Jensen Huang responded to skepticism with a masterclass in reframing the narrative, moving beyond simple 'training' discussions to explain the 'three scaling laws' (pre-training, post-training, inference) and the shift of general-purpose computing to GPUs. Colette Kress provided reassuringly specific details on supply chain visibility and margin management.
Sustainability of AI Infrastructure Spend: Analysts questioned if the $500B+ in projected spending was realistic. Management countered by explaining that AI is replacing traditional CPU workloads (search/recommendation) which creates immediate ROI, and that Agentic AI is a new, net-new revenue source.
Supply Chain and Power Constraints: Multiple questions focused on bottlenecks like power and financing. Management acknowledged these are real constraints but emphasized their 'performance per watt' advantage makes them the best choice for power-constrained environments.
Competition and ASICs: Analysts asked about the threat of custom ASICs. Huang argued that the complexity of AI (requiring scale-up, scale-out, scale-across) and the diversity of models makes NVIDIA's general-purpose architecture superior to single-purpose ASICs.
Capital Allocation: Analysts asked about the use of cash (buybacks vs. ecosystem investments). Management clarified that investments in companies like Anthropic/OpenAI are strategic to expand the CUDA ecosystem and offer high returns, while maintaining a strong balance sheet for supply chain credibility.
NVIDIA remains the undisputed leader in AI infrastructure, with Q3 results proving that demand is not only robust but accelerating across new verticals like Agentic AI and Robotics. The transition to Blackwell is proceeding better than expected, and the $500B+ in booked Blackwell/Rubin revenue through calendar 2026 provides exceptional earnings visibility. While margin pressure in FY27 and China risks exist, NVIDIA's 'full stack' advantage (hardware, networking, software) creates a widening moat that competitors cannot easily breach. The company is effectively taxing the global transition to AI, making it a core holding for any tech portfolio.
Management noted that analyst expectations for top CSP CapEx in 2026 have increased to roughly $600 billion, up over $200 billion from the start of the year. This indicates that the largest tech companies are doubling down on AI infrastructure spend.
There is a growing trend of nation-states building their own AI infrastructure (e.g., KSA/Saudi Arabia deals mentioned). Management noted that 'each country will fund their own infrastructure,' expanding the customer base beyond just US tech giants.
The availability of power is identified as a primary bottleneck for AI growth. Management emphasized that 'performance per watt' is becoming the critical metric for data centers, as power availability is the limiting factor for new 'AI factories'.