
Nvidia Commits $26B to Build Open-Weight Frontier AI Models

Open Source · Top News · 1 source · Mar 11

Summary

  • Nvidia commits $26 billion over five years to develop open-weight frontier AI models
  • Move transforms Nvidia from chipmaker into full-stack AI lab competing with OpenAI and DeepSeek
  • Nemotron 3 Super (128B params) scores 37 on the AI Index, beating GPT-OSS at 33 but trailing some Chinese models
  • A 550B-parameter model has completed pretraining; models are optimized for Nvidia's own hardware

Details

1. Financials

Nvidia will spend $26 billion over five years on open-weight AI model development

The figure was disclosed in a 2025 financial filing and confirmed by executives to WIRED. This is a structural, balance-sheet-level commitment, and one of the largest single-company investments in open-weight frontier model development announced to date.

2. Strategy

Models are tuned to Nvidia hardware, creating a self-reinforcing ecosystem

By optimizing open-weight models for its own GPUs, Nvidia deepens developer and enterprise dependency on its hardware stack. VP Kari Briski confirmed training also serves as a stress test for Nvidia's compute, storage, and networking architecture roadmap.

3. Product Launch

Nemotron 3 Super released: 128B-parameter open-weight model, Nvidia's most capable to date

Nemotron 3 Super is roughly equivalent in parameter count to OpenAI's GPT-OSS. Nvidia claims it outperforms GPT-OSS across several benchmarks and ranks first on PinchBench, which assesses control of OpenClaw. These results come from Nvidia's internal testing; independent verification is pending.

4. Stat

Nemotron 3 Super scores 37 on the AI Index vs. GPT-OSS's 33, but trails several Chinese models

The AI Index aggregates scores across 10 benchmarks. Nvidia's lead over its US open-weight peers is meaningful, but Chinese models from DeepSeek, Alibaba, and others score higher, underscoring that Nvidia's open models are not yet at the absolute frontier.

5. Infrastructure

Nvidia has completed pretraining of a 550B-parameter model not yet publicly released

This is significantly larger than the 128B Nemotron 3 Super just released, suggesting a more powerful model is in the pipeline. The completion of pretraining indicates Nvidia's model development program is further along than the public product timeline implies.

6. Market Impact

Top US models are cloud-only and proprietary; Chinese labs dominate open-weight releases

OpenAI, Anthropic, and Google offer no freely downloadable weights for their best models. DeepSeek, Alibaba, Moonshot AI, Z.ai, and MiniMax all release weights openly and for free. Nvidia's investment is the most significant US counterweight to this dynamic.

7. Context

Meta pioneered open AI models with Llama in 2023 but may not keep future models fully open

CEO Mark Zuckerberg recently rebooted Meta's AI efforts and signaled a potential shift away from full openness for future models. If Meta retreats, Nvidia becomes the primary US source of truly open frontier weights.

8. New Tech

Nvidia has already released specialized open models for robotics, climate modeling, and protein folding

These domain-specific releases predate the broader frontier-model push and demonstrate that Nvidia's model development capability is not hypothetical: the company has been building and shipping models for some time.

9. Insight

Nvidia's model lab is strategically positioned as a US alternative to Chinese open-weight AI

For governments and enterprises concerned about supply-chain dependencies on Chinese AI models, Nvidia's open-weight program offers a credible domestic alternative. As VP Bryan Catanzaro put it: "It's in our interest to help the ecosystem develop."

Section key: Financials = investment commitment · Strategy = business positioning · Product Launch = new model release · Stat = benchmark data · Infrastructure = compute/training buildout · Market Impact = competitive landscape · Context = background on open-weight ecosystem · New Tech = specialized model capabilities · Insight = strategic implications

What This Means

Nvidia's $26 billion bet on open-weight AI models marks one of the most significant structural expansions in tech in years: a chipmaker with unmatched compute access is entering the frontier model race with its own integrated stack. By optimizing models for its own hardware, Nvidia creates a self-reinforcing ecosystem while offering the global developer community a credible US-based alternative to Chinese open models like DeepSeek and Alibaba Qwen. For OpenAI, Anthropic, and Google, Nvidia shifts from key supplier to potential competitor, one with deeper infrastructure advantages than any of them. For governments and enterprises wary of AI supply-chain dependencies on Chinese models, Nvidia's open models may meaningfully reshape sourcing decisions at scale.

Sentiment

Limited public discussion so far, with early positive takes on Nvidia's strategic pivot

@amarmic (Michael Amar) · Chairman @Parisblockweek @RaiseSummit @machinasummit
Impressed

Nvidia just committed $26 billion to building open weight AI models. And released Nemotron 3 Super... The company that sold picks and shovels to every AI lab on earth just decided to start digging itself. Jensen is no longer content being the arms dealer. He wants a seat at the frontier.

@GenAISpotlight (GenAI Spotlight) · Generative AI news curator
Excited

Nvidia is dropping a $26 billion open-weight bomb on the model labs... massive strategic pivot to own the software ecosystem as aggressively as the silicon... When the hardware king becomes the open-weight king, the traditional lab moats for closed models start to look like very expensive speed bumps.

@AlexWingfield_ (Alex Wingfield) · AI trends commentator
Intrigued

Nvidia just quietly pledged $26 billion to crank out open weight AI models, not just chips, over five years. If that sounds niche, it might rewrite who everyone builds on next.
