Live Updates | NVIDIA GTC 2026 Conference Highlights
The NVIDIA GTC 2026 conference opened today in San Jose, California, USA, and will run from March 16 to 19. Over 30,000 developers, researchers, and industry representatives from 190 countries will attend the conference, which features over 1000 sessions.
Several important signals emerged ahead of the conference: NVIDIA is integrating technology from its recent acquisition Groq into its product line; Samsung will manufacture AI chips for NVIDIA for the first time; and OpenAI is expected to be among the first customers for NVIDIA's next-generation inference chip. This series of moves shows NVIDIA expanding from its position as the AI training chip leader into the inference chip market, while diversifying its supply chain to reduce reliance on TSMC.
This year, the conference has also established an OpenClaw exclusive experience area called "Build-a-Claw." Attendees can customize and deploy a sustainable AI agent on-site with guidance from NVIDIA engineers.
As one of the AI industry's most important annual release platforms, GTC opened with NVIDIA CEO Jensen Huang's keynote at 2 a.m. Beijing time on March 17; the full text is available in "Jensen Huang GTC Speech Full Text: By 2027, Market Demand Will Exceed $1 Trillion; Everyone Should Develop an OpenClaw Strategy". The BlockBeats AI monitoring team at 1M AI News will provide real-time updates on conference highlights and key points. The latest developments are as follows:
NVIDIA GTC Robotics Panorama: Cosmos 3 Unified World Model Released, GR00T N2 Tops Robot Policy Leaderboards, Disney's Olaf Robot Takes the Stage
According to 1M AI News monitoring, NVIDIA has unveiled a series of new physical AI products at the GTC conference and announced partnerships spanning industrial giants in the global robot ecosystem, humanoid robot pioneers, and surgical robot manufacturers. Jensen Huang stated, "Physical AI has arrived, and every industrial enterprise will become a robot company."
Key product releases:
1. Cosmos 3: The first unified synthetic-world foundation model, built to accelerate the development of general-purpose robot intelligence in complex environments
2. Isaac Lab 3.0: Early Access version, supporting large-scale robotics learning on DGX-level infrastructure, built on the new Newton Physics Engine 1.0 and PhysX SDK, with added multi-physics simulation and support for complex dexterous manipulation
3. GR00T N1.7: Early Access version, comes with a commercial license, providing advanced dexterous control and other general skills for mass-produced robot deployment
4. GR00T N2 (Preview): Next-generation robot foundation model based on DreamZero research, using a new World Action Model architecture and achieving more than double the success rate of mainstream vision-language-action models on new tasks and environments; currently ranked first on the MolmoSpaces and RoboArena leaderboards, with release planned by the end of the year
On the industrial robotics front, FANUC, ABB Robotics, YASKAWA, and KUKA, whose robots have a combined global installed base of over 2 million units, are integrating the Omniverse Kit and Isaac simulation framework into virtual commissioning solutions, while also integrating Jetson modules into controllers for edge AI inference. In humanoid robotics, companies including 1X, AGIBOT, Agility, Boston Dynamics, and Figure are leveraging Cosmos, Isaac Sim, and Isaac Lab to accelerate development. In medical robotics, CMR Surgical is using Cosmos-H for simulated training of its Versius surgical system, Johnson & Johnson Medical is using Isaac Sim and Cosmos to train workflows for the Monarch urology platform, and Medtronic is exploring IGX Thor for functional safety in surgical robot systems.
One of the conference highlights came from Disney: using the NVIDIA Warp framework and Kamino, a GPU-accelerated physics simulator integrated into the Newton Physics Engine, Disney trained motion policies for its Olaf snowman and BDX robot characters, teaching Olaf to manage its own heat output and reduce collision noise. Jensen Huang appeared on stage with the Olaf robot during his keynote, and Olaf will make its official debut at Disneyland Paris on March 29.
NVIDIA GTC Unveils Three New Nemotron 3 Models: Ultra Targets Frontier Reasoning, VoiceChat Unifies Speech Recognition, Large Language Models, and Speech Synthesis
According to 1M AI News monitoring, NVIDIA announced the expansion of the Nemotron 3 open model family at GTC, adding three multimodal models for AI Agents:
1. Nemotron 3 Ultra: Positioned as frontier intelligence, running in NVFP4 format on the Blackwell platform with 5x throughput gains, targeting coding assistants, search, and complex workflow automation
2. Nemotron 3 Omni: Integrating audio, visual, and language understanding capabilities for efficient insight extraction from videos and documents
3. Nemotron 3 VoiceChat: Supporting real-time conversations, where AI can listen and respond simultaneously, integrating automatic speech recognition (ASR), large language model processing, and text-to-speech synthesis (TTS) into a unified system
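The VoiceChat item above describes a pipeline pattern, speech recognition feeding a language model whose streamed reply drives speech synthesis, and that composition can be sketched generically. Below is a minimal sketch with stub stand-ins for each stage; none of these names are NVIDIA APIs, and a real system would stream audio rather than word-per-chunk placeholders.

```python
from typing import Iterator

# Illustrative stand-ins only: these names are NOT NVIDIA APIs.

def asr(audio_chunks: Iterator[bytes]) -> str:
    """Stub ASR stage: pretend each audio chunk decodes to one word."""
    return " ".join(chunk.decode() for chunk in audio_chunks)

def llm(prompt: str) -> Iterator[str]:
    """Stub LLM stage: stream a reply token by token."""
    yield from ("you", "said:", *prompt.split())

def tts(tokens: Iterator[str]) -> list[bytes]:
    """Stub TTS stage: 'synthesize' audio per token as it arrives,
    so playback can begin before the full reply is generated."""
    return [token.encode() for token in tokens]

def voice_turn(audio_chunks: list[bytes]) -> list[bytes]:
    """One conversational turn: ASR -> LLM -> TTS as a single pipeline."""
    return tts(llm(asr(iter(audio_chunks))))

audio_out = voice_turn([b"hello", b"world"])
```

The point of unifying the three stages, as the announcement describes, is that tokens flow straight from the language model into synthesis, which is what lets the agent listen and respond with low perceived latency.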
Simultaneously, NVIDIA also released the Nemotron Security Model and the Agent Retrieval Pipeline: the former detects unsafe content in text and images, while the latter improves the relevance and accuracy of Agent outputs. Additionally, on March 11 NVIDIA released Nemotron 3 Super ahead of the conference, a hybrid Mamba-Transformer MoE model with 1.2 trillion parameters (120 billion active) that natively supports a 1-million-token context window. It achieves over 5x the throughput of the previous generation and scored 85.6% on PinchBench, the OpenClaw Agent benchmark, making it the best-performing open model in its class.
Companies including CodeRabbit, CrowdStrike, ServiceNow, the AI programming tools Cursor and Factory, and the AI search engine Perplexity have deployed Nemotron models for Agent applications. The AI research platform Edison Scientific has integrated Nemotron into Kosmos, its autonomous AI scientist serving over 50,000 researchers; Kosmos can execute hundreds of research tasks concurrently and claims to compress months of research into a single day.
NVIDIA Ventures into Space Computing: Unveils the Vera Rubin Space-1 Module, AI Performance 25x that of H100
According to 1M AI News monitoring, NVIDIA announced its entry into space computing at the GTC conference, launching the Space-1 Vera Rubin module tailored for on-orbit data centers, integrating 2 Rubin GPUs and 1 Vera CPU. The AI inference performance can reach up to 25 times that of H100, enabling large language models and foundational models to run directly in orbit.
Jensen Huang said, "Space computing, the final frontier, has arrived. With the deployment of satellite constellations and the advancement of deep space exploration, intelligence must exist where data is generated." He also acknowledged that heat dissipation in space remains an unsolved engineering challenge: "In space, there is no conduction, no convection, only radiation, and we must figure out how to cool these systems in space."
The Space-1 module is designed for size, weight, and power-constrained environments, supporting on-orbit autonomous analytics, real-time data processing, and scientific discovery. The first batch of partners includes space solar power company Aetherflux, private space station developer Axiom Space, satellite communication company Kepler Communications, Earth observation company Planet Labs, Sophia Space, and cloud computing satellite company Starcloud. The specific launch date has not been announced yet.
Jensen Huang: Claude Code and OpenClaw "Triggered the Agent Inflection Point"; NVIDIA Releases OpenShell Secure Runtime as 17 Enterprise Giants Join
According to 1M AI News monitoring, NVIDIA unveiled the Agent Toolkit open platform at the GTC conference, with the open-source secure runtime OpenShell as its core component, providing policy-based security, networking, and privacy guardrails for autonomous AI Agents. Jensen Huang said at the event, "Claude Code and OpenClaw triggered the Agent inflection point, extending AI from generation and reasoning to action. Employees will be empowered by a team of cutting-edge, specialized, and customized Agents, the enterprise software industry will evolve into specialized Agent platforms, and the IT industry stands at the cusp of its next major expansion."
The Agent Toolkit also includes the open-source AI-Q Blueprint co-developed with LangChain, employing a hybrid architecture where cutting-edge models are responsible for orchestration and Nemotron open models for research, with query costs reduced by over 50%. Agents developed by NVIDIA using the AI-Q Blueprint currently rank first on both the DeepResearch Bench and DeepResearch Bench II leaderboards.
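The hybrid architecture described above, a frontier model handling orchestration while cheaper open models carry out the research subtasks, can be illustrated with a simple cost sketch. Everything below is a hypothetical stand-in: the function names and per-call costs are illustrative, not the AI-Q Blueprint API.

```python
# Hypothetical cost model for a frontier-orchestrator / open-researcher split.
FRONTIER_COST = 10.0  # assumed relative cost of one frontier-model query
OPEN_COST = 1.0       # assumed relative cost of one open-model query

def frontier_plan(query: str) -> list[str]:
    """Stub orchestrator: decompose the query into sub-questions."""
    return [part.strip() for part in query.split(";")]

def open_model_answer(subquery: str) -> str:
    """Stub researcher: a small open model answers one sub-question."""
    return f"answer({subquery})"

def hybrid_research(query: str) -> tuple[list[str], float, float]:
    subqueries = frontier_plan(query)                      # 1 frontier call
    answers = [open_model_answer(q) for q in subqueries]   # N cheap open calls
    hybrid_cost = FRONTIER_COST + OPEN_COST * len(subqueries)
    naive_cost = FRONTIER_COST * (1 + len(subqueries))     # frontier does it all
    return answers, hybrid_cost, naive_cost

answers, hybrid_cost, naive_cost = hybrid_research("topic A; topic B; topic C")
```

Under these assumed costs, routing the sub-questions to open models cuts the per-query cost well past the "over 50%" reduction the announcement cites; the actual savings depend on the real price gap between the two model tiers.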
On the security front, NVIDIA is collaborating with Cisco, CrowdStrike, Google, Microsoft Security, and TrendAI to make OpenShell compatible with their network security and AI security tools. CrowdStrike simultaneously released the "Secure-by-Design AI Blueprint," embedding Falcon platform's protective capabilities directly into the NVIDIA AI Agent architecture.
17 software platform vendors have onboarded the Agent Toolkit: Adobe, Amdocs, Atlassian, Box, Cadence, Cisco, Cohesity, CrowdStrike, Dassault Systèmes, IQVIA, Palantir, Red Hat, SAP, Salesforce, Siemens, ServiceNow, and Synopsys. Among them, Salesforce will run the Agentforce Agent with Slack as the main interface and orchestration layer, while Siemens introduces the Nemotron-based Fuse EDA AI Agent for end-to-end chip and PCB design automation.
NVIDIA Launches First Groq Chip Since the Acquisition: LPX Rack Paired with Vera Rubin Delivers up to 35x Inference Throughput per Megawatt; Next-Gen Kyber Prototype Showcased
According to 1M AI News monitoring, the Groq 3 LPU (Language Processing Unit) is the first chip NVIDIA has launched since acquiring AI inference chip startup Groq for around $20 billion last December, with shipments expected to begin in the third quarter of this year. The Groq 3 LPX rack houses 256 LPUs, equipped with 128GB of on-chip SRAM and 640TB/s of interconnect bandwidth. The company claims that deploying LPX alongside the Vera Rubin NVL72 can raise peak inference throughput per megawatt by up to 35 times, unlocking revenue potential in trillion-parameter and million-token-context inference scenarios. Jensen Huang described the two processors as "extremely different yet complementary," one pursuing high throughput and the other low latency, adding that LPX's on-chip memory significantly expands the total memory capacity available to models. The LPX rack is planned to launch in the second half of this year alongside the Vera Rubin platform.
At the conference, Huang also showcased a next-generation rack architecture prototype codenamed Kyber. Kyber reconfigures compute trays holding 144 GPUs into a vertical layout to increase physical density and reduce latency, and will debut on the successor platform Vera Rubin Ultra, expected to launch in 2027.
NVIDIA Releases DLSS 5: Fusion of Traditional 3D Graphics and Generative AI, Jensen Huang States This Path Will Sweep Across Industries
According to 1M AI News monitoring, NVIDIA unveiled DLSS 5 at the GTC conference. By combining the structured data of traditional 3D graphics with generative AI models, it enables GeForce GPUs to achieve real-time 4K photorealistic rendering locally, without rasterizing every scene element per pixel. Huang described the approach in his speech as "the fusion of controllable 3D graphics and probabilistic generative AI," calling the former "completely predictable" and the latter "highly realistic," a combination that lets developers create content that is "both exquisite and controllable."
Jensen Huang positioned the technical direction of DLSS 5 as the starting point for a broader paradigm shift, stating that "the approach of integrating structured information with generative AI will be replicated in industry after industry." Using enterprise data platforms such as Snowflake, Databricks, and BigQuery as examples, he predicted that future AI agents will simultaneously invoke structured and generative databases for processing tasks.
Jensen Huang's Latest NVIDIA Keynote: Vera Rubin Seven-Chip Lineup in Full Production, $1 Trillion in Compute Orders Anticipated
According to 1M AI News monitoring, NVIDIA founder and CEO Jensen Huang officially announced at GTC 2026 that the Vera Rubin platform has entered full production; it integrates seven new chips across five types of rack systems and is designed as a purpose-built AI supercomputer.
The core Vera Rubin NVL72 rack integrates 72 Rubin GPUs and 36 Vera CPUs interconnected via NVLink 6. Compared with the previous Blackwell platform, the number of GPUs required to train large mixture-of-experts models is reduced to one-fourth, per-watt inference throughput is up to 10 times higher, and cost per token falls to one-tenth.
The five types of rack systems form a complete AI factory infrastructure:
- Vera Rubin NVL72 GPU Rack
- Vera CPU Rack (256 Vera CPUs, twice as efficient as traditional CPUs, with a 50% speed increase)
- Groq 3 LPX Inference Acceleration Rack
- BlueField-4 STX Storage Rack (designed for AI Agent key-value caching, with up to 5 times higher inference throughput)
- Spectrum-6 SPX Ethernet Rack
In terms of power management, NVIDIA also announced the DSX platform: DSX Max-Q can deploy 30% more AI infrastructure within a fixed power limit, and DSX Flex can activate 100 gigawatts of previously unused idle grid capacity.
Cloud service providers such as AWS, Google Cloud, Microsoft Azure, Oracle Cloud, CoreWeave, Lambda, Nebius, as well as system manufacturers like Cisco, Dell Technologies, Hewlett Packard Enterprise, Lenovo, AMD, among others, have all announced plans to launch Vera Rubin products in the second half of this year. Anthropic, Meta, Mistral AI, and OpenAI have explicitly stated that they will use this platform to train larger-scale models.
Jensen Huang predicted that total orders for Blackwell and Vera Rubin systems will reach at least $1 trillion between 2025 and 2027, double the $500 billion forecast given at last year's GTC.
NVIDIA Launches NemoClaw to Power Minimalist "Shrimp Farming"
Recently, OpenClaw, the open-source AI agent nicknamed "Lobster," has surged in popularity, and NVIDIA (NVDA.O) has announced a minimalist mode to help users with "shrimp farming." At Monday's GTC event, NVIDIA CEO Jensen Huang announced NemoClaw for the OpenClaw agent platform. Built on the NVIDIA Agent Toolkit, NemoClaw lets users install an OpenClaw-optimized deployment toolchain with a single command; it sets up OpenShell, an open model, and an isolated sandbox to strengthen data privacy and security for autonomous agents.