Nvidia is pushing artificial intelligence beyond its traditional home in the data center and into industrial design software, robotics development pipelines and even the emerging computing infrastructure in space. At the company’s GTC conference in San Jose, where CEO Jensen Huang said demand for its next-generation systems could translate into as much as $1 trillion in purchases through 2027, Nvidia described projects and collaborations aimed at turning its accelerated computing and simulation platforms into a foundation for what it calls “physical AI”: AI systems that enable autonomous machines to perceive, understand and perform complex actions in the physical world.
Engineering Platforms Add AI Agents and GPU Acceleration
The company announced it is expanding partnerships for industrial design and engineering software, a category that has become a key entry point for applying AI to physical systems. Nvidia said several of the world’s largest engineering software vendors are integrating its accelerated computing stack into their platforms, including Cadence Design Systems, Dassault Systèmes, PTC, Siemens and Synopsys. The integrations are designed to support new forms of AI-driven workflow automation within engineering software. Cadence and several others are developing AI agents that can assist with tasks such as planning design flows, debugging code and coordinating front-end verification steps in semiconductor and system design. These partnerships combine Nvidia’s CUDA-X libraries, Omniverse simulation technology and GPU-accelerated engineering software.
(Image Courtesy of Nvidia)
Nvidia also highlighted how GPU-accelerated simulation is increasingly being applied to industrial engineering problems like automotive aerodynamics and aerospace propulsion. For example, Honda is using Synopsys’ Fluent computational fluid dynamics software on Nvidia’s Grace Blackwell platform to run aerodynamic simulations 34 times faster than CPU-based systems, Nvidia claims. Automakers such as Jaguar Land Rover and Mercedes-Benz are using Siemens Simcenter STAR-CCM+ software on Nvidia infrastructure to analyze vehicle aerodynamics. Aerospace firm Ascendance is running aerodynamic simulations of hybrid electric aircraft using Cadence Fidelity software on GPU infrastructure, enabling large simulations that previously required significant high performance computing resources. In the energy sector, industrial manufacturer Solar Turbines is using the same software on GPU-accelerated systems to simulate combustor designs with billion-cell models.
Simulation and digital twins are also being used in industrial operations. Siemens recently introduced a Digital Twin Composer platform that uses Omniverse libraries to build physics-based simulations of factories, shipyards and production lines. Companies including Foxconn, HD Hyundai, PepsiCo and KION are using these systems to test manufacturing workflows and logistics operations in virtual environments before deployment.
Huang said the world is at the dawn of a new industrial revolution in which physical AI and autonomous AI agents are “fundamentally reinventing how the world designs, engineers and manufactures,” adding that Nvidia is “delivering a full-stack accelerated computing platform that empowers every industry to turn this vision into reality at a scale and speed never before possible.”
(Image Courtesy of Nvidia)
A Blueprint for Training Robotics and Computer Vision AI
Another major physical AI announcement at GTC was the introduction of Nvidia’s Physical AI Data Factory Blueprint, a new open reference architecture designed to automate the creation, augmentation and evaluation of datasets used to train robotics, computer vision and autonomous vehicle models. Training these systems often requires large volumes of specialized data capturing edge cases such as unusual lighting conditions, rare objects or unexpected events. Nvidia said the architecture combines its Cosmos world models with automated orchestration tools to generate synthetic data and expand limited real-world datasets.
The Physical AI Data Factory Blueprint organizes data production into several stages, including curation, augmentation and automated validation. Cosmos Curator processes and annotates large datasets, while Cosmos Transfer expands them with additional variations. A component called Cosmos Evaluator analyzes the generated data to determine whether it is physically plausible and suitable for training.
Nvidia said cloud providers including Microsoft Azure and Nebius are integrating the blueprint into their infrastructure, allowing developers to run these pipelines at scale. Early users include ABB Robotics, Teradyne Robotics and Skild AI, as well as autonomous vehicle developers such as Uber. The blueprint also incorporates an orchestration system called OSMO, which manages distributed computing resources and coordinates stages of the training workflow; according to Nvidia, OSMO can integrate with coding agents that monitor infrastructure usage and automate operational tasks in the pipeline. The Physical AI Data Factory Blueprint will be available on GitHub in April, the company said.
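The staged curate-augment-evaluate flow described above can be sketched in a few lines of Python. This is purely an illustrative toy, not Nvidia’s actual Cosmos or OSMO API; every function name below is hypothetical and stands in for a pipeline stage the blueprint describes.

```python
# Hypothetical sketch of a staged data-factory pipeline: curate -> augment -> evaluate.
# Names are illustrative stand-ins, not Nvidia's Cosmos/OSMO interfaces.

def curate(raw_samples):
    """Filter out unusable inputs and attach annotations (a Curator-style stage)."""
    return [{"data": s, "label": "annotated"} for s in raw_samples if s is not None]

def augment(samples, variations=2):
    """Expand the dataset with synthetic variations (a Transfer-style stage)."""
    out = []
    for s in samples:
        out.append(s)
        for i in range(variations):
            out.append({**s, "variant": i})
    return out

def evaluate(samples, is_plausible):
    """Keep only samples that pass a plausibility check (an Evaluator-style stage)."""
    return [s for s in samples if is_plausible(s)]

def run_pipeline(raw_samples):
    # Orchestrate the stages in order, as a blueprint runner might.
    curated = curate(raw_samples)
    augmented = augment(curated)
    # Toy plausibility rule: reject the second synthetic variant of each sample.
    return evaluate(augmented, lambda s: s.get("variant", -1) != 1)

print(len(run_pipeline(["frame_a", None, "frame_b"])))  # prints 4
```

The point of the staging is that each step is independently replaceable and checkable: curation can drop bad inputs, augmentation multiplies coverage of edge cases, and evaluation gates what reaches training.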
Space: The Ultimate Edge Deployment
While most of the news focused on Earthbound industries, Nvidia also announced plans to extend its AI infrastructure into space. The company introduced the Vera Rubin Space-1 Module, a computing platform designed for satellites and other space-based systems that must operate within strict limits on size, weight and power consumption.
Nvidia CEO Jensen Huang introduces the Vera Rubin Space-1 Module at GTC
The module is designed to run LLMs and frontier models and support real-time data processing directly on spacecraft (or on-orbit analytics, as the company calls it). Nvidia said the Vera Rubin Space-1 Module has a tightly integrated CPU-GPU architecture and high-bandwidth interconnect to manage data from space-based instruments in real time. Companies including space infrastructure developer Axiom Space and satellite imagery firm Planet are working with Nvidia’s hardware for applications like geospatial imaging analysis and satellite network operations. Nvidia also said its existing edge platforms, IGX Thor and Jetson Orin, are being used for space missions that require on-board inference and image processing. These platforms process sensor data directly in orbit instead of transmitting raw data back to Earth for analysis.
The data deluge doesn’t end in orbit. Imaging satellites, radar systems and radio frequency sensors produce continuous streams of observations that are added to large geospatial archives used for environmental monitoring, infrastructure tracking and climate analysis. Historically, much of that processing has been done on CPU-based systems, which is slow for datasets reaching hundreds of petabytes. Nvidia said its GPU-accelerated platforms, such as its RTX Pro 6000 Blackwell Server Edition GPU, are being used to speed analysis of these large datasets and support AI models that detect patterns in satellite imagery. The company says the same computing stack can run across cloud infrastructure, ground stations and spacecraft, allowing data to be analyzed closer to where it is generated.
In his keynote, Huang said challenges remain with building compute infrastructure in space, including cooling and radiation management. “We have to figure out how to cool these systems out in space. But we’ve got lots of great engineers working on it,” he said. Nvidia gave no release date for the Vera Rubin Space-1 Module, saying only that it will be available at a later date. For now, Nvidia’s projects and partnerships in physical AI show it is pushing AI beyond the data center and into systems that design products and technology, train robots and other autonomous machines, and analyze data from space. It remains to be seen just how far these systems will reach.
The post Nvidia Maps Its Physical AI Strategy Across Engineering, Robotics and Space appeared first on AIwire.

