At the Consumer Electronics Show (CES) 2026 on Monday, Nvidia unveiled Alpamayo, a new family of open-source artificial intelligence models, simulation tools and datasets aimed at accelerating the development of autonomous vehicles and physical robots.
The new platform is designed to help self-driving systems reason through complex and rare real-world situations, such as traffic light failures or unpredictable road behaviour, rather than relying solely on prior training data.
“The ChatGPT moment for physical AI is here – when machines begin to understand, reason, and act in the real world,” Nvidia Chief Executive Officer Jensen Huang said in a statement. “Alpamayo brings reasoning to autonomous vehicles, allowing them to think through rare scenarios, drive safely in complex environments, and explain their driving decisions.”
At the centre of the release is Alpamayo 1, a 10-billion-parameter vision-language-action (VLA) model that uses chain-of-thought reasoning to let autonomous vehicles make step-by-step decisions much as human drivers do. The model is designed to handle edge cases without requiring direct prior experience of them.
“It breaks down problems into steps, reasons through every possibility and then selects the safest path,” said Ali Kani, Nvidia’s vice president of automotive, during a press briefing.
According to Huang, Alpamayo goes beyond simply translating sensor data into vehicle controls. “It reasons about what action it’s about to take, explains why it chose that action, and then executes the trajectory,” he said during his keynote address.
Nvidia has made Alpamayo 1’s core code publicly available on Hugging Face, allowing developers to customise the model for different vehicle platforms. The company said the model can be fine-tuned into smaller and faster versions, used to train simpler driving systems, or adapted for tasks such as automatic video labelling and decision-quality evaluation.
Developers can also use Cosmos, Nvidia’s generative world-model platform, to create synthetic driving data and combine it with real-world datasets for training and testing Alpamayo-based applications.
As part of the launch, Nvidia is releasing an open dataset comprising more than 1,700 hours of driving data gathered across multiple regions and conditions, including rare and complex traffic scenarios. The company is also introducing AlpaSim, an open-source simulation framework available on GitHub, designed to recreate real-world driving environments for large-scale testing and validation of autonomous systems.
Nvidia said the Alpamayo ecosystem is intended to lower barriers to developing safer and more transparent autonomous driving technologies as competition in the self-driving and robotics sectors intensifies.