Self-driving vehicles have been promised for a long time. The technology has advanced significantly, but wide-scale deployment has remained perpetually just around the corner. That corner is starting to look a lot closer. NVIDIA announced at GTC this week that four major automakers are now building level 4 autonomous vehicles on its DRIVE Hyperion platform, that a major robotaxi expansion with Uber is planned across 28 cities by 2028, and that a new set of tools and models is making it easier and faster to develop, test, and deploy autonomous driving systems at scale.
Level 4 autonomy means a vehicle can handle all driving tasks within a defined operational domain without any human intervention. It is the threshold that separates genuine self-driving vehicles from the advanced driver assistance systems that most vehicles on the road today are equipped with. Getting there requires not just capable AI models but a full stack of validated hardware, safety architecture, and simulation tools. NVIDIA is building that stack and persuading a growing list of the world’s largest automotive companies to standardize on it.
The Automakers and What They Are Building
BYD, Geely, and Nissan are all developing next-generation level 4 autonomous vehicle programs built on NVIDIA DRIVE Hyperion’s production-ready compute and sensor architecture. Nissan’s program is powered by software from Wayve, the UK-based autonomous driving company known for its learning-based approach to AV development.
Isuzu is putting the platform to a different use, collaborating with TIER IV on level 4 autonomous bus development using the NVIDIA DRIVE AGX Thor system-on-a-chip, a core component of DRIVE Hyperion. The commercial vehicle angle is significant because autonomous buses and shuttles operating on fixed or semi-fixed routes represent one of the more near-term viable paths to level 4 deployment in real-world conditions.
NVIDIA is also working with Amazon to integrate Alexa Custom Assistant with multimodal edge AI capabilities on NVIDIA DRIVE AGX accelerated compute. The goal is to give automakers a way to deliver ambient in-cabin intelligence with privacy protections and enhanced performance built in, adding a layer of connected assistant functionality to the autonomous vehicle experience.
The Uber Partnership and the Robotaxi Timeline
The most immediately consequential announcement for consumers is the expanded partnership between NVIDIA and Uber. The two companies are planning to launch a fleet of autonomous vehicles powered by the full NVIDIA DRIVE AV software stack across 28 cities on four continents by 2028. The rollout begins in Los Angeles and the San Francisco Bay Area in the first half of 2027.
This fleet will run on DRIVE Hyperion hardware and tap into both the NVIDIA Alpamayo open models and the NVIDIA Halos OS safety architecture to accelerate development and deployment of safe, scalable robotaxi services. The partnership positions Uber to operate one of the most expansive autonomous ride-hailing networks powered by NVIDIA’s full stack, and it gives NVIDIA a high-profile, high-stakes deployment that will test its AV platform at real-world scale across major urban markets.
Beyond Uber, mobility providers Bolt, Grab, and Lyft are also leveraging DRIVE Hyperion to accelerate their own autonomous mobility initiatives. The presence of multiple major ride-hailing platforms in the DRIVE Hyperion ecosystem signals broader industry momentum toward software-defined robotaxi fleets rather than vertically integrated AV programs where each company builds its own hardware and software from scratch.
NVIDIA Halos OS: The Safety Architecture Underneath It All
One of the most technically significant announcements in the autonomous vehicle space is NVIDIA Halos OS, a unified safety architecture for AI-driven vehicles that provides a production-ready safety foundation for level 4 autonomy on DRIVE Hyperion.
Safety in autonomous vehicles is not just about whether the AI makes good decisions. It requires a formal, certifiable architecture that guarantees the system behaves predictably within defined parameters even when components fail or encounter unexpected conditions. Halos OS is built on ASIL D-certified DriveOS foundations; ASIL D is the highest automotive safety integrity level defined by the ISO 26262 standard. Its three-layer safety architecture integrates safety middleware and deployable safety applications, including an NCAP five-star active safety stack, providing the guardrails that let reasoning-based AI systems operate with verifiable, automotive-grade integrity at scale.
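NVIDIA has not published Halos OS internals alongside these announcements, but the guardrail idea the description implies is a well-known doer/checker pattern: a learned planner proposes, and a small deterministic layer validates the proposal against a verified envelope before it reaches the vehicle. The Python sketch below is purely illustrative; the types, the acceleration bound, and the fallback behavior are all assumptions, not Halos code.

```python
from dataclasses import dataclass

@dataclass
class Trajectory:
    waypoints: list[tuple[float, float]]  # planned (x, y) path, meters
    max_lateral_accel: float              # peak lateral acceleration, m/s^2

# Illustrative envelope bound; a certified stack would derive such limits
# from its safety case rather than hard-coding them like this.
LATERAL_ACCEL_LIMIT = 3.0

def checked_plan(proposal: Trajectory, fallback: Trajectory) -> Trajectory:
    """Deterministic checker: accept the learned planner's proposal only if
    it stays inside the verified envelope; otherwise take the fallback
    (e.g. a minimal-risk maneuver such as slowing and holding the lane)."""
    if proposal.waypoints and proposal.max_lateral_accel <= LATERAL_ACCEL_LIMIT:
        return proposal
    return fallback

proposal = Trajectory([(0.0, 0.0), (5.0, 0.4)], max_lateral_accel=4.5)
safe_stop = Trajectory([(0.0, 0.0), (2.0, 0.0)], max_lateral_accel=0.5)
print(checked_plan(proposal, safe_stop))  # limit exceeded -> fallback wins
```

The point of the pattern is that only the small checker needs to be certified to the highest integrity level; the large, frequently updated AI model can evolve behind it.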
To support the rigorous validation requirements of the AV safety ecosystem, a new group of companies, including AEye, Flex, Gatik, Hesai, Lucid, MIRA, PlusAI, Qt Group, Saphira, and Valeo, is joining the NVIDIA Halos AI Systems Inspection Lab, which is designed to continuously validate safety across the AV development and deployment stack.
Alpamayo 1.5: A Smarter, More Flexible Driving Model
NVIDIA also introduced Alpamayo 1.5, a significant upgrade to its open portfolio of AI models for autonomous vehicle development. The original Alpamayo launched earlier this year and has already been downloaded by more than 100,000 automotive developers worldwide, which gives a sense of how quickly it has become a reference point for AV researchers and engineers.
Alpamayo 1.5 takes driving video, ego-motion history, navigation guidance, and natural language prompts as inputs and outputs driving trajectories along with reasoning traces that explain why the model made the decisions it did. That combination of behavioral output and interpretable reasoning is particularly valuable for development and validation, because it allows engineers to understand what the model is doing and why, not just whether it is producing correct outputs.
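NVIDIA has not published a call signature in these announcements, so the sketch below is hypothetical Python that simply mirrors the described contract: video, ego-motion history, navigation guidance, and a prompt go in; a trajectory and a reasoning trace come out. Every name is an assumption, and the model call is stubbed so the shape runs end to end.

```python
from dataclasses import dataclass

@dataclass
class DrivingQuery:
    camera_frames: list          # recent multi-camera video frames
    ego_history: list            # past poses and velocities of the vehicle
    navigation_guidance: str     # e.g. "take the second exit"
    prompt: str = ""             # optional natural-language constraint

@dataclass
class DrivingResponse:
    trajectory: list[tuple[float, float]]  # planned (x, y) waypoints
    reasoning_trace: list[str]             # human-readable decision rationale

def drive(query: DrivingQuery) -> DrivingResponse:
    # Stand-in for the real model call; returns a canned answer so the
    # contract is runnable without the actual model.
    return DrivingResponse(
        trajectory=[(0.0, 0.0), (4.0, 0.1), (8.0, 0.3)],
        reasoning_trace=["cyclist ahead on the right", "yielding before merge"],
    )
```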
The ability to steer model behavior through navigation guidance and natural language prompts is also new and practically useful. It means developers can specify constraints and behavioral preferences directly through text rather than having to retrain the model to adjust its behavior. Vehicles can also learn more effectively from rare or unpredictable events, such as unusual road hazards or complex human behavior, by replaying scenarios, querying the model’s decision-making process, and applying updated guidance through prompts and navigation settings.
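Building on the hypothetical interface above, the replay-and-reprompt loop this paragraph describes might look like the following; again, none of these calls are Alpamayo’s real API.

```python
# Replay a logged edge case, inspect the model's stated reasoning, then
# re-run with an added behavioral constraint -- no retraining involved.
logged_frames: list = []   # placeholder for frames captured from the rare event
logged_ego: list = []      # placeholder for the recorded ego-motion history

query = DrivingQuery(
    camera_frames=logged_frames,
    ego_history=logged_ego,
    navigation_guidance="continue straight",
)
print(drive(query).reasoning_trace)   # why did it behave that way?

query.prompt = "leave extra clearance for pedestrians near parked cars"
adjusted = drive(query)               # same model, updated guidance via text
print(adjusted.trajectory)
```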
Alpamayo 1.5 adds flexible multi-camera support and configurable camera parameters, which simplifies reusing the same AI driving stack across different vehicle lines and sensor configurations. That kind of hardware flexibility is important for automakers managing large and diverse product lineups, where the cost of maintaining separate AI stacks for different vehicles can become significant.
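To make the idea concrete, here is one way a sensor layout can live in data rather than in the model, so a single driving stack serves multiple vehicle lines. The vehicle lines and camera values below are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class CameraConfig:
    name: str
    horizontal_fov_deg: float
    resolution: tuple[int, int]
    mount_offset_m: tuple[float, float, float]  # x, y, z from vehicle origin

# Two invented vehicle lines sharing one driving stack: only the declarative
# sensor description changes between them, not the model itself.
COMPACT_SEDAN = [
    CameraConfig("front_wide", 120.0, (1920, 1080), (1.8, 0.0, 1.3)),
    CameraConfig("rear", 110.0, (1280, 720), (-1.0, 0.0, 1.2)),
]
FLAGSHIP_SUV = COMPACT_SEDAN + [
    CameraConfig("side_left", 100.0, (1280, 720), (0.5, 0.9, 1.1)),
    CameraConfig("side_right", 100.0, (1280, 720), (0.5, -0.9, 1.1)),
]
```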
NuRec: Testing Self-Driving AI Without Building Endless Simulations by Hand
Testing and validating autonomous driving systems requires exposing them to an enormous range of scenarios, including many that are rare, dangerous, or difficult to reproduce in physical testing. Building those scenarios manually in traditional simulation environments is time-consuming and expensive. NVIDIA Omniverse NuRec is a set of technologies designed to solve this problem by using real-world data to automatically reconstruct and render interactive driving simulations.
NuRec uses 3D Gaussian Splatting, a technique for representing 3D scenes from camera data, to ingest real-world driving footage and generate high-fidelity interactive simulations from it. Developers can use those simulations to stress-test reasoning behaviors and simulate edge cases without the cost and time of building simulation environments from scratch. NuRec will be generally available on the NVIDIA NGC catalog.
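NuRec’s internals are not public, but the representation it builds on is well documented: a Gaussian splat scene is a cloud of Gaussians that are depth-sorted and alpha-composited per pixel. The toy below collapses that to isotropic 2D splats and a single pixel, a simplification of ours rather than anything from NuRec, just to show the core compositing step.

```python
import math
from dataclasses import dataclass

@dataclass
class Gaussian2D:
    # One splat after projection to the image plane: screen-space mean,
    # isotropic spread, color, opacity, and depth for sorting. Real 3DGS
    # uses anisotropic 3D covariances; this is deliberately simplified.
    x: float
    y: float
    sigma: float
    color: tuple[float, float, float]
    opacity: float
    depth: float

def render_pixel(px: float, py: float, splats: list) -> tuple:
    """Front-to-back alpha compositing, the core of splat rendering."""
    color = [0.0, 0.0, 0.0]
    transmittance = 1.0
    for g in sorted(splats, key=lambda g: g.depth):   # nearest splats first
        d2 = (px - g.x) ** 2 + (py - g.y) ** 2
        alpha = g.opacity * math.exp(-d2 / (2.0 * g.sigma ** 2))
        for c in range(3):
            color[c] += transmittance * alpha * g.color[c]
        transmittance *= 1.0 - alpha                  # occlusion accumulates
    return tuple(color)

# A red splat in front of a blue one: the near splat dominates the pixel.
splats = [
    Gaussian2D(10.0, 10.0, 3.0, (1.0, 0.2, 0.2), 0.8, depth=5.0),
    Gaussian2D(11.0, 10.0, 4.0, (0.2, 0.2, 1.0), 0.9, depth=9.0),
]
print(render_pixel(10.5, 10.0, splats))
```

Fitting millions of such Gaussians to real driving footage is what turns a recorded drive into a scene that can be re-rendered from new viewpoints, which is what makes the reconstructed simulation interactive rather than a fixed replay.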
The tool has already attracted integrations from several major AV toolchain providers. 51WORLD, dSPACE, and Foretellix have all integrated NuRec into their simulation solutions. Voxel51 is using NuRec in its Physical AI Workbench for customers including Porsche Research. Parallel Domain is using the NuRec Fixer model to enhance its reconstruction pipeline. Mcity, the autonomous vehicle research facility operated by the University of Michigan, is using NuRec to build a Gaussian-based digital twin of its physical test track for use by the broader AV industry and research community.
What This Means for the Road Ahead
The scale of the DRIVE Hyperion ecosystem that NVIDIA has assembled reflects how the autonomous vehicle industry is evolving. Rather than each automaker and mobility provider building proprietary full stacks from the ground up, a growing number are standardizing on a shared hardware and software reference architecture and differentiating at the application and software layer on top of it.
For consumers, the most tangible near-term outcome of all this activity is the Uber robotaxi rollout beginning in Los Angeles and the San Francisco Bay Area in 2027. That deployment will be the first large-scale public test of what NVIDIA’s full autonomous driving stack looks like in everyday urban conditions, with real passengers and real consequences. It is the kind of deployment that either validates the platform at scale or surfaces the limitations that lab testing and limited pilots cannot fully reveal.
Jensen Huang framed the broader opportunity in terms that reflect how seriously NVIDIA is treating autonomous vehicles as a long-term business. He described autonomous vehicles as the first multitrillion-dollar robotics industry and said everything that moves will eventually be autonomous. Whether that timeline unfolds over five years or twenty, NVIDIA is clearly building its AV platform for the long run, and the automakers and mobility providers signing on suggest they believe the same thing.