AMD sees AI as the next layer of computing

At CES 2026, AMD opened the show with a keynote that made one thing clear: the company sees AI not as a feature layered on top of computing, but as the next phase of the computing stack itself. From massive data center infrastructure to personal PCs, embedded systems, and classrooms, AMD used the stage to explain how its hardware, software, and partnerships are starting to turn AI from a promise into something people can actually use.

Chair and CEO Lisa Su framed the moment as a transition point. AI adoption is accelerating quickly, and the demands it places on computing are growing just as fast. Training larger models, running inference at scale, and bringing AI closer to users all require new approaches to performance, efficiency, and system design. AMD’s answer is an end-to-end portfolio that stretches from the largest systems imaginable down to the devices people interact with every day.

A look at what yotta-scale computing actually means

At the top of the stack, AMD provided an early look at Helios, its rack-scale platform designed as a blueprint for what the company calls yotta-scale AI infrastructure. The idea is rooted in scale: global compute capacity today sits around 100 zettaflops, but AMD expects that number to climb beyond 10 yottaflops within the next five years as AI training and inference workloads continue to explode.
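As a back-of-envelope check on those figures (a sketch of the implied math, not AMD's own projection model): 1 yottaflop is 1,000 zettaflops, so growing from 100 zettaflops to 10 yottaflops in five years is a 100x jump, or roughly a 2.5x compound annual growth rate.

```python
# Implied growth rate behind AMD's "yotta scale" projection.
# Keynote figures: ~100 zettaflops today, >10 yottaflops in five years.
# 1 yottaflop = 1,000 zettaflops.
today_zflops = 100
target_zflops = 10 * 1000  # 10 yottaflops expressed in zettaflops
years = 5

growth_factor = target_zflops / today_zflops   # 100x overall
annual_rate = growth_factor ** (1 / years)     # compound annual growth

print(f"Overall growth: {growth_factor:.0f}x")
print(f"Implied annual growth: {annual_rate:.2f}x per year")
```

That sustained ~2.5x-per-year pace is well beyond what faster chips alone can deliver, which is why the keynote kept returning to system-level design.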

Getting there is not just about building faster chips. AMD argues it requires open, modular system designs that can evolve over time, scale efficiently, and connect thousands of accelerators into a single, unified system. Helios is built around that philosophy. In a single rack, it is designed to deliver up to three AI exaflops of performance, with the bandwidth and energy efficiency needed to train trillion-parameter models.

The platform brings together Instinct MI455X GPUs, EPYC “Venice” CPUs, and Pensando “Vulcano” networking, all tied together through the ROCm software ecosystem. The emphasis on openness is intentional. AMD wants Helios to be a foundation that partners can build on rather than a closed system locked to a single generation of hardware.

Alongside Helios, AMD expanded its data center AI roadmap. The company introduced the Instinct MI440X GPU, a new member of the MI400 Series designed specifically for on-premises enterprise AI deployments. Unlike massive hyperscale-focused accelerators, the MI440X is meant to fit into existing data center environments, supporting training, fine-tuning, and inference in a compact eight-GPU configuration.

The MI440X complements the MI430X GPUs announced earlier, which target hybrid AI and high-precision scientific workloads. Those chips are already slated for large-scale systems like the Discovery supercomputer at Oak Ridge National Laboratory and France’s Alice Recoque exascale system.

Looking ahead, AMD also previewed the next generation Instinct MI500 Series GPUs, planned for launch in 2027. Built on the upcoming CDNA 6 architecture, a 2nm manufacturing process, and HBM4E memory, the MI500 Series is expected to deliver dramatic gains in AI performance compared to today’s accelerators. While exact real world numbers will take time to materialize, the roadmap signals how aggressively AMD expects AI workloads to grow.

Bringing AI closer to where people actually use it

While the data center story sets the ceiling, much of AMD’s keynote focused on where AI will matter most to everyday users: PCs and edge devices.

AMD introduced new Ryzen AI platforms designed to make AI a native part of the PC experience. The Ryzen AI 400 Series and Ryzen AI PRO 400 Series deliver up to 60 TOPS of NPU performance, enough to comfortably handle local AI tasks while still supporting cloud-connected workflows. Full ROCm support means developers can move workloads more easily between servers and client devices without rethinking their software stack.

The first systems based on these platforms are expected to ship in January 2026, with broader OEM availability following in the first quarter. AMD’s goal is straightforward: AI features should feel responsive, efficient, and always available, not gated behind constant cloud access.

AMD also expanded its high-end on-device AI offerings with new Ryzen AI Max+ processors. With support for up to 128GB of unified memory, these chips can run models as large as 128 billion parameters locally. That capability opens the door to advanced inference, creative workflows, and even high-end gaming in thin-and-light laptops and small-form-factor desktops without relying on discrete GPUs.
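A rough weights-only estimate (our illustration, not an AMD figure) shows why 128GB of unified memory lines up with a 128-billion-parameter model: at 8-bit precision the weights alone occupy about 128GB, so running such a model locally implies quantization at or below 8 bits per weight, before accounting for KV cache and activation overhead.

```python
# Back-of-envelope weights-only memory footprint for a
# 128B-parameter model at common precisions.
# Ignores KV cache, activations, and runtime overhead.
params = 128e9  # 128 billion parameters

bytes_per_param = {"fp16": 2, "int8": 1, "int4": 0.5}

for fmt, nbytes in bytes_per_param.items():
    gigabytes = params * nbytes / 1e9
    print(f"{fmt}: ~{gigabytes:.0f} GB for weights alone")
```

In practice, headroom for the KV cache and the OS means 4-bit or mixed-precision quantization is the more realistic fit for a 128B model in 128GB of shared memory.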

For developers who want a dedicated local AI system, AMD introduced the Ryzen AI Halo developer platform. Halo is a compact desktop PC built around Ryzen AI Max+ processors and designed to deliver strong performance per dollar for AI workloads. The focus here is practicality. Halo is meant to be easy to deploy, easy to develop on, and powerful enough to meaningfully reduce reliance on cloud resources. AMD expects it to be available in the second quarter of 2026.

AI is also moving beyond traditional PCs, and AMD addressed that shift with the introduction of Ryzen AI Embedded processors. The new P100 and X100 Series target edge use cases like automotive digital cockpits, smart healthcare devices, autonomous systems, and humanoid robotics. These processors are designed for environments where power efficiency and reliability matter as much as raw performance, bringing AI closer to the physical world where decisions often need to happen in real time.

Partnerships, policy, and the people building what comes next

Throughout the keynote, AMD highlighted partnerships across research, healthcare, aerospace, and creative industries, including OpenAI, Luma AI, Liquid AI, World Labs, Blue Origin, AstraZeneca, Absci, and Illumina. Each example reinforced the same idea: AI progress depends on collaboration across hardware, software, and real world domains.

That broader view extended into public policy and education. Lisa Su was joined on stage by Michael Kratsios, Director of the White House Office of Science and Technology Policy, to discuss the Genesis Mission, a public-private initiative aimed at strengthening U.S. leadership in AI. As part of that effort, AMD is powering new AI supercomputers at Oak Ridge National Laboratory and working with partners to expand access to AI education.

AMD also announced a $150 million commitment to bring AI into more classrooms and communities, supporting hands-on learning and early exposure to the technology. The keynote closed by recognizing more than 15,000 students who participated in the AMD AI Robotics Hackathon with Hack Club, a reminder that the next phase of AI will be shaped not just by hardware roadmaps, but by who gets the chance to build with them.

Taken together, AMD’s CES 2026 keynote offered a grounded but optimistic view of where AI is headed. The company is betting that the future of AI computing will be defined by scale, openness, and proximity to users. Whether that vision plays out exactly as planned remains to be seen, but AMD is clearly positioning itself to be part of every layer of that future, from the largest data centers to the devices and people shaping what comes next.
