LmCast :: Stay tuned in

Amazon launches Trainium3

Recorded: Dec. 3, 2025, 3:04 a.m.

Original


Image Credits: Usis / Getty Images

Enterprise

Amazon releases an impressive new AI chip and teases an Nvidia-friendly roadmap  

Julie Bort

8:00 AM PST · December 2, 2025

Amazon Web Services, which has been building its own AI training chips for years now, just introduced a new version known as Trainium3 that comes with some impressive specs.
The cloud provider, which made the announcement Tuesday at AWS re:Invent 2025, also teased the next entry on its AI training roadmap: Trainium4, which is already in the works and will be able to work with Nvidia’s chips.

AWS used its annual tech conference to formally launch Trainium3 UltraServer, a system powered by the company’s state-of-the-art, 3-nanometer Trainium3 chip, as well as its homegrown networking tech. As you might expect, the third-generation chip and system offer big bumps in performance for AI training and inference over the second-generation chip, according to AWS.
AWS says the system is more than 4x faster, with 4x more memory, not just for training, but for delivering AI apps at peak demand. Additionally, thousands of UltraServers can be linked together to provide an app with up to 1 million Trainium3 chips — 10x the previous generation. Each UltraServer can host 144 chips, according to the company. 
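To put those scale claims in perspective, here is a quick back-of-the-envelope check using only the numbers AWS cited (144 chips per UltraServer, clusters of up to 1 million chips, a 10x jump over the prior generation). The short Python script below is an illustrative sketch of the implied math, not anything drawn from AWS documentation.

# Back-of-the-envelope math using the figures AWS cited at re:Invent 2025.
# The constants come from the announcement; everything derived below is a
# rough illustration, not an AWS-published specification.

CHIPS_PER_ULTRASERVER = 144          # Trainium3 chips hosted by one UltraServer
MAX_CHIPS_PER_CLUSTER = 1_000_000    # ceiling AWS says linked UltraServers can reach
PREVIOUS_GEN_FACTOR = 10             # "10x the previous generation"

# Roughly how many UltraServers it would take to reach the 1-million-chip ceiling.
ultraservers_needed = MAX_CHIPS_PER_CLUSTER / CHIPS_PER_ULTRASERVER
print(f"UltraServers needed to reach 1M chips: ~{ultraservers_needed:,.0f}")  # ~6,944

# The ceiling this implies for the previous generation, taking the 10x claim literally.
previous_gen_ceiling = MAX_CHIPS_PER_CLUSTER // PREVIOUS_GEN_FACTOR
print(f"Implied previous-generation ceiling: {previous_gen_ceiling:,} chips")  # 100,000

That works out to several thousand UltraServers per maximum-size cluster, which is consistent with AWS’s “thousands of UltraServers” framing.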
Perhaps more importantly, AWS says the chips and systems are also 40% more energy efficient than the previous generation. While the world races to build bigger data centers powered by astronomical gigawatts of electricity, data center giant AWS is trying to make systems that drink less, not more.
It is, obviously, in AWS’s direct interest to do so. But in classic cost-conscious Amazon fashion, the company promises that these systems save its AI cloud customers money, too.
AWS customers like Anthropic (in which Amazon is also an investor), Japan’s LLM maker Karakuri, SplashMusic, and Decart have already been using the third-gen chip and system and have significantly cut their inference costs, Amazon said.


AWS also presented a bit of a roadmap for the next chip, Trainium4, which is already in development. AWS promised the chip will provide another big step up in performance and support Nvidia’s NVLink Fusion high-speed chip interconnect technology.  
This means AWS’s Trainium4-powered systems will be able to interoperate with Nvidia GPUs and extend their performance, while still using Amazon’s homegrown, lower-cost server rack technology.
It’s worth noting, too, that Nvidia’s CUDA (Compute Unified Device Architecture) has become the de facto standard that all the major AI apps are built to support. The Trainium4-powered systems may make it easier to woo big AI apps built with Nvidia GPUs in mind to Amazon’s cloud.

Amazon did not announce a timeline for Trainium4. If the company follows previous rollout timelines, we’ll likely hear more about Trainium4 at next year’s conference.
Follow along with all of TechCrunch’s coverage of the annual enterprise tech event here.



Summarized

Amazon has unveiled Trainium3 UltraServer, a system built around its latest AI training chip, alongside a strategic roadmap indicating a move toward greater compatibility with Nvidia’s technology. The announcement, made at AWS re:Invent 2025, highlights a significant investment in Amazon’s own AI infrastructure, driven by the Trainium3 chip. The third-generation chip and system offer substantial gains over their predecessor, with a more than 4x increase in speed and a 4x boost in memory capacity for both training and inference, and the system is 40% more energy efficient than the previous generation.

Crucially, Amazon is positioning Trainium3 as a key element in its strategy to attract and retain prominent AI applications, many of which are currently built on Nvidia’s CUDA architecture. Trainium4, already in development, is intended to extend this approach by supporting Nvidia’s NVLink Fusion high-speed chip interconnect technology. That support would let Trainium4-powered systems interoperate with Nvidia GPUs and extend their performance while retaining Amazon’s lower-cost server rack technology, making it easier to bring CUDA-centric applications onto the AWS cloud.

Early adopters, including Anthropic, Karakuri, SplashMusic, and Decart, have already cut their inference costs significantly using the Trainium3 system. This reflects a deliberate move by Amazon not only to bolster its own AI capabilities but also to offer concrete economic advantages to its cloud customers, a core tenet of the company’s strategy. Amazon has not announced a timeline for Trainium4; if previous rollout patterns hold, more details are likely to come at next year’s re:Invent. Taken together, the announcements show Amazon pursuing both technological leadership and a bridge between its own infrastructure and Nvidia’s entrenched ecosystem in a rapidly evolving AI landscape.