Amazon launches Trainium3
Recorded: Dec. 3, 2025, 3:04 a.m.
Original:
Amazon releases an impressive new AI chip and teases an Nvidia-friendly roadmap
Julie Bort | 8:00 AM PST · December 2, 2025
Amazon Web Services, which has been building its own AI training chips for years now, just introduced a new version known as Trainium3 that comes with some impressive specs. AWS used its annual tech conference to formally launch the Trainium3 UltraServer, a system powered by the company's state-of-the-art, 3-nanometer Trainium3 chip as well as its homegrown networking tech. As you might expect, the third-generation chip and system offer big bumps in performance for AI training and inference over the second generation, according to AWS.

AWS also presented a bit of a roadmap for the next chip, Trainium4, which is already in development. AWS promised the chip will provide another big step up in performance and support Nvidia's NVLink Fusion high-speed chip interconnect technology. Amazon did not announce a timeline for Trainium4. If the company follows previous rollout timelines, we'll likely hear more about Trainium4 at next year's conference.
Summarized:

Amazon has unveiled its latest AI training chip, Trainium3, and the Trainium3 UltraServer system built around it, alongside a strategic roadmap signaling greater compatibility with Nvidia's technology. The announcement, made at AWS re:Invent 2025, highlights a significant investment in Amazon's own AI infrastructure. The third-generation chip offers substantial performance gains over its predecessor, delivering a 4x increase in speed and a 4x boost in memory capacity for both training and inference, while cutting power consumption by 40% compared to the previous generation.

Crucially, Amazon is positioning Trainium3 as a key element in its strategy to attract and retain prominent AI applications, many of which are currently built on Nvidia's CUDA architecture. Trainium4, already in development, is intended to amplify this approach by supporting Nvidia's NVLink Fusion high-speed chip interconnect technology, enabling tighter integration with Nvidia GPUs while retaining Amazon's lower-cost server rack technology. The aim is to make it easier for those applications to run in the AWS cloud. Early adopters, including Anthropic, Karakuri, SplashMusic, and Decart, have already reported significant cost reductions in their inference workloads on the Trainium3 system.

This is a deliberate move by Amazon to bolster its own AI capabilities while offering compelling economics to its cloud customers, a core tenet of Amazon's business strategy. While no specific timeline for Trainium4 has been announced, Amazon indicated that further details are expected at a subsequent re:Invent conference, in line with previous rollout patterns. The development underscores a push for both technological leadership and competitive advantage in a rapidly evolving AI landscape, targeting the areas where Amazon can bridge the gap between its own infrastructure and Nvidia's established dominance.