Amazon’s in-house chip operation now generates more than $20 billion a year in cloud compute revenue, a run rate that exceeds the entire data-center business of either Intel or AMD. CEO Andy Jassy disclosed the figure during the company’s first-quarter earnings call in late April 2026, offering the most concrete evidence yet that Amazon’s multiyear semiconductor investment is producing returns at a scale few predicted when the effort began.
The disclosure, which analysts and industry observers have been dissecting through early May 2026, pushed Amazon shares higher in after-hours trading. It also sharpened a question the chip industry has been watching closely: whether Amazon will begin selling those processors to outside buyers.
The numbers behind the milestone
Amazon’s custom chip revenue flows from two product families. Graviton, now in its fourth generation, powers a growing share of AWS general-purpose compute instances and competes directly with server processors from Intel and AMD. Trainium, purpose-built for training and running large AI models, targets the GPU-dominated territory Nvidia controls. Both chip lines are designed by Amazon’s Annapurna Labs subsidiary and manufactured by external foundries. Until now, they have been used exclusively inside Amazon’s own data centers.
The $20 billion run-rate figure captures cloud compute revenue generated by instances running on these chips, not chip sales in the traditional semiconductor sense. That distinction is important. For comparison, AMD reported $12.6 billion in data-center segment revenue for fiscal 2024, according to its annual filing. Intel’s Data Center and AI group generated about $12.3 billion over the same period. “Our custom silicon is now at a revenue run rate of over $20 billion,” Jassy told analysts on the call, a statement that, if accurate, places Amazon’s chip effort ahead of both incumbents.
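The “run rate” framing annualizes a short period’s revenue rather than reporting an audited full-year total. As a minimal sketch of the arithmetic, assuming the common convention of multiplying one quarter’s revenue by four (the $5 billion input below is a hypothetical implied figure, not a number Amazon disclosed):

```python
def annualized_run_rate(quarterly_revenue_billions: float) -> float:
    """Extrapolate a single quarter's revenue to a full-year pace.

    This is the standard 'run rate' convention; it assumes the
    quarter's pace holds for a year and says nothing about growth
    or seasonality within that year.
    """
    return quarterly_revenue_billions * 4


# Hypothetical: a >$20B annual run rate implies a quarter of roughly
# $5B or more attributed to custom-silicon instances.
implied_quarter = 5.0
print(annualized_run_rate(implied_quarter))  # 20.0
```

The same convention applies to the separate $15 billion AI services figure: both are quarterly paces extrapolated forward, not trailing-twelve-month results.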
Separately, AWS said its broader AI services revenue reached an annualized run rate above $15 billion in the first quarter. Analyst firm Evercore flagged that AI figure as the standout data point from the earnings report. The two numbers overlap but are not interchangeable: the $20 billion chip figure includes Graviton instances used for non-AI workloads, while the $15 billion AI services number reflects a narrower slice of machine-learning and generative-AI products sold through AWS.
Jassy also described a longer-term aspiration of $50 billion for the chip business, though he did not attach a timeline or detail the assumptions behind it. That ambition, paired with the current run rate, was enough to lift the stock. Analysts will need considerably more detail before they can model the path from $20 billion to $50 billion with any confidence.
External chip sales: the next frontier
What Amazon does with this scale next may matter more than the milestone itself. TrendForce, a semiconductor industry research firm, reported that Amazon is weighing direct sales of its in-house chips to external customers. If the company follows through, it would convert what has been a cost-optimization tool into a standalone product line, placing Amazon in direct competition with Nvidia, Intel, and AMD as a chip vendor rather than just a cloud provider.
That move would also create an uncomfortable tension. Companies purchasing Graviton or Trainium processors would be buying silicon from a firm that simultaneously competes with them in cloud services. Whether enterprise buyers can accept that dynamic is an open question. Google and Microsoft have not had to answer it with their own custom chips: Google’s TPUs and Microsoft’s Maia accelerator remain internal to their respective cloud platforms.
No pricing, partner announcements, or external product roadmap has been confirmed. TrendForce’s language, “weighs,” signals internal deliberation rather than a finalized plan.
The same TrendForce report noted that Anthropic, the AI company in which Amazon holds a multibillion-dollar investment, is exploring its own custom silicon. Anthropic runs significant workloads on AWS today, and a move toward proprietary chip design could eventually reduce its dependence on Amazon’s Trainium accelerators. Whether that effort is coordinated with Amazon or represents an independent hedge is not yet clear, but it adds another variable to the economics of Amazon’s chip business. If one of its largest AI customers begins designing its own processors, the demand picture shifts.
What investors still don’t know
A $20 billion run rate is a striking number, but several gaps remain in the picture Amazon has presented.
Amazon does not break out chip-specific revenue as a line item in its SEC filings. The figure comes from Jassy’s remarks on an earnings call, a setting governed by securities law where material misstatements carry real legal consequences, but it is not an audited number. Investors are relying on management’s characterization of how much AWS revenue is attributable to workloads running on custom silicon, a calculation that involves assumptions about instance pricing, utilization rates, and internal allocation.
The growth trajectory is also difficult to pin down. Industry analysts have estimated that the run rate roughly doubled from a year earlier, consistent with the scale Amazon described in prior periods, but the company has not published a specific baseline in regulatory documents.
On capital expenditure, Amazon signaled confidence that its heavy spending on chip development and data-center infrastructure is generating durable returns. Seeking Alpha noted this capex-returns discussion as a shift in how Amazon frames its hardware investment story for Wall Street. But no specific return-on-investment figures, payback periods, or margin data for the chip business have been disclosed. The gap between “we are seeing returns” and a detailed financial profile is wide.
Then there is the competitive response. Nvidia, Intel, and AMD have not commented publicly on Amazon’s chip expansion. How those companies react, whether through pricing adjustments, accelerated product cycles, or tighter partnerships with rival cloud providers, will shape how much of the addressable market Amazon can realistically capture. That silence is notable in itself.
Why it matters now
Amazon is not the only hyperscaler designing its own processors. Google has deployed TPUs for nearly a decade, and Microsoft unveiled its Maia AI accelerator in late 2023. But neither company has disclosed custom-chip revenue at the scale Jassy described. If the $20 billion figure holds up under scrutiny, Amazon’s silicon operation is not merely competitive with its in-house peers. It is operating at a level that puts it in the conversation with the largest dedicated semiconductor companies in the world.
For cloud customers, the practical takeaway is concrete. AWS pricing, performance benchmarks, and product roadmaps will increasingly reflect the economics of in-house chips rather than third-party processors. As Amazon’s scale advantages in chip production grow, the cost calculus is likely to tilt further toward workloads running on Graviton and Trainium instances.
For the broader chip industry, the stakes are higher still. A $20 billion internal chip business is formidable on its own. A $20 billion chip business that starts selling externally would be disruptive. And a $50 billion chip business, if Jassy’s aspiration materializes, would redraw the competitive map of the semiconductor industry. The current numbers are real. The forward-looking story is still taking shape.