AMD Challenges Nvidia: New MI350 Chips and Helios Server with Open Architecture
At the “Advancing AI” conference in San Jose, AMD introduced its new Instinct MI350 chip series, announced the future MI400 series, and unveiled the Helios server, designed to support 72 GPUs. The company is betting on open architecture and energy efficiency as it strives to compete with market leader Nvidia.
The MI350X chip, based on the CDNA 4 architecture and manufactured on a 3-nanometer process, offers up to a 35-fold increase in inference performance over the previous MI300 series, according to AMD. It carries 288 GB of HBM3E memory with up to 8 TB/s of bandwidth and supports the FP4 and FP6 data formats, making it especially efficient for AI inference workloads. AMD also claims the MI350X delivers up to 2.2 times the inference performance of Nvidia's Blackwell B200.
The Helios server, scheduled for release in 2026, will be equipped with 72 MI400-series chips, offering an open alternative to Nvidia’s proprietary solutions. It uses open networking standards, ensuring compatibility with third-party hardware and simplifying integration into existing infrastructures.
OpenAI, Meta, Oracle, Microsoft, and xAI have all expressed support for AMD, planning to incorporate its chips into their AI infrastructures. Sam Altman, CEO of OpenAI, stated that AMD’s architecture will become central to future models, confirming close cooperation in the development of the MI450.
AMD has set a goal of improving AI system energy efficiency 20-fold by 2030, relative to 2024 levels. According to the company, this would cut the electricity needed to train a large AI model by 95% and shrink the required server footprint from more than 275 racks to a single rack.
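The 95% figure follows directly from the 20-fold target: a fixed workload on hardware that is 20 times more efficient needs one twentieth of the energy. A minimal sketch of that arithmetic (the linear scaling model is an illustrative assumption, not AMD's methodology):

```python
# Sanity-check the energy figures cited above.
# Assumption: energy for a fixed workload scales inversely with efficiency.
efficiency_gain = 20                        # AMD's 20x efficiency target vs. 2024
relative_energy = 1 / efficiency_gain       # same workload needs 1/20 the energy
reduction_pct = (1 - relative_energy) * 100

print(f"Energy reduction: {reduction_pct:.0f}%")  # Energy reduction: 95%
```

Note that the rack consolidation claim (275+ racks down to one) implies roughly a 275-fold density gain, so it must rest on per-rack compute improvements beyond the 20x energy-efficiency figure alone.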
With these announcements, AMD takes a significant step in its competition with Nvidia, pairing high-performance, energy-efficient hardware with an open architecture for artificial intelligence applications.