Exostellar Enables AI Infrastructure Efficiency On AMD Instinct GPUs

SANTA CLARA, Calif., September 09, 2025 — Exostellar, a leader in self-managed AI infrastructure orchestration, today announced support for AMD solutions, bringing together open, high-performance AMD Instinct™ GPUs and Exostellar’s GPU-agnostic orchestration platform to meet enterprise demands for transparency, choice, and performance.

Why This Matters

As enterprises and OEMs seek more transparent and flexible compute ecosystems, the commitment from AMD to open standards and heterogeneous integration aligns well with Exostellar’s architectural approach. Exostellar’s heterogeneous xPU orchestration platform is designed to be fully GPU-agnostic, intelligently decoupling applications from underlying hardware to enable flexible scheduling across mixed infrastructure. This directly addresses a critical industry need: freedom of choice without vendor lock-in.

“Open ecosystems are key to building next-generation AI infrastructure,” said Anush Elangovan, Vice President, AI Software at AMD. “Together with Exostellar, we’re enabling advanced capabilities like topology-aware scheduling and resource bin-packing on AMD Instinct™ GPUs, helping enterprises maximize GPU efficiency and shorten time to value for AI workloads.”

Benefits at a Glance

Enterprises stand to realize a range of benefits now that Exostellar’s platform is enabled on AMD Instinct™ GPUs.

For infrastructure teams, it delivers centralized visibility across heterogeneous environments, dynamic GPU sizing, and optimized compute utilization—enabled by Exostellar’s fine-grained GPU slicing and the high-bandwidth AMD Instinct GPU architecture.
AI developers will experience reduced queuing times, smarter workload placement, and faster experimentation cycles, thanks to Exostellar’s advanced orchestration and intuitive UI/UX.
For business leaders, these improvements translate into lower total cost of ownership: fewer required nodes, better use of powerful AMD Instinct GPUs, and accelerated model deployment—all supported by Exostellar’s platform automation and hardware efficiency from AMD.

Exostellar’s Technical Differentiation

Unlike black-box Kubernetes solutions, Exostellar offers:

A superior UI/UX that simplifies cluster management and monitoring.
Workload-aware slicing with Exostellar’s GPU Optimizer on the AMD Instinct MI300X, enabling precise resource right-sizing; unlike KAI’s fractional mode it enforces isolation, and unlike NVIDIA’s MIG it remains vendor-agnostic.
Unique features, some unavailable in other open-source alternatives: workload-driven orchestration, resource-aware placement, and dynamic scheduling tailored for AMD Instinct GPUs.
These capabilities position Exostellar as a next-generation orchestrator that aligns with the AMD vision and elevates the value of our work together in the compute ecosystem.
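To make the resource bin-packing idea concrete: the sketch below shows a classic first-fit-decreasing pass that packs fractional GPU memory requests onto as few GPUs as possible. This is an illustrative toy, not Exostellar’s actual scheduler; the function name, request sizes, and the 192 GB capacity (MI300X-class HBM) are our own assumptions.

```python
# Illustrative only: first-fit-decreasing bin-packing of fractional GPU
# memory requests. Not Exostellar's actual scheduling algorithm.

def pack_requests(gpu_capacity_gb, requests_gb):
    """Assign each memory request (GB) to the first GPU with room, largest first."""
    free = []          # remaining free memory per GPU
    placement = {}     # gpu index -> list of requests placed on it
    for req in sorted(requests_gb, reverse=True):
        for i, room in enumerate(free):
            if req <= room:
                free[i] -= req
                placement.setdefault(i, []).append(req)
                break
        else:
            # No existing GPU fits this request: open a new one.
            free.append(gpu_capacity_gb - req)
            placement.setdefault(len(free) - 1, []).append(req)
    return placement

# Example: six slice requests packed onto 192 GB GPUs.
print(pack_requests(192, [96, 48, 48, 24, 120, 64]))
# → {0: [120, 64], 1: [96, 48, 48], 2: [24]}  (3 GPUs instead of 6)
```

Packing larger requests first tends to leave less stranded capacity, which is the intuition behind bin-packing-style placement on expensive accelerators.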

AMD Instinct GPUs: Memory Advantage Driving ROI

AMD Instinct GPUs leverage cutting-edge HBM3 and HBM3E technology. For example, AMD Instinct MI300X GPUs deliver up to 192 GB HBM3 with 5.3 TB/s bandwidth, the MI325X raises the bar to up to 256 GB HBM3E and 6 TB/s, and the current MI355X GPUs deliver up to 288 GB HBM3E with 8 TB/s bandwidth. This massive memory footprint enables larger model deployment, fewer nodes, and more efficient KV caching—benefits amplified by Exostellar’s fine-grained compute sizing and orchestration capabilities, leading to reduced infrastructure costs and faster time-to-value.
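A back-of-the-envelope sketch of the “fewer nodes” claim: given a total memory footprint for weights plus KV cache, per-GPU HBM capacity directly sets the GPU and node count. The 4 TB footprint and 8-GPUs-per-node figure below are illustrative assumptions, not AMD sizing guidance; the per-GPU capacities are the ones quoted above.

```python
# Illustrative node-count arithmetic; footprint and node shape are assumptions.
import math

def nodes_needed(total_mem_gb, gpu_mem_gb, gpus_per_node=8):
    """GPUs, then nodes, required to hold total_mem_gb of weights + KV cache."""
    gpus = math.ceil(total_mem_gb / gpu_mem_gb)
    return math.ceil(gpus / gpus_per_node)

# Hypothetical 4 TB footprint across the three HBM capacities quoted above:
for gpu_mem in (192, 256, 288):   # MI300X / MI325X / MI355X, GB per GPU
    print(f"{gpu_mem} GB per GPU -> {nodes_needed(4096, gpu_mem)} node(s)")
# → 192 GB: 3 nodes; 256 GB: 2 nodes; 288 GB: 2 nodes
```

The same footprint fits in fewer nodes as per-GPU memory grows, which is the cost lever the orchestration layer then optimizes further through right-sizing.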

“Our goal has always been to help customers get the most out of their AMD investments. With this collaboration, Exostellar extends that mission—because it’s not just about raw compute, but about next‑level orchestration, utilization, and ROI,” said Tony Shakib, Chairman and CEO of Exostellar.