We tested both processors under real customer workloads for 90 days. The EPYC won on performance-per-dollar, power efficiency, and memory bandwidth — but the Xeon still has its place in our lineup.
Every 18 months we evaluate the next generation of server hardware for our fleet. This cycle, we ran a head-to-head comparison between the AMD EPYC 9654 (96 cores, 384MB L3 cache, 360W TDP) and the Intel Xeon w9-3495X (56 cores, 105MB L3 cache, 350W TDP) using real customer workload profiles captured from our production fleet over the preceding quarter.
We didn't use synthetic benchmarks. Instead, we replayed anonymized I/O patterns, network traffic profiles, and compute loads from actual VPS instances, dedicated servers, and Kubernetes worker nodes. The workloads included web serving (Nginx + PHP-FPM), database operations (PostgreSQL with mixed read/write), container orchestration (500+ pods per node), and ML inference (PyTorch serving on CPU).
Both processors were tested in identical chassis — our custom 1U Supermicro builds with 512GB DDR5-4800, dual Mellanox ConnectX-7 100GbE NICs, and 4x Samsung PM9A3 3.84TB NVMe drives. We ran each test for 30 days minimum per processor to account for thermal throttling patterns, memory controller behavior under sustained load, and firmware-level power management quirks that only manifest over long test windows.
The headline number: the EPYC 9654 delivered 38% more aggregate throughput across our mixed workload profile compared to the Xeon w9-3495X. This isn't surprising given the core count advantage (96 vs 56), but the per-core delta was smaller than expected.
Single-threaded performance was within 4% between the two processors, with the Xeon slightly ahead in integer workloads and the EPYC slightly ahead in floating-point. For the vast majority of hosting workloads — web servers, application servers, databases — single-threaded performance doesn't matter as much as aggregate throughput because customers are running multiple processes and containers simultaneously.
Where the EPYC truly separated itself was memory bandwidth. The 9654 has 12 DDR5 channels per socket compared to the Xeon's 8 channels. For our workloads, this translated to 28% higher memory throughput under realistic access patterns — not the theoretical maximum, but measured under production-like memory allocation behavior with dozens of VPS instances sharing a single host.
Memory bandwidth matters more than most people realize in multi-tenant hosting. When you're running 40+ VPS instances on a single host, each with independent memory access patterns, the memory subsystem becomes the bottleneck long before the CPU cores are saturated. The EPYC's additional memory channels give us meaningfully more headroom before we hit noisy-neighbor effects.
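A quick way to see how close a host is to the memory wall is a STREAM-style kernel. This numpy sketch is not our production harness — array size and repetition count are illustrative assumptions — but it estimates sustained bandwidth the same way: stream large arrays through memory and divide bytes moved by elapsed time.

```python
import time
import numpy as np

def stream_add_gbs(n=5_000_000, reps=5):
    """Estimate sustained memory bandwidth (GB/s) with a STREAM-style
    'add' kernel: a = b + c streams three float64 arrays through memory."""
    b = np.random.rand(n)
    c = np.random.rand(n)
    a = np.empty(n)
    np.add(b, c, out=a)             # warm-up: fault in pages before timing
    t0 = time.perf_counter()
    for _ in range(reps):
        np.add(b, c, out=a)
    elapsed = time.perf_counter() - t0
    bytes_moved = 3 * n * 8 * reps  # read b, read c, write a, per repetition
    return bytes_moved / elapsed / 1e9

print(f"~{stream_add_gbs():.0f} GB/s sustained on this host")
```

Note that a single numpy thread cannot saturate a 12-channel socket; real STREAM runs pin one thread per core, which is how fleet-level figures like the 210 GB/s above are obtained.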
| Metric | EPYC 9654 | Xeon w9-3495X | Delta |
|---|---|---|---|
| Max VPS instances at p99 latency < 2 ms | 48 | 36 | +33% |
| Memory bandwidth (real workload) | 210 GB/s | 164 GB/s | +28% |
| DDR5 channels per socket | 12 | 8 | +50% |
| L3 cache | 384 MB | 105 MB | +266% |
| Performance per watt | 4.37 ops/W | 2.82 ops/W | +55% |
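The first row in the table is a tail-latency gate: keep adding instances until p99 request latency crosses 2 ms. A minimal sketch of that check — synthetic latency samples and numpy's percentile standing in for our real telemetry pipeline — looks like this:

```python
import numpy as np

def p99_under_budget(latencies_ms, budget_ms=2.0):
    """True if the 99th-percentile latency stays under the budget."""
    return float(np.percentile(latencies_ms, 99)) < budget_ms

# Synthetic example: log-normal latencies centered around ~0.5 ms
rng = np.random.default_rng(42)
samples = rng.lognormal(mean=-0.7, sigma=0.4, size=100_000)
print(p99_under_budget(samples))            # → True (p99 ≈ 1.3 ms)
print(p99_under_budget(samples, 1.0))       # → False
```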
At our scale — over 150,000 servers across 28 regions — power efficiency directly impacts operating margins. The EPYC 9654 consumed 11% less power than the Xeon w9-3495X while delivering 38% more throughput. On a performance-per-watt basis, the EPYC is 55% more efficient.
Over a three-year server lifecycle, the power cost difference for a single server is approximately $840 (at our blended energy cost of $0.08/kWh). Multiplied across a fleet refresh of 12,000 servers, that's $10 million in power savings — more than enough to justify the decision on cost alone, even before accounting for the performance advantages.
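The arithmetic behind those figures is worth making explicit. The ~0.4 kW average per-server delta below is the value implied by the $840 number — it reflects whole-server draw plus cooling overhead, not just CPU package power, and is our back-of-envelope assumption rather than a stated measurement:

```python
HOURS_3Y = 24 * 365 * 3      # three-year lifecycle, always-on
KWH_PRICE = 0.08             # blended energy cost, $/kWh (from the article)
AVG_DELTA_KW = 0.4           # assumed avg power delta per server, kW
                             # (implied by the $840 figure; includes cooling)

per_server = AVG_DELTA_KW * HOURS_3Y * KWH_PRICE
fleet = per_server * 12_000  # servers in the Q2 refresh

print(f"${per_server:,.0f} per server, ${fleet / 1e6:.1f}M across the fleet")
# → $841 per server, $10.1M across the fleet
```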
We also measured idle power consumption, which matters for servers that aren't always under full load. The EPYC's power management — each CCD (core complex die) can scale frequency and voltage independently of the others — resulted in 34% lower idle power draw compared to the Xeon. For VPS hosts that average 40-60% utilization, this translates to meaningful real-world savings.
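A simple linear power model shows why idle draw dominates at partial utilization. The wattages here are illustrative assumptions for a full server, not our measured figures; only the 34% idle-draw gap comes from the article:

```python
def avg_power_w(idle_w, full_w, utilization):
    """Linear power model: draw interpolates between idle and full load."""
    return idle_w + (full_w - idle_w) * utilization

# Illustrative server-level numbers (assumptions): EPYC host idles 34% lower
xeon_avg = avg_power_w(idle_w=150, full_w=700, utilization=0.5)
epyc_avg = avg_power_w(idle_w=99, full_w=700, utilization=0.5)
print(f"{xeon_avg:.0f} W vs {epyc_avg:.0f} W at 50% utilization")
```

At 50% utilization the idle gap still accounts for roughly half its full value in average draw, which is why idle efficiency shows up directly on the power bill for a 40-60%-utilized VPS fleet.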
The Xeon w9-3495X outperformed the EPYC in two specific areas. First, AVX-512 performance: Intel's native 512-bit implementation delivers 15-20% higher throughput for vectorized workloads than AMD's double-pumped 256-bit datapath. Customers doing heavy numerical computation (financial modeling, scientific simulation) see measurable benefits. Second, single-threaded latency-sensitive workloads: the Xeon's 4.8 GHz max turbo vs the EPYC's 3.7 GHz max boost gives it an edge for applications where a single thread is the critical path.
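Whether a workload can use AVX-512 at all is easy to check on a Linux host (both processors report the flags — Zen 4 EPYC supports AVX-512, the gap above is about implementation throughput). A small sketch reading the kernel's feature flags:

```python
def cpu_flags(cpuinfo_path="/proc/cpuinfo"):
    """Return the CPU feature flag set reported by the kernel (Linux only)."""
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    return set(line.split(":", 1)[1].split())
    except OSError:
        pass  # not Linux, or /proc unavailable
    return set()

flags = cpu_flags()
print("AVX-512F available:", "avx512f" in flags)
```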
For these reasons, we're keeping Intel Xeon as an option in our dedicated server lineup. But for our VPS fleet, Kubernetes workers, and the majority of our bare metal offerings, EPYC is the clear winner.
Hardware performance is only part of the decision. We also evaluated firmware quality, supply chain reliability, and vendor support responsiveness. AMD's firmware update cadence has improved significantly — security patches now arrive within 14 days of disclosure, compared to 30+ days two generations ago. Intel's firmware quality remains slightly better — fewer BIOS errata, more stable microcode updates — but the gap has narrowed to the point where it's no longer a differentiating factor.
Starting with our Q2 2026 fleet refresh, all new VPS hosts, Kubernetes workers, and standard bare metal servers will use the AMD EPYC 9654. Intel Xeon configurations remain available as a dedicated server option for customers who request them. We expect the EPYC migration to improve our effective hosting density by approximately 25%, which we'll pass through to customers as better price-to-performance ratios over the next two quarters.