After years of waiting, AMD’s Ryzen has finally arrived. The company has spent years in the proverbial desert, struggling to wring improvements out of Bulldozer while simultaneously designing its new Zen architecture. It’s no exaggeration to say AMD’s future as a PC company depends on Ryzen’s success.
AMD has positioned Ryzen aggressively, with price points that compare extremely favorably with Intel’s Core i7 family and HEDT desktop parts, but there have always been questions about how well Ryzen would perform. It’s been years since AMD fielded a high-performance CPU design, and the company doesn’t have the market share it once commanded. If the company is serious about pushing Ryzen into workstations, consumer PCs, and servers, it’s got a very high bar to clear.
Today’s review will focus on the Ryzen 7 1800X’s performance, rather than rehashing Ryzen’s design or architecture. Anyone with questions is invited to peruse those articles or ask in the comments below. Before discussing Ryzen’s performance, however, we need to talk about its launch. Normally, a manufacturer gives us 7-10 days at minimum; the more significant the product, the longer the review window. AMD bucked this trend by launching Ryzen less than a week after we received our hardware kits. Because I’d previously committed to attend Nvidia’s GTX 1080 Ti launch at GDC, I only had 60 hours to test the Ryzen 7 1800X.
As short as our review window was, I may have gotten lucky. In at least a few cases, reviewers didn’t receive their kits until 24-48 hours before the launch. I’ve run enough benchmarks on the Ryzen 7 1800X to feel comfortable characterizing this article as a review, but the launch itself has been something of a hot mess. I suspect you’ll see wide variation in reported benchmarks and experiences; don’t be surprised if different reviewers report different results.
The limited testing time was exacerbated by the motherboard AMD shipped us for testing. Asus has a well-deserved reputation for quality, but my Crosshair VI Hero threw so many errors that AMD ultimately concluded I might have a bad board, not just a wonky BIOS. To be clear: I was scarcely the only reviewer to experience problems or see odd performance, but our board seems to have been at the low end of the bell curve. Both Asus and AMD helped troubleshoot our problems, but the hours we devoted to this cut deeply into test time and forced us to repeatedly re-run certain benchmarks rather than moving on to other hardware. After switching to a Gigabyte Aorus X370 motherboard, I retested our game benchmarks, beginning around midnight this morning. While the Gigabyte board improved the situation slightly and was markedly more stable, it didn’t resolve the gaming Achilles’ heel referred to in the title (we’ll explore that issue in greater detail below).
As a result, our tests are not as thorough as we would have liked. We redesigned our CPU benchmark suite in preparation for Ryzen, but didn’t have time to run every CPU through every new test. Because we needed to test in parallel, not every CPU could be benchmarked with the same cooler or the same SSD. We chose the tests we did partly to ensure that these differences would not meaningfully impact our results, and our future deep dives into the chip will standardize on common hardware once again.
With those caveats in mind, let’s check the numbers.
We’ve expanded our CPU test suite since the Core i7-7700K launched. Our Intel Core i7-6700K, 7700K, 6900K, and 6950X all used 32GB of G-Skill DDR4-3200 (F4-3200C14Q-32GTZ) clocked at that frequency. Neither of the Ryzen testbeds we tested, however, was capable of running four DIMMs at these clocks. Since we’d already tested the Intel systems, we had to make a choice: 16GB of DDR4-3200 or 32GB of DDR4-2133. Since none of our benchmarks require >16GB of RAM and AMD isn’t using quad-channel memory for Ryzen, we opted for 16GB.
All of our GPU and 3D benchmarks were run using a GeForce GTX 1070. All of our game benchmarks were run at 1920×1080. While this isn’t considered an enthusiast resolution anymore, the point of these tests is to stress the CPU, not the GPU. All of our testbeds ran Windows 10 with the latest patches and updates installed. All of our GPU benchmarks were performed with Nvidia’s ForceWare 376.88.
We’re going to split our benchmark results into two groups: workstation and application tests, and 3D benchmarks, with the workstation and content creation tests up first. Our test results and analysis are in the slideshow below. As always, you can click on any slide to expand it in a new window.
In workstation and computation tests, Ryzen is a force to be reckoned with. Even when it doesn’t match Intel in raw performance, its performance-per-dollar gives it a huge advantage over its much larger, more expensive rival. That said, there’s a narrow case to be made for chips like the Core i7-7700K, particularly in lightly threaded workloads. If you’re still dealing with single- or dual-threaded applications, the Core i7-7700K can still deliver the best performance per dollar. In most cases, however, the Ryzen 7 1800X is in a class of its own. That’s also true for gaming, but in a very different context.
Results like these are guaranteed to raise questions, and we’ve spoken to AMD extensively over the past few days to explore the issue. According to AMD, there are three factors collectively contributing to these problems. First, Ryzen’s SenseMi and Precision Boost technologies offer significantly finer-grained control than any previous AMD solution, which means BIOS implementations of these features are new and not necessarily working at 100% efficiency yet. Second, AMD has been out of the high-performance market for so long that virtually no software is written explicitly for, or optimized to perform well on, AMD CPUs. Ryzen puts AMD on a far better footing, but software patches don’t arrive overnight. Third, some games are far more sensitive to the differences between AMD and Intel CPUs than others. We happened to pick a test suite with more of these slowdowns than most, and even then we don’t see the issue in every test (Vulkan, for example, runs quite well).
The other reason AMD missed the problem is that it chose 1440p as its minimum test resolution, figuring that no one with a $500 CPU would still be gaming at 1080p. I can understand that argument even if I don’t find it persuasive (I prefer to test at a lower resolution to allow CPU performance differences to shine through). According to AMD, the performance gap between itself and Intel is much reduced at 1440p and completely eliminated at 4K. I haven’t had the opportunity to verify those figures yet, but they make sense: as resolution rises, the bottleneck in the system moves from the CPU to the GPU. If you game at 1440p or above, these results may not have much bearing on your experience.
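The resolution effect described above can be illustrated with a toy model: if frame time is the larger of a fixed per-frame CPU cost and a GPU cost that scales with pixel count, a CPU deficit shows up at 1080p, shrinks at 1440p, and vanishes at 4K. All the millisecond figures below are hypothetical, chosen only to demonstrate the trend, not measurements from our testing or AMD’s.

```python
# Toy bottleneck model (hypothetical numbers, illustration only):
# frame time = max(CPU time per frame, GPU time per frame),
# where GPU time scales linearly with the number of pixels rendered.

RESOLUTIONS = {
    "1080p": 1920 * 1080,
    "1440p": 2560 * 1440,
    "4K":    3840 * 2160,
}

def fps(cpu_ms_per_frame, gpu_ms_per_mpixel, pixels):
    """Frames per second when CPU and GPU work overlap; the slower one wins."""
    gpu_ms = gpu_ms_per_mpixel * pixels / 1e6
    frame_ms = max(cpu_ms_per_frame, gpu_ms)
    return 1000.0 / frame_ms

for name, px in RESOLUTIONS.items():
    # Same hypothetical GPU (2.0 ms per megapixel), two hypothetical CPUs:
    # one needing 6 ms per frame, one needing 8 ms per frame.
    fast_cpu = fps(6.0, 2.0, px)
    slow_cpu = fps(8.0, 2.0, px)
    print(f"{name}: faster CPU {fast_cpu:.0f} fps, slower CPU {slow_cpu:.0f} fps")
```

With these made-up inputs, the faster CPU wins clearly at 1080p, leads by less at 1440p, and ties at 4K, where the GPU cost dwarfs either CPU's per-frame time.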
I had a number of conversations with AMD on the game performance question, as well as discussions of board stability in general. Having tested a second motherboard, I think many of my stability and performance concerns were driven, at least in part, by faulty hardware. That said, it’s odd for Ryzen to be relatively weak in gaming while simultaneously representing a vast improvement over the FX-9590 and offering extremely strong application and workstation performance. It’s possible that AMD’s heavy reliance on multi-threading, while effective in non-gaming tests, leaves it at a disadvantage in game tests. That last point, however, is merely speculation on my part.
Last but not least, here’s a touch of icing for the proverbial cake.
There is one caveat to be aware of. Remember, all of our Intel rigs used 32GB of DDR4-3200, while the AMD systems could only use 16GB of the same RAM. While this is undoubtedly having an impact on the results, it’s not going to tilt them in some crazy direction; 16GB of RAM doesn’t consume 42W of power, and that’s how much it would have to draw to bring the 1800X and 6900K into line with one another. Even with this caveat in mind, when was the last time you saw AMD hitting lower power consumption figures than Intel?