Sunday, January 3, 2016

The AMD Radeon R9 380X Review

It has been a while since we’ve seen a new graphics card launch, the last of which was AMD’s capable little Nano. Historically, the time right before and during key events in the retail calendar like Black Friday and the Christmas shopping season is low time for new GPU products but high time for A-list game releases. GPU vendors typically hunker down with their existing wares and avoid launching anything new into an environment that’s rife with heavily discounted merchandise. AMD is bucking that trend by introducing the R9 380X, a $230 card that may prove to be a lynchpin within their lineup in the coming months. 

With the R9 380X, AMD is trying to thread a very fine needle with a product many had expected months ago. A price of $230 for reference-clocked versions and up to $260 for higher performing models means (if everything goes according to plan) it should be able to outpace the lower-priced $210 GTX 960 4GB while plugging a gap between the R9 390 and R9 380 in AMD’s product stack. However, overclocked versions come perilously close to the pricing structure of AMD’s R9 390 ($290-$300) and NVIDIA’s GTX 970 ($299 after rebates, with a free game), and that could pose a problem as gamers seek an optimal price / performance ratio for their purchases. This is a pricing segment that has been oddly underserved in the last year or so and with good reason: it is bookended by extremely capable options.

On the subject of pricing, we have an interesting situation as well. While AMD’s own PR team has pegged the R9 380X as starting at $230 USD and running to $240 USD, most board partners and even retailers we’ve spoken to agree with the $230 to $260 range we’ve indicated in the paragraph above. Since they’re ultimately the ones who set the final price, and AMD’s numbers use the nebulous “starting at” moniker, we’ll side with the folks on the ground on this one. The ASUS R9 380X STRIX OC we were sampled with rings in at a cool $260.


The core being used in the R9 380X hasn’t been seen before in its fullest form, which is a surprise given how much of AMD’s lineup consists of rebrands. This is a fully enabled version of the 28nm Antigua architecture, or the artist formerly known as Tonga. In this iteration it gains four additional Compute Units, one more for each of the core’s four shader engines. Since every GCN Compute Unit houses 64 stream processors and four texture units, the resulting 32 CUs work out to 2048 cores and 128 texture units. Meanwhile, the back-end operations remain the same as previous Tonga-based cores with 32 ROPs and a 256-bit GDDR5 memory interface.

In addition to this, Antigua - like Tonga before it - incorporates all of the additional power and rendering efficiency optimizations found in other GCN 1.2 cores. That leads to improved instruction set handling, better tessellation performance, enhanced lossless delta color compression algorithms that utilize memory bandwidth more effectively, and a number of other improvements over the GCN 1.1-based Hawaii generation. Perhaps most importantly for this class of part, it incorporates hardware decoding and encoding of 4K H.264 video.


Past the obvious design changes and where this card falls in relation to AMD’s current product stack, there has clearly been a concerted effort to distinguish the R9 380X from previous generations. Currently, the R9 280X occupies the coveted $250 price bracket and it actually has nearly identical specs. However, while the core, texture unit and ROP counts are the same, the Tahiti-based card actually offers more theoretical memory bandwidth and faster clock speeds.

On the flip side of that equation, the Antigua core incorporates a noteworthy number of design updates which are specifically meant to do more with less while also offering a wider feature set. As a result, this card’s TDP envelope has been drastically reduced in comparison to previous designs, while elements like VSR, full DX12 / Vulkan support and FreeSync compatibility have been added.

From a competitive analysis standpoint, the R9 380X really can’t be compared directly against anything in the NVIDIA stable. The GeForce lineup has purposely avoided wading into the wide segment between $200 and $300 in order to keep some performance separation between the GTX 960 and GTX 970, and this choice obviously hasn’t impacted their sales in any way. While there are some overclocked GTX 960 4GB SKUs that edge up to the aforementioned $210 to $215 range, their pricing is generally trending downwards these days. The same can be said of AMD's own R9 380 and R9 390.


The $260 ASUS R9 380X STRIX OC we received for this review follows the design guidelines of this well-received series to perfection. With a double slot cooler and a length of 11” this certainly isn’t a compact card by any stretch of the imagination, but ASUS’ DirectCU II heatsink should be worth the sacrifice in size. Its dual fans remain completely stopped during reduced load scenarios and spin up to very low RPM levels under load thanks to the inherent efficiency of AMD’s core design. It’s a great cooler and one we’ve raved about in the past.

Under the heatsink is an 8-phase all-digital VRM design with ASUS’ signature Super Alloy Power components. That means upgraded MOSFETs, capacitors and chokes.


Specifications are right in line with expectations as well; that means a mildly overclocked core and reference-spec memory are being offered. According to ASUS this leads to performance that’s on average 4% better than a reference card in their standard mode and about 1-2% higher than that when using their GPU Tweak software in its OC Mode. 


The STRIX’s underside is covered by a full-length backplate even though there aren’t any mission-critical components located back there. Meanwhile, power input needs are covered by a single 8-pin connector and the rear I/O plate houses an HDMI 1.4a output, a single DisplayPort connector and two DVI outputs.

Performance Consistency & Temperatures Over Time


The R9 380X's Antigua XT core is rated for just 190W of thermal output while ASUS' DirectCU II cooler is one of the best currently on the market. This should be a match made in heaven, so we had absolutely no concerns heading into these results. May as well get right to them.


The first indications are pretty good considering the STRIX heatsink only turns on its fans when the core reaches a certain temperature. It's a great design that allows for completely silent computing during low-load tasks while also capping temperatures before they get out of hand.


Clock speeds are consistent, but unlike NVIDIA's Boost, AMD's PowerTune algorithms don't allow the core to take advantage of additional cooling capacity to push frequencies even further. Regardless, the STRIX hits its 1030MHz mark and remains there.


Performance is what it is. There's nothing really interesting to see here other than proof that we're seeing consistent framerates.

Acoustical Testing


What you see below are the baseline idle dB(A) results for a relatively quiet open-case system (specs are in the Methodology section) sans GPU, along with the results for each individual card in idle and load scenarios. The meter we use has been calibrated and is placed at seated ear level exactly 12” away from the GPU’s fan. For the load scenarios, Hitman: Absolution is used in order to generate a constant load on the GPU(s) over the course of 15 minutes.


Again, we didn't expect anything but one of the quietest cards on the market and that's exactly what we received. The STRIX goes into silent mode when idling, and even under the highest load it remains blissfully quiet.


System Power Consumption


For this test we hooked up our power supply to a UPM power meter that logs the power consumption of the whole system twice every second. In order to stress the GPU as much as possible we used 15 minutes of Unigine Valley running on a loop, while letting the card sit at a stable Windows desktop for 15 minutes to determine peak idle power consumption.
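As a point of reference, boiling that kind of log down to chartable numbers is dead simple. The sketch below (in Python, using a made-up CSV layout purely for illustration; it is not UPM's actual export format) shows how twice-per-second samples become the peak and average figures we report:

```python
import csv

# Hypothetical log: one whole-system power sample (in watts) every 0.5
# seconds, stored as "elapsed_seconds,watts" rows. This layout is invented
# for illustration and is not UPM's real export format.
def summarize_power_log(path):
    with open(path, newline="") as f:
        samples = [float(row[1]) for row in csv.reader(f)]
    peak = max(samples)                    # single highest draw recorded
    average = sum(samples) / len(samples)  # sustained draw across the run
    return peak, average

# 15 minutes at 2 samples per second = 1800 readings per scenario.
load_peak, load_avg = summarize_power_log("valley_load.csv")
idle_peak, idle_avg = summarize_power_log("desktop_idle.csv")
print(f"Load: peak {load_peak:.0f}W, average {load_avg:.0f}W")
print(f"Idle: peak {idle_peak:.0f}W, average {idle_avg:.0f}W")
```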


Truth be told, these results are a bit surprising but likely stem from the fact that ASUS has overclocked their card to relatively high frequencies. Remember that according to AMD's specifications a good 60W separates the R9 380X from its predecessor, the R9 280X, yet we are only seeing a difference of about 26W here. This also leads to the STRIX OC coming within 20W of the GTX 970. AMD still has some room to grow in the performance per watt area with these slightly older architectures.

Test System & Setup



Processor: Intel i7 4930K @ 4.7GHz
Memory: G.Skill Trident 16GB @ 2133MHz 10-10-12-29-1T
Motherboard: ASUS P9X79-E WS
Cooling: Noctua NH-U14S
SSD: 2x Kingston HyperX 3K 480GB
Power Supply: Corsair AX1200
Monitor: Dell U2713HM (1440P) / ASUS PQ321Q (4K)
OS: Windows 8.1 Professional 


Drivers: 
AMD 15.11.1 Beta 
AMD 15.10 Beta (for Far Cry 4) 
NVIDIA 358.91 WHQL


*Notes: 

- All games tested have been patched to their latest version

- The OS has had all the latest hotfixes and updates installed

- All scores you see are the averages after 2 benchmark runs

- All IQ settings were adjusted in-game and all GPU control panels were set to use application settings


The Methodology of Frame Testing, Distilled


How do you benchmark an onscreen experience? That question has plagued graphics card evaluations for years. While framerates give an accurate measurement of raw performance, there’s a lot more going on behind the scenes which a basic frames per second measurement by FRAPS or a similar application just can’t show. A good example of this is how “stuttering” can occur but may not be picked up by typical min/max/average benchmarking.

Before we go on, a basic explanation of FRAPS’ frames per second benchmarking method is important. FRAPS determines FPS rates by simply logging and averaging out how many frames are rendered within a single second. The average framerate measurement is taken by dividing the total number of rendered frames by the length of the benchmark being run. For example, if a 60 second sequence is used and the GPU renders 4,000 frames over the course of that time, the average result will be 66.67FPS. The minimum and maximum values, meanwhile, are simply the two data points representing the single-second intervals with the fewest and the most rendered frames. Combining these values gives an accurate, albeit very narrow, snapshot of graphics subsystem performance, and it isn’t quite representative of what you’ll actually see on the screen.
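To put some numbers to that, here's a quick sketch (in Python, with invented frame counts; FRAPS itself is a closed tool, so this simply mirrors the math described above):

```python
# Hypothetical per-second frame counts from a 10-second benchmark run.
# Each entry is how many frames finished rendering within that second.
frames_per_second = [67, 66, 70, 41, 68, 69, 65, 66, 71, 67]

benchmark_length = len(frames_per_second)      # seconds
total_frames = sum(frames_per_second)          # 650 frames in total

average_fps = total_frames / benchmark_length  # FRAPS-style average
minimum_fps = min(frames_per_second)           # slowest single second
maximum_fps = max(frames_per_second)           # fastest single second

print(f"Average: {average_fps:.2f} FPS")       # 65.00 FPS
print(f"Minimum: {minimum_fps} FPS, Maximum: {maximum_fps} FPS")
```

Notice how the lone 41FPS second barely moves the 65FPS average, and while it does show up as the minimum, nothing in those three numbers tells you whether that dip happened once or a dozen times.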

FCAT, on the other hand, has the capability to log onscreen average framerates for each second of a benchmark sequence, resulting in the “FPS over time” graphs. It does this by simply logging the reported framerate result once per second. However, in real world applications a single second is actually a long period of time, meaning the human eye can pick up on onscreen deviations much quicker than this method can report them. So what actually happens within each second of time? A whole lot, since each second of gameplay can consist of dozens or even hundreds (if your graphics card is fast enough) of frames. This brings us to frame time testing and where the Frame Time Analysis Tool factors into this equation.

Frame times simply represent the length of time (in milliseconds) it takes the graphics card to render and display each individual frame. Measuring the interval between frames allows for a detailed millisecond by millisecond evaluation of frame times rather than averaging things out over a full second. The higher the frame time, the longer that individual frame took to render. This detailed reporting just isn’t possible with standard benchmark methods.
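As a rough illustration of the difference (again in Python with invented numbers; this is not the actual Frame Time Analysis Tool, just the concept behind it), the sketch below converts individual frame times into instantaneous framerates and flags the kind of spike a one-second average would hide:

```python
# Hypothetical frame times in milliseconds from a short capture.
# Most frames arrive at ~16ms (roughly 60FPS), but one takes 90ms.
frame_times_ms = [16.1, 16.4, 15.9, 16.2, 90.0, 16.0, 16.3, 15.8, 16.1, 16.2]

# Instantaneous FPS for each frame: 1000ms divided by its frame time.
instantaneous_fps = [1000.0 / t for t in frame_times_ms]

# Flag any frame that took more than twice the average time; these are
# the spikes that register on screen as visible stutter.
average_ms = sum(frame_times_ms) / len(frame_times_ms)
for i, (t, fps) in enumerate(zip(frame_times_ms, instantaneous_fps)):
    if t > 2 * average_ms:
        print(f"Frame {i}: {t:.1f}ms ({fps:.0f}FPS) -> visible stutter")
```

Averaged out, those ten frames still work out to about 43FPS, a number that looks perfectly respectable on a chart; the 90ms hitch hiding inside it is precisely what frame time testing is designed to expose.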

We are now using FCAT for ALL benchmark results, other than 4K.

Assassin’s Creed: Unity



While it may not be the newest game around and it had its fair share of embarrassing hiccups at launch, Assassin's Creed: Unity is still one heck of a good looking DX11 title. In this benchmark we run through a typical gameplay sequence outside in Paris. 




Battlefield 4



In this sequence, we use the Singapore level which combines three of the game’s major elements: a decayed urban environment, a water-inundated city and finally a forested area. We chose not to include multiplayer results simply because their inherent randomness makes apples to apples comparisons impossible.


Far Cry 4



The latest game in Ubisoft’s Far Cry series picks up where the others left off by boasting some of the most impressive visuals we’ve seen. In order to emulate typical gameplay we run through the game’s main village, head out through an open area and then transition to the lower areas via a zipline.




Grand Theft Auto V


In GTA V we take a simple approach to benchmarking: the in-game benchmark tool is used. However, due to the randomness within the game itself, only the last sequence is actually used since it best represents gameplay mechanics. 


Overclocking Results



Overclocking this particular R9 380X certainly wasn't easy since ASUS already pushed it well past the reference speeds. In the end, by maxing out the voltage offset at +150mV we were able to hit a constant speed of 1149MHz. That may not seem all that impressive in relation to the STRIX's 1030MHz, but remember reference speeds are only 970MHz, making this a jump of nearly 18.5%.

Memory overclocking proved to be a bit more challenging since our sample topped out at an effective 6340MHz, roughly an 11% bump over the stock 5700MHz. Nonetheless, there is additional performance left in the tank.



Conclusion: A Day Late & A Dollar Short?


The R9 380X’s launch is an interesting one from a number of different perspectives. Not only is this card being parachuted into a slim segment bookended by some extremely capable alternatives on both the high and low end, but its success or failure will ultimately be determined by how well AMD threaded that proverbial needle. To make matters even more interesting, the 380X is hitting at a nightmarish time; retailers will be in discount mode and as a result it will face an epic uphill battle for relevance over the next few months. But that doesn’t mean the R9 380X will fail. Quite the opposite, actually.

On paper at least, AMD’s latest card won’t offer all that much more performance than an R9 285 since its Antigua XT core simply adds a quartet of additional compute units and a memory capacity / speed bump to the legacy Tonga architecture. However, those deceptively simple factors add enough processing power to lift the R9 380X past the GTX 960 4GB and into the niche it needs to occupy. This was also accomplished while retaining an acceptable power consumption envelope. Adding up those elements results in a wholly appealing $230 product for gamers who are currently on a 1080P display but want enough spare horsepower for a 1440P monitor.

Unfortunately, evaluating the R9 380X at a $230 price point is a bit of a red herring since AMD sampled us with the STRIX OC, for which ASUS is demanding a not-so-insignificant $30 premium. Those thirty bucks may not seem like much but they push the 380X dangerously close to a completely different market segment, one that’s dominated by the R9 390 and GTX 970. Luckily ASUS has added an amazing heatsink, super quiet acoustics, upgraded components and higher frequencies, all of which should help soften the STRIX OC’s financial blow.


Now before I get into the raw numbers you see above, it’s important to note that ASUS claims their clock speed improvements translate into a framerate uplift of between 3% and 10% depending on the game being played. For argument’s sake, let’s average that out to 5%; knock that much off the STRIX OC’s results (a 63FPS average becomes 60FPS, for instance) and you’ll get a relatively accurate view of where a reference-clocked R9 380X would stand against the other stock cards in this review.

The R9 380X STRIX OC fairly dominates the GTX 960 even when that card is equipped with 4GB of memory, and it should given the $30 to $40 price spread. Truth be told, that extra memory capacity likely amounted to a negligible difference since NVIDIA’s GM206 core will become a bottleneck long before memory capacity shuts things down. Essentially, the GTX 960 is a card tailor made for 1080P gaming whereas AMD’s latest addition has the capability to become a competent entry-level 1440P option.

Looking at the rebrand side of this equation we come to the R9 280X, a card that’s been around in some form or another for the better part of four years. While that old timer can’t compete in the features department, it gives up nothing to the R9 380X, actually gaining ground at 1440P. Considering overclocked versions of this card have sat at the $250 price point for about 16 months, it becomes obvious the price / performance yardsticks haven’t moved all that much.

The GTX 970 and R9 390, meanwhile, aren’t even in the same dimension on the performance front yet cost only about $40 more. This doesn’t necessarily point to something “wrong” with the R9 380X or its positioning, but rather it highlights how low prices have gone for significantly higher end alternatives. It also goes to show that even a small premium for a pre-overclocked card can have some serious repercussions.

NVIDIA certainly isn’t doing themselves any huge favors by leaving a yawning gap between the GTX 970 and GTX 960 4GB since it gave AMD a perfect opening. However, there seems to be a method to that madness: both cards have plenty of built-in pricing flexibility without running face-first into a performance per dollar battle against one another. Not so with the R9 380X and R9 390, since the latter seems like an insanely good purchase given its relative dominance in our charts and 8GB framebuffer. The $40 premium you’ll need to pay for a baseline R9 390 or GTX 970 over the STRIX OC would be money well spent. For the record, I’d be saying the same thing had this 380X’s price been $20 less; the $300 cards are that far ahead.

NVIDIA seems to have come to the conclusion they don’t need a card to bridge the $100 chasm between their more affordable offerings and their enthusiast-oriented product stack. Meanwhile, AMD feels like there is a ready and willing market around this bracket just waiting to be tapped by new blood. I’m actually on the fence about which approach is best. At its current $260 the ASUS R9 380X STRIX OC will likely just push would-be buyers towards the GTX 970 and R9 390, but those reference-clocked (yet still custom cooled) $230 380Xs could be quite appealing for anyone who wants very good performance on a budget.

Another thing we have to wonder is what took AMD so long to introduce a fully enabled Tonga / Antigua core. The R9 280X was long in the tooth nine months ago, the Tonga architecture certainly isn’t new and folks have been actively looking for a lower wattage $250 option from AMD. In addition, the current crop of GPUs is seeing some pretty dramatic price reductions as of late, which creates an unenviable situation for a card like the R9 380X.

So where does this leave things? Unfortunately, all over the place. On one hand the R9 380X is a step in the right direction, but its placement within the current scheme of things gives AMD’s board partners very little room to work with. A few bucks higher than $230 and they’re competing against cards that completely overpower it. Any lower and they have to offer up the R9 380’s margins like a sacrificial lamb. Personally, I am going to recommend everyone wait to see what kind of prices the next few weeks will bring before jumping onto the R9 380X bandwagon. Once things settle down a bit this could become a very compelling graphics card, but right now questions of value will understandably dominate the conversation.
