The Intel Core i9-7980XE and Core i9-7960X CPU Review Part 1: Workstation
by Ian Cutress on September 25, 2017 3:01 AM EST
The buzz since Intel announced it was bringing an 18-core CPU to the consumer market has been palpable: users are anticipating this to be Intel’s best performing processor, and want to see it up against the 16-core AMD Threadripper (even at twice the cost). Intel is the incumbent: it has the legacy, the deep claws in software optimization, and the R&D clout to crush the red rival. However, a jump as large as this, moving from 10 cores to 18 in the consumer space, is a step Intel has been reluctant to make in the past. In this first analysis, we’ve run a few tests on the new 18-core (and 16-core) parts from Intel to find out the lie of the land.
Dissecting the new Core i9-7980XE and Core i9-7960X
Intel’s high-end desktop (HEDT) platform is designed to be the hard-hitting prosumer (professional consumer) platform, providing all the cores without the extras required by the enterprise community. Up until this new generation of 2017 parts, we were treated to three or four CPUs each cycle, carved from Intel’s smallest enterprise silicon, slowly moving from 6 cores in 2009 to 10 cores in 2015, with the top CPU typically pitched at the $999 price point. With the 2017 HEDT platform, called Basin Falls, that changed.
The first launch of Basin Falls earlier this year had three parts. The new socket and chipset were expected, as Intel updates every other generation, and this upgrade provided substantially more connectivity than before. The second part was the first three Skylake-X processors, built from Intel’s smallest enterprise silicon (like before), ranging from 6 cores at $389 to 10 cores at $999. This too was par for the course, albeit with a few microarchitecture changes in the design worth discussing (later). The third part of the initial launch was a bit of a curveball: Intel configured two processors using its latest consumer microarchitecture, Kaby Lake-X. Normally the prosumer platform is a microarchitecture generation behind, due to development cycles. These two parts are also only quad-core: repurposed ‘mainstream enthusiast’ silicon set at higher frequencies and higher power budgets, aiming to be the fastest single-threaded processors on the market.
The second launch of Basin Falls is basically what is happening today, and this is the new step from Intel. To add to the three Skylake-X processors already in the stack, using the smallest enterprise silicon, Intel is adding four more Skylake-X processors, this time using the middle-sized enterprise silicon. These new processors build on the others by significantly increasing core count, which comes at the cost of extra power requirements.
| | i7-7800X | i7-7820X | i9-7900X | i9-7920X | i9-7940X | i9-7960X | i9-7980XE |
|---|---|---|---|---|---|---|---|
| Cores / Threads | 6/12 | 8/16 | 10/20 | 12/24 | 14/28 | 16/32 | 18/36 |
| Base Clock (GHz) | 3.5 | 3.6 | 3.3 | 2.9 | 3.1 | 2.8 | 2.6 |
| Turbo Clock (GHz) | 4.0 | 4.3 | 4.3 | 4.3 | 4.3 | 4.2 | 4.2 |
| L3 Cache | 1.375 MB/core | 1.375 MB/core | 1.375 MB/core | 1.375 MB/core | 1.375 MB/core | 1.375 MB/core | 1.375 MB/core |
| Memory Freq DDR4 | 2400 | 2666 | 2666 | 2666 | 2666 | 2666 | 2666 |
All seven processors are listed in the table above. The four new parts are on the right, under the ‘HCC’ (high core count) silicon:
- The Core i9-7980XE, with 18 cores at $1999
- The Core i9-7960X, with 16 cores at $1699
- The Core i9-7940X, with 14 cores at $1399
- The Core i9-7920X, with 12 cores at $1199 (technically launched August 28th)
As with other product stacks, each step up costs more than the step before. Intel (and others) are taking advantage of the fact that some consumers (and especially prosumers) will buy the best part because the cost can be offset against a workflow, or simply because it exists.
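A quick back-of-the-envelope sketch, using only the list prices quoted above, makes the shape of that premium explicit:

```python
# Illustrative only: price per core for the four new HCC Skylake-X parts,
# computed from the list prices given in the article.
parts = {
    "i9-7920X": (12, 1199),
    "i9-7940X": (14, 1399),
    "i9-7960X": (16, 1699),
    "i9-7980XE": (18, 1999),
}

for name, (cores, price) in parts.items():
    print(f"{name}: ${price / cores:.0f}/core")
```

On this arithmetic the 12- and 14-core parts land near $100 per core, while the flagship climbs to roughly $111 per core: the ‘because it exists’ premium in action.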
These four processors are almost identical aside from core count: all four use the same base design, all four support DDR4-2666 memory out of the box, and all four offer 44 PCIe 3.0 lanes from the CPU (plus 24 from the chipset). The top three are rated at a 165W TDP (thermal design power), while the 12-core part is 140W. There is some variation in the frequencies: all four parts support 4.4 GHz as their top TurboMax clock (also known as ‘favored core’, more on this later), the Turbo 2.0 frequencies are 4.3 GHz except for the top two processors at 4.2 GHz, and the base frequencies generally decrease the higher up the stack you go. This makes sense physically: to stay within the same TDP as cores are added, the base frequency has to come down.
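That trade-off can be sketched with a toy power model. This is purely illustrative: it assumes power scales linearly with core count and frequency, ignores voltage scaling and binning, and is calibrated only to the 18-core part's published numbers.

```python
# Toy model: highest sustainable all-core clock under a fixed power budget,
# assuming (simplistically) power ~ cores * frequency * constant.
def max_base_clock(tdp_watts, cores, watts_per_core_per_ghz):
    """All-core clock (GHz) that fits inside a given TDP."""
    return tdp_watts / (cores * watts_per_core_per_ghz)

# Calibrate the constant from the i9-7980XE: 165 W, 18 cores, 2.6 GHz base.
k = 165 / (18 * 2.6)  # roughly 3.5 W per core per GHz

for cores in (14, 16, 18):
    print(f"{cores} cores -> {max_base_clock(165, cores, k):.2f} GHz")
```

The model lands close to the published ladder (around 3.3, 2.9, and 2.6 GHz), which is why the base clock column shrinks as the core count column grows.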
In our initial review of the Skylake-X processors, we were able to obtain the per-core turbo frequencies for each processor.
Despite the low base frequencies, each processor (when all cores are loaded) will still run above 3.4 GHz. The ‘base’ frequency is essentially Intel’s guarantee: under normal conditions, this is the highest frequency Intel will promise. When AVX or AVX2/AVX-512 instructions are being used, the frequencies will be lower than those listed (due to the power density of these dense instructions) but still above the base frequency, while offering higher overall performance than running the same math in non-AVX form.
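To see why a lower AVX clock can still mean higher performance, a back-of-the-envelope throughput comparison helps. The clock figures below are assumptions for illustration only (the article does not list Intel's AVX frequency offsets); the per-cycle FLOP counts reflect Skylake-X's two FMA ports.

```python
# Assumed clocks, illustrative only. Skylake-X can issue two AVX-512 FMAs
# per cycle: 2 units * 8 FP64 lanes * 2 ops (multiply+add) = 32 FLOPs/cycle.
scalar_flops = 4.2e9 * 4    # assumed 4.2 GHz scalar clock, 2 scalar FMAs/cycle
avx512_flops = 3.0e9 * 32   # assumed 3.0 GHz AVX-512 clock

print(f"AVX-512 advantage: {avx512_flops / scalar_flops:.1f}x")
```

Even with a steep assumed clock penalty, the vector units win by a wide margin on raw FP64 throughput, which is the point the article is making.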
Shown in the table are the turbo frequencies without TurboMax. TurboMax is a feature first implemented with Broadwell-E, whereby the most efficient cores (as measured during manufacturing and embedded in the processor) can achieve a higher frequency. For Skylake-X, the feature was upgraded from applying to a single loaded core to applying when up to two cores are loaded. This means that the first two columns, labeled 1 and 2, move up to 4.4 GHz for the top four processors. TurboMax also requires BIOS support, although we had some issues with this, mentioned later in this review.
Intel’s Competition: AMD
Before dissecting the processor, it is important to know what Intel is up against with the new processors. Arguably this is new territory for the consumer space: before this year, if a user wanted more than 10 cores, they had to invest in expensive Xeon processors (or even two of them), and a platform to support it.
Speaking strictly about consumer lines, the obvious competition here comes from AMD’s Threadripper processors. These are built on AMD’s new Zen microarchitecture and offer 16 cores at $999 or 12 cores at $799.
| AMD vs Intel | TR 1900X | TR 1920X | TR 1950X | i9-7920X | i9-7940X | i9-7960X | i9-7980XE |
|---|---|---|---|---|---|---|---|
| Silicon | 2 × Zeppelin | 2 × Zeppelin | 2 × Zeppelin | HCC | HCC | HCC | HCC |
| Cores / Threads | 8/16 | 12/24 | 16/32 | 12/24 | 14/28 | 16/32 | 18/36 |
| Base Clock (GHz) | 3.8 | 3.5 | 3.4 | 2.9 | 3.1 | 2.8 | 2.6 |
| Turbo Clock (GHz) | 4.0 | 4.0 | 4.0 | 4.3 | 4.3 | 4.2 | 4.2 |
| XFR / TBM3 (GHz) | 4.2 | 4.2 | 4.2 | 4.4 | 4.4 | 4.4 | 4.4 |
| L2 Cache | 512 KB/core | 512 KB/core | 512 KB/core | 1 MB/core | 1 MB/core | 1 MB/core | 1 MB/core |
| L3 Cache | 16 MB | 32 MB | 32 MB | 1.375 MB/core | 1.375 MB/core | 1.375 MB/core | 1.375 MB/core |
| Memory Freq DDR4 | 2666 | 2666 | 2666 | 2666 | 2666 | 2666 | 2666 |
From a performance perspective, Intel is expected to win outright: AMD’s 16-core processor was pitched against the previous generation’s 10-core part and usually won, especially in multithreaded benchmarks. The single-core performance of the AMD parts was a little behind Intel, but the core count made up the difference. With Skylake-X improving single-thread performance and adding eight more cores to the design, Intel should have an easy lead in raw performance.
However, AMD has positioned that 1950X at $999, which is half the price of the i9-7980XE. AMD also cites more PCIe lanes from the CPU (60 vs 44), and no confusion over chipset functionality support. Intel’s rebuttal is that the performance is worth the cost, and that it has more chipset PCIe lanes for additional functionality beyond PCIe co-processors like GPUs.
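One way to frame that argument is as a break-even calculation: given the list prices, how much faster does the i9-7980XE need to be to match the 1950X on performance per dollar? A quick sketch (the added platform cost figure is an assumption for illustration, not a quoted price):

```python
# List prices from the article.
cpu_intel, cpu_amd = 1999, 999

def breakeven_speedup(platform_cost):
    """Speedup the Intel part needs for equal perf-per-dollar, once a
    common platform cost (board, RAM, etc.) is added to both builds."""
    return (cpu_intel + platform_cost) / (cpu_amd + platform_cost)

print(f"CPU only: {breakeven_speedup(0):.2f}x")       # ~2x
print(f"With $1000 platform: {breakeven_speedup(1000):.2f}x")  # ~1.5x
```

The more of the system cost you count, the smaller the speedup Intel needs to justify the price gap, which is essentially the shape of both companies' marketing here.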
Intel’s Competition: Intel
Intel’s enterprise Xeon platform is still a direct competitor here, in two different ways.
The ‘traditional’ multi-socket enterprise parts will cost substantially more than these new consumer parts, in exchange for some extra features. Even moving to a dual-socket system with two $999 processors is not much of a comparison: a Core i9-7980XE will have advantages over a 2P Xeon Silver system in core frequency and a unified memory interface, trading away maximum memory support and platform features such as 10 gigabit Ethernet or Intel’s QuickAssist Technology.
Ten+ Core Intel Xeon-W Processors (LGA2066)

| | Cores / Threads | Base | Turbo | L3 (MB) | L3/core (MB) | TDP | Price |
|---|---|---|---|---|---|---|---|
| Xeon W-2195 | 18/36 | 2.3 GHz | 4.3 GHz | 24.75 | 1.375 | 140 W | TBD |
| Xeon W-2175 | 14/28 | TBD | TBD | 19.25 | 1.375 | 140 W | TBD |
| Xeon W-2155 | 10/20 | 3.3 GHz | 4.5 GHz | 13.75 | 1.375 | 140 W | $1440 |
| Core i9-7980XE | 18/36 | 2.6 GHz | 4.2 GHz | 24.75 | 1.375 | 165 W | $1999 |
| Core i9-7960X | 16/32 | 2.8 GHz | 4.2 GHz | 22.00 | 1.375 | 165 W | $1699 |
Intel also launched its Xeon-W processors in the last couple of weeks. These occupy the middle ground between Skylake-X and the enterprise Xeon-SP parts. Xeon-W uses the same socket as Skylake-X but requires a completely new chipset, so the motherboards are not interchangeable. The Xeon-W parts still go up to 18 cores, almost mirroring the Skylake-X processors, and support quad-channel memory, but with up to 512GB of ECC memory compared to 128GB of non-ECC. The Xeon-W processors cost an extra 10-20% over the Skylake-X parts (add some more for the motherboard too), but for any prosumer that absolutely needs ECC memory and does not want a dual-processor system or double the budget, Xeon-W is going to be the best bet.
This Review: The Core i9-7980XE and Core i9-7960X
This review is titled ‘Part 1: Workstation’ because, for the most part, it tackles the new processors in workstation-type workloads, including some initial data using SPECwpc, an industry-standard workstation benchmark suite, as well as our more intense workloads. The review also takes a nod towards usability with single-threaded workloads and responsiveness (because it really does matter how fast a PDF opens if this CPU is the main processor in a work system).
The main comparison points for this review will be AMD’s Ryzen Threadripper processors: the 16-core 1950X and the 12-core 1920X. On the Intel side, we retested the 10-core Core i7-6950X, and used our 10-core Core i9-7900X numbers from the initial Skylake-X review. Unfortunately we do not yet have the Core i9-7940X or Core i9-7920X to test, but we are working with Intel to get these parts in.
[Speaking directly from Ian]: I know a lot of our readers are gamers, and are interested in seeing how well (or poorly) these massive multi-core chips perform in the latest titles at the highest resolutions. Apologies to disappoint, but I am going to tackle the more traditional consumer tasks in a second review, which means gaming will be left for that piece. For the users that have followed my reviews (and Twitter) of late, I am still having substantial issues with my X299 test beds on the gaming results, with Skylake-X massively underperforming where I would expect a much higher result.

After having to dedicate recent time to business trips (Hot Chips, IFA) as well as other releases (Threadripper), I managed to sit down in the two weeks between trips to figure out exactly what was going on. I ended up throwing out the two pre-launch X299 engineering samples I was using for the Skylake-X testing, and received a new retail motherboard only a few days before this review. It still has some issues that I spent time trying to debug, which I think are related to how the turbo is implemented, and which could be either Intel-related or BIOS-specific. To add insult to injury for everyone who wants to see this data, I have jumped on a plane to travel half-way around the world for a business trip during the week of this launch, which leaves the current results inconclusive. I have reached out to the two other motherboard vendors that I haven’t received boards from, in case the issue I seem to be having is vendor-specific. If I ever find out what this issue is, I will write it up, along with a full Skylake-X gaming suite. It will have to wait until mid-to-late October, due to other content (and more pre-booked event travel).
I also wanted to benchmark the EPYC CPUs that landed in my office a few days ago, but it was not immediately playing ball. I will have to try and get some Xeon-W / Xeon Gold for comparison with those.
Pages In This Review
- 1: Dissecting the Intel Core i9-7980XE and Core i9-7960X
- 2: New Features in Skylake-X: Cache, Mesh, and AVX-512
- 3: Explaining the Jump to HCC Silicon
- 4: Opinion: Why Counting ‘Platform’ PCIe Lanes (and using it in Marketing) Is Absurd
- 5: Test Bed and Setup
- 6: Benchmark Overview
- 7: Workstation Performance: SpecWPC v2.1
- 8: Benchmarking Performance: PCMark 10
- 9: Benchmarking Performance: Office Tests
- 10: Benchmarking Performance: Rendering Tests
- 11: Benchmarking Performance: Encoding Tests
- 12: Benchmarking Performance: System Tests
- 13: Benchmarking Performance: Legacy Tests
- 14: A Few Words on Power Consumption
- 15: Conclusions and Final Words
- The Intel Skylake-X Review: Core i9-7900X, i7-7820X and i7-7800X Tested
- The Intel Kaby Lake-X Review: Core i7-7740X and i5-7640X Tested
- Intel Announces Basin Falls: The New High-End Desktop Platform and X299 Chipset
Comments
mapesdhs - Tuesday, September 26, 2017
In that case, using Intel's MO, TR would have 68. What Intel is doing here is very misleading.
iwod - Monday, September 25, 2017
If we factor in the price of the whole system, rather than just the CPU (AMD's motherboards tend to be cheaper), then AMD is doing pretty well here. I am looking forward to next year's 12nm Zen+.
peevee - Monday, September 25, 2017
From the whole line, only the 7820X makes sense from a price/performance standpoint.
boogerlad - Monday, September 25, 2017
Can an IPC comparison be done between this and Skylake-S? Skylake-X LCC lost in some cases to Skylake, but is it due to the lack of L3 cache, or because the L3 cache is slower?
IGTrading - Monday, September 25, 2017
There will never be an IPC comparison of Intel's new processors, because all it would do is showcase how Intel's IPC actually went down from Broadwell and further down from Kaby Lake.
Intel's IPC is in a downtrend, and this is not really good for clicks and internet traffic.
Even worse: it would probably upset Intel's PR, and that website would surely not be receiving any early review samples.
rocky12345 - Monday, September 25, 2017
Great review, thank you. This is how a proper review is done. Those benchmarks we saw of the 18-core i9 last week were a complete joke, since the guy had the chip overclocked to 4.2GHz on all cores, which really inflated the scores vs a stock Threadripper 16/32 CPU. That was very unrealistic from a cooling standpoint for end users.
This review was stock for stock, and we got to see how both CPU camps performed in their out-of-the-box states. I was a bit surprised the mighty 18-core CPU did not win more of the benches, and when it did, it was not by very much most of the time. So a $1K CPU vs a $2K CPU, and the mighty 18-core did not perform like it was worth $1K more than the AMD 1950X, or the 1920X for that matter. Yes, the mighty i9 was a bit faster, but not $1,000 faster, that is for sure.
Notmyusualid - Thursday, September 28, 2017
I too am interested in seeing 'out of the box' performance.
But if you think ANYONE would buy this and not overclock it - you'd have to be out of your mind.
There are people out there running 4.5GHz on all cores, if you look for it.
And what is with all this 'unrealistic cooling' I keep hearing about? You fit the cooling that suits your CPU. My 14C/28T CPU runs at 162W 24/7 running BOINC, and is attached to a 480mm four-fan all-copper radiator; hand on heart, I don't think it has ever exceeded 42C, and it sits at 38C mostly.
If I had this 7980XE, all I'd have to do is increase the pump speed, I expect.
wiyosaya - Monday, September 25, 2017
Personally, I think the comments about people that spend $10K on licenses having the money to go for the $2K part are not necessarily correct. Companies will spend that much on a license because they really do not have any other options. The high-end Intel part gets 30 to maybe 50 percent more performance in a select few benchmarks. I am not going to debate that that kind of improvement is significant, even though it is limited to a few benchmarks; however, to me that kind of increased performance comes at an extreme price premium, and companies that do their research on the capabilities of each platform vs price are not, IMO, likely to throw away money on a part just for bragging rights. IMO, a better place to spend that extra money would be on RAM.
HStewart - Monday, September 25, 2017
In my last job, they spent over $100k on a software versioning system.
In the workstation/server world they are looking for reliability, and this typically means Xeon.
Gaming computers are different: usually kids want them and have less money, and they always need the latest and greatest without caring about reliability - when a new graphics card comes out, they replace it. AMD is focusing on that market - which includes the Xbox One and PS4.
For me, I am looking for something I can depend on and know will be around for a while. Not something that slaps multiple dies together to claim bragging rights for more cores.
Competition is good, because it keeps Intel on its feet. I think if AMD had not purchased ATI, there would be no competition for Intel at all in the x86 market. But it is not smart either - would anybody be serious about placing an AMD graphics card on an Intel CPU?
wolfemane - Tuesday, September 26, 2017
Hate to burst your foreign bubble, but companies are cheap in terms of staying within budgets, especially up-and-coming corporations. I'll use the company I work for as an example: a fairly large print shop with 5 locations along the US West coast, in existence since the early 70's, with about 400 employees in total. Servers, PCs, and general hardware only see an upgrade cycle once every 8 years (not all at once, it's spread out). Computer hardware is a big deal in this industry, and the head of IT for my company has done pretty well with this kind of hardware life cycle. First off, Macs rule here for preprocessing; we will never see a Windows-based PC for anything more than accessing the internet. But when it comes to our servers, they are running some very old Xeons.
As soon as the new fiscal year starts, we are moving to an EPYC-based server farm. They've already set up and established their offsite client-side servers with EPYC, and IT absolutely loves them.
But why did I bring up Macs? The company has a set budget for IT, and this and the next fiscal year had budget for company-wide upgrades. By saving money on the back end, we were able to purchase top-end graphics stations for all 5 locations (something like 30 new machines) - something we wouldn't have been able to do for the same layout with Intel. We are very much looking forward to our new servers next year.
I'd say AMD is doing more than keeping Intel on their feet, Intel got a swift kick in the a$$ this year and are scrambling.