The RTX 4090 is finally upon us, and it’s glorious! People will rightfully grumble about the shoddy power connectors and eye-watering price tag. Still, there’s no denying the GPU’s unbridled excellence. Purchasing one gets you 4K at 60+ FPS on Ultra details with raytracing enabled even without DLSS. The best CPU for RTX 4090 should have no trouble keeping up, and we’ll help you decide which to choose.
As impressive as the card’s 4K capabilities are, that’s only part of the equation. Some gamers prefer insane frame rates at lower resolutions, where CPU bottlenecking is a reality. Others will leverage the card’s titanic strength to model sprawling 3D worlds or train AI and need more processing horsepower to back it up.
Regardless of your use case, we’ve put together a list of current CPUs that synergize well with the card. Explore their strengths, consider their prices, and put together the PC that will power your imagination for years to come.
Intel Core i9-13900K
Socket: LGA 1700 | Cores & threads: 8+16 / 32 | L3 Cache size: 36MB | Base Clock Speed: 3.0 / 2.2GHz | Boost Clock Speed: 5.8 / 4.3GHz | Base Power: 125W | Turbo Power: 253W | iGPU: Yes
- Outstanding overall performance
- Substantial multi-threaded gains
- Works on existing LGA 1700 motherboards
- Runs hot
- Needs lots of power
Alder Lake marked Intel’s return to glory as the 12900K regained the performance crown from the then year-old 5950X. Its successor continued the streak, wrestling with and outperforming AMD chips released a mere two weeks prior! It’s the undisputed desktop CPU champion and technically the best RTX 4090 CPU. Whether you need one is another matter.
The 13900K leads the 13th-gen Core series, AKA Raptor Lake. It’s the pinnacle of Intel’s monolithic chip designs, as Meteor Lake will adopt an overhauled architecture. Unless Intel comes out with another KS version, it’s also the most advanced processor you can get for the LGA 1700 socket since it, too, will be changing in the latter part of 2023.
Refinement wasn’t Intel’s recent strong suit, but Raptor Lake sets the company back on track. A doubling of the efficiency or E-core count is the most apparent upgrade, yet less visible changes abound. The E-cores themselves are unchanged apart from a hefty 400MHz increase to their boost clock.
Performance or P-cores are the meat & potatoes of Intel’s strategy for heavy single-threaded workloads. The 13900K comes with eight second-gen or Raptor Cove cores. These boost 600MHz higher, have a larger L2 cache, and feature several hardware improvements. The L3 cache went up from 30 to 36MB as well. Combined, these enhancements lead to a respectable 15% IPC uplift.
The CPU Apex Predator
We’ve come to expect a focus on single-threaded dominance from Intel, so the proliferation of E-cores is a welcome surprise. Sixteen are clustered in packs of four, each with access to 4MB of L2 cache. The resulting 40% improvement in multi-threaded performance is good enough to outpace even the mighty 7950X in many cases.
Sounds fabulous, right? It is, but the changes come at the cost of increased power draw and worse thermals. The 13900K guzzles electricity like no other desktop CPU. It can draw as much as 400W when all cores are stress-tested, despite Intel’s 253W turbo power rating.
That rating is more in line with consumption while gaming. Considering the RTX 4090 is equally power-hungry, you’ll want to spring for a 1,000W PSU or more to be on the safe side. Intel recommends at least a 240mm AIO to tame the accompanying thermals. Undervolting will alleviate some of the stress with small performance losses. We’d still hesitate to use even elite air coolers with this CPU.
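The PSU recommendation follows from simple headroom math. Here’s a rough sketch; the 450W GPU figure is NVIDIA’s official board power, the 400W CPU figure is the all-core stress peak mentioned above, and the 75W “rest of system” number is our own ballpark assumption:

```python
# Rough worst-case PSU sizing for a 13900K + RTX 4090 build.
# All wattages are estimates, not measurements.
components_w = {
    "rtx_4090": 450,          # official total board power
    "i9_13900k_peak": 400,    # all-core stress-test draw cited above
    "rest_of_system": 75,     # board, RAM, fans, drives -- rough guess
}

total_w = sum(components_w.values())
# Common advice is ~20% headroom so the PSU stays in its efficiency sweet spot.
recommended_psu_w = total_w * 1.2

print(f"Worst-case draw: {total_w}W")                      # 925W
print(f"PSU with 20% headroom: {recommended_psu_w:.0f}W")  # 1110W
```

In practice, games rarely load the CPU and GPU fully at the same time, which is why a quality 1,000W unit is usually enough.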
A look at our 13600K and 7800X3D overviews raises the question of whether you need one at all. RTX 4090 owners who use their card to make content, work on 3D renders, or do complex simulations will reap significant benefits from the improved E-core count. If you’re only after the card’s extraordinary gaming ability, the 13900K brings little more to the table than the 13600K except better futureproofing.
AMD Ryzen 7 7800X3D
Socket: AM5 | Cores & threads: 8 / 16 | L3 Cache size: 96MB | Base clock speed: 4.2GHz | Boost clock speed: 5.0GHz | TDP: 120W | iGPU: Yes
- Beats the 13900K in gaming
- Extraordinary power efficiency
- Reasonable heat output
- Mediocre productivity performance for the price
The 5800X3D was a bold proof of concept that blew everyone away with gaming results that overshadowed the then-almighty 12900K. In a case of history repeating, the 7800X3D wrestles the gaming crown away from Intel once more!
The 7800X3D doesn’t come alone this time, yet it remains the most compelling Zen4 CPU if you prefer stellar framerates at ridiculously low power draw over more rounded processing capabilities.
Lackluster productivity performance was the 5800X3D’s most valid criticism. Luckily, AMD learned enough from the first desktop processor that used 3D V-cache to introduce equivalents to the 7950X and 7900X.
Their 3D versions shore up the lineup where the 7800X3D’s productivity performance falls short. They’re also considerably more expensive and don’t always boost games as expected due to core scheduling quirks operating systems have yet to iron out.
The 7800X3D is a refinement of the original and not as daring as a result. It retains 8 cores and 16 threads, rivaling the 13900K’s P-core count. The additional L3 cache is 64MB again, and your system treats it as a unified whole with the 32MB present on the die itself.
There’s no 7800X this time. Still, the 7800X3D clocks in at lower frequencies than the 7700X; the added cache more than makes up for the difference in gaming.
The World’s Absolute Best Gaming CPU
The chip fully embraces all of AM5’s technological advances. It’s built using TSMC’s newly refined 5nm and 6nm production processes. Its single CCD houses the cores and uses the former, while the iGPU, memory controller, and 3D V-cache can get away with the latter to reduce fabrication costs.
You couldn’t overclock the 5800X3D since its stability depended closely on maintaining exact voltages. Its successor is a bit more flexible, allowing you to eke out a few more frames through PBO.
Even then, its power consumption remains astoundingly low. Expect no more than 60W during average gaming sessions and not even 100W when using the PC for more power-intensive tasks.
AMD also lowered the chip’s maximum running temperature to 89°C. Temperatures while gaming are much tamer, allowing you to use air coolers or smaller AIOs without the need to undervolt or settle for Eco mode.
If you’re laser-focused on gaming, the 7800X3D is now the best CPU for RTX 4090 you can get. It allows the card to pump out more frames than the 13900K at 1080p. How much more depends on the specific game, but we’re talking about high single-digit percentage points most of the time.
The RTX 4090 isn’t just a gaming GPU, though. Users who harness its overwhelming power to create 2D & 3D art or produce video may find the 7800X3D’s single-mindedness a deal breaker.
The 7950X3D performs on par with the non-3D variant in multithreaded tasks and can even best its younger brother by a frame or two in some games. It’s also considerably more expensive than the 13900K and not worth the investment until that changes.
AMD Ryzen 9 7950X
Socket: AM5 | Cores & threads: 16 / 32 | L3 Cache size: 64MB | Base Clock Speed: 4.5GHz | Boost Clock Speed: 5.7GHz | TDP: 170W | iGPU: Yes
- Outstanding performance uplift compared to 5950X
- Better platform longevity than 13900K
- Comparatively power efficient
- Expensive, higher platform costs
Zen3 is regarded as one of the finest desktop CPU generations in PC history. Even so, Alder Lake overshadowed all but the 5950X’s multi-core capabilities, and AM4 was getting long in the tooth. The new flagship came just in time to briefly take the crown, only to be relegated to second place with Raptor Lake’s arrival.
Nevertheless, the 7950X is the best CPU for RTX 4090 if you’re a serious content creator or streamer, or leverage the card’s awesome power for work more than play. Associated adoption costs are higher, but AM5 affords fresh opportunities, while LGA 1700 will become “dead” with the release of Meteor Lake.
AMD pins its hopes on sixteen P-core equivalents, each with two threads to match the 13900K’s total thread count. While Intel’s frequency boosts are noteworthy, the 7950X takes it to another level. Compared to the highly regarded 5950X, its base and boost clocks are higher by a staggering 1.1GHz and 800MHz, respectively.
The role reversal isn’t complete, as AMD brings more to the table than the mere clock bumps Intel got away with for ages. Much of the chiplet-based architecture has received an overhaul. The dies holding eight-core clusters each use TSMC’s tiniest 5nm node, while the I/O die that holds the RAM controller and iGPU needs only 6nm for fabrication.
Spearheading a New Ryzen Generation
The 7950X benefits from a 23% single-threaded and 44% multi-threaded performance uplift. These numbers are higher on paper than Intel’s, but keep in mind that Raptor Lake didn’t have as much catching up to do.
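For readers who want to sanity-check generational claims against their own benchmark runs, uplift percentages like these are simple score ratios. The scores below are purely hypothetical, normalized placeholders chosen only so the output matches the quoted figures:

```python
# Generational "uplift" is computed as (new - old) / old.
def uplift_pct(old_score: float, new_score: float) -> float:
    """Percentage improvement of new_score over old_score."""
    return (new_score - old_score) / old_score * 100

# Hypothetical normalized scores (old generation = 100), not real results.
print(f"single-threaded: {uplift_pct(100, 123):.0f}%")  # 23%
print(f"multi-threaded:  {uplift_pct(100, 144):.0f}%")  # 44%
```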
AM5’s current flagship succeeds in beating the 12900K in productivity and comes very close in games. However, that’s not the generation it’s up against. The 7950X isn’t a gaming CPU, and it shows at 1080p. Its multi-tasking and productivity prowess is top-notch yet contested by the 13900K in more than one metric.
If there’s anything the 7950X is decidedly better at, it’s power consumption. The 5950X was the most power-efficient desktop processor by a long shot, and its successor is noticeably worse off in comparison. Even so, it remains markedly better than the 13900K’s inefficient showing.
It’s hard to gauge which manufacturer handled cooling better since the results are disappointing overall. AMD says that 95°C is the new normal. Your 7950X will frequently reach this target when taxed and remain there, even if you have a 360mm AIO. The thermal output is much more palatable when gaming, and you can enable Eco mode to drop temperatures even more with a minimal performance loss. Now, there’s also the non-X version that puts out much less heat.
Intel Core i5-13600K
Socket: LGA 1700 | Cores & threads: 6+8 / 20 | L3 Cache size: 24MB | Base Clock Speed: 3.5 / 2.6GHz | Boost Clock Speed: 5.1 / 3.9GHz | Base Power: 125W | Turbo Power: 181W | iGPU: Yes
- World-class gaming performance
- Exceptional value for the money
- Lower platform costs than the competition
- Still too hot
We’ve been stressing that it doesn’t make sense to buy either manufacturer’s top processor if you’re a pure gamer, and the 13600K is the biggest reason why. It’s the best CPU for RTX 4090 if you’re putting together the ultimate value-focused rig in 2023. It’s affordable, good at multi-tasking even though that’s not its focus, and as competent at gaming as the 13900K.
There’s a pattern to Intel’s generational upgrades. The 13600K has double the 12600K’s E-cores, and its base clocks received a small cutback while boost clocks are higher. It benefits from a 4MB L3 cache increase as well. Unlike the two other Raptor Lake CPUs, this one has only six enabled P-cores.
The 13600K goes for around $300, significantly cheaper than all other CPUs on our list. A BIOS update makes it compatible with all 600-series motherboards. That means you can get an affordable B660 board (B760 isn’t out yet at the time of writing) and not miss out on anything since the RTX 4090 still uses PCIe 4.0.
There’s also the matter of RAM compatibility. Even though prices are on a steady decline, DDR5 has yet to demonstrate clear advantages for gaming. So, if you recently bought optimized DDR4 RAM, you can use it in your new 13600K rig without any major impediments.
As with the flagship, the higher frequency and increased E-core count make for impressive multi-threading gains. The 13600K is overall the fifth-best desktop CPU in the world in that regard, coming behind both manufacturers’ top two chips. It’s better than last gen’s 12700K and AMD’s new 7700X, and that’s just its secondary use case!
A Step or Two Short of Perfection
13900K-like gaming capabilities cement the 13600K’s place as one of history’s most accomplished CPUs. It’s most evident in modern titles like Far Cry 6 or Elden Ring, where the $300 CPU scores within a frame or two of the flagship at 1080p.
The results skew in the 13900K’s favor in CS:GO, where it’s around 10% faster. Not that even 360Hz monitor users will be able to tell the difference with the 4090 pumping out 700+ frames per second.
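Frame-time math shows why that 10% gap is invisible. A small sketch with rough, assumed fps figures (the exact numbers are illustrative, not benchmark results):

```python
# A 360Hz panel can only display a new image every 1/360th of a second,
# so frame times far below that refresh interval stop mattering.
def frame_time_ms(fps: float) -> float:
    """Time to render one frame, in milliseconds."""
    return 1000 / fps

refresh_interval = frame_time_ms(360)  # ~2.78ms between screen updates
faster_cpu = frame_time_ms(770)        # ~10% faster chip: ~1.30ms per frame
slower_cpu = frame_time_ms(700)        # ~1.43ms per frame

# Both CPUs render multiple frames per screen refresh,
# and the gap between them is a fraction of a millisecond.
print(f"360Hz refresh interval: {refresh_interval:.2f}ms")
print(f"Frame-time gap: {slower_cpu - faster_cpu:.2f}ms")
```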
Power consumption predictably scales downward with ten fewer active cores. Still, expect around 50% more draw than with the 12600K; even the 12700K fares slightly better. A stock 13600K isn’t much easier to cool than the flagship, so make room in your budget for appropriate cooling.
AMD Ryzen 7 7700X
Socket: AM5 | Cores & threads: 8 / 16 | L3 Cache size: 32MB | Base Clock Speed: 4.5GHz | Boost Clock Speed: 5.4GHz | TDP: 105W | iGPU: Yes
- Good mix of performance and price
- Better PCIe 5.0 support than Raptor Lake competitors
- Excellent power efficiency
- High platform costs
Before the 7800X3D stole the show, AMD had the 7700X! It’s a major step up from its Zen3 counterpart, boasting sky-high clock speed upgrades and numerous small improvements throughout its eight cores. Despite being the second-best CPU for RTX 4090 from Team Red, the 7700X would have been a hard sell at its $400 MSRP. Luckily, holiday prices seem to be holding, which makes it easier to recommend to die-hard AMD fans.
With only half of the 7950X’s cores, the 7700X ships with a single core die alongside the I/O die. All Zen4 chips also have an integrated GPU based on the RDNA2 architecture. That doesn’t make them the next generation of APUs. Rather, the iGPU is there to provide a display signal for people who use these chips for work and for discrete GPU owners who want something to tide them over if their card malfunctions.
The shift to AM5 means getting a new motherboard is mandatory. The associated costs make adopting the 7700X less appealing, but it’s not all bad. You can now get 32GB of DDR5 RAM for $150 or less, and your old cooler will fit the new socket just fine despite AM5’s transition to a land grid array layout.
You could think of the 7700X as an excellent foundation for a futureproof build. Its eight physical cores will remain relevant for several years if you’re mostly into gaming. PCIe 5.0 support is more advanced than on LGA 1700.
For the Next-Gen Value-focused AMD Gamer
Not only can you slot a PCIe 5.0 card once they become available, but Zen4 lets you simultaneously use up to two PCIe 5.0 SSDs without digging into the GPU’s lanes.
Low power requirements may prompt some users to choose the 7700X. It consumes 50-60W less than the 13600K under heavy load. Temperatures while gaming are only slightly in its favor, meaning that cooling has become an important consideration regardless of your CPU preference.
There’s no getting around it – the 7700X isn’t as good as the 13600K, whichever test you run. It’s either slightly behind or tied with Intel’s chip in productivity and content creation. The gap in potential framerates is clearly in the 13600K’s favor, at least at 1080p. Games are GPU-bound at 4K anyway, so it makes sense to go with Intel’s cheaper chip.
AMD Ryzen 7 5800X3D
Socket: AM4 | Cores & threads: 8 / 16 | L3 Cache size: 96MB | Base Clock Speed: 3.4GHz | Boost Clock Speed: 4.5GHz | TDP: 105W | iGPU: No
- Outstanding gaming improvements over base model
- Works on all AM4 systems
- Cheap platform costs
- No multi-threading improvements over 5800X
Ryzen’s arrival in 2017 marked a tectonic shift in the complacent CPU landscape. Early adopters and 2000-series owners are overdue for an upgrade, but should they back another untested generation?
The answer is yes if they want a long-lasting platform that supports technologies that have yet to come into their own. However, AMD left us a parting gift that makes staying on AM4 equally tempting. The 5800X3D is the platform’s swansong and the best CPU for RTX 4090 if you’re on an older Ryzen chip and looking for an accessible yet meaningful upgrade.
The original 5800X offered an enticing mix of gaming and productivity chops; only the 10700K outclassed it in 2020. With Alder Lake’s arrival, that one CPU turned into almost an entire generation that outpaced it. Still, the 5800X’s baseline specs made it the ideal springboard for innovative experimentation.
The 5800X3D debuted 18 months later, far too late to make core changes to Zen3’s architecture. Luckily, the 3D V-cache responsible for the CPU’s name didn’t require significant alterations. It interfaces directly with the central ring bus, adding another 64MB to the L3 cache, which your PC recognizes as an integrated whole with the 32MB base.
Going vertical required more space, which the 5800X3D got thanks to a thinner die. The extra L3 cache takes up only the central part, so the sides are shored up with structural silicon, which has the added benefit of increasing the IHS’s thermal conductivity.
AM4’s Blaze of Glory
None of this is evident on paper, apart from triple the cache. If anything, the 5800X3D seems like a sidegrade since its frequencies are slightly lower. You can’t even overclock the 3D version since it’s sensitive to voltage changes. Cutting the frequencies back also leads to marginally worse productivity results.
And yet, it still makes sense for forward-thinking gamers to get the 5800X3D. If you have an aftermarket cooler, some good DDR4 RAM, and any halfway decent AM4 motherboard, this CPU is all it takes to put your aging machine within high single-digit percentage points of Intel’s most advanced gaming rivals.
Yes, the 5800X3D will bottleneck the 4090 at 1080p. This affects only gamers focused on competitive multiplayer who don’t tolerate inefficiencies and are looking for every edge they can get. If you’re more into the 4K AAA experience at max settings with raytracing on, the 5800X3D won’t be a hindrance.
Other than being a worthwhile processor in its own right, the 5800X3D represents what we can look forward to in AMD’s immediate future. Regular Zen4 was clearly built to outperform Alder Lake. It’s now up to the next generation of 3D-endowed chips to challenge 13th and 14th-gen Core CPUs.
Intel Core i7-13700K
Socket: LGA 1700 | Cores & threads: 8+8 / 24 | L3 Cache size: 30MB | Base Clock Speed: 3.4 / 2.5GHz | Boost Clock Speed: 5.4 / 4.2GHz | Base Power: 125W | Turbo Power: 253W | iGPU: Yes
- Excellent for gaming and productivity alike
- Beats the 12900K regardless of metric
- Matches the 13900K in gaming at 1080p
- Runs hot, caters to a small customer base
We end our overview with the second most powerful Raptor Lake CPU, the 13700K. It’s more suitable for a PC you work and game on than the 13600K without the 13900K’s insane power requirements and added cost. It’s better than the 12900K in all respects and has what it takes to be the best CPU for RTX 4090 once its price falls a bit below MSRP.
The 13700K shares many similarities with Alder Lake’s top chip. They have the same number of E-cores and the same amount of L3 cache, and the difference in core frequencies is only 200MHz in favor of the newer model. Still, we’d be comparing apples to pears by saying they’re the same.
The Raptor Lake chip benefits from all the new architectural improvements, with Raptor Cove P-cores being the most instrumental in its gains. Two E-core clusters are disabled, while each of the remaining two draws on 4MB of L2 cache. Every P-core can now use 2MB instead of 1.25MB of the same.
We haven’t talked about Intel’s integrated Xe graphics yet, mostly since there’s little to say. It’s the same technology we’ve seen on Alder Lake CPUs with a modest clock speed increase. It’s disabled altogether in the KF series, so you should consider going for the 13700KF if you want to save around $25 and don’t care for a GPU backup.
95% of the 13900K’s Power at 75% of the Price
Value-wise, the 13700K is the most appealing productivity chip to build your PC around. Whether you spend your days encoding videos or running databases, only our two top choices do better. The Ryzen 9 7900X steals third place in some workloads, but it’s around $100 more expensive and thus not as attractive.
Gaming on the 13700K unsurprisingly follows a similar pattern. It’s sandwiched between the two other raptors, sometimes occupying the halfway point in frame rates but often matching the others’ results. And that’s at 1080p – CPU performance plays an ever-decreasing role at higher resolutions.
The 13700K is an objectively excellent CPU, so why is it last? Basically, it’s a victim of the 13600K’s brilliance. You’re getting no tangible gaming gains right now if you buy it, and might not for a while, if ever, so why spend $100–$115 more? Conversely, someone looking for the most raw power to tackle their workload more efficiently won’t mind dropping some extra cash on a 13900K.
That being said, the 13700K has a niche to fill. It’s great for anyone who doesn’t mind their simulations or video projects completing a few seconds later than on the best desktop CPU. Meanwhile, they get to save on the upfront cost & cooler now, not to mention electricity costs in the long term.
You can set the 13900K power limit to, say, 105W, and it still gets around 13K points in CPU-Z while using around 111W.
I’m gonna pair my 12400F with the 4090, coz for 4K gaming it’s GPU bound in AAA games, and the 12400F still provides 60+ fps. Don’t care about low resolutions. Don’t care about hundreds of fps in simpler games.
That’s a perfectly valid strategy. You’re mostly GPU bound at 4K anyway, so the excellent little 12400F is gonna be all you need.
Who knows, the 13400F has a good chance of replacing the 13700K in an updated version of the guide once the lower tier Raptors drop.
Oh yes! The 13400F makes even more sense. Its 6 P-cores can be stressed at 100% without system instability, coz its new 4 E-cores will take care of background system tasks when CPU bottlenecked.
Thanks for the heads-up! Will wait another month to get the 13400F instead.
ps: Too bad the 13100F isn’t configured as 4P+2E(or 4E). 😥
I’m really torn between the 13600K and the 13700K. I have money to get either one, but I would like to use that extra hundred-plus to buy some more storage. Money is very tight right now after getting the RTX 4090. I broke the bank – sold my arm and my liver.
In my mind I’m just stuck on the fact that I like 8 cores more than six. I do play at high 4k resolution so the 4090 GPU will be doing most of the work.
That is a pickle!
Are you building this system to last you a long while, like 5+ years? If so, you should probably go with the 13700K due to potentially better futureproofing and better multi-threaded performance right now. Those eight P-cores will come in handy sooner or later, plus you’ll have the extra E-cores for support.
Besides, if you have an older PC and can’t do much with it right now, storage is the least of your worries. You can already get decent PCIe 4.0 SSDs for cheap and add more later. Unless you also do stuff like video or photo editing or anything else that requires moving a lot of data between drives all the time, you can get away with any well-regarded modern NVMe SSD for gaming since the loading time differences are so small it doesn’t really matter.
On the other hand, you could get the 13600K now and upgrade to a 13900K in a couple of generations when it gets cheap. We can’t predict whether it will scale better with the newest games then, so you may not be getting much of a benefit when the 13600K is already all you need for 4K.
Like that other commenter said, you could technically wait for the 13400F and will likely see little to no performance difference for current games at 4K. That would make the 13900K a more tangible upgrade down the line as well. Also, keep in mind that getting the 13700K means you’ll want a better cooler – either a high-end air cooler or a decent AIO – so you have to budget for that too.
Not really helpful, eh? xD