NVIDIA RTX 4090 vs. RTX 5090: Which GPU is Better?

For technology enthusiasts and industry professionals alike, staying current on the latest GPU advancements is not merely a hobby; it is essential for staying competitive and getting the most performance out of available compute.

Today, we’re focusing on two specific GPUs: the impressive NVIDIA RTX 4090 and its much-anticipated successor, the yet-to-be-released RTX 5090. While the RTX 4090 has established a strong reputation for performance, the RTX 5090 remains the subject of much speculation.

As we delve into what we know and what we expect, it’s important to remember that our understanding of the RTX 5090 is still evolving. Let's examine the facts, early insights, and emerging rumors.

In-Depth Look at the NVIDIA GeForce RTX 4090

The RTX 4090 is recognized as the fastest consumer-grade GPU on the market. It is built on the AD102 chip with Ada Lovelace architecture and represents the pinnacle of high-end gaming and professional performance.

The RTX 4090 features a triple-slot design and boasts an impressive 128 third-gen RT cores and 512 fourth-gen Tensor cores, surpassing its closest competitors. This significantly enhances ray-tracing performance and accelerates AI-driven tasks like DLSS 3 in games.

Additionally, it is equipped with 16,384 CUDA cores and 24GB of GDDR6X graphics memory, offering a maximum bandwidth of 1,008 GB/s over a 384-bit memory bus. Error-correcting code (ECC) memory can be enabled, but it costs some performance and offers no benefit to typical rendering workloads, so it's generally best left off when speed is a priority.
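If you want to confirm whether ECC is currently enabled on a card before a latency-sensitive run, the nvidia-smi command-line tool exposes the ECC state. Below is a minimal Python sketch wrapping that query; it assumes nvidia-smi is installed and on the PATH, and the fields reported can vary by driver version and GPU.

```python
import subprocess

def ecc_mode() -> str:
    """Query the current and pending ECC mode for each GPU via nvidia-smi.

    Assumes nvidia-smi is on PATH; ecc.mode.current / ecc.mode.pending are
    standard nvidia-smi query fields, but support depends on driver and GPU.
    """
    result = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=name,ecc.mode.current,ecc.mode.pending",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

if __name__ == "__main__":
    print(ecc_mode())
    # Toggling ECC requires admin rights and a reboot to take effect, e.g.:
    #   nvidia-smi -e 0   # disable ECC
    #   nvidia-smi -e 1   # enable ECC
```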

With its immense processing power, the RTX 4090 requires substantial energy, drawing 450 watts to deliver peak performance. An 850-watt power supply and robust cooling solutions are necessary, naturally increasing operational costs.

Launched in October 2022, the RTX 4090 remains NVIDIA’s top offering for enthusiasts seeking unparalleled graphics and computational power.

However, this dominance is poised to evolve with future advancements.

In-Depth Look at the NVIDIA GeForce RTX 5090

The RTX 5090 is anticipated to launch later this year. Its innovative features and next-gen architecture are set to revolutionize high-end performance. Moving beyond NVIDIA’s Ada Lovelace, we're entering the Blackwell architecture era.

As part of the new Blackwell GPU lineup, the flagship GeForce RTX 5090 will reportedly use the "physically monolithic" GB202 die. Notable features include a potential 512-bit memory bus and up to 32GB of next-gen GDDR7 memory, succeeding the GDDR6/X memory used by the RTX 40-series GPUs. Early indications suggest the RTX 5090 could reach memory bandwidths of up to 1,532 GB/s, significantly higher than the RTX 4090’s 1,008 GB/s.
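As a rough sanity check on those figures, peak memory bandwidth is just the bus width (in bytes) multiplied by the per-pin data rate. The sketch below uses the RTX 4090's published 21 Gbps GDDR6X rate and an assumed ~24 Gbps GDDR7 rate for the rumored 512-bit RTX 5090; the GDDR7 speed is an assumption, not a confirmed spec.

```python
def peak_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: (bus width / 8 bytes) * per-pin data rate."""
    return bus_width_bits / 8 * data_rate_gbps

# RTX 4090: 384-bit bus at 21 Gbps GDDR6X -> 1,008 GB/s (matches the spec sheet)
print(peak_bandwidth_gb_s(384, 21.0))   # 1008.0

# Rumored RTX 5090: 512-bit bus at an assumed ~24 Gbps GDDR7 -> ~1,536 GB/s,
# in line with the ~1.5 TB/s figure circulating in leaks
print(peak_bandwidth_gb_s(512, 24.0))   # 1536.0
```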

Expectations include up to 192 streaming multiprocessors (SMs) in the RTX 5090, for an unprecedented 24,576 CUDA cores, assuming the GB202 chip retains the 128-cores-per-SM design of the RTX 4090. Additionally, the RTX 5090 is rumored to carry 192 next-generation RT cores and 768 next-generation Tensor cores, up from 128 and 512 in the RTX 4090.
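The CUDA-core figure follows directly from that SM count. A one-line check, assuming the 128-cores-per-SM layout carries over from Ada Lovelace to Blackwell (which is itself an assumption):

```python
CORES_PER_SM = 128  # Ada Lovelace layout; assumed to carry over to Blackwell

print(128 * CORES_PER_SM)  # RTX 4090: 16,384 CUDA cores
print(192 * CORES_PER_SM)  # Rumored RTX 5090: 24,576 CUDA cores
```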

The RTX 5090 is expected to deliver a 50-70% performance improvement over the RTX 4090, particularly excelling at 4K resolution. With 4K 240FPS gaming on the horizon, the RTX 5090 will be ideally suited for the most demanding gaming scenarios.
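To put 4K 240FPS in perspective, each increase in target frame rate shrinks the time the GPU has to render a frame. A quick calculation of the per-frame budget:

```python
def frame_budget_ms(target_fps: float) -> float:
    """Milliseconds the GPU has to render each frame at a target frame rate."""
    return 1000.0 / target_fps

for fps in (60, 120, 144, 240):
    print(f"{fps:>3} FPS -> {frame_budget_ms(fps):.2f} ms per frame")
# At 240 FPS the budget is only ~4.17 ms per frame, which is why a large
# generational uplift matters for high-refresh 4K gaming.
```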

Also, the RTX 5090 is anticipated to support next-gen DLSS 4, NVIDIA’s AI-powered upscaling technology, which should push performance and image quality even further. Regarding power consumption, the RTX 5090 is expected to land in the same ballpark as its predecessor, with a rumored total graphics power (TGP) between 450W and 600W, at or above the RTX 4090’s 450W.
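Because TGP drives power-supply sizing, it is worth sketching the usual rule of thumb: GPU power plus an estimate for the rest of the system, plus headroom. The 250W system estimate and 20% margin below are illustrative assumptions, not NVIDIA guidance.

```python
def suggested_psu_watts(gpu_tgp_w: float,
                        rest_of_system_w: float = 250.0,
                        headroom: float = 0.20) -> float:
    """Rough PSU sizing: (GPU TGP + rest of system) plus a safety margin.

    The 250 W system estimate and 20% headroom are illustrative assumptions.
    """
    return (gpu_tgp_w + rest_of_system_w) * (1.0 + headroom)

print(round(suggested_psu_watts(450)))  # ~840 W, close to the 850 W PSU suggested for the RTX 4090
print(round(suggested_psu_watts(600)))  # ~1020 W at the rumored 600 W upper bound
```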

To quickly compare some of the anticipated specs of the RTX 5090 with the RTX 4090, based on the latest speculation and leaks, here's a table highlighting the upgrades:

Comparison Chart: RTX 4090 vs. RTX 5090

| Spec | RTX 4090 | RTX 5090 (rumored) |
| --- | --- | --- |
| Streaming Multiprocessors | 128 | 192 |
| CUDA Cores | 16,384 | 24,576 |
| Ray-Tracing Cores | 128 | 192 |
| Tensor Cores | 512 | 768 |
| Boost Clock | 2.52 GHz | 2.9 GHz |
| L2 Cache | 72 MB | 128 MB |
| Memory Bandwidth | 1,008 GB/s | 1,532 GB/s |

GeForce RTX 5090 May Be Reserved for AI

Since last year, the NVIDIA GeForce RTX 5090 graphics card has been the subject of numerous rumors, and recent reports have surfaced two surprising details about this next-gen card. Expected to be based on NVIDIA's Blackwell architecture, the RTX 5090 could be up to 70% faster than the current-gen RTX 4090, according to the tech YouTube channel Moore’s Law Is Dead. A performance boost of that size would easily handle any of the best PC games on the market. Earlier rumors suggested the RTX 5090 might be nearly twice as fast as the RTX 4090, adding another layer to the speculation. However, we should remain skeptical until we can measure the performance ourselves.

The anticipated performance increase may come from up to 192 streaming multiprocessors in the RTX 5090 (a 50% increase over the RTX 4090's 128), resulting in 24,576 CUDA cores, 192 ray tracing cores, and 768 tensor cores. If these rumors are accurate, this card will be a true powerhouse.

However, this performance jump would come at a high cost. Reports indicate the 5090 could be priced between $2,000 and $2,500, with other potential models including a $1,000 RTX 5080, a $700 RTX 5070, a $400 RTX 5060, and a budget RTX 5050 Ti for around $300. Even at those prices, these may not be NVIDIA's most capable silicon: predictions suggest the flagship may not ship with a fully enabled die, as NVIDIA is likely to reserve its most powerful chips for the rapidly growing AI market, the same market that propelled the company to a trillion-dollar valuation last year.

Conclusion

As we eagerly await the release of the RTX 5090, it's essential to acknowledge that the RTX 4090 already offers outstanding performance, effortlessly handling 4K tasks. While the RTX 5090 will undoubtedly be impressive, it may be considered more of a luxury upgrade rather than a necessity for gamers and professionals who already benefit from the RTX 4090's power.

Cost is another crucial consideration. The RTX 5090 will certainly be a high-end card with a corresponding price tag, while the RTX 4090 remains expensive nearly two years after its launch.

As the demand for GPU resources continues to surge, especially for AI and machine learning applications, ensuring the security and ease of access to these resources has become paramount.

Spheron’s decentralized architecture aims to democratize access to the world’s untapped GPU resources and strongly emphasizes security and user convenience. Let’s unpack how Spheron protects your GPU resources and data and ensures that the future of decentralized compute is both efficient and secure.

Interested in learning more about Spheron’s network capabilities and user benefits? Review the whitepaper in full.

Join Spheron's Private Testnet and Get Complimentary Credits for your Projects

As a developer, you now have the opportunity to build on Spheron's cutting-edge technology using free credits during our private testnet phase. This is your chance to experience the benefits of decentralized computing firsthand at no cost to you.

If you're an AI researcher, deep learning expert, machine learning professional, or large language model enthusiast, we want to hear from you! Participating in our private testnet will give you early access to Spheron's robust capabilities and complimentary credits to help bring your projects to life.

Don't miss out on this exciting opportunity to revolutionize how you develop and deploy applications. Sign up now by filling out this form: https://b4t4v7fj3cd.typeform.com/to/Jp58YQB2

Join us in pushing the boundaries of what's possible with decentralized computing. We look forward to working with you!