Real Talk: Gaming Hardware Is About Maximizing Efficiency, Not TFLOPs
The debate over which next-gen console is more "powerful" has been heating up lately, with most people assuming the Xbox Series X is more powerful based solely on higher numbers in certain spec categories. Some folks counter that the custom hardware in the PS5 will alleviate some of its relative performance deficit and that the difference will be minimal.
Before I proceed, let's really think about what we mean by "powerful" in this context, because it could mean several different things. People tend to toss the word around and say "System X has more TFLOPs so it's more powerful" or "System Y can run at higher framerates so it's more powerful." The distinction matters for the next-generation consoles because both systems have advantages in different areas.
For this discussion, I want to focus on actual game performance as the goal: which system can process the most data in the shortest amount of time, yielding richer worlds at higher framerates. In other words, I am stepping away from theoretical TFLOPs and high-level specs and focusing on which system ultimately runs games with the same or greater detail at higher framerates.
Now of course, let me state the obvious: at this point, nobody really knows whether the Xbox Series X or the PS5 is more powerful. Why? Because nobody has seen both running on final hardware, up close, with the same game side by side for a comparison. So I'm not here to declare either one more "powerful," just to check some folks who claim one is superior based solely on numbers on paper or video streams.
Now many people in the know, including developers, have said this, but let me reiterate: virtually no real-world game running on any system utilizes 100% of that system's capability at all times. As beautiful as TLOU2 or God of War look on PS4, it is completely incorrect to think that either game is extracting the maximum 1.8 TFLOPs of GPU power for any sustained period. Yes, even if the clock speeds are fixed, actual utilization depends on the software running on the hardware. For example, I can have a 5 GHz CPU and a 2 GHz top-of-the-line GPU running a simple 'for' loop or a basic binary search. Does that mean the system is hitting its theoretical 14 TFLOPs while executing those few lines of code, simply because its frequencies are locked? Theoretically, I could build a 15 PetaFLOP machine (15,000 TFLOPs) that is several orders of magnitude more powerful than anything on the market today. But if all it could play were Wii games by design, would that be a system utilizing its full potential? Would that be next gen?
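To put that in concrete terms, here's a rough back-of-the-envelope sketch (all figures hypothetical, chosen only to line up with the 14 TFLOPs example above): theoretical peak is just ALU count times clock times ops per cycle, and a trivial serial workload barely touches it.

```python
# Back-of-the-envelope sketch, hypothetical figures only:
# theoretical peak = ALU count x clock x 2 ops/cycle (fused multiply-add),
# but a trivial serial workload barely touches it.
import bisect

alus = 3584            # hypothetical shader ALU count
clock_ghz = 2.0        # hypothetical "top of the line" GPU clock
peak_tflops = alus * clock_ghz * 2 / 1000
print(f"Theoretical peak: ~{peak_tflops:.1f} TFLOPs")   # ~14.3 TFLOPs

# A binary search over a million elements finishes in ~20 comparisons --
# essentially zero floating-point work, no matter how high the clocks are.
data = list(range(1_000_000))
index = bisect.bisect_left(data, 123_456)
print(f"Found index {index} with a handful of operations, nowhere near peak")
```

The locked clocks set the ceiling; the software decides how much of that ceiling is ever touched.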
The point here is something I've mentioned several times in this forum, and I think a lot of people miss it. When we think about "next gen" gaming and transitioning to a new generation, it isn't really the hardware that achieves those milestones. It's the actual software, the games, that truly defines a new generation. We don't remember the specs of the N64 and how much more horsepower it had over the PS1, but we remember how seeing Super Mario 64 for the first time took our breath away. Try as we might, few people could look at Mario 64 in motion and translate that into the exact hardware specs that made it possible, or into how any theoretical advantage over competing hardware shows up in the images being rendered in front of them. The same could be said of the move to PS2: it was seeing GT3, Metal Gear Solid 2, and GTA III that defined what "next gen" really meant for that generation, not a GFLOP count or marketing buzzwords like "Emotion Engine." We could go on with seeing Halo for the first time, Gears of War, Uncharted 2, and Killzone Shadow Fall in later generations, but you get my point. Here is the question: if you didn't know the hardware specs of the system running those games, would that change how you looked at that system? In other words, if Kojima mentioned today that MGS2 on PS2 only used < 1 GFLOP of performance, would you now look at the PS2 as "weaker" than the Dreamcast (capable of a theoretical 1.4 GFLOPS) even though it clearly looked better than anything on the Dreamcast at the time?
With that in mind, we should realize that all this talk about TFLOPs and theoretical numbers is moot at the end of the day and misses the point. If we accept that maximum theoretical numbers are fairly meaningless in determining real game performance, and we agree that real-world, demonstrable performance is what actually matters, then we should be focusing on which system will deliver its theoretical performance to the screen most effectively. A tremendous number of system components and variables all have to play nicely and align perfectly for a system to operate at its maximum capacity. In truth, that almost never happens with real workloads, but the systems perceived to be the most "powerful" are generally the ones that come closest to their theoretical maximums, meaning the ones that are most efficient. That truly is the name of the game: removing bottlenecks and creating a balanced system whose parts work together effectively is really the art of designing a game console (or any system).
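As a hedged illustration of what "removing bottlenecks" buys you (the stage timings below are made up purely to show the principle): when the stages of a frame are pipelined, the slowest stage sets the frame rate, so a monster spec in one component is wasted if another component can't keep up.

```python
# Sketch of the "balanced system" point (all numbers hypothetical):
# with pipelined frame stages, the slowest stage limits throughput.

def frame_time_ms(stage_times_ms):
    """Throughput of a pipelined frame is limited by the slowest stage."""
    return max(stage_times_ms.values())

unbalanced = {"cpu": 8.0, "gpu": 4.0, "io": 20.0}   # huge GPU, starved by slow I/O
balanced   = {"cpu": 9.0, "gpu": 9.0, "io": 9.0}    # weaker parts, no bottleneck

print(f"Unbalanced: {frame_time_ms(unbalanced):.0f} ms/frame (~50 fps)")
print(f"Balanced:   {frame_time_ms(balanced):.0f} ms/frame (~111 fps)")
```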
I recently got into a back and forth with someone who insisted the Xbox Series X is clearly more powerful because "the numbers don't lie." I literally LMAO'd and shot back, "LOL. YES THEY DO!" There are countless examples of this, and many on this forum have posted PC GPU comparisons showing a lower-TFLOP GPU outperforming a higher-TFLOP GPU in real games. But there are two examples in particular I want to remind people of:
- The first and more recent example of "numbers telling lies" is the PS3 vs. Xbox 360 comparison. On paper, there is no denying that the PS3 had a much higher theoretical performance ceiling when you factored in the Cell, its SPUs, and the RSX GPU. Yet most multiplatform games ran better on the Xbox 360. Why? Because the X360 was a much more balanced system that allowed developers to extract more performance with less effort than the PS3. In other words, its "power" was far more accessible and the system more efficient. Its unified memory, symmetrical CPU design, and larger GPU with more parallel pipelines meant there was more usable power on tap in the X360. This was evident in many third-party games throughout the generation, but especially in the first few years (anyone remember Madden 07 running at 60fps on X360 vs. only 30fps on PS3?). Other big titles such as Red Dead Redemption, Skyrim, and Assassin's Creed ran at lower resolution and/or lower framerates on the PS3. One way to characterize this at a high level of abstraction (not literal figures, just an illustration) is that 70% of the Xbox 360 was better than 40% of the PS3.
- For those old enough to remember, the second major example was the original PS1 vs. the Sega Saturn. People may not remember, but on paper the Sega Saturn was superior to the PS1 in almost every respect: more polygon-pushing power, higher pixel fillrate, more RAM and VRAM, better sprite processing, higher maximum resolution, and more. And yet the vast majority of third-party multiplatform games looked and ran better on the PS1. Games like Tomb Raider, Resident Evil, and Wipeout are just a few examples where the Saturn version had poorer performance or was missing visual elements altogether. Why? Again, the Saturn was notoriously difficult to develop for, particularly when it came to harnessing its maximum potential. It featured dual CPUs that were very tricky to code for, and in fact most developers ignored the second processor altogether, cutting the system's theoretical performance dramatically. The PS1, on the other hand, was well balanced and easy to get the desired level of performance out of. Developers got much more out of it with less effort. Again, as a high-level abstraction: 60% of the PS1 was a lot better than 30% of the Saturn.
We've heard things said about the PS5 such as: it's one of the easiest systems ever to develop for, it's very easy to get the power out of it, it removes many of the bottlenecks that have existed for years, and it frees developers from design constraints they have been working around for decades. These kinds of statements all point to a system that will be extremely efficient and allow developers to harness more power with less time and effort. The fact that we haven't heard the same sorts of statements around the Series X leads me to believe that the PS5 is in fact the more efficient of the two.
This means you can get much closer (still likely not 100%) to that 10.28 TFLOPs of GPU power more consistently in actual workloads. This means you can devote much more of those 8 Zen 2 cores to meaningful work the player will see, as opposed to "under the hood" tasks like data management and audio processing. This means you can actually achieve near 100% of the theoretical SSD read/write speeds without the traditional bottlenecks that HDDs have imposed on games for years. This means you can get much more efficient use out of the physical RAM allotment because fewer wasteful or unnecessary assets are taking up space.
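A hedged illustration of why the I/O side of that matters (the 5.5 GB/s raw read figure is Sony's quoted PS5 number; the HDD figure and both efficiency factors are assumptions picked just for this example):

```python
# Illustrative only: how much of a drive's quoted bandwidth actually reaches
# the game. The 5.5 GB/s raw read figure is Sony's quoted PS5 number; the
# HDD figure and both efficiency factors are assumptions for this example.

def load_time_s(asset_gb, raw_gb_per_s, efficiency):
    """Seconds to stream a set of assets at some fraction of raw bandwidth."""
    return asset_gb / (raw_gb_per_s * efficiency)

assets_gb = 2.0
print(f"HDD @ 0.1 GB/s, 50% efficient: {load_time_s(assets_gb, 0.1, 0.5):.1f} s")   # ~40 s
print(f"SSD @ 5.5 GB/s, 90% efficient: {load_time_s(assets_gb, 5.5, 0.9):.2f} s")   # ~0.4 s
```

The gap between those two numbers is why removing the I/O bottleneck changes how games can be designed, not just how fast they load.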
The people who truly follow what I'm saying in this thread will realize that these things are much more exciting to both developers and end users than some higher numbers on a spec sheet. These are the things that can make a meaningful difference in the quality of the games we play over the next few years. These are the things that will directly improve the quality of the software, which is really what delivers the next-gen experience. That is absolutely cause to sing the praises of the PS5, as many developers have done.
Unfortunately for Cerny and the team at Sony, most of the real work and genius in the PS5's design is not easy to communicate to end users. It's also not something end users can truly appreciate until they see the results, and that of course will not happen right away at launch in 2020. Ultimately, though, there is much to be excited about in the innovations Sony is bringing to the PS5 and the level of efficiency they may have achieved.
So while I am not saying the PS5 is definitely more powerful (meaning more performant) than the Series X, I am saying it is absolutely inaccurate to call the Series X more powerful based solely on TFLOPs ratings and other theoretical specs. In other words, despite what the numbers say, it is entirely possible that we will see many cases where games perform better (i.e., more complex scenes and/or higher framerates) on PS5. To use my analogy above: 85% of the PS5 may be better than 60% of the Series X (for example). It wouldn't be the first time the numbers didn't tell the whole truth.
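To put rough numbers on that analogy (the 10.28 and 12.15 TFLOP figures are the publicly quoted peaks; the utilization percentages are purely illustrative, not measurements of anything):

```python
# Rough arithmetic behind the analogy: effective throughput = quoted peak x
# sustained utilization. The peaks are the publicly quoted TFLOP figures;
# the utilization percentages are hypothetical, echoing the text above.
ps5_peak, xsx_peak = 10.28, 12.15   # quoted peak TFLOPs
ps5_util, xsx_util = 0.85, 0.60     # illustrative sustained utilization

print(f"PS5 effective:      {ps5_peak * ps5_util:.2f} TFLOPs")   # ~8.74
print(f"Series X effective: {xsx_peak * xsx_util:.2f} TFLOPs")   # ~7.29
```

If the utilization gap were anywhere near that large, the system with the smaller number on the spec sheet would come out ahead in practice.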