Top Reasons Why Next Gen Specs Do Not Need To Match Top End PCs

One of the hottest topics around the web since the unveiling of the next-generation consoles, the PlayStation 4 and Xbox One, is how underpowered they are compared to high-end PCs, especially relative to past console generations. Both consoles use a low-powered 8-core x86 CPU and a DX11-capable GPU, both from AMD. Generally speaking, the CPU is of the type one would find in a tablet or low-powered laptop and is said to be about as powerful as an Ivy Bridge Intel Core i3. The GPUs are relatively powerful, with the PS4's GPU sitting on paper between the AMD 7850 and 7870 XT cards and the XB1's GPU sitting somewhere between the 7770 and the 7790. However, when you consider that the fastest GPU made by AMD, the 7970 GHz Edition, is roughly 2x as powerful, and that the fastest single GPU on the market today, the Nvidia GeForce Titan, is well over 2x as powerful, you can see how the raw specs of the next-generation systems are far from state-of-the-art.
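For a rough sense of scale, here is a back-of-the-envelope comparison using approximate public single-precision TFLOPS figures. The exact numbers vary by source and clock revision, so treat these as ballpark estimates rather than official specs:

```python
# Approximate single-precision TFLOPS figures circa 2013.
# All values are rough public estimates, not official benchmarks.
gpus = {
    "PS4":           1.84,  # ~18 CU GCN GPU
    "Xbox One":      1.31,  # ~12 CU GCN GPU
    "HD 7970 GHz":   4.30,  # AMD's fastest single GPU
    "GeForce Titan": 4.50,  # fastest single GPU on the market
}

for name in ("HD 7970 GHz", "GeForce Titan"):
    ratio = gpus[name] / gpus["PS4"]
    print(f"{name} is ~{ratio:.1f}x the PS4 GPU on paper")
```

Even by these crude numbers, the fastest PC cards sit a little over 2x ahead of the PS4's GPU, which is what the "far from state-of-the-art" claim amounts to.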

While many enthusiast gamers may scoff at the idea of next-generation systems not being able to deliver the best gaming experience (on paper) at the time of their release, I would say that people are not realizing just how much things have changed over the past decade or so. While I also love the idea of getting cutting-edge technology with the purchase of a brand new console, I realize now that "next generation" has less to do with hardware than it ever has with regards to gaming. In a world where everything is going mobile, smartphones and tablets rule the computing markets, and the most played games include Angry Birds, Candy Crush, and Minecraft, other aspects such as the OS, user experience, convenience, and accessibility have taken priority. Luckily, Sony and Microsoft have realized this, and so have many game developers. This doesn't mean that we won't still see some amazing cutting-edge game experiences. It's just that they will be delivered in ways we are not accustomed to.

Here are my top reasons why next gen console specs do not need to match top end PCs to deliver cutting edge game experiences:

3.) Current Consoles Still Look Good To Most Consumers

Most of the enthusiast crowd, and those who use high-end PC components, tend to think that we are the only people in the world. Since we tend to be the most passionate and the most willing to spend money, we want the biggest and most cutting-edge games all the time. The fact is that we are simply a "vocal minority". To the majority of people, the current Xbox 360 and PS3 HD graphics look more than adequate. They do not dissect the scene and notice the lack of anti-aliasing, low-resolution textures, or simplified post-processing when they see games like God of War: Ascension and The Last of Us. In fact, I recently had a high school student tell me that The Last of Us had the most realistic graphics he had ever seen! It's hard for someone who works in graphics to believe that a seven-year-old console can still amaze people to that degree. Yet I'm sure millions of people share his sentiment.

What this essentially says is that graphics are not as important as they used to be. On one hand, consumers aren't as interested in the absolute cutting edge as they once were. This can be seen in the success of the outdated Wii console as well as the obsession with mobile products today, which are far from state-of-the-art in terms of hardware specs. On the other hand, we are definitely approaching diminishing returns when it comes to graphics technology. Even with top-end PCs running the absolute best-looking games today, like Crysis 3 or Metro: Last Light, the average person will not see a drastic difference between those titles running on an Xbox 360 and on a Radeon 7970. Think about it like this: given that the gap in tech between the Xbox 360 and the 7970 is roughly 7 years, is the difference similar to that between a game running on the PS1 in 1995 and a game running on the PS2 in 2002? Not even close, which is largely due to diminishing returns as the tech advances. All this being said, it wouldn't be wise for Sony or Microsoft to prioritize graphics HW as the biggest investment for next-gen tech.

2.) Delivering A Console With Today's Cutting Edge Tech Is Not Feasible

Consider that when the Xbox 360 came out in November of 2005, the top-end graphics card on the market was the Nvidia 7800 GTX, which cost $599 and had a TDP of only ~80W. As of this writing, the top-end graphics card is still the GeForce Titan, which is $999 and has a TDP of ~250W! That's more power consumed by that single card than by the entire PS3 or Xbox 360 system while running a game. It is simply not feasible to squeeze something of that size, at that price, and that consumes that much power into a living-room-friendly console that generally runs at 200W or less and sells for $300-$500.
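To put those power figures in perspective, a quick sanity check with rough numbers (the TDPs are published specs; the whole-console envelope is an approximation of typical measured gaming loads):

```python
# Approximate power figures in watts; treat as ballpark estimates.
titan_tdp = 250         # GeForce Titan board TDP (2013 flagship)
gtx7800_tdp = 80        # Nvidia 7800 GTX TDP (2005 flagship)
console_envelope = 200  # typical whole-console power budget under load

# The 2013 flagship GPU alone exceeds an entire console's power
# budget, while the 2005 flagship fit comfortably inside it.
print(f"Titan vs. console budget: {titan_tdp / console_envelope:.2f}x")
print(f"Flagship GPU TDP growth, 2005 to 2013: {titan_tdp / gtx7800_tdp:.1f}x")
```

In 2005 the flagship card left well over 100W of headroom for the CPU, memory, optical drive, and power supply losses; in 2013 the flagship card by itself blows past the whole budget before anything else is powered on.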

The other thing to consider is that even though putting in the highest-end graphics of the time was much more practical at the Xbox 360's release, it still hurt both Sony and Microsoft to do so. While the Xbox 360 and PS3 GPUs did rival those of high-end PCs at the time, and their CPUs were much faster at games than standard PC CPUs, it came at a steep price. For Microsoft, it cost the company billions of dollars as a key contributor to the infamous Red Ring of Death, which was typically the result of internal parts being damaged by overheating. For Sony, it contributed to steep launch prices of $500 and $600 as well as a relatively bulky console design. Both companies are making a point of not repeating the same mistakes and are using smaller, more power-efficient components in the new boxes. A wise move indeed.

1.) Game Quality Is Software Limited Not Hardware Limited

If you've been paying attention to developers over the past several years, you may have noticed a pattern different from those of the past. When a developer complained about the Xbox 360 or PS3, it was never about not having the power to materialize their ideas. Instead, it was about the cost of making games, the need for better tools to take advantage of the HW, or the inefficiencies of the development SDK (mostly on Sony's side). In fact, as recently as this year, most developers have said that they still have not maximized the Xbox 360 and PS3, even after 8 and 7 years respectively. The high-end components, combined with the fundamental shift to multicore processors, meant that developing efficient and optimized software was a more time-consuming and arduous process than ever before. John Carmack, one of the pioneers of computer graphics and creator of the PC classic DOOM, said just a few weeks ago at QuakeCon 2013:

"Really, we could be doing great, innovative work even on the current generation for many more years yet. It's not like anybody's seen everything that you could do."

Mark Cerny, lead system architect for the PS4, said that during Sony's R&D efforts for the PS4, key developers' wishes were for more memory and better tools to enable them to develop richer games in less time. They specifically did not want a radical, uber-powerful CPU or GPU, because that would mean more time to ramp up the software. Besides, as I mentioned in item #3 above, the amount of hardware it would take to make a dramatic difference in graphics is simply not feasible. Instead, the demand was for a simplified and streamlined architecture that would make the system and hardware less of a barrier. The quantity and quality of titles on the PS3 in particular were fundamentally limited by the difficulty of developing cutting-edge titles for it. While games like Killzone 2, God of War III, and Gran Turismo 5 were technical leaders when they were released, it would have been preferable that they not take over 4 years each to develop. They were also the first releases for their respective developers on the system. In other words, by the time those games finally made it out, the PS3 generation was nearly 5 years in, the length of a traditional console cycle.

So, taking all of this into consideration and listening to developer wishes, it's clear that the bottleneck in game development is no longer the hardware like it used to be. Diminishing returns and lowered consumer standards mean that pushing for pure graphics technology does not make much business or practical sense for either company. In addition, the increase in HW cost and power, combined with those diminishing returns, means it would take a monumental leap in HW specs to deliver a jump in quality equivalent to what we saw in 2005. Today's consumers are more interested in the UI, feature set, ease of use, and responsiveness of their computing devices than in the raw specs. Thus, the next generation will be defined by improvements to the controllers, the OS, the UI, input mechanisms such as touch, voice, and motion, connectivity, the response time to get in and out of games, second-screen experiences, cloud computing, and virtual reality. Sure, the HW technology will advance as well, but in such a way that we will see roughly the same worlds as on the PS3 and Xbox 360, only bigger, denser, and more alive thanks to the additional memory and better physics processing both locally and via the cloud. Perhaps once developers are free from the shackles of the complicated and unique PowerPC architectures of the previous generation, they can truly let the software shine and speak for itself. Maybe by the PS5 we will be ready for a true leap in gaming technology. We can only wait and see.

