The Google Pixel 7 and Google Pixel 7 Pro were released earlier this month, amid a wave of fanfare. Amongst a host of new software features, the duo featured the first public appearance of the Tensor G2 chip, Google's latest in-house design.
Before the phones were released, the Tensor G2 took a lot of heat from commentators online. Why? Benchmark results suggesting a rather weedy performance upgrade had leaked.
Weeks before any members of the general public would have a chance to get their hands on these devices, many had written them off altogether based on the scores the chip achieved in an online test. It's fair to say it felt a little premature.
Now, Google's Senior Director of Product Management, Monika Gupta, has said that Google is "perfectly comfortable" not winning benchmark tests. On a podcast with 9to5Google, she said, "I think classical benchmarks served a purpose at some moment in time, but I think the industry has evolved since then. They may tell some story, but we don’t feel like they tell the complete story."
It's a statement that makes sense. The Pixel 7 launch showcased a host of AI-powered features, and that kind of software isn't measured by a benchmark, yet it can improve the overall experience for users.
So, do we as consumers need to pay less attention to benchmarks? I think so, and here's why.
A good benchmark doesn't make a good phone
Benchmark testing is very specific. In essence, the software gives your device a handful of tasks and times how quickly it completes them. That produces a score for both multi-core and single-core performance. But it's hardly perfect.
Benchmarks are designed to test what the CPU can do, but they do so by pushing it to its extreme limit for a short period of time. Real-world usage requires sustained performance at varying levels of power. No consumer is going to run their device at the absolute peak of its processing capability for a few seconds at a time, so testing a device like that isn't very useful. It's like testing how good a marathon runner is by making them run a 100m sprint.
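The mechanics described above can be sketched as a toy benchmark in Python. Everything here is illustrative and hypothetical, not the method of any real benchmark suite: it simply times a short, fixed CPU-bound task and turns the elapsed time into a score, once on a single core and once spread across worker processes. Note how brief the measured burst is compared with hours of real-world use, which is exactly the limitation discussed above.

```python
import time
from concurrent.futures import ProcessPoolExecutor

def workload(n=200_000):
    # A fixed CPU-bound task: sum of squares up to n.
    # Real suites use mixes of compression, crypto, image work, etc.
    return sum(i * i for i in range(n))

def single_core_score():
    # Time one run of the task; faster completion means a higher score.
    start = time.perf_counter()
    workload()
    elapsed = time.perf_counter() - start
    return 1.0 / elapsed

def multi_core_score(workers=4):
    # Run the same task on several processes at once and time the batch.
    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=workers) as pool:
        list(pool.map(workload, [200_000] * workers))
    elapsed = time.perf_counter() - start
    return workers / elapsed

if __name__ == "__main__":
    print(f"single-core score: {single_core_score():.0f}")
    print(f"multi-core score:  {multi_core_score():.0f}")
```

Even this toy version shows the problem: the whole run lasts well under a second, so it says nothing about thermal throttling or battery drain over sustained use.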
And Gupta is right: the industry has evolved, and raw CPU power is no longer the defining characteristic of the best phones. AI and machine learning mean that devices can be more efficient and adapt to make better use of the processing power they have.
Benchmark testing isn't entirely useless, though. It's a great way of comparing raw CPU performance, which can be useful, particularly as more and more people buy tech products online without trying them first.
But we need to reshape the narrative around what a benchmark means. Rather than treating it like the gold standard of whether or not a phone is worth your time, we need to treat it as one method of testing one aspect of a device.