A long time ago, when narcissists could only stare at their mirrors for hours without being able to post thousands of self-portraits on Facebook, when every thought that crossed your mind was likely to die there without being tweeted the world over, the power consumption of graphics cards was not deemed important. End-users cared about the operating temperatures of their devices, and about their noise levels, but little more. Some of them did, however, engage in overclocking, and thus applications such as Furmark and OCCT were born. These made it easy to test the stability of overclocked cards by pushing them to their functional and thermal limits.
But gradually, consumer computing became more mobile, just as high-end graphics cards became ever more power-hungry, reaching and sometimes even exceeding 300W. Naturally, end-users started caring about power, and reviewers began searching for ways to better inform their readers. They turned to commonly used stress tests (e.g. Furmark and OCCT) and measured the power consumption of graphics cards while running them. For a while, this proved useful: it gave consumers an upper bound for power draw (give or take a few watts, to account for the natural variability from sample to sample).
But hardware vendors were well aware of the increasing importance of power, and therefore started adding increasingly sophisticated control mechanisms meant to limit power to a set level. When these were first introduced, reviewers noted that they did indeed cap power at the specified level, apparently without giving the matter much more thought. By now, however, most of them have realized that power control mechanisms such as AMD's PowerTune effectively make stress tests irrelevant, since such tests no longer truly stress GPUs. At best, they still provide readers with an upper bound for power, but that bound happens to be, give or take a few watts, the card's thermal design power, which is not a very helpful piece of information.
In reaction, most reviewers decided to test power consumption in real video games instead, thus giving a more realistic idea of what cards may draw in real-world scenarios. But as Damien Triolet showed, the power draw of different cards relative to one another may differ significantly from one game to another. More specifically, AMD cards seem to consume more power in Anno 2070 than in Battlefield 3, relative to their NVIDIA counterparts. A careful observer will further note that AMD cards also perform better in Anno. One can therefore suppose that they reach higher occupancy of their computing units in this game, which leads to higher performance, but also to higher power consumption. Finally, even though their power consumption increases in Anno, their power-efficiency (relative to GeForces) increases as well. This makes sense, as it is generally better to have busy transistors than idle ones sitting around and doing nothing more than leaking current. So higher performance in a given game tends to lead to higher efficiency as well (relative to the competition). In other words, performing well remains a good thing. That is reassuring, but one problem remains: how can we determine the real power-efficiency of graphics cards over a broad range of games?
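To put a rough number on that efficiency point, here is a minimal sketch with entirely made-up figures (they are not measurements from any review): a card can draw more power in one game than in another and still come out ahead in frames per watt, as long as its performance rises faster than its power draw.

```python
# Entirely made-up numbers, for illustration only: a card that draws more
# power in one game than in another can still be more efficient there,
# provided its performance rises faster than its power draw does.
scenarios = {
    "Game A (lower occupancy)": {"fps": 50, "power_w": 160},
    "Game B (higher occupancy)": {"fps": 65, "power_w": 185},
}

for name, s in scenarios.items():
    print(f"{name}: {s['fps'] / s['power_w']:.3f} frames per watt")

# Game B draws roughly 16% more power but delivers 30% more frames per
# second, so frames per watt ends up higher despite the higher power draw.
```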
Perhaps there are trends: certain engines and certain types of games may tend to favor one architecture more than the other, and perhaps good representatives exist for each "class" of games, which could then be used for power testing. But to my knowledge, no one has identified them yet, if they do indeed exist. And that is not the only problem. While most reviewers do not specify the exact nature of the power figures they present, I believe they generally give the maximum instantaneous power draw recorded. This is somewhat useful, as it gives an idea of the kind of power supply unit needed to feed a given card, but it does not guarantee that no game will ever require more power. More importantly, it does not tell us which card consumed more energy over the length of the benchmark. For instance, a card X may have drawn an average of 150W with a peak of 170W, while a card Y drew an average of 130W with a peak of 185W. Card Y may require a slightly more powerful PSU, but it is nevertheless more energy-efficient than card X.
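To make the difference between peak power and energy concrete, here is a minimal sketch built on the hypothetical cards X and Y above; the ten-minute benchmark length is an assumption chosen purely so the energy figures come out as concrete numbers.

```python
# The power figures come from the hypothetical cards X and Y above; the
# ten-minute benchmark duration is an assumption made only for illustration.
BENCHMARK_SECONDS = 10 * 60

cards = {
    "Card X": {"average_w": 150, "peak_w": 170},
    "Card Y": {"average_w": 130, "peak_w": 185},
}

for name, c in cards.items():
    energy_j = c["average_w"] * BENCHMARK_SECONDS  # E = P_avg * t
    print(f"{name}: peak {c['peak_w']} W, "
          f"energy {energy_j} J ({energy_j / 3600:.1f} Wh)")

# Card X: peak 170 W, energy 90000 J (25.0 Wh)
# Card Y: peak 185 W, energy 78000 J (21.7 Wh)
# Card Y needs the beefier PSU, yet it consumes less energy over the run.
```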
The only possible conclusion is that reviewers ought to measure the total energy consumed by each card they test, in each game they test; otherwise, their power consumption figures give only a very approximate, and possibly misleading, picture of reality. This does, of course, increase their workload significantly, but the observations above lead me to believe that it is necessary.
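For what it is worth, turning a power log into an energy figure is not much extra work once the samples have been recorded. The sketch below assumes a hypothetical two-column CSV of timestamped samples (seconds, watts); any power meter or logging software that records samples over time would do.

```python
# Sketch: compute total energy from a timestamped power log. The
# "seconds,watts" CSV format is hypothetical; adapt it to whatever the
# power meter or logging software actually produces.
import csv

def total_energy_joules(log_path: str) -> float:
    """Integrate power over time with the trapezoidal rule (E = integral of P dt)."""
    times, powers = [], []
    with open(log_path, newline="") as f:
        for row in csv.reader(f):
            times.append(float(row[0]))
            powers.append(float(row[1]))
    energy = 0.0
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        energy += 0.5 * (powers[i] + powers[i - 1]) * dt
    return energy

# Usage (hypothetical file name):
# e = total_energy_joules("card_x_anno2070_run1.csv")
# print(f"{e:.0f} J over the benchmark ({e / 3600:.2f} Wh)")
```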
PS: Scott Wasson from The Tech Report has been using a rather innovative methodology for performance testing since last September. It is detailed here, it is very good, and I think every reviewer should adopt it. I do not know how he would feel about others doing so, but he should welcome it: after all, imitation is the sincerest form of flattery, and good ideas are meant to be spread.
I should also note that while this entry only mentions graphics cards, that is only because they can draw up to 300W, and sometimes even more in the case of some of the crazier, low-volume models. Most of what I said holds true for CPUs as well, or really for just about any component.