The Bloat of Computer Hardware

Posted in Eng/Sci/Tech, Opinions


A few weeks ago, I was ridding my graphics card of an ingested dust bunny. An enormous dust creature had taken up residence in the aluminum heat fins that cool the graphics processor of my video card (a GTX 460). While it was mostly an annoyance and wouldn’t have done the card much harm, it got me thinking that modern computing paradigms are rather… well, American.

Graphics cards themselves are relatively small beasts: just thin slabs of printed circuit board carrying memory chips, the graphics processor, and assorted power-conditioning bits. What makes their design problematic, in my opinion, is that they would be virtually useless without enormous radiators bolted on to cool power-slurping processors that sometimes draw more power than a halogen floor lamp.

In the perpetual war between AMD and nVidia to win the hearts and wallets of gamers and video professionals around the world, video cards have been getting more and more powerful. The processors they employ, which are essentially millions of silicon filaments packed into a tiny rectangle, are being pushed harder and faster while being made smaller and smaller, so that even more of those filaments (transistors) can be packed into the same block of semiconductor. Although advances are always being made to let processors run on less power, and thus emit less heat, the rate at which chip designers scream “MORE POWER” dominates the computer engineering landscape. The one place where this rule generally doesn’t hold is mobile devices; otherwise iPhones would turn into iHeatCoffeeCoasters. Laptops will still scald you, though.
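To put rough numbers on that tug-of-war, here’s a back-of-envelope sketch in Python using the standard dynamic-power relation (power per transistor scales roughly with capacitance × voltage² × frequency). Every scaling factor in it is a made-up illustrative value, not real process data, but it shows how per-transistor power can fall every generation while whole-chip power still climbs:

    # Back-of-envelope: why total chip power keeps climbing even though
    # each individual transistor gets more efficient every generation.
    # Dynamic switching power per transistor ~ C * V^2 * f.
    # All numbers below are illustrative assumptions, not real process data.

    transistors = 1.0e9   # hypothetical starting transistor count
    cap         = 1.0     # relative per-transistor capacitance
    voltage     = 1.0     # relative supply voltage
    freq        = 1.0     # relative clock frequency

    for gen in range(1, 5):
        transistors *= 2.0   # pack in twice as many transistors ("MORE POWER")
        cap         *= 0.7   # smaller transistors switch less charge
        voltage     *= 0.95  # voltage barely budges (leakage sets a floor)
        freq        *= 1.1   # and clocks still creep upward

        per_transistor = cap * voltage**2 * freq               # relative to gen 0
        total_power    = transistors * per_transistor / 1.0e9  # relative to gen 0

        print(f"gen {gen}: per-transistor power x{per_transistor:.2f}, "
              f"whole-chip power x{total_power:.2f}")

Per-transistor power drops by roughly a quarter each generation in this toy model, yet the chip as a whole ends up drawing several times more after a few generations, which is the trend the paragraph above is complaining about.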

While I’m not saying that high-end computer technology doesn’t work (it does, stunningly well), I am saying that it’s an unsustainable design paradigm. It’s like trying to breed bigger tomatoes: you can do it, but you’re creating a plant that would die without life support in the form of a tomato cage. Graphics card manufacturers are having to get creative to keep their products functional when pushed to their limits. They already commonly employ multi-heatpipe heatsinks to dissipate heat. Heatpipes are essentially copper straws containing a volatile liquid that vaporizes on contact with a hot GPU; the vapor is free to flow anywhere in the tube, carrying heat away far faster than conduction through metal plates. In some cases, manufacturers stick multiple fans on these heatsinks. And if they’re feeling super-unique, they’ll make the radiator a single, giant heatpipe that resembles a flattened metal mushroom filled with heat-sucking liquid goodness.
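To get a feel for why heatpipes beat plain metal, here’s a quick Fourier’s-law comparison (Q = k·A·ΔT/L) of a solid copper bar against a heatpipe of the same size. The geometry is made up and the heatpipe’s “effective conductivity” is just a commonly quoted ballpark, so treat the result as an order-of-magnitude illustration:

    # Rough comparison: heat carried by a solid copper bar vs. a heatpipe
    # of the same dimensions, using Fourier's law Q = k * A * dT / L.
    # Geometry and the heatpipe's effective conductivity are ballpark
    # assumptions for illustration only.

    k_copper   = 400.0     # W/(m*K), bulk copper
    k_heatpipe = 20000.0   # W/(m*K), a commonly quoted "effective" figure
    area       = 3.0e-5    # m^2  (~6 mm diameter pipe cross-section)
    length     = 0.15      # m    (GPU die to the far end of the heatsink)
    delta_t    = 40.0      # K    (hot GPU end vs. cool fin end)

    def heat_flow(k):
        """Steady-state conductive heat flow through a bar, in watts."""
        return k * area * delta_t / length

    q_copper   = heat_flow(k_copper)
    q_heatpipe = heat_flow(k_heatpipe)

    print(f"solid copper bar: ~{q_copper:.1f} W")
    print(f"heatpipe:         ~{q_heatpipe:.1f} W "
          f"({q_heatpipe / q_copper:.0f}x more)")

With these assumed numbers the solid bar moves only a few watts while the heatpipe moves on the order of a hundred, which is why even a modest cooler sprouts several of them.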

Yes, these engineered designs do work, and they keep your expensive slabs of silicon from melting, but it’s a very brute-force solution, thermodynamically speaking. It doesn’t change the fact that the graphics processors nVidia and AMD design run HOT. And in the long run, thermal performance will degrade as dust invades your computer. So is there a solution?

In the near future, probably not. Our ability to manufacture more power-efficient chips using tinier transistors is limited, and it doesn’t get around the fact that GPUs generate a lot of heat in a small area. Manufacturers have to pack the cores of a graphics processor close together to facilitate rapid communication, and also to conserve silicon (and money). But eventually, advances in optical computing may allow cores to be spread apart, dissipating heat more efficiently and leading to graphics cards that aren’t getting more and more obese as time goes on… or it’ll just give nVidia another excuse to scream “MORE POWER!”
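To make “a lot of heat in a small area” concrete, you can divide a card’s power draw by its die area and compare that power density to everyday hot things. The wattages and areas below are rough figures, good to maybe a factor of two, so the output is order-of-magnitude only:

    # Order-of-magnitude power density: how much heat per square centimetre
    # a GPU die has to shed, compared with a kitchen stove burner.
    # The wattages and areas are rough assumptions, not spec-sheet values.

    gpu_power_w      = 160.0   # approximate power under load, watts
    gpu_die_area_cm2 = 3.3     # approximate die area of a GTX-460-class GPU

    burner_power_w   = 1500.0  # typical electric stove burner
    burner_area_cm2  = 180.0   # ~15 cm diameter heating element

    gpu_density    = gpu_power_w / gpu_die_area_cm2
    burner_density = burner_power_w / burner_area_cm2

    print(f"GPU die:      ~{gpu_density:.0f} W/cm^2")
    print(f"stove burner: ~{burner_density:.0f} W/cm^2")
    print(f"the die runs at ~{gpu_density / burner_density:.0f}x "
          f"the power density of the burner")

Even with generous rounding, the die ends up several times denser in heat output than a stovetop burner, which is why the cooling hardware dwarfs the chip itself.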

3 thoughts on “The Bloat of Computer Hardware”

  1. You should compare why CPUs have much higher clocks yet generally run at much lower temperatures. Most of this lies in the architectural differences between the two: GPUs can take huge loads of data and execute it in parallel, while the CPU is not meant to do this. For example, CPUs nowadays have 4-6 cores, while GPUs have hundreds (your GTX 460 has 336 CUDA cores). Of course those cores are a lot weaker and can only do a set number of things, but this is why the GPU produces more heat: it has more transistors at about the same manufacturing size. The GTX 460 has 3,000,000,000 transistors while an i7 has 1,170,000,000. Although the CPU’s clock is about 3 times higher, it still doesn’t compare to how power-hungry the GPU’s transistors truly are. Example: a modern CPU (stock) at 45 nm and 3.4 GHz has a load temp of ~60 C, versus a modern GPU (stock) at 40 nm and 800 MHz with a load temp of ~80 C. They each have their own advantage; GPUs are regarded as better candidates for supercomputers because of their vast multi-core design.
    Rather than looking to manufacturing to reduce heat, many companies have been looking toward better architectures and instruction sets. The CPU and GPU are slowly but surely going to combine into one chip (an SoC) in the near future. As you can see with the Intel chips, the new Core i-series places a “graphics processor” on the CPU die and can really pack a punch by using parts of the CPU to optimize performance in graphics processing. AMD has taken a different approach with its Fusion chip, which is a true CPU/GPU hybrid. We can also see contenders in ARM chips, whose architecture was specifically designed for low-power devices.
    GPU architecture is not as developed as CPU architecture, but to be honest… there are huge changes in how graphics processing is being handled by Intel, ARM, AMD, and Nvidia. Only time will tell, but I am sure everything we know about the GPU will change very soon.

  2. That’s a very good point about how they have completely different process-execution styles, and it’s largely a function of how parallelizable their tasks are. That in turn affects the ultimate transistor count: 3 billion or so in a modern GPU, and a billion and a half or so in a CPU (I think). It will be interesting to see just how well Intel, AMD, and nVidia can position themselves in the future… unless ARM pulls a fast one.

    And yeah, the state of hardware is liable to change in the near future. There are computer manufacturers who already consider integrated graphics “good enough” for mainstream consumption (“psht”). But with nVidia doing pretty well with GPGPUs, Intel improving its HD x000 graphics with every generation, and the iPad driving more pixels than most consumer computers… it will be interesting.
