Ironically, Intel's goal is to put half the power required to run a Star Trek holodeck on a 7 W chip; they just slap a little piece of metal on top and call it a day. What these figures really indicate is how Dr. Frankenstein's newest AI requires ridiculous amounts of power. Intel, and everyone else, is creating little crystal goblets on a chip that can resonate at high frequencies. The situation has gotten so bad, with demand outstripping supply so fast, that Intel is now selling "delidded" chips, so you can do this at home with a lot less effort. I tell people we are still living in Dr. Frankenstein's lab, and electricity is actually the hard way to do most of the things we use it for.
This MSN article is along the same lines, and illustrates how Moore's Law has become a race to put more intelligence on chips, rather than merely shrinking the transistors. It's already becoming possible to put a serious AI on a thumb drive and, even without using optical circuitry, these guys can get 1,000x faster by simply hardwiring the slowest software. Jensen Huang has nightmares about someone popping Nvidia's bubble, but the real issue is becoming how much intelligence you can add to the system, not how fast you can crunch the numbers.
What a lot of people don't realize is how little clock frequency impacts performance. If you have a 1 GHz CPU, you can perform a multiplication every few nanoseconds (each multiplication takes more than one clock cycle). To go twice as fast you can jump to 2 GHz, or you can just perform two multiplications simultaneously. The second solution doubles your energy consumption, but doubling the clock frequency increases energy consumption by roughly 4x, because the higher frequency also requires a higher supply voltage and dynamic power scales with frequency times voltage squared. So a 9 GHz chip is more of a publicity stunt than an actual engineering choice. For AI especially that's stupid, because you can go so much faster by performing concurrent operations (so-called SIMD instructions) instead of cranking the clock frequency up.
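Here is a rough back-of-envelope sketch of that tradeoff, using the standard dynamic-power model P ~ C·V²·f. The voltage and capacitance values are assumptions chosen purely for illustration, not measurements of any real chip:

```python
# Rough model of dynamic CPU power: P ~ C * V^2 * f
# All values below are assumed, illustrative numbers -- not measurements.

def dynamic_power(capacitance, voltage, frequency_ghz):
    """Dynamic switching power, proportional to C * V^2 * f."""
    return capacitance * voltage**2 * frequency_ghz

C = 1.0                                         # normalized switched capacitance
baseline = dynamic_power(C, 1.0, 1.0)           # one core at 1 GHz, 1.0 V

# Option 1: double throughput with two cores / SIMD lanes at the same clock.
parallel = 2 * dynamic_power(C, 1.0, 1.0)       # ~2x the energy

# Option 2: double throughput by doubling the clock, which (roughly)
# also requires raising the supply voltage, say from 1.0 V to 1.4 V.
overclocked = dynamic_power(C, 1.4, 2.0)        # ~4x the energy

print(f"baseline:    {baseline:.2f}")
print(f"2x parallel: {parallel:.2f}  ({parallel / baseline:.1f}x)")
print(f"2x clock:    {overclocked:.2f}  ({overclocked / baseline:.1f}x)")
```

With those assumed numbers, doubling via parallelism costs about 2x the energy, while doubling via clock speed costs about 3.9x, which is the point being made above.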
It's all about multiplexing everything, because that's what the market demands. Current processors use transistors so expensive that, for the same money, you can add a lot of cheap memory made with cheaper transistors instead. However, new kinds of cheap memory are on the way that will be fast enough to begin keeping up with the processors. Because it really is all about multiplexing, someone recently found a way to potentially double the performance of any existing computer. Likewise, everyone will eventually switch to MX2 topological motherboards, with 2D ballistic electrons, ensuring the chips themselves don't slow each other down.
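A minimal sketch of why memory speed matters so much, using a simple roofline-style estimate. The peak-FLOPS and bandwidth figures below are assumptions for illustration only:

```python
# Simple roofline-style estimate: a chip is either compute-bound or
# memory-bound, so faster memory (or better multiplexing of accesses)
# raises the ceiling for low-arithmetic-intensity work like AI inference.
# All numbers below are assumed, illustrative values.

peak_flops = 50e12        # 50 TFLOPS of compute (assumed)
mem_bandwidth = 100e9     # 100 GB/s of memory bandwidth (assumed)

def attainable_flops(flops_per_byte):
    """Attainable throughput for a kernel with a given arithmetic intensity."""
    return min(peak_flops, mem_bandwidth * flops_per_byte)

for intensity in (1, 10, 100, 1000):  # FLOPs performed per byte fetched
    print(f"{intensity:5d} FLOPs/byte -> {attainable_flops(intensity) / 1e12:6.1f} TFLOPS")
```

Under those assumptions, anything that doesn't do hundreds of operations per byte fetched is limited by the memory, not the processor, which is why cheap fast memory changes the picture.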
Nice. I am interested in how it works. But generally speaking, if you look at publications, you see new architecture proposals coming out every day that promise to drastically increase performance. They rarely get to commercial use, though, because they often either require so much change that the benefits are not worth the cost of implementing them, or by the time they are commercially viable the "traditional" approaches have caught up. It is a bit frustrating, but such solutions are rarely actually implemented. A good example of this would be in-memory processing...
The guy merely figured out how to multiplex the chips on a motherboard better, so they play nice. By the time it's possible to download a program that does that for your computer, it could already be incorporated into the next generation of chips. You can bet your sweet ass the chip manufacturers will employ it, because it's the idiots they sell the chips to who keep producing crappy computers. They know people will buy whatever has "Intel" inside, never realizing it has really slow, crappy memory or whatever, and they're just slapping it together as fast as they can.

Electronics are among the most deregulated of markets, so Intel and everyone else has been working towards producing chips that are idiot-proof, containing even their own power supply regulators, and putting them on motherboards you can't fuck up either. The processor might cost $300.00 or more, and the rest of the box $50.00. It took them forever just to get the power supply manufacturers to make decent cheap power supplies. Everyone is gouging each other as much as their customers. With any luck, the newest organic circuitry will make consumer electronics dirt cheap and reliable, crap you can simply print 40,000 of overnight like so many newspapers. Still, that's a decade away at least.

AMD and Intel are going for their own "ALL-IN-Wonder!" chip. AMD's newest design is three chips in a row, with two "winglets" on the outside, to keep more of the heat from the input and output off the damn thing. Their newest supercomputer chips contain as much as 256 GB of memory or more and, once the price of memory comes down, they'll slap cut-down versions into our computers. It's a stupid chip that might require 90 W in your laptop, but it can run the Star Trek holodeck.

All the measurements seem to indicate 260 TFLOPS is just right for doing whatever the hell you want, but at the same time everyone is constantly chipping away at that, reducing the processing demands. I'd say about 120 TFLOPS is what you might want in the near future, and software can easily cut that in half. Hell, half the damn crap is already being hardwired onto laptop chips. The technology is changing so fast it would be pointless for the government to try to speed it up any further; all they need to do is make sure Intel and everyone else doesn't slow down. Throw enough money at anything, and it gets done.
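To put a TFLOPS figure like that in context, here is a crude back-of-envelope. The model size, utilization, and the 2-FLOPs-per-parameter-per-token rule of thumb are all assumptions for illustration, not a claim about any particular chip:

```python
# Crude estimate of what ~120 TFLOPS buys you for local AI inference.
# All parameter values are assumed for illustration.

peak_tflops = 120      # the near-term figure mentioned above
utilization = 0.3      # fraction of peak FLOPS actually achieved (assumed)
params = 70e9          # a 70B-parameter model (assumed)

# Transformer inference needs roughly 2 FLOPs per parameter per generated
# token (ignoring memory-bandwidth limits, which often dominate on laptops).
flops_per_token = 2 * params
tokens_per_sec = peak_tflops * 1e12 * utilization / flops_per_token

print(f"~{tokens_per_sec:.0f} tokens/s from a 70B model")
```

Even with those conservative assumptions it comes out to a couple hundred tokens per second, so 120 TFLOPS really is plenty for the kind of local AI people actually want to run.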