https://semiaccurate.com/2018/10/22/intel-kills-off-the-10nm-process/

This is just a rumor at this point, but it comes from a widely respected rumor mill, and it is too detailed and believable to be easily ignored. Intel has delayed its 10nm chips for at least three years now, while other manufacturers are beginning to produce comparable parts. TSMC's 7nm is notable for being particularly cheap, and it is only called 7nm because everyone plays with the numbers a bit; they are all working at roughly the same scale.

EUV is extreme ultraviolet light, and it was every engineer's dream in the business to cheaply and easily use EUV to print the smallest features imaginable on a chip without getting down to the single-molecule scale. The problem is that it turned out to be a bitch to get working, and, apparently, the foundries that build chips for AMD and Nvidia have beaten Intel to the punch for a change. That's huge news, because those foundries have generally trailed Intel's fabrication technology by at least a couple of years.

The industry is now approaching the known practical limits of Moore's Law for silicon, and we should see more price competition, since Intel is no longer so far in the lead and has shifted most of its investment toward AI. The Chinese are the ones to watch in the race to dominate the low-cost silicon market, but everyone is waiting with bated breath for the other shoe to drop and for someone to announce next-generation technology. IBM, HP, and who knows who else have all been working on the next generation of computer architecture, but chip stacking and, perhaps, optical waveguides are where the real money looks likely to be made in the near term; we will have to wait and see. There is no reason, for example, that you could not in theory stack a pile of dirt-cheap chips made on older nodes like 40nm and build a dirt-cheap supercomputer the size of a magic marker, as the rough sketch below suggests. But the ability to add optics and AI to modern computers will decide the future of the internet.
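To put a rough number on that stacked-chip idea, here is a minimal back-of-envelope sketch in Python. Every figure in it (die count, GFLOPS per die, watts per die) is an assumed placeholder rather than the spec of any real part, and it deliberately ignores the interconnect, cooling, and memory-bandwidth problems that make stacking genuinely hard.

# Back-of-envelope only: aggregate throughput of many cheap, older-node dies
# packed together. Every figure here is an assumed placeholder, not the spec
# of any real chip; interconnect, cooling, and memory are deliberately ignored.

def aggregate(dies, gflops_per_die, watts_per_die):
    """Total GFLOPS and watts for a naive stack of identical dies."""
    return dies * gflops_per_die, dies * watts_per_die

dies = 200  # assumed: how many thinned dies might fit in a marker-sized package
total_gflops, total_watts = aggregate(dies, gflops_per_die=10.0, watts_per_die=0.5)
print(f"{dies} dies -> {total_gflops / 1000:.1f} TFLOPS at ~{total_watts:.0f} W")

The point is only that the arithmetic scales linearly; the engineering, not the math, is the obstacle.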
According to this source, Intel has denied that it is giving up on its 10nm process. What remains undeniable, and what the rumor mills may be deliberately stirring up, is that Intel, Nvidia, and Microsoft have been steadily trying to dominate the market, and their prices have never been higher. Half the industry is taking the opportunity to dump on them for attempting to gouge customers even worse than Apple. Make no mistake, this is a huge fight building up to the release of the next-generation consoles in 2020 at the latest.

This video also covers the newest rumors about AMD's Navi, which appears to be shooting for at least the power of a GTX 1080 on a single chip for around $250, which would make it an ideal card for 1440p and a lot of VR applications. Roughly speaking, you need around 14 TFLOPS for a rasterized engine to drive 4K at high frame rates, and Nvidia, Intel, and AMD all seem to be squeezing as much as they can onto one chip for now, with plans to combine chips later for double the power. Within a year or two, we could see consumer graphics cards with over 20 TFLOPS of raw compute, enough to run just about anything imaginable. Tensor cores and other dedicated accelerators will add something approaching Star Trek holodeck territory on top of the rasterizing and compute engine.

Once that kind of raw power is cheap and ubiquitous, prices will have to come down even further, because people will be more reluctant to pay a premium. AMD's long-term goal will no doubt include putting Navi on a chip with a Ryzen processor to see how much performance they can get out of the combination at 30 watts. Within five years we should see even laptops shipping with roughly enough power to run just about any small business.
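For reference, here is how TFLOPS figures like those are normally derived: peak FP32 throughput is shader count times two operations per clock (a fused multiply-add) times clock speed. The GTX 1080 numbers below are its published shader count and boost clock; the 14 and 20 TFLOPS rows use assumed, hypothetical configurations purely to show the scale involved, not announced products.

# Quick arithmetic behind the TFLOPS figures above. FP32 throughput for a GPU
# is usually quoted as: shader_count * 2 ops/clock (fused multiply-add) * clock.

def fp32_tflops(shaders, clock_ghz):
    """Peak single-precision TFLOPS assuming one FMA (2 FLOPs) per shader per clock."""
    return shaders * 2 * clock_ghz / 1000.0

# GTX 1080: 2560 shaders at ~1.73 GHz boost -> roughly 8.9 TFLOPS.
print(f"GTX 1080  : {fp32_tflops(2560, 1.73):.1f} TFLOPS")

# Hypothetical configurations (assumed shader counts and clocks) that would
# land near the 14 and 20+ TFLOPS targets mentioned above.
print(f"~14 TFLOPS: {fp32_tflops(4096, 1.71):.1f} TFLOPS")
print(f"~20 TFLOPS: {fp32_tflops(5120, 2.00):.1f} TFLOPS")

Peak numbers like these ignore memory bandwidth and real-world utilization, so delivered performance is lower, but they are the yardstick the rumors quote.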