For quite some time now, Microsoft has been steadily revealing its new Windows requirements for AI computing; Windows 12, for example, is expected to require 16 GB of RAM. At the same time, they're rolling out bandwidth improvements for gaming and video. These are all planned improvements, and they illustrate how badly multiplexed our current computers are.

Nvidia's newest graphics cards, for example, have their own co-processors and can spool data off the drive or out of RAM without asking permission from the CPU. What they're all building up to is known as "unified memory." The more crap they can shove onto a chip, the more they have to multiplex all the memory that goes with it, and the more distributed computing they have to use.

We could, for example, have four distinct types of memory: one specifically for geometry that also handles common repetitive geometric calculations, another for tensors, and so on. It's the architecture that matters, not the hardware. AMD is leading in the geometry department, Nvidia owns tensors, and what they all require is the math I'm still working on. The trick is to treat each datum like an autonomous agent and introduce virtuous feedback loops, but it's scalar logic they can't begin to comprehend.
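For a concrete picture of what "unified memory" already looks like on the software side, here is a minimal CUDA sketch, assuming an Nvidia GPU and the standard CUDA runtime (nothing Microsoft has announced): a single allocation is addressable from both the CPU and the GPU, and the driver migrates the data on demand instead of the programmer issuing explicit copies. The kernel and sizes are purely illustrative.

```cuda
// Minimal sketch of CUDA unified (managed) memory: one pointer shared by
// host and device, with page migration handled by the driver.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float *data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;   // GPU writes through the same pointer the CPU uses
}

int main() {
    const int n = 1 << 20;
    float *data = nullptr;

    // One allocation, visible to both CPU and GPU -- no explicit cudaMemcpy.
    cudaMallocManaged(&data, n * sizeof(float));

    for (int i = 0; i < n; ++i) data[i] = 1.0f;      // CPU initializes

    scale<<<(n + 255) / 256, 256>>>(data, n, 2.0f);  // GPU reads and writes
    cudaDeviceSynchronize();                         // wait before the CPU touches it again

    printf("data[0] = %f\n", data[0]);               // CPU reads the GPU's result: 2.0
    cudaFree(data);
    return 0;
}
```

The point of the sketch is just that the CPU and GPU never negotiate a copy; the hardware and driver multiplex one pool of memory behind a single address, which is the direction the comment above is describing.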