NVIDIA Quietly Adds Experimental Multi-GPU Rendering Mode To Its GeForce Drivers | HotHardware

This is for SLI, but the same technology can easily be used for a multi-GPU SoC, which is what Nvidia claims is coming next and Intel is working on as well. So far, multi-GPU SoCs have not been produced for consumer products because of the cost of the high-speed memory required. Currently, about 11 GB of memory is as much as anyone could ask for in a graphics card, but if you gang four GPU chips together, something like 64 GB would be ideal. That's a lot of expensive memory right now, but you can expect the cost to come down considerably over the next six years.

Note that you could also use this method to gang two GPU chips together with two arithmetic accelerators such as Nvidia's tensor cores. That might give you something like 32 TFLOPS of rasterization and 160 TFLOPS of ray-tracing capacity on a single SoC, which is far more than anyone currently knows what to do with. Increasing the size of the GPU's rasterization circuitry also means it can more often take on calculations the ray-tracing circuitry might otherwise handle, such as complex physics.

Like this checkerboard rendering approach, the idea is to spread the load around on demand. Physics can also be handled with ease on something like an 8-core processor, which is certain to eventually become standard for graphics simply because it can do such calculations, with chips these days holding up to 30 billion transistors. All those billions of transistors are there just to make your computer go a little bit faster, because they're basically cheap to make. That this checkerboard rendering can also provide anti-aliasing is a big plus, since the need for it goes up along with the resolution.
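To make the load-spreading idea concrete, here is a minimal sketch of how a checkerboard split could divide a frame between two GPUs. The tile size, frame resolution, and GPU count are assumptions for illustration only; NVIDIA hasn't published how its driver actually partitions the work.

```python
# Hypothetical sketch of checkerboard frame rendering (CFR) tile assignment.
# Tiles alternate between GPUs in a checkerboard pattern, so each GPU owns
# roughly half the pixels no matter where the expensive geometry lands.

FRAME_W, FRAME_H = 3840, 2160   # assumed 4K frame
TILE = 256                      # assumed tile size in pixels
NUM_GPUS = 2                    # an SLI pair, or two dies on a multi-GPU SoC

def tiles_for_gpu(gpu_index):
    """Yield the (x, y) origin of every tile owned by one GPU."""
    for ty in range(0, FRAME_H, TILE):
        for tx in range(0, FRAME_W, TILE):
            row, col = ty // TILE, tx // TILE
            if (row + col) % NUM_GPUS == gpu_index:
                yield tx, ty

# Quick check that the frame is divided roughly evenly.
for gpu in range(NUM_GPUS):
    print(f"GPU {gpu}: {len(list(tiles_for_gpu(gpu)))} tiles")
```

The point of the checkerboard layout is that expensive regions of the screen (lots of geometry, heavy shading) get interleaved across both GPUs instead of landing entirely on one, which is what tends to happen with the older split-frame approach.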