This was an April Fools joke, but it illustrates some of the potential deep learning capabilities of Nvidia's "Tensor Cores", scheduled to be released this summer in their Ampere video cards, the first affordable consumer graphics cards to come with the technology. Using a USB 3.0 thumb drive would be an incredibly slow way to implement AI on existing home computers, but having the circuitry integrated into the graphics card itself means the results can be calculated without slowing down the rendering pipeline, then sent to the CPU for processing over the much faster x16 PCIe lanes. The basic PC architecture is currently being overhauled for higher bandwidth and AI applications, however, and future PCs should be able to use thumb drives to supplement the AI circuitry already integrated into their CPUs and graphics cards.

Simplified FPGA-like circuitry, similar to Nvidia's tensor cores, is also being integrated into virtually all of the CPUs coming to market today and, among other things, is intended to eliminate issues such as bugs in programs. Some bugs are caused by plain bad programming, but many stem from trying to get a program to run on the huge variety of computer hardware on the market. Using machine learning, your computer can figure out the best way to compensate for a bug and then share that bit of information with anyone who owns a computer with the same hardware. AMD has already taken a first step in this direction, with their Vega graphics cards deciding for themselves how to run any given video game.

The question seems to be what the most affordable way is to integrate AI into consumer products, including video games, and how to make it easier for game developers to adapt their games to the individual. If you want a game to be more challenging, you can simply dial the AI up in the menu.
These same circuits can also be used for things like the physics calculations behind explosions, without causing your frame rate to drop, because they don't involve the circuitry dedicated to actually rendering the scene. Where video game developers might take all this remains to be seen, because it is a fundamental change to video games along the same lines as the first rasterized engines, and the mystery is part of what makes PC gaming so exciting: PC gamers are likely to be among the first to test the technology.
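The reason physics maps onto these circuits is that tensor cores accelerate one operation: fused matrix multiply-accumulate (D = A × B + C). A simple particle update can be written in exactly that form. Below is a minimal numpy sketch of that idea, not actual tensor-core code; the `physics_step` function and its matrices are illustrative assumptions, not any vendor's API.

```python
import numpy as np

def physics_step(state, dt=0.016):
    """Advance particles one frame as a single multiply-accumulate.

    state: (N, 6) array of [x, y, z, vx, vy, vz] per particle.
    Semi-implicit Euler written as new_state = state @ M + G,
    the same D = A*B + C shape that tensor cores accelerate.
    """
    M = np.eye(6)
    M[3:, :3] = dt * np.eye(3)   # position += velocity * dt
    G = np.zeros(6)
    G[5] = -9.81 * dt            # vz += gravity * dt
    return state @ M + G

# Four particles drifting along +x; one frame of physics.
particles = np.zeros((4, 6))
particles[:, 3] = 1.0            # vx = 1 unit/s
stepped = physics_step(particles)
```

Because the whole step is one matrix product, thousands of particles can be updated in a single batched operation, separate from the shading and rasterization work.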
This is a quick example of how Nvidia's tensor cores "guess" what a ray traced image is supposed to look like, filling in the picture significantly. The machine has to learn, or figure out, how to fill in a picture like this, but then that knowledge can be passed on to other machines, making it possible for one person's computer to teach another person's identical computer how to run programs better. While the tensor cores required to render something like this quickly can be quite modest, it's all about speed, efficiency, and cost with consumer products, and we should see a significant increase in the speed of these tiny circuits within a few years.
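The shape of that "guessing" problem is: sparse ray samples in, full frame out. As a rough stand-in for the learned denoiser, the sketch below fills each untraced pixel with the mean of its traced neighbours; the real filter's weights would be learned by the network, and `fill_sparse` is a hypothetical helper for illustration only.

```python
import numpy as np

def fill_sparse(image, mask):
    """image: (H, W) pixel values; mask: True where a ray was traced.

    Untraced pixels are "guessed" from traced neighbours in a 3x3
    window -- a crude proxy for what a trained denoiser learns to do.
    """
    out = image.copy()
    H, W = image.shape
    for y in range(H):
        for x in range(W):
            if mask[y, x]:
                continue
            ys, ye = max(y - 1, 0), min(y + 2, H)
            xs, xe = max(x - 1, 0), min(x + 2, W)
            window = image[ys:ye, xs:xe]
            known = mask[ys:ye, xs:xe]
            if known.any():
                out[y, x] = window[known].mean()
    return out

# Trace only a checkerboard of pixels, then reconstruct the rest.
truth = np.full((4, 4), 0.5)              # the "true" flat image
mask = np.indices((4, 4)).sum(axis=0) % 2 == 0
sparse = np.where(mask, truth, 0.0)       # half the rays skipped
filled = fill_sparse(sparse, mask)
```

Tracing half the rays and reconstructing the rest is where the speedup comes from: the expensive ray tracing budget shrinks, and the cheap matrix-heavy fill-in runs on the tensor cores.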