One Graphics Card To Rule Them All

Discussion in 'Computers and The Internet' started by wooleeheron, Sep 20, 2018.

  1. wooleeheron

    wooleeheron Brain Damaged Lifetime Supporter HipForums Supporter



    With the embargo on benchmarks of Nvidia's new RTX 2080 Ti lifted, criticism is flying from every direction about Nvidia gouging its customers. Right on cue, someone at AMD conveniently leaked information about their upcoming graphics cards, designed to provide all the graphics almost anyone could ask for at 1080p at the lowest price imaginable. The new Navi cards won't be out until next year, my guess would be around February at the earliest, but that's when we should see whether AMD can push Nvidia into a price war, and we should get a better idea of what to expect from future graphics cards.

    At the end of the video the narrator discusses Nvidia's new patent for "Infinite Resolution Textures", which is, perhaps, more exciting for many gamers than their new ray tracing technology. Instead of everyone downloading complete texture files, which can make up 80% of a download, they download an "infinite texture". The developer programs the infinite texture so that your machine can crunch the numbers for exactly the texture detail you require, and these textures use the same sort of vector math that Nvidia's tensor cores accelerate.
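    To give a feel for the resolution-independence idea, here is a toy sketch (my own illustration, not Nvidia's actual patented technique): a texture stored as a small program instead of a pixel grid can be rasterized at any resolution on demand, so the "download" is just the formula.

```python
import math

def procedural_texture(u, v):
    """Evaluate a 'texture program' at normalized coordinates (u, v) in [0, 1].

    Returns a grayscale value in [0, 1]. There is no stored pixel grid,
    only a formula, so the texture has no inherent resolution.
    (Toy example for illustration, not Nvidia's patented method.)
    """
    # Concentric rings around the texture center.
    r = math.hypot(u - 0.5, v - 0.5)
    return 0.5 + 0.5 * math.sin(40.0 * r)

def sample_grid(n):
    """Rasterize the same texture at any n x n resolution on demand."""
    return [[procedural_texture(x / (n - 1), y / (n - 1))
             for x in range(n)] for y in range(n)]

low = sample_grid(16)     # tiny preview
high = sample_grid(1024)  # same "download", far more detail
```

    The point is that `low` and `high` come from the exact same handful of bytes of code, which is why this kind of scheme could shrink the texture portion of a game download so dramatically.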

    Downloads already get as big as 60 GB, which can take forever, while using this system they could cut that down to possibly 15 GB. The same system could also be incorporated into browsers and the next-generation internet, which looks likely to adopt the new Linux 3D font standard. This is huge news for PC gamers who have been watching downloads rapidly grow in size while dreading the thought of having to use cloud gaming simply to avoid enormous downloads.

    The lines are rapidly being drawn between AMD and Nvidia over who decides the future of video games. Nvidia should stay at least three years ahead of AMD in some of the technology, but AMD should rapidly set the stage next year for what will be the most cost effective. By all accounts, their next-generation 7nm Ryzen chips will be outrageously fast and low power, and their new 400-series motherboards will even allow people to upgrade from 8-core processors to more cores if they suddenly have an interest in doing rendering or whatever. That's just unheard-of low-cost modular design, and everyone is waiting with anticipation for the next generation of HBM to come down in price.

    AMD was too successful in creating the HBM standard, so successful that every server vendor and whatnot is pushing the price of the stuff through the ceiling, when AMD originally developed it to keep memory prices down for consumers. Everyone has been waiting with anticipation for memory prices to come down, because GDDR6 is nearing the end of its useful speeds, while HBM can be ridiculously faster. At any rate, what all these advances in faster memory, tensor cores, and infinite textures come down to is a next-generation standard for the basic PC architecture, one that will make future laptops and desktops almost indistinguishable from any console in performance and ease of use.

    We should see increased competition between Nvidia's higher-tech solutions and AMD's more open source ones, while where Intel fits into all this is anyone's guess right now. Next year should be interesting to say the least, and the dust should not even begin to settle until the end of 2020, because all of our current computers are about to become dinosaurs. AMD's long-term goal is to put roughly enough power in a 30w chip to do anything most people might want in video gaming, using a low-cost open source approach, and Steam has just updated its Wine-based compatibility layer for Linux, which can now theoretically play just about any video game imaginable if the developers support it.

    One stupid chip to rule them all, eventually, while next year we should see if Nvidia and Intel have any more surprises up their sleeves.
     
    Last edited: Sep 20, 2018
  2. wooleeheron




    This is a technical explanation of one of the more exciting applications for Nvidia's tensor cores, which the reviewers appear to agree can make a 1440p monitor perform better and look almost identical to a 4k screen. Among other things, this means that a single high-end video card is now enough to run games at high frame rates on a 1440p display, and we should see the prices of both the displays and the graphics cards come down fast over the next three years. This video is a technical breakdown, but we are still waiting on the technology to be widely implemented and tested, and it would be nice to see how it looks at 1080p as well. Although this is brand new technology and Nvidia specializes in such algorithms, there is nothing fundamentally unique about it, and we shall have to wait for AMD to come out with their own alternative next year.

    Anti-aliasing, bloom effects, and motion blur in particular have remained serious banes for gamers everywhere, because the best software solutions require about a third of the power it takes to render the image in the first place. Using arithmetic accelerators this way can provide smaller downloads, a much better image, and higher frame rates simply by using specialized hardware such as tensor cores, which act as a sort of Goldilocks processor that handles numbers of a specific size range.
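    The "Goldilocks numbers" idea can be sketched in a few lines (a toy model, not real tensor-core hardware): multiply in cheap 16-bit precision, but keep the running sum in full precision, which is the FP16-multiply / FP32-accumulate scheme tensor cores are built around.

```python
import struct

def to_half(x):
    """Round a Python float to IEEE 754 half precision (16 bits), the
    'just right' size the tensor cores multiply in, then back to float."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

def tensor_core_mac(a_row, b_col):
    """Toy tensor-core-style multiply-accumulate: the inputs are rounded
    down to 16 bits, but the running sum is kept at full precision.
    (A sketch of the idea, not the actual hardware behavior.)
    """
    acc = 0.0
    for a, b in zip(a_row, b_col):
        acc += to_half(a) * to_half(b)
    return acc

approx = tensor_core_mac([0.1, 0.2, 0.3], [1.0, 2.0, 3.0])
exact = 0.1 * 1.0 + 0.2 * 2.0 + 0.3 * 3.0
```

    The reduced-precision answer lands close to, but not exactly on, the full-precision one; for graphics work like anti-aliasing that tiny error is invisible, and the hardware gets to be far smaller and faster for it.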
     
    Last edited: Sep 22, 2018
  3. wooleeheron




    This video starts out with a history lesson, and you may want to skip the first ten minutes or so. But it's a nice history lesson from someone who played the first PC video games and is an expert on the subject. In the second half of the video he goes into detail about ray tracing and the new "path tracing" that Nvidia is promoting as the long-term solution. At the end of this year or the beginning of next, AMD should release their answer to Nvidia's RTX series, and we'll have a better idea of how fast path tracing will be adopted.

    Ray traced images can be somewhat unrealistic, and their brilliant lighting only makes their flaws stand out even more, while path tracing is a more reasonable compromise, assuming it can be done economically. That's really the only thing holding all of this technology back: it is way over-priced, and people have been holding their breath waiting for AMD to come out with competitively priced graphics cards. One thing he does not mention in the video is that AMD's new hybrid architecture is capable of reconfiguring its ALUs for either rasterization or compute functions, which is a huge advantage. Nvidia is stuck with their own hybrid architecture, while AMD has created their first one designed for both consumer and commercial applications, including AI and ray tracing.
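    Why path tracing is so expensive comes down to statistics: instead of shooting a few fixed rays, it averages the light carried by many random paths, and the noise only fades as the sample count climbs. A toy Monte Carlo sketch of that idea (a made-up one-value "scene", not an actual renderer):

```python
import random

def incoming_light(direction_cos):
    """Hypothetical scene: light arriving from one hemisphere direction,
    brighter the closer the direction is to the surface normal."""
    return max(direction_cos, 0.0)

def path_traced_shade(samples, rng):
    """Monte Carlo shading estimate: average the light from many random
    directions. Few samples = noisy pixel; many samples = smooth pixel,
    which is exactly why path tracing costs so much compute."""
    total = 0.0
    for _ in range(samples):
        cos_theta = rng.random()  # toy random hemisphere direction
        total += incoming_light(cos_theta)
    return total / samples

rng = random.Random(42)
noisy = path_traced_shade(8, rng)        # a handful of paths: noisy
smooth = path_traced_shade(100000, rng)  # many paths: converges near 0.5
```

    Denoising hardware and smart sampling exist precisely to get the "smooth" answer from something closer to the "noisy" budget, which is what would make path tracing economical.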

    AMD is attempting to squeeze as many as four GPU chips onto a single interposer, and their latest test GPU produced about 21 teraflops, while only 14 are required for great rasterized gaming at 4k. That means the extra space on the chips could be occupied by their equivalent of tensor cores, which are analog circuitry, making them roughly 1/3 the size of digital circuitry, which means AMD could produce a super ray-tracing GPU next year, but don't hold your breath. A year is a long time, and doing something totally new like that always takes longer.

    Nvidia comes out with all the bells and whistles early, but AMD is the one to watch for cutting the price of all this crap down to size, often about three years after Nvidia releases the new technology. The release of the new consoles in two years should be when we can expect stiffer competition and get an even better idea of how quickly any of this will become more affordable. Memory prices and tariffs aside, these cards are so powerful that prices will have to come down the minute the competition heats up. Right now, nobody has any real choice but to buy Nvidia if they want more performance for things like higher frame rates or rendering, and that's just no way to do business.
     
    Last edited: Oct 2, 2018
  4. wooleeheron




    This video is a short explanation of AMD's new P47 server rack, with roughly twice the efficiency and half the cost of current ones on the market. What is interesting about the video is that Xilinx is partnering with AMD to promote their new FPGA circuits, which can be used as the equivalent of Nvidia's tensor cores. Unlike Intel, which prefers to develop its own hardware, AMD has to rely on partnerships with other companies for its survival. Anyway, this is the first sign they may be able to produce the goods next year or the year after, with a kick-ass video gaming card that includes their own FPGAs. What is particularly interesting is that Xilinx has a complete scalable package to offer, meaning that AMD can scale their FPGAs to fit as many as they can on a chip, and will have all the help they require to adapt them to their consumer video cards.

    AMD's whole strategy is basically to stack as many chips as you can on a substrate and hook everything together using what they call an "Infinity Fabric", or coherent fabric, which conveys most of the power and some of the main data buses as well. But they are dramatically changing their architecture in the process and creating the first seriously high-bandwidth, low-cost home computing system on a chip. Their current Raven Ridge chips have roughly the power of a PS4, and I expect serious gaming performance out of their next-generation Ryzen made on the 7-10nm scale. A single 30 watt chip capable of rocking even 1440p gaming and running VR programs is to be expected, and, within ten years, discrete gaming video cards should become all but history for the consumer market.
     