Real Time Ray Traced Geometry

Discussion in 'Science and Technology' started by Wu Li Heron, Jan 16, 2017.

  1. Wu Li Heron

    Wu Li Heron Members

    Messages:
    1,391
    Likes Received:
    268
    Real time ray traced geometry is a technology coming to a cellphone and virtual reality headset near you within the next three years at most. Two of the more popular companies responsible for countless GPUs used in cellphones and cloud hardware are already preparing to introduce real time ray tracing, which allows ultra-realistic lighting, shadow, and AI effects that can be bolted onto existing games or built into new ones. Ray tracing means a character in a story can actually look you in the eye and display other kinds of situational behavior, and can literally have reflections in their eyes and realistic shadows without all the picket-fence aliasing we've become used to over the years. It's coming to cellphones first and should expand into consoles and computers immediately afterward. The first video explains hybrid rendering that can make cheesy current graphics look better, while the second provides demonstrations of the technology, including 3D windows where you can move your head around and peek inside any window.
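    To give a rough idea of what "tracing a ray" actually means before the videos, here is a toy Python sketch (my own illustration, nothing like the code any of these companies actually ship): one primary ray per pixel to find what the camera sees, then a second shadow ray from the hit point toward the light to decide whether that point is lit.

    import math

    def ray_sphere(origin, direction, center, radius):
        # Returns the distance along the ray to the nearest hit, or None on a miss.
        oc = [origin[i] - center[i] for i in range(3)]
        b = 2.0 * sum(direction[i] * oc[i] for i in range(3))
        c = sum(x * x for x in oc) - radius * radius
        disc = b * b - 4.0 * c            # direction is unit length, so a == 1
        if disc < 0.0:
            return None
        t = (-b - math.sqrt(disc)) / 2.0
        return t if t > 1e-4 else None    # small epsilon avoids self-intersection

    def normalize(v):
        n = math.sqrt(sum(x * x for x in v))
        return [x / n for x in v]

    sphere_center, sphere_radius = [0.0, 0.0, -3.0], 1.0
    light_pos = [2.0, 2.0, 0.0]
    camera = [0.0, 0.0, 0.0]
    width, height = 24, 12                # tiny ASCII "screen"

    for y in range(height):
        row = ""
        for x in range(width):
            # Primary ray through the center of the pixel.
            px = (x + 0.5) / width * 2.0 - 1.0
            py = 1.0 - (y + 0.5) / height * 2.0
            d = normalize([px, py, -1.0])
            t = ray_sphere(camera, d, sphere_center, sphere_radius)
            if t is None:
                row += "."                # missed everything: background
                continue
            hit = [camera[i] + t * d[i] for i in range(3)]
            # Shadow ray: from the hit point toward the light.
            to_light = normalize([light_pos[i] - hit[i] for i in range(3)])
            blocked = ray_sphere(hit, to_light, sphere_center, sphere_radius)
            row += "#" if blocked is None else "-"   # lit vs. in shadow
        print(row)

    Hybrid rendering keeps rasterization for the main image and adds passes like the shadow ray above only where they pay off, which is why it can be bolted onto existing games.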

    https://www.youtube.com/watch?v=LyH4yBm6Z9g

    https://www.youtube.com/watch?v=uxE2SYDHFtQ

    https://www.youtube.com/watch?v=ND96G9UZxxA

    https://www.youtube.com/watch?v=mx5whvrMORw


    These kinds of improvements highlight efficiency gains tied to the entire industry now moving towards analog-inspired software and hardware changes that support higher efficiency and better AI. Hardware-wise, by next year at the latest one of the graphics manufacturers should reach the neighborhood of 24 teraflops of compute power, which is enough to do just about anything you can imagine in virtual reality in real time without even using the cloud.
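    To put that figure in perspective, here is a back-of-the-envelope budget (my own arithmetic and assumptions, not anything the manufacturers have published): divide the peak throughput by the pixel count and frame rate and you get a rough per-pixel ray budget.

    # Rough ray budget for a ~24 teraflop part at 4K and 60 fps.
    # The 5,000 FLOPs-per-ray cost is an assumption, and real workloads
    # never reach peak throughput, so treat this as an upper bound.
    peak_flops = 24e12                        # ~24 teraflops, single precision
    width, height, fps = 3840, 2160, 60       # 4K at 60 frames per second
    pixels_per_second = width * height * fps

    flops_per_pixel = peak_flops / pixels_per_second
    flops_per_ray = 5_000                     # assumed cost to trace and shade one ray
    rays_per_pixel = flops_per_pixel / flops_per_ray

    print(f"{flops_per_pixel:,.0f} FLOPs available per pixel per frame")
    print(f"roughly {rays_per_pixel:.0f} rays per pixel at that budget")

    That works out to around ten rays per pixel under those assumptions, which is why the hybrid approach spends them only on shadows, reflections, and the like rather than path tracing the whole frame.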

    Mindmaze is one of the more interesting VR companies working on zero-latency VR, where the headset reads your brain waves and can predict what you will do next, such as moving your arms or your eyes. They are also working on providing the sensation of touch, as if you were actually touching a virtual object or feeling a car's acceleration pushing you back in your seat. Having zero latency means the beautiful picture and action can keep up with however you move around, and your computer is free to do other things more often. An entire video game made with real time ray tracing would be object oriented along the lines of Minecraft, letting people interact with environments and characters in totally unique ways. For example, walls can be made of bricks and boards that fly apart, and you can punch as many holes in them as you want and they stay there.
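    The latency-hiding part is easier to picture with a toy example. Whatever the input (brain waves, motion sensors, eye tracking), the trick is the same: estimate how fast the head is moving and extrapolate the pose forward by the time it takes to get a frame onto the screen, so the image matches where your head will be rather than where it was. A minimal sketch with made-up numbers, not Mindmaze's actual method:

    # Toy latency compensation: extrapolate head yaw forward by the expected
    # motion-to-photon delay. Real systems filter a full 6-DoF pose.
    def predict_yaw(yaw_now_deg, yaw_prev_deg, dt_s, latency_s):
        # Linear extrapolation of the yaw angle over the render latency.
        angular_velocity = (yaw_now_deg - yaw_prev_deg) / dt_s   # deg/s
        return yaw_now_deg + angular_velocity * latency_s

    # Head turning at 200 deg/s, sampled every 5 ms, ~20 ms to reach the screen.
    yaw_prev, yaw_now = 10.0, 11.0
    predicted = predict_yaw(yaw_now, yaw_prev, dt_s=0.005, latency_s=0.020)
    print(f"render for yaw = {predicted:.1f} deg instead of {yaw_now:.1f} deg")
    # Without prediction the picture would lag about 4 degrees behind your head.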

    It'll be another ten or twenty years before they work out even most of the basic fun stuff that can be done with these new graphics engines, but the fun should start within a year or two at most. Currently AMD, Nvidia, Intel, and others are fighting to set the next standards for such things, which means, with any luck, one of them will come up with a standard good enough that the others adopt it as well. All of them are working on foveated rendering and, basically, reproducing the efficiency of the human visual system.
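    Foveated rendering is the easiest of those to make concrete: the eye only resolves fine detail within a few degrees of the gaze point, so the renderer can cut the shading rate quickly with angular distance from where you are looking. A toy falloff curve of my own, not any vendor's:

    # Toy foveated shading-rate map: full resolution at the gaze point,
    # dropping off with eccentricity. The curve is an assumption; real
    # systems tune it against perceptual studies.
    def shading_rate(eccentricity_deg):
        # Fraction of full resolution to shade at a given angle from the gaze point.
        if eccentricity_deg <= 5.0:          # foveal region: full detail
            return 1.0
        return max(0.05, 1.0 / (1.0 + 0.15 * (eccentricity_deg - 5.0)))

    for angle in (0, 5, 10, 20, 40, 60):
        print(f"{angle:>2} deg from gaze -> shade at {shading_rate(angle):.0%} resolution")

    Most of the screen ends up shaded at a small fraction of full resolution, which is where the "same efficiency as the human visual system" savings come from.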
     
    2 people like this.
  2. Wu Li Heron

    Wu Li Heron Members

    Messages:
    1,391
    Likes Received:
    268
    https://www.youtube.com/watch?v=GwaUQPNsyUE

    This is another good example of the analog-style logic being developed across the entire industry. The architecture of the chip they use for processing ray traced lighting effects processes geometry at the same time, allowing an otherwise less powerful chip to produce impressive cinematic results. FPAA circuitry looks like it will eventually make the process even more efficient and, you might say, we are learning the architecture of nature in the attempt to squeeze more onto an average chip. Analog takes up maybe a third less space than digital and can easily be a thousand times more efficient; when dealing with larger numbers it becomes the only viable alternative. It's the architecture of the brain, with the subconscious resembling the analog, or the context of what's missing from the picture, and the conscious mind the assertive digital content that provides greater error correction. It obeys the principle that tools come from what exists, but usefulness from what does not exist, making your choice of tools sometimes more a question of what we do not and cannot know.
     
  3. Ged

    Ged Tits and Thigh Man.

    Messages:
    7,006
    Likes Received:
    2,988
    I don't need this shit.
     
  4. pensfan13

    pensfan13 Senior Member

    Messages:
    14,192
    Likes Received:
    2,776
    Gimme some pong.
     
  5. Wu Li Heron

    Wu Li Heron Members

    Messages:
    1,391
    Likes Received:
    268
    In real time ray tracing and VR you can play pong with any balls you like.
     
  6. pensfan13

    pensfan13 Senior Member

    Messages:
    14,192
    Likes Received:
    2,776
    I don't need no real time trace bla bla bla
    Pong has been around for almost 45 years. It's fine with the balls they have.
     
  7. Wu Li Heron

    Wu Li Heron Members

    Messages:
    1,391
    Likes Received:
    268
    A pair of real time ray tracing sunglasses can easily allow you to walk into a virtual movie theater, sit anywhere, and play pong on anything you like. Hollywood is beginning to invest heavily in the technology and, personally, I'd rather play pong against Scarlett Johansson in virtual reality.
     
  8. pensfan13

    pensfan13 Senior Member

    Messages:
    14,192
    Likes Received:
    2,776
    Why would I go to a movie theater to play a video game?

    It has movie in the name. I am there to see a movie.
     
  9. NoxiousGas

    NoxiousGas Old Fart

    Messages:
    8,382
    Likes Received:
    2,385
    24 teraflops of computing power?
    on one GPU or a bank of them?

    ???
    So they are learning to implement existing technologies in mobile devices. Cool, but not as big a deal as you are making it sound.
    The guy goes on about 8x AA; my PC has been able to run as high as 16x in some apps. I understand the challenge of pulling it off on a mobile device, but it's not Earth-shattering news.

    And where did you get the idea of "analog logic" in computing?
    Please explain the tech being employed and specifically how it differs from digital.
     
  10. Wu Li Heron

    Wu Li Heron Members

    Messages:
    1,391
    Likes Received:
    268
    The newer chips on the market use what is called a "coherent fabric," a superscalar approach made necessary by how many parts they can now squeeze onto a chip and by the need to turn different parts of the chip on and off on demand and to shuttle power and information to all those parts as rapidly and efficiently as possible. They are replacing the north bridge functions by hardwiring them into the chip architecture itself, and Intel's newest chips come with their own voltage regulators built in. Such a superscalar architecture can be described as heterogeneous inside and out, and it allows everything to reconfigure on the fly to shift emphasis wherever it's needed. For example, AMD and Intel are currently working on new processors for graphics and other workloads that do some of their computing inside main system memory, simply for the efficiency gained by not sending everything back and forth to the CPU or GPU all the time.
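    The computing-in-memory point comes down to simple arithmetic about data movement: fetching an operand from DRAM costs orders of magnitude more energy than the arithmetic performed on it. The figures below are widely quoted order-of-magnitude rules of thumb, not vendor specifications.

    # Why computing next to the memory pays off: rough energy figures.
    energy_fp32_add_pj = 1.0      # ~1 picojoule for a 32-bit floating point add
    energy_dram_read_pj = 640.0   # ~640 picojoules to fetch a 32-bit word from DRAM

    ratio = energy_dram_read_pj / energy_fp32_add_pj
    print(f"one DRAM fetch costs roughly {ratio:.0f}x the add it feeds")
    # Hence the push to do simple operations inside or next to the memory
    # instead of hauling every operand across the bus to the CPU or GPU.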

    Ideally, what you would want is every transistor capable of at least a thousand distinct states and able to double as memory on the fly. That's the kind of thing FPAA circuitry should be capable of, for as little as 1/10,000th the power and a thousand times the speed of digital circuitry. Intel's next-generation server processors will all come with FPGA circuitry that pulls off the same trick in a more limited fashion, and cellphones should soon commonly have their own FPGA circuits of roughly 164 transistors that can handle routine AI tasks such as voice recognition and facial recognition using virtually no power. Nvidia's new architecture takes a similar approach to foveated rendering, stressing more detail at the center of the image, and this reflects the same analog logic as everything else I've mentioned, where everything they design revolves around what's missing from the picture as much as what it contains. Some things need to be more hardwired than others simply because it is much more efficient, and all of these approaches are moving towards a superscalar architecture that can hardwire what has to be hardwired for efficiency and yet remain endlessly expandable.

    AMD is the first to announce a GPU capable of around 22 teraflops in single precision and half that in double precision. At the end of the month they will be releasing their Ryzen processor, which is a milestone in this direction of superscalar architectures and should pack the power of a PS4 into a single chip that can run in a laptop. By next year at the latest we should see seven-nanometer products with dramatic efficiency gains that go beyond just shrinking everything. Perhaps in another five years they will work out the basics and we could see graphics engines in general adopt heterogeneous superscalar architectures in both hardware and software. Theoretically, there is no reason you shouldn't be able to run any video game on the market today at over sixty frames per second and 4K resolution using less than a watt of power.
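    That last claim is easy to translate into a per-pixel budget, even if actually hitting it is another matter; this is pure unit conversion, not a statement about any current hardware.

    # Turning "4K, 60 fps, under a watt" into an energy budget per pixel.
    power_w, fps = 1.0, 60
    width, height = 3840, 2160

    energy_per_frame_j = power_w / fps
    energy_per_pixel_nj = energy_per_frame_j / (width * height) * 1e9

    print(f"{energy_per_frame_j * 1e3:.1f} mJ per frame")    # ~16.7 mJ
    print(f"{energy_per_pixel_nj:.1f} nJ per pixel")         # ~2 nJ for everything that pixel needs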
     
    1 person likes this.
