Neuromorphic Computing Coming To Cellphones

Discussion in 'Science and Technology' started by Wu Li Heron, Apr 2, 2017.

  1. Wu Li Heron

    https://www.inverse.com/article/27902-neuromorphic-computers-ibm-truenorth-true-north-machine-learning-neural-network

    Although HP appears to have dropped out of the race to produce a neuromorphic computer using memristors, IBM's TrueNorth appears to be coming on strong, with aspects of the technology being incorporated into the next-generation Nvidia graphics cards coming out at the end of this year. Google has its own tensor processor that can be slapped into any machine with a PCIe slot, and Intel's newest server processors will contain FPGA circuitry aimed squarely at machine learning applications, meaning they can be programmed for facial recognition or any number of other tasks, including teaching or training a much more complex neuromorphic chip to solve problems that would otherwise be too time-consuming and expensive to program directly.
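    To give a concrete sense of what "training a neuromorphic chip with a conventional accelerator" might look like, here is a minimal sketch of one common approach: train an ordinary network on conventional hardware, then convert its activations into spike rates for a spiking, neuromorphic-style substrate. The weights, input, and conversion below are toy assumptions purely for illustration, not IBM's actual toolchain.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend these weights came from a network trained on conventional hardware
# (random here purely for illustration).
W = rng.normal(0, 0.5, size=(4, 8))    # 8 inputs -> 4 output units
x = rng.random(8)                       # one input sample

# Conventional forward pass: ReLU activations.
activations = np.maximum(W @ x, 0.0)

# Rate-coded "neuromorphic" approximation: each unit emits random spikes
# whose average rate is proportional to its activation.
T = 1000                                            # simulation timesteps
scale = activations.max() + 1e-9
rates = activations / scale                         # normalize to [0, 1]
spikes = rng.random((T, len(rates))) < rates        # spike if random < rate
estimated = spikes.mean(axis=0) * scale             # recover activations

print("exact activations:    ", np.round(activations, 3))
print("spike-rate estimates: ", np.round(estimated, 3))
```

    The longer the spike train runs, the closer the rate estimate gets to the exact activation, which is essentially the trade-off these conversion schemes make between speed and accuracy.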

    This particular article is pretty much spot on as to what we can expect in the near future, with neuromorphic-inspired chips already invading our cellphones and graphics cards, but with the notable exception of the author's conclusion that quantum computers can't replace neuromorphic ones, which simply expresses an outdated, Western-biased view that incorrectly assumes the human brain is fundamentally causal. In recent years research has shown that the brain is actually quantum mechanical and incorporates causal Bayesian probabilities that vanish into indeterminacy, in which our synapses and axons appear to play every bit as active a role as our neurons. The implication is that the causal, digital aspects of the brain often merely provide invaluable error correction for its quantum computing capacity, which is far faster and more efficient.
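    For anyone wondering what "Bayesian probabilities" actually involve, here is a minimal sketch of a single Bayesian update, the kind of calculation that predictive-coding accounts of the brain (and plenty of machine learning systems) rely on. The numbers are made up purely for illustration.

```python
# Minimal Bayesian update: revise a belief after seeing evidence.
# All numbers are arbitrary, chosen only to show the mechanics.

prior = 0.20                    # P(H): prior belief the stimulus is a face
p_e_given_h = 0.90              # P(E | H): evidence is likely if it is a face
p_e_given_not_h = 0.30          # P(E | not H): evidence still possible otherwise

# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)
p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
posterior = p_e_given_h * prior / p_e

print(f"prior = {prior:.2f}, posterior = {posterior:.2f}")   # posterior ~ 0.43
```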

    What all that means is that industry is gaining more profound insight into the hierarchical architecture of the brain, which resembles a distributed gain amplifier, and those insights are being applied to anything related to machine learning, video processing, and building cheaper and more efficient computers and networks in general. Taking both the assertive digital approach and the more receptive analog one, they are all converging on a single architecture that can do both. In the meantime, we will increasingly see every kind of CPU incorporate more complex forms of rudimentary machine learning and every kind of GPU incorporate arithmetic accelerators, with both combined on SoCs, and people will be able to dramatically upgrade the AI in their computers using both software downloads and add-in circuitry such as Google's tensor processor that you can slap into any PCIe slot and, perhaps someday, a phase-change memristor from IBM, all coexisting on a single chip.

    Sometime around the end of this year, and over the following three years, the crap should hit the fan good as Intel, AMD, Nvidia, IBM, and everyone else come together to standardize the first commercial post-Von Neumann architecture and produce the first chips that approach the limits of Moore's Law, cramming billions and trillions of parts into an incredibly small space.
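    To make the "arithmetic accelerator" idea concrete: chips like Google's tensor processor get much of their speed from doing enormous numbers of low-precision multiply-accumulate operations. The sketch below mimics that pattern in plain numpy, quantizing float weights and inputs to 8-bit integers and accumulating in 32 bits; the per-tensor scaling scheme and the sizes are toy assumptions, not any vendor's actual design.

```python
import numpy as np

rng = np.random.default_rng(1)

# Float weights and one input vector, as a framework would normally hold them.
W = rng.normal(0, 1, size=(64, 128)).astype(np.float32)
x = rng.normal(0, 1, size=128).astype(np.float32)

def quantize(a):
    """Map floats to int8 with a single per-tensor scale (toy scheme)."""
    scale = np.abs(a).max() / 127.0
    return np.round(a / scale).astype(np.int8), scale

Wq, w_scale = quantize(W)
xq, x_scale = quantize(x)

# The accelerator-style step: integer multiply-accumulate in 32 bits,
# followed by one floating-point rescale at the end.
acc = Wq.astype(np.int32) @ xq.astype(np.int32)
y_approx = acc * (w_scale * x_scale)

y_exact = W @ x
print("max abs error vs float matmul:", np.abs(y_approx - y_exact).max())
```

    The point is not the numpy itself but the arithmetic pattern: dense low-precision multiply-accumulates are cheap to lay down in silicon, which is part of why this kind of circuitry is showing up in phones and graphics cards first.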
     
