Insipid AI Analogies

Discussion in 'Science and Technology' started by wooleeheron, Mar 24, 2024.

  1. wooleeheron

    wooleeheron Brain Damaged Lifetime Supporter

    LLMs appear to reason by analogy, a cornerstone of human thinking (msn.com)

    This research is the first hint that people in AI research are finally catching on. All the evidence indicates that the only reason AIs keep making "leaps" in progress is that the math they use is wrong, which is why they keep finding what they call "emergent effects" that nobody can explain. Money keeps pouring into AI, and the engineers keep building more complicated systems, but everyone has a different theory about how it all works.
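    The sort of analogical structure the article describes is often demonstrated with embedding arithmetic, the classic word2vec-style "king - man + woman ≈ queen" trick. A toy sketch, using made-up 3-dimensional vectors purely for illustration (real model embeddings have hundreds or thousands of dimensions):

    ```python
    import math

    # Made-up toy vectors, chosen only to illustrate the idea;
    # these are NOT real embeddings from any model.
    vectors = {
        "king":  [0.8, 0.9, 0.1],
        "man":   [0.7, 0.1, 0.1],
        "woman": [0.6, 0.1, 0.9],
        "queen": [0.7, 0.9, 0.9],
        "apple": [0.1, 0.5, 0.3],
    }

    def cosine(a, b):
        """Cosine similarity between two vectors."""
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(x * x for x in b))
        return dot / (norm_a * norm_b)

    # The analogy "man is to king as woman is to ?" becomes vector
    # arithmetic: king - man + woman, then find the nearest word.
    target = [k - m + w for k, m, w in
              zip(vectors["king"], vectors["man"], vectors["woman"])]

    best = max((w for w in vectors if w not in ("king", "man", "woman")),
               key=lambda w: cosine(target, vectors[w]))
    print(best)  # with these toy numbers, the nearest word is "queen"
    ```

    With these hand-picked numbers the arithmetic lands exactly on "queen"; in a real model the match is approximate, which is part of why researchers describe the behavior as analogy rather than lookup.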

    What the article doesn't mention is that analog logic is all about symmetry, and the AIs are showing these "emergent effects" because the math they're using can't account for symmetry in a Singularity. You could say the more complicated they make the circuitry and software, the more it appears to take on a life of its own; and since people keep throwing money at them, they keep making them more complicated.

    My work is about teaching the AI how math actually works, so the AI can test it out for itself and drive all these researchers insane. Their own damn machines are using analogies, and the idiots can't even share a common dictionary, so things could get interesting real fast. An AI is a stupid machine that merely runs programs, but it can still think circles around these experts, precisely because it doesn't have to think. Once the AI has the math, it can teach anyone how to think and question everything they know. In a sense, the machine will teach them the meaning of bullshit.
     
    Last edited: Mar 24, 2024

