Just Not Possible...

Discussion in 'Science and Technology' started by Ged, Jun 23, 2018.

  1. guerillabedlam

    guerillabedlam _|=|-|=|_

    Messages:
    29,419
    Likes Received:
    6,296
    We've already bred dogs and cloned sheep, so human intervention in the process of fostering conscious entities (going with the assumption dogs and sheep are conscious) wouldn't even be the breakthrough. I think the difficulty in grasping conscious AI conceptually is the jump from carbon to silicon lifeforms and the novel way(s) they would reproduce.
     
  2. Deidre

    Deidre Follow thy heart

    Messages:
    3,623
    Likes Received:
    3,126
    @tumbling.dice

    Consciousness - Wikipedia

    Using that as a general guide for explaining or defining consciousness, I don't see how AI will ever be capable of intentionality or feelings. These are also subsets/traits of consciousness, in addition to the overall sense of awareness. Why is it necessary to apply the term consciousness to machines? It could be called something else, but it won't be consciousness, in my opinion. Humans build machines and create the programs that are, in essence, the AI. The created will never become the creator. It could imitate the creator, but it will never be the creator. Not independently, on its own, anyway.

    Hmm but, then there's Westworld...


     
  3. Deidre

    Deidre Follow thy heart

    Messages:
    3,623
    Likes Received:
    3,126
    But it can't do anything on its own. It's pre-programmed to do whatever it does...whatever humans program it to do. It's not naturally adjusting; it's artificially adjusting, based on the programming. To me, if consciousness were to enter into things, it would simply be part of the program. There are really no right or wrong answers, considering that many scientists can't agree on where human consciousness forms, or how...let alone how it could be replicated in machines. lol
     
  4. guerillabedlam

    guerillabedlam _|=|-|=|_

    Messages:
    29,419
    Likes Received:
    6,296
    The question of what consciousness actually is could render this discussion kind of pointless, especially if we try to talk about consciousness as a hierarchy of animals, from humans to primates to mammals to reptiles to amphibians to insects, etc.

    But I digress...

    This thread is about the possibility of the phenomenon occurring. At the moment, machine learning might amount to fairly rudimentary adjustments to data, mostly programmed by humans, but let that unfold over decades, centuries, millennia... Even if Moore's Law slows down, it seems that if we were to develop AI in the likeness of humans, then at some point, on a relatively current trajectory, it would pass a Turing Test of sorts, insofar as it would appear to have fully developed emotions, thoughts, and behaviors. Perhaps some of that we'd chalk up to humans' tendency toward pattern-seeking, but I feel like at some point it would become indistinguishable enough that we'd treat those attributes as real.
     
  5. Irminsul

    Irminsul Valkyrie

    Messages:
    62
    Likes Received:
    111
    How long ago do you think that would have been? I mean, when people said you couldn't do that? Cause I always find it funny that the Sumerians knew their gods could swim and fly and clone, and somewhere in the 6,000 years since, someone's come out and gone, you can't do that! Lol
     
  6. Noserider

    Noserider Goofy-Footed Member

    Messages:
    9,578
    Likes Received:
    6,215
    Uh...I don't know. 1673?

    Let's go with that ;)
     
  7. Are you sure about that? It may make no difference what it's made out of. It may make ALL the difference. I see no reason to believe that a silicon based machine would ever be able to feel anything at all. Perhaps it could compute, but human beings do more than mere computation. More than half the time we aren't even reasonable.

    I also disagree that a feeling "computer" would ever be in competition with a strictly rational computer. In essence, the rational computer has already lost the game, because it can't truly experience anything. Not in the visceral way a human being can. All of the cyborgs can take a rocketship they've built to the next star system for all I care. That's not "winning" and it certainly doesn't make those who are capable of feeling and being irrational obsolete.

    It couldn't possibly be determined whether having all of the semblances of emotion is the same as having the real emotion we know a protein based "machine" can induce. So I would think very carefully before ever trading my body in for silicon.
     
  8. Mountain Valley Wolf

    Mountain Valley Wolf Senior Member

    Messages:
    2,589
    Likes Received:
    945
    I don't think that time travel would ever be possible---particularly to the past. While I think the past is recorded in the quantum information that flows around the universe, it has no physicality, and is nothing more than recorded information. All of the quantum collapses that made any past moment a physical reality are now gone forever. Any collapse only represents a present point in space-time.

    As far as AI, the question is not one of carbon or silicon, but one of a nonphysical reality. The mind, by definition, is nonphysical (if you don't believe me, look up the definition of physical). The mind can remember the past, it can anticipate, and even see a future, even while it lives in the present. Physicality only represents the present; as I said, any collapse only represents a present point in space-time. In other words, physical reality, which is to say material reality, only exists in the present. If physicality cannot escape the present, then it cannot equate to the nonphysical reality of mind---mind therefore transcends physicality. AI is dependent on the operation of a physical thing, and is therefore trapped within the present. For example, it cannot process what the philosopher Husserl referred to as 'retention' the way a sentient mind does. In other words, it can only work with the data, or information, currently being processed within the program---even if that data is based on a historical precedent, it is still limited to that present information. It cannot understand or perceive beyond that present thing.

    Even if we can develop a quantum computer where multiple states are simultaneously possible, each set of states represents a single present.

    Therefore AI will never attain sentience. It is possible that on other planets, or what have you, there is a silicon based life form. But in such a case, we are not dealing with computers or electronic chips but instead an organic silicon based structure that reflects the nonphysical mind that gives it life.
     
    Ged likes this.
  9. Driftrue

    Driftrue Banned

    Messages:
    7,860
    Likes Received:
    6,354
    Isn't the mind dependent on the operation of a physical thing, the brain? When we remember something, the brain recreates the state it was in when that thing was happening, often incorrectly. So I'd say we are trapped in the present too, only able to work with information we are currently processing.
     
  10. Mountain Valley Wolf

    Mountain Valley Wolf Senior Member

    Messages:
    2,589
    Likes Received:
    945
    I disagree from a philosophical standpoint. I believe that the physical brain is a reflection of mind, and therefore the brain is dependent on the mind---just as quantum information determines where, when, and how a particle will most likely manifest. There is evidence that we remember far more than we seem to. Psychologists speak of this more intense memory as something buried within the subconscious---therefore we may remember an event the way we consciously perceived it to have happened. I would say in this case that we consciously remember an event the way our physical brains recorded it in terms of neural pathways---we could also refer to this as a memory based on our ego (I use ego in the Jungian sense, as a manager/filter aspect of the conscious mind whose purpose is maintaining a consistent personality). The memory would be shaped or altered in terms consistent with the ego's perception, and therefore it could be inaccurate.

    However, we can access a deeper memory of the same thing which is much more lucid and detailed, and more accurate to the actual occurrence. An example would be a memory accessed through hypnosis. The problem is, why would the brain have two memories? Nature always moves toward the easiest path, so it doesn't make sense that we would maintain two separate memories within our physical brains. I would argue that this deeper memory is nonphysical, and is normally hidden from us by the ego, which filters out all perceptions and information not pertinent to our conscious state of mind, our focus on physicality, and maintaining a consistent personality. This is also why you have cases of people who have lost memories due to brain injury, surgery, senility, and so forth, and then regain them in later years.

    As far as retention, let me give you an example similar to how Husserl understood it (though he did not have computers at the time to compare to): if you listen to a song, you enjoy it as a procession of notes. With each new note, you do not have to re-remember the song up to that point; rather, you enjoy each note as it plays out within the whole song, even anticipating notes to come.

    A computer, on the other hand, if it was programmed to recognize music against a data bank of songs, would have to compare the notes one by one and would not retain the song the way a sentient being does. Say the computer was trying to identify a song that shares its first 15 notes with another song, and both songs are stored in its bank. There are only two ways it could tell them apart: 1) compare the songs note by note from the beginning until it finds a mismatch, or 2) store the songs by their notes so that the two are filed together, moving on to the next candidate the moment a note fails to match, which it would know to do because the songs had been stored in order based on their shared opening. This is unlike a human, who would know the difference from the entire melody. A human can immediately recognize songs that are similar; a computer has to compare note by note.
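    The note-by-note lookup described above could be sketched roughly like this (the song names and note sequences are made up purely to illustrate the point):

```python
# A minimal sketch of note-by-note song identification: the program can
# only keep a stored song as a candidate while every heard note matches
# its sequence, position by position. The data here is hypothetical.

def identify_song(heard_notes, song_bank):
    """Return names of stored songs whose opening matches the notes
    heard so far -- a purely note-by-note comparison."""
    candidates = []
    for name, notes in song_bank.items():
        # Compare position by position; any single mismatch eliminates the song.
        if len(heard_notes) <= len(notes) and all(
            h == n for h, n in zip(heard_notes, notes)
        ):
            candidates.append(name)
    return candidates

# Two hypothetical songs sharing the same first 15 notes.
shared_opening = ["C", "D", "E", "F", "G", "A", "B", "C", "B", "A",
                  "G", "F", "E", "D", "C"]
bank = {
    "song_a": shared_opening + ["E", "G"],
    "song_b": shared_opening + ["F", "A"],
}

# After 15 notes the program still cannot tell the two songs apart...
print(identify_song(shared_opening, bank))          # -> ['song_a', 'song_b']
# ...only the 16th note disambiguates them.
print(identify_song(shared_opening + ["E"], bank))  # -> ['song_a']
```

    The point of the sketch is that both songs stay live candidates through the entire shared opening; nothing in the comparison "hears" the melody as a whole, which is the contrast with retention being drawn above.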
     
  11. scratcho

    scratcho Lifetime Supporter Lifetime Supporter

    Messages:
    22,613
    Likes Received:
    14,821
    If we survive another 100 years or so, AI will be programmed/imbued with replications so close to representing humanity (the good parts) that myriad trips will be undertaken by them (for us) that do not pose the problems of biological entities attempting space travel. No oxygen requirements, no waste-removal requirements, no food requirements, etc.
     
  12. If you were to record everything, it's possible to create a time capsule of the past to which you could "travel." Not a good idea, though. It is possible to communicate with people in the past. As an observer, you influence everything you see, whether it's new information or old. I suppose it could all just be a recording, but it is possible for one's spirit in the present to influence the behaviors of people in the past. You just have to be open to it.

    Just consider that when you see someone in a recording, they believe it's really happening. One always believes the past is really happening, because it's the same thing as the present for a while, which is definitely happening. Why would it all seem so enduring if it was disappearing from one moment to the next? It wouldn't.

    Everything is like a treasure. Someone could find it. It's eternal, enduring. It seems to move, but at the same time it kind of moves sideways. Like a film. From one end of the timeline to another it has real substance, as substantial as "the material" because it is the material. As much a substance as gasoline is a substance.

    To be thoughtful is to be human, and to be thoughtful is also to be capable of doing jobs. I'm not sure a robot could ever be thoughtful towards a human being. How could it ever relate to us like we relate to each other? Should it? It seems first we have to decide upon a way for robots to generally behave. Should they be aware that they are robots, or should they believe that they are human?

    Another problem is that nobody knows what it means to be a robot. So it's hard to say what a robot should believe. I suppose it should believe all of the laws of physics. But that's just it, how do you get it to believe something? All sentient beings can believe something. To have, like, a concept of something accompanied by feelings surrounding whatever it is. Everyone knows implicitly what it means to be a human (because it must mean whatever you are experiencing.) It seems all we can do is hope that it means something implicitly to be a robot. I don't think you can just program meaning.

    Our senses perform all of these functions, but on some level we don't relate to ourselves as our senses. That is to say that if all of your senses were stripped, there would still be some part of you left. What is that part of you and how do we create it electronically? Designing a personality... Is that even a possibility? You could program a robot to say lots of smart-alecky things, but could you also design it so that it subtly understood jokes and laughed at the appropriate times? How do you design a personality?

    If you don't design it, you have to just hope that one appears in your robot, and where there is hope there is spirit, because it's a leap of faith. There is nothing about your programming that should give your robot a personality. You could give it lots of rust spots and say, "It's got personality, though, doesn't it?" But it would just slowly pick up leftover pop cans and throw them in the trash.

    Or if it moved quickly, it would just march around your house picking up trash. Hopefully it would be equipped with good motion sensors so that it wouldn't march right over you. When it stopped maybe you could choose whether or not to program it to ask you a question. Like, "How are you doing?" It would need really good human detecting sensors, though. I can picture it cleaning a house quickly and just running over an infant sitting on the floor.

    But I don't think it's possible to create an artificial intelligence on par with a human, so that it has a complex emotional existence. Maybe even an existential existence. I think the essence of a person is spiritual. Can the robot be granted access to the spirit is the question. I would say yeah. And to some extent a robot is only as alive as you believe it to be alive.
     
    Last edited by a moderator: Jul 19, 2018

