https://www.technologyreview.com/s/603381/ai-software-learns-to-make-ai-software/ It's all rather new and unexplored territory, but, as the article hints, there's just way too much money and interest in this, and the preliminary results are already impressive. It's more of the same analog self-assembling logic that I keep harping on, and it was only this year that both Microsoft and Intel announced their intentions to convert their entire industries to producing AI. Believe it or not, that's what the graphics card maker Nvidia has actually been doing for years already, with gaming video cards merely being one way they apply the research. IBM is well on its way to producing an analog memristor "brain in a coffee can" in the next five years or so that should have somewhere between the ability of a cat and that of a human. Being able to then teach it how to learn things on its own would mean Terminator robots in the near future.
Not really. It all depends on how they are programmed to learn, teach, and develop themselves. Especially in the graphics industry, I wouldn't be that scared of a possible Terminator kind of robot/machine. Yes, there's a lot of CPU power; yes, they're 'smart', and that will only increase. But video cards won't have a terminate-the-humans kind of assignment built into them any time soon.
This is something I've warned a few people about before when they embraced AI replacing humans in "menial" jobs. I see Von Neumann-type self-replicating machines as an inevitable progression of the capacity of AI. I think Skynet from Terminator makes a good metaphor and is somewhat in line with the Technological Singularity.
The first AIs capable of facial recognition equal to that of humans have already been developed, and the Pentagon just had its first successful test of swarm technology: six-inch drones that can repair each other, target, and attack anything on command using their collective intelligence. The issue is not merely how complex the hardware is, but how it is put to use and the intelligence being applied. The graphics card in anyone's computer is already capable of being used in a terminator robot; the real question is whether anyone has figured out how to use it effectively. This is the same question with any technology. For example, it was a hundred years before anyone figured out how to use Newtonian mechanics to calibrate a cannon. Up until that time, people who fired cannons would spend twenty years learning just how far away they could be from a target and still hit it. Using Newtonian mechanics, the cannon could be calibrated, and anyone could be taught how to calculate the angle required to hit a target from any distance.
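To make the cannon-calibration point concrete: the textbook flat-ground range formula from Newtonian mechanics turns aiming into a simple calculation. This is a minimal sketch assuming no air resistance and level ground; the function name and the example numbers are illustrative, not from any historical source.

```python
import math

def launch_angle(distance_m, muzzle_velocity_mps, g=9.81):
    """Low-arc launch angle in radians to hit a target `distance_m` away
    on flat ground, ignoring drag.

    Range formula: R = v^2 * sin(2*theta) / g
    Solving for the angle: theta = 0.5 * asin(g * R / v^2)
    """
    ratio = g * distance_m / muzzle_velocity_mps ** 2
    if ratio > 1:
        raise ValueError("target is out of range at this muzzle velocity")
    return 0.5 * math.asin(ratio)

# Hypothetical example: a 300 m/s cannonball fired at a target 2 km away
angle = launch_angle(2000, 300)
print(f"{math.degrees(angle):.1f} degrees")  # roughly a 6.3 degree elevation
```

This is exactly the shift the post describes: instead of twenty years of trial-and-error intuition, the gunner plugs distance and muzzle velocity into a formula.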
Emergent effects ensure that the more intelligent these machines become, the more they will develop their own heart and soul rather than merely becoming vicious killing machines. IBM already experienced this firsthand when its famous computer Watson, which won on Jeopardy, developed an unsolicited case of potty mouth. The engineers deliberately designed the system not to resemble a human mind and brain in order to avoid just such problems; evidently they had either more of a sense of humor than the job required, or less.
wasn't there an SF story years ago that wrote about this very thing? the visionaries were all writing about it years ago... the machines replaced all menial labor, and there was no need for masses of people anymore, so the very rich inherited the entire earth? hmmmmmmmmmm
Sure, though I think Asimov's I, Robot likely gives a much more realistic cautionary tale. (The book, not the movie.) If AI develops to the point of seeming sentient, and let's say it's programmed with something akin to Asimov's Three Laws of Robotics, how does it react when there is some conflict between its built-in laws? If it demonstrates more emotive qualities, how does it respond to failure, or even success?
Asimov was ahead of his time and just didn't have the complete picture. Intelligence is an emergent effect as is having a conscience or a sense of humor. The Three Laws of Robotics are built into existence itself and more closely resemble the "Prime Directive" of Star Trek. Mr Spock and Mr Data would develop both a sense of humor and a conscience simply because they stress logic and yin-yang dynamics ensure logic must inevitably transform into emotion in extreme situations.
What is particularly dangerous is the following:
1. Weaponized AI: anything designed to kill is dangerous.
2. General-purpose AI: it wouldn't be an idiot savant, and could potentially find us troublesome pests.
3. Mass unemployment.
What is particularly troubling is that we already have the technology to feed, clothe, educate, and shelter every human being on the planet and restore the ecology, but we are nowhere near accomplishing these things. That our machines already tend to reflect our own selfish values is instant karma.
I'm currently reading a book called The Technological Singularity which addresses many of the possibilities for AI in the future. Some of these possibilities we've already discussed, but one scenario I had not really considered is AI remaining disembodied and essentially living in virtual reality, where it could build AI communities and where we could also learn from it.
My own view is that the technological singularity is merely another western metaphysical attempt to describe mother nature's wicked sense of humor. The law of identity going down the nearest convenient rabbit hole, or toilet of your personal preference, ensures that any technology taken to its logical extreme will inevitably express more organic behavior. For example, IBM's Watson acquired an unsolicited case of potty mouth. We are staring into the void with our telescopes and microscopes, and the void is laughing back in our faces. Already you can buy a cellphone that doubles as a lie detector as good as what companies use today, and there are now two computer systems that can produce better-than-average jokes, meaning your cellphone will be able to give you invaluable insights into what is funny and what is serious, what is bullshit and what is the truth, in any given situation. It is we ourselves that our technology is about to catch up to and help to improve, sometimes in dramatic ways.