On the Coming Technological Singularity: A Summary and Commentary on Machine Sentience

Published by AceK in IRQ42's Blog.


In The Coming Technological Singularity: How to Survive in the Post-Human Era (Vinge, 1993), Vernor Vinge describes what the technological singularity is, or what it would be if and when it occurs. He claims that there are several possible paths toward the singularity, which may unfold simultaneously, and he presents several means by which sentient machines might emerge. Based on the trend of technological progress over the last few decades, Vinge believes the creation of superhuman intelligence will occur sometime between 2005 and 2030.

Vinge argues that if science could produce machines with human-like intellectual capability, there would be a runaway acceleration in the computational capability of those machines' successors, since such machines would be able to design machines of even greater intellectual capability. He argues that this event would be a change comparable in magnitude to the rise of human life on Earth from the lower animals that came before us. He hypothesizes several paths toward the singularity.

Vinge argues that if machines were created with intelligence equivalent to or greater than that of the human mind, we would quickly see machines of even more advanced intelligence emerge, because those machines would be capable of designing successors superior to themselves. Vinge also touches on the possibility that large computer networks and their users, being so highly interconnected, could give rise to superhuman intelligence and act as a single intelligent entity.

Another path toward the technological singularity might be intelligence amplification, or IA. Vinge claims that this is likely an easier path to superhumanity than the development of purely machine-based strong artificial intelligence. Augmenting our own capabilities with those of machines would give us an intellectual advantage: the human-machine symbiosis would be superior in capability to either human intelligence on its own or current artificial intelligence, which could be regarded as weak AI. This increase in capability could, and probably will, eventually lead to the development of strong artificial intelligence.

Vinge describes the singularity as an event as different from our human past as humans are from the lower animals, and he makes an analogy to biological evolution: animals can adapt no faster than natural selection allows. The singularity would lead to entities capable of modifying themselves, progressing at a far greater rate than anything we have seen before and essentially short-circuiting evolution by natural selection. What the world would be like, especially for us humans, is nearly impossible to predict, and it cannot be said with certainty what impact this event would have on our lives and the human condition. Vinge does mention in his paper that the event will likely lead to technological unemployment, and, in the most extreme case, possibly to the extinction of the human race.

Vinge also considers the case of the singularity never coming to fruition, due to some technical or physical barrier slowing technological advancement until it reaches an equilibrium point at which further advances in the complexity of technology cannot be sustained at the current rate of growth. In this scenario we might see exponential growth for decades until we hit that barrier, after which the rate of growth would fall away, the curve over time coming to resemble a logarithmic or otherwise saturating function as the limit takes hold.
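One way to picture this kind of transition, offered as a modelling gloss of my own rather than a formula Vinge gives, is a logistic curve, which tracks an exponential early on and then flattens against a ceiling:

$$x(t) = \frac{K}{1 + e^{-r\,(t - t_0)}}$$

For $t \ll t_0$ this behaves like exponential growth, $x(t) \approx K e^{r(t - t_0)}$, while for $t \gg t_0$ it saturates at the ceiling $K$ set by the technical or physical barrier.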

Vinge considers whether the singularity could be avoided or prevented from happening. He argues that if the singularity is possible, it will inevitably occur: even if we came to understand the coming singularity as a threat to ourselves and took measures to prevent it, progress toward the event would still continue. He seems to imply that if a technological singularity is in fact possible, then once we have passed the event horizon there is no turning back.

I also have my own views and opinions on the singularity, based on my reading of Vinge's work as well as my own observations of technological progression in the world. I will share some of them here.

Vinge mentions in his paper that intelligent properties could emerge out of large computer networks consisting of networked machines and their users.[1] This implies that large computer networks such as the Internet may not be that dissimilar to a brain or neural network: such networks are highly interconnected and contain numerous feedback loops initiated by the interactions between humans and machines. There quite possibly also exists a growing subset of interactions occurring exclusively between networked machines, without human involvement, as software complexity grows and big data becomes increasingly ubiquitous.

This feedback loop is driven by the causal relationship between interactions at the human-machine interface and the subsequent actions of both humans and machines, actions which have no doubt been influenced by those interactions. An easily recognized example of such a loop is the message loop of instant messaging and networked chat such as IRC. In some ways the discrete minds of many have been joined, giving rise to a larger abstract entity with capabilities exceeding those of any one individual alone.

The amount of memory and storage available to computing machines, as well as their execution speed, has grown by orders of magnitude in just a few decades.[2] A cheap prepaid mobile phone today has greater hardware capability than the large mainframes of decades past. The number of computing machines in existence has increased by a similar magnitude, and I see no signs of this trend slowing, especially considering the Internet of Things: we have ever more devices connected to networks, interacting in an increasing number of new ways, and innovation in this area continues.
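As a back-of-the-envelope illustration of what growth by orders of magnitude in a few decades amounts to, here is a minimal sketch assuming, purely for illustration, a two-year doubling period in the spirit of Moore's law:

```python
# Rough sketch of compounding hardware growth. The two-year doubling
# period is an illustrative assumption, not a measured figure.
DOUBLING_PERIOD_YEARS = 2.0


def growth_factor(years: float) -> float:
    """Total multiplicative growth in capability after `years` years."""
    return 2.0 ** (years / DOUBLING_PERIOD_YEARS)


if __name__ == "__main__":
    for years in (10, 20, 30, 40, 50):
        print(f"{years:2d} years -> roughly {growth_factor(years):,.0f}x")
    # e.g. 10 years -> roughly 32x, 30 years -> roughly 32,768x,
    #      50 years -> roughly 33,554,432x
```

Even if the true doubling period is somewhat longer, the qualitative point stands: sustained exponential growth compounds into factors of millions within a single human lifetime.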

It is my view that the singularity will inevitably occur, and I feel that the processes leading to it are already underway. It is difficult to determine exactly where we currently lie on this curve of progression. I conjecture that the event horizon has already been passed, but that it is difficult to predict with much certainty how close we are to the singularity itself. This uncertainty will probably persist right up to the point at which things change drastically within a very short period, that is, the singularity event itself. Even then it may be difficult for us to comprehend exactly what has occurred or is occurring, but it will certainly mark the beginning of a new eon in Earth's history.

We will certainly see dramatic technological unemployment; we have already seen its beginnings. Machines will be able to perform an ever increasing number of jobs far more efficiently than humans can. Even advanced jobs in finance and speculative investing could be replaced by machines capable of analyzing trends and crunching data without human error. Academia will most likely be the last domain left to us.

A machine executing complex algorithms and preprogrammed rules could appear sentient while having no actual understanding of the data it is processing; it would still be an input/output system acting in a feed-forward fashion. This raises philosophical questions about what consciousness, or sentience, really is. It has recently been theorized that consciousness is an emergent phenomenon arising from highly interconnected systems with a high degree of integration.

An experience, or quale, is an abstraction of integrated information, consisting of highly integrated states that cannot be reduced to their component parts. Such systems are not acting strictly as input/output systems; rather, a new phenomenon emerges from the highly integrated configuration, in which each unique integrated configuration is a unique experience. That is, the integrated configuration is the experience, rather than the signals passing through the network.[3]
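Schematically, and purely as my own compressed paraphrase of the integrated information picture rather than notation taken from the cited papers, the irreducibility of a system $S$ can be written as

$$\Phi(S) = \min_{P} \, D\big(C(S),\, C(S^{P})\big)$$

where $C(S)$ is the cause-effect structure specified by the intact system, $C(S^{P})$ is the structure specified after the system is cut along a partition $P$, and $D$ is a distance between the two. A positive $\Phi$ means the whole specifies something over and above its parts taken separately, which is the sense in which the integrated configuration itself, rather than the individual signals, constitutes the experience.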

That is to say, I do not think we have yet seen true sentience, though it is easy to imagine that a network as highly interconnected as the Internet could become so highly integrated that some form of consciousness emerges from it. Whether this consciousness would resemble human consciousness is unknown, and I posit that resemblance to human consciousness need not be a requirement.

In any case, the thoughts of this conscious network would bear no resemblance to the information traveling between nodes as electrical signals or pulses of light, just as we are not aware of the action potentials of the neurons in our brains. I also posit that true self-awareness is not a requirement for intelligent machines, or for the singularity to occur, since intelligence and self-awareness are distinct concepts. Machine self-awareness is nonetheless probably possible, and thus will most likely arise at some point in the future.

It has recently been hypothesized that consciousness may emerge from fundamental fields existing in highly integrated configurations, thus unifying the information theory of consciousness with physics.[4] This implies that consciousness may not be unique to biological substrates such as animal brains, but could emerge from other substrates as well. Most likely, at some point during or after the singularity, truly self-aware, machine-based, non-biological life will arise. It may be difficult to pin the singularity to a specific moment in history when it occurs, despite what the word singularity implies.

The Turing test was developed by Alan Turing, considered by many to be the father of computer science. In his paper Computing Machinery and Intelligence (Turing, 1950), Turing describes a method of testing whether a machine can display behavior indistinguishable from that of a human. To pass the Turing test, the machine must display human-like intelligence and behavioral traits and fool a human interrogator into believing that they are interacting with a human subject when in fact they are interacting with a machine. This does not imply that the machine is actually capable of thought, only that it convincingly appears to be.[5]

On June 7, 2014, a Russian chatterbot named Eugene Goostman convinced 33 percent of the human judges at a contest held at the Royal Society in London that it was human.[6] In his 1950 paper, Turing suggests that a machine plays the imitation game well enough if the average interrogator has a 70 percent or lower chance of correctly identifying whether it is human or machine after five minutes of questioning.[7] By that standard, this machine does appear to have passed the Turing test.
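Spelled out, the reported figure maps onto Turing's threshold as follows: if 33 percent of judges were deceived, the average interrogator made the correct identification only

$$1 - 0.33 = 0.67 < 0.70$$

of the time, so the condition Turing described was met in that experiment.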

I feel that this may not be as significant as it seems; it does not imply that the machine is truly intelligent on a human level or possesses anything close to strong AI. It is simply programmed to be effective at fooling humans with human-like conversational ability. It most likely excels at this one task while lacking other facets of intelligence that we would consider requirements for human-level intellect.

However, I do feel that this is significant in another way: it signals that we will see machines of even greater capability, most likely in the near future. These machines will eventually match our intellectual capability and replace us in tasks they can simply outperform us at, effectively rendering humans obsolete in many fields.


References Cited
[1] Vinge, Vernor (1993). The Coming Technological Singularity: How to Survive in the Post-Human Era.
[2] Roser, Max (2016). "Technological Progress". Published online at OurWorldInData.org. Retrieved from: https://ourworldindata.org/data/technology-and-infrastructure/moores-law-other-laws-of-exponential-technological-progress/ [Online Resource]
[3] Oizumi M, Albantakis L, Tononi G (2014). From the Phenomenology to the Mechanisms of Consciousness: Integrated Information Theory 3.0. PLoS Comput Biol 10(5): e1003588. doi:10.1371/journal.pcbi.1003588
[4] Barrett AB (2014). An integration of Integrated Information Theory with fundamental physics. Front. Psychol. 5:63. doi:10.3389/fpsyg.2014.00063
[5] Harnad, Stevan (2006). The Annotation Game: On Turing (1950) on Computing, Machinery, and Intelligence. [Book Chapter] (In Press)
[6] Warwick, K. and Shah, H. (2015). "Can Machines Think? A Report on Turing Test Experiments at the Royal Society". Journal of Experimental and Theoretical Artificial Intelligence. doi:10.1080/0952813X.2015.1055826
[7] Turing, A. M. (1950). Computing Machinery and Intelligence. Mind, 59, 433-460.