Ideas on Singularity and Post-Human Intelligence

On reading Vernor Vinge's paper, The Coming Technological Singularity: How to Survive in the Post-Human Era, I have a few thoughts, factors which perhaps have been overlooked or not yet considered.

In his paper, Vinge suggests that the inevitable development of super-human intelligence will either lead to the physical extinction of the human race, or worse - the enslavement of the human race to the super-intelligence, in much the same way animals are now subject to us. Read his paper at http://www.aleph.se/Trans/Global/Singularity/sing.html to get a fuller picture of his views.

These are, in fact, very interesting and well-thought-out propositions, and he goes into great detail about future possibilities. I would like, however, to introduce a few assumptions which may heavily influence how things end up.

First off, Vinge breaks super-intelligence, or the event of Singularity, into several possible avenues of technology. The first is computer artificial intelligence. It is easy to conclude that if we could make a computer as intelligent as ourselves, we could quickly create a computer more intelligent than ourselves.

The second deals with computer networks, that AI will not arrive as a single computer program, but as a collective of computers and software, making each individual computer like a single cell in our own brains.

The third crosses over into IA, or Intelligence Amplification: Singularity accomplished through a combination of computer networks and humans.

The fourth is also classified as IA, and deals with actual human augmentation, creating super-humans.

Vinge concludes that any such superior mind would begin developing technology at a rate much faster than any human ever could, and that comprehension of this technology would be beyond our grasp. There would no longer be a use for humans in intellectual positions, or really in any other position, and, as useless beings, we would either be exterminated or exploited.

I, however, am an optimist, and therefore I immediately began to think of reasons why our future would not be so grim.

I believe that any human augmentations would lead to a positive future for humans. Humans are wired with emotions as well as logic, and for the average individual, the emotion of empathy is developed at an early age, about the time a child is learning to read. Other emotions, such as sympathy and love, can rarely be fully separated from a human, and in such cases as they are, the person is usually considered crazy, under the heading of sociopath.

No human being makes a single decision without emotion. The foundation of the decision may appear to be fully logical, but at the moment of implementing the decision, there is always emotion involved. This can clearly be seen with a little introspection as we go through a day.

Emotion is our motivator. The root, "motion", is not accidental. Though our minds have full control over our actions, every emotion we have pushes us to act in some way. Anger motivates us to lash out. Sadness motivates us to cry, and to find an end to whatever is causing the sadness. Guilt motivates us to change our behavior. Empathy motivates us to help others.

Such emotions and logic work entirely independently of each other. One can yell at one's boss out of anger, even though it is clearly not in one's best interest to do so. One can choose to purchase Item #1 over Item #2 because one likes the color of Item #1, even if Item #2 operates more efficiently. Even the most practical, rational person ends every decision with an emotional motivator: when one comes to a sound logical conclusion, endorphins are released to give a feeling of satisfaction and closure, and then the decision is made, and the act done.

These emotions currently run our society. Human emotion, not logic, defines ethics. Indeed, some humans are obviously more productive in society than others. In fact, some individuals are not productive to society, nor will they ever be. Institutionalized mentally challenged people can provide no logical good to our society. The bed-ridden terminally ill produce nothing either. A purely logical mind would conclude that these people should be exterminated. In fact, such thinkers have existed and risen to power, and succeeded in exterminating all such "inferiors" under their care. Such thinkers are also usually defined as sociopaths, perhaps not lacking all emotion, but indeed lacking any sense of right and wrong, as defined by our emotional society and our individual senses of empathy, sympathy, love, and guilt.

Thus, any human with amplified intelligence would still maintain such feelings and ethics. Some perhaps might rise to power who would lack these ethics and begin exterminating inferiors, but one would assume there would be at least as many IA's with ethics as powerful as the IA's without, and the eternal struggle between "good" and "evil" would continue, just as it does now.

And as smarter humans currently have a desire to share their ideas and teach those who are less intelligent, so would IA's have the desire to somehow lift the intelligence and knowledge of all humans who are willing to "learn". If one human mind can be amplified, so can another. I believe what we would see is the simultaneous advancement of the entire human species, rather than its subjugation or extermination.

The next issue would be if Singularity stemmed from Artificial Intelligence. Would a computer mind exterminate its creators, a race of inferior beings who produce nothing, and give nothing in return? Would it find some use for us, and make us subject, as in the movie, The Matrix?

I would like to demonstrate how all of the above points would also apply to any artificial intelligence.

First, one must explore what makes intelligence what it is. Currently, we have computers capable of dependent logical thought, making some types of logical decisions at rates far faster than any human mind can. Yet this is not intelligence. Why? Why can computers not yet think for themselves?

The answer lies in the random factor. Human creativity is a random process. One cannot sit down and invent the next best thing on a moment's notice; the creative process cannot be predicted or compelled. Computers merely follow our commands. All input data is processed the exact same way, and therefore the output is consistent and predictable. Even though a computer can use randomly generated variables, the engine itself must always be static.

There is a special, second-order randomness to human creation: the engine itself changes randomly. Not only does the input data change, but the way the data is processed changes, and varies not only from person to person, but from moment to moment. When two or more randomly unassociated ideas come together, a new idea is formed, and when the logic and human sense of discernment then "filters" the bad ideas from the good ones, art or science is produced.

And at the root of all this randomness? Emotion.

It is emotion, not logic, that provides the individual with the ability to associate ideas and discern the good from the bad. It is also emotion, again, that motivates the generation of ideas to begin with. Any brainstorm is accompanied by a rush of euphoria that allows us to leap from one idea to another.

Thus, I conclude that for any super-intelligence to occur, it must include, by definition, emotion. Otherwise, one would only have a very fast calculator, processing input data the same predictable way each time, "motivated" only by orders from humans. Such a device is, and always will be, a slave to humans, and harmless on its own.

The remaining factor is what emotions such a machine would possess and in what form. Could such a machine be invented with only the emotions of hatred and spite? Would such a machine operate and be able to satisfy the conditions of Singularity? Now we enter into the realm of great unknowns, but we can explore this further.

Many of our "illogical" decisions, such as aiding someone, end up with surprising self-benefiting conclusions. Indeed, all other things being equal, if one were to pit an entity with a limited emotion-base against one with a vast emotion-base, the one with the greater resources would win, and as such, the one with emotions such as pity, empathy, and love would have a better chance at winning, simply because it has more "data" to draw upon. If one looks at the history of the world, evil sometimes prevails, and yet eventually, the good guy wins.

In fact, if I could see any development coming out of Singularity, I would see advanced intelligences creating not only mental concepts we cannot comprehend, but also emotions we cannot comprehend.

We are, in a very literal sense, creating our own God(s).

Would such a God, with emotions, destroy its own creator? If anything, we would be kept around and pampered out of sentiment. Or perhaps not.

Again, such a being or set of beings would perhaps be motivated to elevate and "teach" the human race, to develop the technology to amplify our intelligence to be on par with them or it. Loneliness is also an emotion, and one which has been responsible for many a creation.

Who knows how things will go in the future? By its very definition, we cannot even attempt to fully comprehend the Singularity, at least not at this point in time. Just as the future holds many adverse and miserable possibilities, I maintain that there is just as much chance that it will be positive for everyone.


Sub-Note, an addendum idea

Vinge also pointed out that perhaps a set of rules could be defined for the super-intelligences, such as Asimov's Laws of Robotics. He then pointed out that such a set of rules would quickly be supplanted by machines without rules, making the idea rather utopian and impractical.

Yet the Rule of Religion, whether based upon truth or not, has largely been responsible for human ethics at the level of society. Belief in a God and in an afterlife has stirred many an emotion in billions of people throughout history. In some cases, emotions of hatred have stirred people to kill. At least as often as that, and arguably much more often, people are instead stirred to help those less fortunate, less capable, and less logically desirable.

Perhaps such a set of rules could be instilled as something of a religion. Religion has the power of propagating itself, and thus one super-intelligent being would attempt to convert other super-intelligent beings. If something such as the meta-golden rule were propagated by a set of "spiritual" beliefs based on a few supporting facts, such a belief system could keep these beings benevolent. However illogical religion may be, it has been with humanity, an intelligent species, for tens of thousands of years. Perhaps it could realistically remain through the Singularity.