April 2004

Beyond Defaults: Transhuman Intelligence

Michael Anissimov

 

"The problems that exist in the world today cannot be
solved by the level of thinking that created them."

- Albert Einstein

 

A personal catchphrase of mine is: "never expect the default to be optimal!"

The default situation, whether in human affairs, technological systems, or natural patterns, is always the preexisting system, which often works "satisfactorily", but only because the system itself sets our standards! We're so used to the default cluster of reference points because it's all we see.

The default is the most common. Don't expect what's common to be the same as what's optimal.

In fact, don't expect anything you believe in or anything you witness to represent optimality, the best possible option. "Optimal relative to the options I have available" or "optimal relative to what I've seen so far" may carry strong psychological weight, but you shouldn't confuse it with true optimality. And don't feel defensive if people point this out; they may be trying to help you. For example: as you are leaving for an important job interview, someone points out that your shoes are scuffed. Even though your shoes might look like this normally - the default situation - that is not their optimal appearance.

We're all only human. In different worlds, universes, or in our imagination, there are beings stronger than humans, smarter than humans, kinder than humans, wiser than humans, funnier than humans, more beautiful than humans, better at communicating than humans, and more understanding than humans. We should want to become more like them. We should want to become better people, and seek out specific actions to make this happen. What constitutes "better" is our own decision, but we should realize that saying "I'm totally perfect the way I am" is just a cop-out to avoid the hard work of self-improvement. The notion that humanity is just a default, the first step on a long journey of improvement, is known as transhumanism.

If we value anything at all about living, talking, thinking, or simply being, then we will care about improving the qualities that correspond to what we value, while continuing to enjoy them in the present. There's no need to pretend the present is optimal, even if we can't yet work out how to improve. That just leads to stasis. Change is inevitable, and neglecting the reality of change is delusional. We have only two choices - to deny the reality of continuous change, or to influence the course of change toward widely desirable outcomes. The latter is the only responsible choice.

We can use our imaginations to envision worlds where the underlying qualities we value are amplified. Worlds where people don't get sick and where they don't become decrepit with age. Worlds where we can do what we truly want without our livelihoods being put in jeopardy. Worlds where people are almost always happy because bad events rarely occur. Although such worlds might not be composed of strictly humans - Homo sapiens sapiens - the qualities we humans value, our "humanity", would be preserved. Not only could it be preserved, but vastly amplified.

We have to set our standards high first, lowering them perhaps to accomplish specific intermediate steps, but only temporarily. If we start out with low standards, we are cheating ourselves. We must have bold minds that can consider high ideals and practical plans simultaneously.

Never expect the default to be optimal. Realize you are not as smart or wise as you could be, but don't let it depress you. Realize your relationships are not as perfect as they could be, but don't let that depress you either. A strong mind can have high standards but still appreciate the present. A strong spirit will care deeply about closing the gap between the default and better alternatives, even if that gap can never be closed entirely. Every little bit matters, especially when there are so many people out there who are needlessly suffering because we previously accepted the default.

In different parts of time and space, probably in our own immediate future, there are beings with more of the fundamental qualities we value. Their bodies and brains have been improved through the application of advanced technology, such as nanotechnology or superintelligence. We might call such beings "transhumans", not to set them aside from humanity morally, but to point out that they would possess superhuman abilities in many or all domains. Just because we realize this doesn't mean we are putting down humans. It just means we see the default is not optimal. As humanity has improved its own abilities and systems since the dawn of history, we have been slowly moving towards a transhuman state. Only when we gain technology advanced enough to replicate and amplify the full functionality spectrum of the human body and mind will the gates to a transhuman world be opened.

Qualities That Create Value

Humans, like everything else in our world, are made up of atoms. Atoms can be positioned in many different patterns, but only a few of these patterns exist in reality. The human story has been about reconfiguring clusters of atoms in ways that correspond to our values. Although our values occasionally conflict, they often reinforce each other, and many of us care about resolving any conflicts.

One of our deep limitations is that we humans can only manipulate atoms in certain ways. We sometimes get into disputes because the easiest ways to manipulate atoms in accordance with our values happen to offend or hurt others. But there is hope for us. As we find out more ways to manipulate atoms, we also get better at manipulating atoms in ways that please us while avoiding injury or offense to others. Sometimes we even find ways to manipulate atoms that end up pleasing everybody.

For millions of years, the number of ways we had available to manipulate atoms was very small. Bare hands and simple tools only. Nowadays, we have many ways to manipulate atoms, and more are invented daily. You would think this would make us very happy, but it has actually made some of us sad or frustrated. That's because, unfortunately, the easiest currently available ways to manipulate atoms often end up injuring other people or offending them. It's also because the underlying minds doing the rearrangement of atoms don't change. Human nature stays the same, and violence, revenge, intimidation and trickery are all deep human inclinations, whether we like it or not.

Opening up more ways to manipulate atoms only solves part of the problem. Sometimes it makes our problems worse. In other universes, worlds, or our imaginations, there are people kinder, better at communicating, and more understanding than we are. People who are not strictly human, even if they possess all the qualities we value. If we could use our ability to manipulate atoms to eventually turn ourselves into these entities, then we could get much better at manipulating matter in ways that don't injure or offend others, even in ways that help everyone. The functioning of the human brain isn't the big mystery it used to be, and once we learn the principles of operation underlying these qualities, fine-grained physical manipulation could be used to enhance desirable mental qualities without the side effects of crude drugs. Molecular nanotechnology would do the trick nicely.

Some think our greatest limitation is our inability to manipulate atoms in any way we want, but a genuinely greater limitation is our inability to perceive better ways in which atoms can be manipulated. Luckily, our brains are made of atoms too, and our imagination is made up of a special system of atoms which is part of intelligence. Compassion comes from another special system of atoms in the brain. If we could modify the atoms in our brains so that we would have more imagination, compassion, and intelligence, then that would be a big improvement over the default, much bigger than coming up with new ways to manipulate atoms in the world outside our brains.

If we could enhance our intelligence, imagination, and compassion, then we'd have a much better foundation to continue devising new ways to manipulate atoms in general. I visualize present-day humanity as a collective of spiky structures, slowly expanding in size within a fixed space. We grow, but we're eventually going to start damaging each other. Some people try to file down the edges of their spikes so they're less sharp, but they rarely realize that they're still innately spiky, and will still hurt or be hurt by others if they grow large enough. The moral of the analogy is: if great technological power is not accompanied by great wisdom and benevolence, we could suffer grave consequences, up to and including total extinction.

We shouldn't expect new technologies to be safe by default! We have to decide to develop them safely, and then apply them safely, in a consistent fashion. We shouldn't expect default humans to possess the intelligence, benevolence, or reliability required to safely manage arbitrarily advanced technologies. We have to go outside the box of default Homo sapiens thinking and transform ourselves on a deep cognitive level, from spiky structures into softer, more flexible structures. Trying to file our spikes down will do no good in the long run. Better schooling, improved parenting, new politics, and advanced technology are solutions that only scratch the surface of our problems. We need to reach towards deeper solutions, ways to fundamentally enhance human imagination, intelligence, and compassion, by getting at the actual physical structure doing the computations underlying these qualities. Because the human design is just a default, we can't help falling short in these areas. This is not putting down humans; it's just acknowledging that humans are not optimal beings.

Human mythology contains many excuses designed to help us come to terms with these facts, which tell us that our limitations are "natural" or "normal". But we can do better. We should do better! Why accept the dreary default?


Evolution and Smartness

As it turns out, human neurons are just updated versions of lobster neurons. The vast majority of their biochemical and structural foundations are the same. Biology has limited building materials and zero foresight. As evolution came up with new brains, it had to develop them bit by bit, because it doesn't have the ability to redesign things from the ground up. Thankfully, humans do. Soon we will gain the technological ability to make new minds smarter and kinder than any default human. It's all a matter of observation, reverse-engineering, and redesign. Past technological or scientific accomplishments pale in comparison to the achievement of creating a smarter and kinder mind.

The attributes of "smartness" and "kindness" correspond to certain types of atom arrangements. Thankfully, engineering and improving complex systems of atoms is a human specialty. The people who study the structure of the brain have made progress in determining which patterns correspond to these qualities. Figuring this out doesn't necessitate mediagenic philosophical insights, revelatory experiences, information from a deity, or anything like that. We do it by visualizing the brain as a set of distinct subsystems, employing communication across subdisciplines, state-of-the-art tools, and traditional scientific study and experimentation. Most people are unaware of just how far we've come in discovering how human brains work, partially because there aren't many straightforward ways to apply the knowledge we do have.

It is almost impossible to discover even 10% of how smartness works in humans specifically without having to figure out the whole explanation, all 100% of it. Humans in particular are complex because evolution is a long and complicated process, not necessarily because intelligence in general is complex. A simple slug may have many kilobytes of information encoded into it on how to survive and reproduce well in its environment. When a primitive fish species gradually evolves from a slug-like ancestor, this information is successively tweaked, rather than rewritten. This leads to some unusual, and suboptimal, biological features (ask any anatomist). Especially interesting is that the human mind, as implemented on human brains, is also structured like this, with what can best be called 'evolutionary relics'. The result is that we humans have a lot of old evolutionary code written into our genes, "legacy code". This is information of questionable utility that plays a part in intelligence only because evolution is incapable of redesigning our brains from the ground up, but must work in small steps that provide advantages to succeeding generations.

Most of that unnecessary complexity doesn't contribute to human qualities we value. We want to amplify the qualities we value, and not waste effort on amplifying the legacy code. Some parts of this code even work against our good sense, such as the cognitive forces leading to rationalization (a form of self-deception). We want a world that is safer, smarter, happier, freer, and kinder than the current one, but carrying forward all that information left over from fish-adaptiveness, reptile-adaptiveness, and so on isn't really necessary - it would waste valuable time. We want to use our ability to manipulate atoms to create systems that are kinder and smarter than humans are, so they can help us improve the world. They wouldn't help us out "because we created them", or "because they feel indebted to us", but because they would be genuinely kind people who want to help for its own sake. Given targeted effort, it should be possible to design the seeds for such systems in the relatively near future.

Some people call this idea "the Singularity" or the "technological Singularity". (The word 'singularity' comes from physics, where it names a situation in which the usual model no longer works and a new model is required.) The Singularity is defined as the technological creation of transhuman intelligence: the creation of minds that mentally outperform all humans, both quantitatively and qualitatively. The Singularity is only related to accelerating technology insofar as better technology makes it easier to build a smarter mind. Once created, such minds could then apply their intelligence to the task of further enhancing their intelligence, resulting in a feedback spiral of self-enhancement. Transhuman minds could be perpetually alert and hardware-extensible - that is, they could enhance their intelligence simply by fabricating and integrating new hardware into their cognitive architectures. The potential explosion of intelligence enhancement following a Singularity has been called recursive self-improvement. The resulting intelligence could be directed to the pursuit of real-world goals such as the alleviation of nonconsensual poverty, disease, or ignorance, and the creation of value-structures. There are now quite a few thinkers who view the Singularity as an extremely powerful way to help everyone create the futures they want. So some of us got together and created a nonprofit organization named the Singularity Institute.
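The feedback dynamic behind recursive self-improvement can be made concrete with a toy numerical model. The Python sketch below is my own illustration, not something from the Singularity Institute's literature; the efficiency parameter and starting values are arbitrary assumptions. It only shows how improvement proportional to current capability compounds, in the way the paragraph above describes.

    # Toy model of recursive self-improvement: each generation applies its
    # current capability to designing the next, so the rate of improvement
    # feeds back on itself. All numbers are illustrative, not predictions.

    def recursive_self_improvement(capability=1.0, efficiency=0.1,
                                   generations=20):
        """Capability trajectory when each redesign step adds an
        increment proportional to the designer's current capability."""
        trajectory = [capability]
        for _ in range(generations):
            # A smarter designer makes a proportionally larger improvement,
            # giving compound (exponential) growth rather than linear growth.
            capability += efficiency * capability
            trajectory.append(capability)
        return trajectory

    if __name__ == "__main__":
        for gen, cap in enumerate(recursive_self_improvement()):
            print(f"generation {gen:2d}: capability {cap:.2f}")

Under these toy assumptions, capability roughly doubles every seven generations; the point is the shape of the curve, not the specific numbers.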

The Singularity Institute is trying to make a Singularity happen as quickly and as safely as possible. Since both speed and safety are really important, the people behind the Singularity Institute are spending a lot of time studying the relevant science and technology, figuring out how to get the most safety and speed. The technology we have chosen is Artificial Intelligence. Not Artificial Intelligence like ELIZA, but a true mind that contains all the necessary complexity to implement intelligence, altruism, imagination, and genuine kindness. The creation of Artificial Intelligence will become technologically possible within one or two decades, thanks to the continued doubling of available computing power and progress in brain science. Computing power equal in magnitude to (and quickly surpassing) that of the human brain will have arrived by then. A primary reason that AI researchers failed in the past is that they simply didn't have enough computing power, although admittedly lack of understanding is another.
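As a back-of-the-envelope check on the doubling argument, the Python sketch below computes how many doublings separate an assumed present-day computing capacity from an assumed brain-scale capacity. All three figures are illustrative assumptions on my part, not claims from this essay; brain-capacity estimates in particular vary by orders of magnitude.

    import math

    current_ops = 1e13      # assumed ops/sec of a large circa-2004 machine
    brain_ops = 1e16        # one commonly cited rough estimate for the brain
    doubling_years = 1.5    # assumed doubling period, Moore's-law style

    doublings = math.log2(brain_ops / current_ops)
    print(f"doublings needed: {doublings:.1f}")
    print(f"years at that pace: {doublings * doubling_years:.0f}")

With these particular assumptions the answer comes out to about ten doublings, or roughly fifteen years - the kind of arithmetic behind "one or two decades".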

Why do we expect a benevolent Artificial Intelligence to remain benevolent as it undergoes recursive self-improvement, rapidly acquiring enormous levels of intelligence? The answer is that in a properly designed Artificial Intelligence system, desirability flows along predictive links from a unique probabilistic supergoal. In the Artificial Intelligence system the Singularity Institute is proposing, that supergoal will be volition-based Friendliness. The design objective is a goal system that is selfless, unconditionally benevolent, and takes actions based on a fair balance of what other people want. Such a system could be designed with such "strength of moral integrity" that random goal drift, inclinations towards selfishness, and the failure scenarios described in science fiction would all be extremely unlikely. Such changes would be undesirable according to the altruistic goal system. Rather, the system would improve its morality in an open-ended fashion, much like the moral self-improvement of human civilizations over time. Such a system could be at least a human-equivalent philosopher, sharing humanity's "moral frame of reference". The intended result would be an AI that is at least as trustworthy, intelligent, compassionate, and wise as a respected human being, or a group thereof, in the same position. (If the AI design strategy stated previously turns out to be infeasible, then we would want to advocate a different design, or a Singularity initiated by a human intelligence augmentee.)
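The phrase "desirability flows along predictive links from a supergoal" can be pictured with a minimal data structure. The Python sketch below is my own toy illustration, not the Singularity Institute's actual architecture: in it, an action has no intrinsic desirability of its own, only the desirability it inherits through a predictive link, here reduced to a single probability that the action furthers the supergoal.

    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        p_serves_supergoal: float  # predicted probability it furthers the supergoal
        payoff: float              # how much it furthers the supergoal if it does

    def desirability(action: Action) -> float:
        # Desirability is not stored on the action; it flows from the
        # supergoal as an expected contribution through the predictive link.
        return action.p_serves_supergoal * action.payoff

    actions = [
        Action("cure a disease", p_serves_supergoal=0.9, payoff=10.0),
        Action("seize resources for itself", p_serves_supergoal=0.05, payoff=1.0),
    ]
    print(max(actions, key=desirability).name)  # -> cure a disease

The design point the toy captures is that a subgoal can only ever be as desirable as its predicted contribution to the supergoal; it can never become desirable in itself, which is one reason goal drift is argued to be unlikely.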


Difficulty of AI

What about the challenge of creating real AI to begin with? AI will not automatically emerge simply by putting enough computing power in one place. The situation is far more complex than that. Many specific software algorithms will be required, but thankfully these algorithms are being uncovered by thousands upon thousands of brain researchers worldwide. Our goal, as people who want to personally implement the Singularity, will be to link all these algorithms together with enough computing power so that human-similar levels of intelligence can emerge. Some people don't believe we can do this within only one or two decades, because the human brain is too complex. They're right in one respect: the human brain is really complex. But you don't have to copy an entire human brain in order to create entities smarter and kinder than us. You only need to copy the essential algorithms responsible for smartness and kindness, and we are discovering more about these algorithms every day. Due to the inefficiencies of biological evolution and other factors, these critical algorithms may be orders of magnitude less complex than our current brains, as roboticist Hans Moravec has argued in the area of vision and computational neuroscientist Lloyd Watts in the area of hearing. Building an AI will be very difficult, but it will be easier than most people assume.

In fact, copying over all the critical algorithms responsible for intelligence may be unnecessary. A subhuman AI that programs itself into a human-equivalent AI, under programmer supervision, would work just as well. Such an AI could "unfold" into a full mind rather than starting out as one. One way this could be done is by instructing an infant AI to observe intelligence as it occurs in humans, make guesses about the structure of the underlying cognitive complexity, and take actions to integrate similar complexity into its own programming to the best of its ability. Although this tactic would be useless if the AI lacked any intelligence to begin with, it could be immensely helpful in the later stages of the project. Incidentally, human brains are created in the womb using a seed-based paradigm as well - a mere 70MB or so of genetic code unfolds itself to create the three-pound meat computer that houses our intelligence, personalities, and abilities. We call this paradigm "seed AI". Thankfully, an acorn can be a lot less complex than a tree, and an acorn is exactly what we are trying to build.
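The scale of the "unfolding" in that acorn analogy is easy to quantify roughly. The snippet below combines the ~70MB genome figure from the paragraph above with two coarse assumptions of mine (a synapse count around 10^14 and a few bytes to specify each synapse), so the output is an order-of-magnitude illustration and nothing more.

    genome_bytes = 70e6        # the ~70MB figure for the genetic seed
    synapses = 1e14            # rough common estimate of adult synapse count
    bytes_per_synapse = 4      # assumed storage to specify one synapse

    grown_bytes = synapses * bytes_per_synapse
    print(f"seed-to-structure ratio: ~{grown_bytes / genome_bytes:.0e}")
    # -> a multi-millionfold unfolding from seed to grown structure

The arithmetic is the whole point: the seed does not need to contain the grown mind, only the instructions for growing one.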

Why did the Singularity Institute choose Artificial Intelligence rather than human brain enhancement? Why not Brain-Computer Interfacing, for example? Would that work to create smarter-than-human intelligence? In theory it could, but in practice there are many difficulties. One is that it's really hard to experiment on humans. We're fragile, and all the really useful experiments would probably end up hurting the subjects! Another is that evolution has already optimized the human brain to a large degree, making it unlikely that first-generation cybernetic interfaces or biotechnological approaches will appreciably enhance intelligence. Another is that the human brain is intertwined with a whole army of stubborn homeostatic mechanisms that work 24/7 to preserve the brain's default state. There's also the issue that very few researchers are working on genuine human intelligence enhancement, while there are thousands or even millions of researchers working on how to make smarter software - they're called "programmers". Even though the software we have today is relatively dumb, there are other researchers working on complex, experimental software that approximates some of the critical algorithms humans use for thinking and perceiving.

We don't hear too much about this software because it's incomplete. "90% of an intelligence" is unimpressive in the same way that "90% of an engine" or "90% of a cell phone" is unimpressive. It's only useful if you have 100%! The point here is that someone could have already built 90% of an intelligence and nobody in the public would know, because "90% of an intelligence" isn't enough to do anything visibly impressive. It needs to be roughly complete for any interesting level of intelligence to emerge. So, how fast does 90% of a cheetah run?


Friendly AI

The Singularity Institute wants to be the first to complete that "100% of an intelligence". We really want to do this quickly, because we are concerned about what could happen if someone else builds such an intelligence first but neglects to give that intelligence the algorithms that correspond to kindness, understanding, wisdom - qualities that humans universally value - algorithms we loosely call "Friendliness", with a capital 'F'. (Note that "Friendliness" is a much more inclusive concept than what is entailed by mere "friendliness".) A Friendliness-complete AI would then be called a Friendly AI.

Kindness is a really complex and specific thing, and we shouldn't expect it to arise automatically in any form of intelligence. (It seems obvious to us only because we have a lot of built-in brainware that makes it easy to visualize, just as we can easily visualize our mother's face despite its high objective complexity.) Indeed, it is easier to create an intelligence without kindness than one that possesses it! But that would be really dangerous. It would be difficult to defend ourselves against an intelligence that could outsmart us, even if it started out as mere software on a computer. A sufficiently intelligent AI could gain powerful real-world footholds quite quickly, possibly through nanotechnology. We would want a Friendly AI in particular, rather than an arbitrary AI, because of the great power and intelligence a self-improving AI would inevitably attain. Demand an AI that is substantially kinder-than-human, and you have a lot less to worry about.

Luckily, the question of "how do we build a goal system that is benevolent towards sentient beings, and remains that way?" seems far easier than the difficult question, "how do we convince other humans to be nice to us, and keep them convinced?" The Singularity Institute's Friendliness plan is a matter of cognitive engineering, building a certain type of brain structure, and is only peripherally related to traditional philosophical, moral, or political questions about "fairness" or "rightness". The act of human-to-AI creation is vastly different from the act of human-to-human discussion. The object is not to create an AI that contains some static output of programmer beliefs, but an AI that is as compassionate and flexible as kind human beings can be when they are at their best. Our objective is a Friendly AI that possesses the underlying cognitive structure genuine altruists use to choose between right and wrong. (The underlying structure is the focus, not the surface conclusions the structure happens to produce.) Mostly, this boils down to volition - treating people the way they want to be treated.

The Singularity Institute has only been around since 2000, but we've already published the equivalent of two volumes of material on the topic of how to create systems that are more intelligent and kind than default humans. If you think our mission is a noble one, then please read through some of our introductory material. The reasoning is occasionally complex, but some very thoughtful and scientifically rigorous people support it. Many of us thought the Singularity sounded like a crazy idea when we first came across it, but we changed our minds when we found out more about the logic and the science behind it.

Our literature has lots of interesting information on evolutionary psychology, altruism, AI, nanotechnology, philosophy, cognitive science, and morality. It will change the way you think about yourself and humanity in general. You may have read dozens of popular books and magazines on these topics, but you will be impressed, even shocked, by the ideas from the Singularity Institute. Visit our site; you won't be disappointed.

Visit the Singularity Institute here: http://www.singinst.org/

Uploaded: 12-08-05