Monday, April 11, 2011

SXSW 2011: The Singularity: Humanity's Huge Techno Challenge

Will supercomputing intelligences outsmart human-level intelligence? "The Singularity: Humanity's Huge Techno Challenge" panel claimed to dissect the very core of the Singularity: if and when it will occur, and what we can expect to happen. The question was debated by Doug Lenat, founder of the artificial intelligence project CYC; Michael Vassar, president of the Singularity Institute for Artificial Intelligence; and Natasha Vita-More, vice chair of Humanity+.

The technological Singularity is a hypothetical event in which technological progress becomes so rapid that it makes the future impossible to predict. It is commonly thought that such an event would follow the creation of superhuman intelligence. For starters, Doug Lenat gave an overview of possible scenarios for how the technological Singularity might happen, or why it might not. He listed these forces driving us towards the creation of superhuman intelligence: demand for competitive, cutting-edge software applications (commercial and government); demand for personal assistants such as Siri, but enhanced; demand for "smarter" AI in games; and mass vetting of error-ridden learned knowledge, such as in Wikipedia. And the forces that may preclude the Singularity? Large enterprises can stay on top in ways other than being technologically competitive; humans, too, may be satisfied with bread and circuits, immersing themselves in games that distract them from pressing realities. The Singularity may also fail to happen if some event or trend kills off advanced technology: an energy crisis, a neo-Luddite backlash, or the AI's merciful suicide (say, an AI realizes it's a threat to humanity and kills itself). Then there are pick-your-favorite doomsday scenarios, such as grey goo, wherein nanobots multiplying out of control munch up all the matter on Earth.

Doug Lenat speaks about forces pushing us towards the Singularity. More pictures from SXSW 2011 are in my photo gallery.

Which is more likely -- that the Singularity will happen, or that some forces will prevent it from happening? How dangerous will it be for us humans? How compatible will it be with our continued existence?

As one would expect from the president of the Singularity Institute, Michael Vassar seems to think the Singularity is likely, and that we would get there much sooner if we planned technology more deliberately than we do. "The more you study history, the more you'll see that we don't do very much deliberation. And the little that we do really goes a very long way," he says. For millennia, technology evolved in a random, unplanned way, similar to biological evolution. About 300 years ago, humans started thinking more deliberately. (I don't know where Vassar gets this number -- the Industrial Revolution started 200, not 300, years ago.) Automating the kind of human thought that can be performed well by machines, and combining it with the kind of thought that's not easy to automate, may lead to very rapid technological acceleration. But to close the gap between machine and human intelligence, we need to build a very good understanding of human intelligence. At some point in history humanity discovered the scientific method, which is a very rudimentary understanding of how reasoning works. It allowed us to build institutions that will shape the future the way no other institutions have been able to, says Vassar.

As to whether we can control whether nonhuman superintelligences will help us or cause our extinction, Vassar is not too optimistic. "Ray Kurzweil thinks we can get emerging superhuman intelligences to slow down. But we humans don't have a good track record of getting potentially dangerous trends to slow down."

Michael Vassar, Doug Lenat, and Natasha Vita-More on the Singularity panel at SXSW 2011. More pictures from SXSW 2011 are in my photo gallery.

In every panel on the Singularity, you'll get people who understand that the Singularity may happen entirely outside of human control, and then you'll get those who view the Singularity only as a tool for progress, especially social progress, and have no interest in it otherwise. This was the case, for example, at the Singularity panel at ArmadilloCon 2003, when one writer said that if the Singularity isn't going to enforce social justice, it's not going to happen. I got the impression that Natasha Vita-More is in the second camp. She spoke about how advancing technologies need to solve aging, healthcare, and social problems, especially those that still needlessly exist in the third world, as if technology will only do what we need it to do. She did not address the possibility that the Singularity might take off without our control or influence.

She started by saying: "The Singularity is presumed to be an event that happens to us rather than an opportunity to boost human cognitive abilities. The very same technology that proposes to build superintelligences could also dramatically enhance human cognition. Rather than looking at the Singularity as a fait accompli birthing of superintelligences that might foster human extinction risk, an alternative theory forms an intervention between human and technology. [...] The Singularity needs smart design to solve problems." According to her, humans would achieve that by "evolving at the speed of technology", in other words, cyborgizing themselves.

Humans may have to deliberately redesign their brains and bodies to keep up or merge with the machines, but that still does not rule out the possibility that the Singularity will come about outside our design. If nonhuman superintelligences evolve, what incentive would they have to merge with humans? Why carry around flesh bodies, even ones engineered for extra strength, resilience, or longevity? I'm reminded of what Bruce Sterling said on another occasion about trying to fit new technology into the conceptual framework of old technology: it would be like putting a papier-mâché horse head on the hood of your car.

Doug Lenat disagrees that integration of our physical bodies with machines is necessary or sufficient for the Singularity to happen. He would focus not on dramatic cyborgization, but simply on information technology. Having information-processing appliances that amplify our brain power would change us the same way that, 100 years ago, electrical devices amplified our muscles. We travelled farther than our legs would carry us, we communicated farther than we could shout -- it changed our lives in fundamental ways and never changed back. Approaching the Singularity, we'll see appliances amplifying our minds the same way. Society will be amplified as well, becoming smarter in general, and will be able to solve the problems that Natasha Vita-More was talking about. At the same time, he doesn't think technology is a panacea here. "When technology automated a number of things that were done manually before, social stratification only increased."

Michael Vassar goes even further: "We have technologies to solve most social problems today. But what we don't have is the ability to engage ourselves in solving the problems we don't care about."

Somebody in the audience asked: "Do you think a consciousness that exists outside the human body (e.g. in a machine) can be spontaneously generated?" Michael Vassar replied: "I don't know what you mean by spontaneously generated, but I think not likely. Consciousness would not be generated without a great deal of design." Doug Lenat thought this question was too vague. In a limited sense, programs are conscious: you can interrogate CYC (Lenat's AI project) programs about their goals or methods, so they do have some self-reflection built into them. But that's probably nothing like what a human observer would perceive as consciousness. To answer this question, a better definition of consciousness is needed.
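Lenat's notion of self-reflection "in a limited sense" is easy to illustrate outside of CYC. Here is a minimal, hypothetical Python sketch (not CYC's actual interface -- the class and method names are my own invention) of a program that can be interrogated about its goals and methods:

```python
import inspect

class ReflectiveAgent:
    """A toy program that can report on its own goals and methods."""

    def __init__(self):
        # Hypothetical goals; a real system would derive these from its tasks.
        self.goals = ["answer queries", "keep the knowledge base consistent"]

    def plan(self, query):
        """Stand-in for actual reasoning over a knowledge base."""
        return f"searching knowledge base for: {query}"

    def describe_goals(self):
        # The program can be asked what it is trying to do...
        return self.goals

    def describe_methods(self):
        # ...and what it can do, by inspecting its own code.
        return [name for name, _ in inspect.getmembers(self, inspect.ismethod)
                if not name.startswith("_")]

agent = ReflectiveAgent()
print(agent.describe_goals())    # ['answer queries', 'keep the knowledge base consistent']
print(agent.describe_methods())  # ['describe_goals', 'describe_methods', 'plan']
```

Of course, as Lenat's caveat goes, this kind of self-reporting is mechanical bookkeeping -- nothing a human observer would mistake for consciousness.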

Also, in the future we will each have many avatars doing many different things, says Doug Lenat. Mental aids will direct our attention to where it's most needed at the moment. In that sense, each person's consciousness will exist everywhere.

Another question from the audience: "To be truly creative, you have to unplug yourself from technology often enough. So how would uploaded brains do that? Would the inability to do that kill their creativity?"

Michael Vassar: "If I were an uploaded or enhanced being, I would be able to unplug myself much better. I would not only unplug from my laptop or the internet, but even from my visual cortex."

And here is another take on the Singularity, in which the concept's original popularizer, Vernor Vinge, discusses it with several science fiction writers.
