
Monday, April 11, 2011

SXSW 2011: The Singularity: Humanity's Huge Techno Challenge

Will supercomputing intelligences outsmart human-level intelligence? "The Singularity: Humanity's Huge Techno Challenge" panel promised to dissect the very core of the Singularity: if and when it will occur, and what we can expect to happen. The question was debated by Doug Lenat, founder of the artificial intelligence project CYC; Michael Vassar, president of the Singularity Institute for Artificial Intelligence; and Natasha Vita-More, vice chair of Humanity+.

The technological Singularity is a hypothetical event in which technological progress becomes so rapid that the future beyond it is impossible to predict. It is commonly thought that such an event would follow the creation of superhuman intelligence. For starters, Doug Lenat gave an overview of possible scenarios of how the technological Singularity would happen, or why it wouldn't. He listed the forces driving us towards the creation of superhuman intelligence: demand for competitive, cutting-edge software applications (commercial and government); demand for personal assistants, such as Siri, but enhanced; demand for "smarter" AI in games; and mass vetting of errorful learned knowledge, as in Wikipedia. And the forces that may preclude the Singularity? Large enterprises can stay on top in ways other than being technologically competitive; humans, too, may be satisfied with bread and circuses, immersing themselves in games that distract them from pressing realities. The Singularity may also fail to happen if some event or trend kills off advanced technology: an energy crisis, a neo-Luddite backlash, or the AI's merciful suicide (say, an AI realizes it's a threat to humanity and kills itself). Then there are pick-your-favorite doomsday scenarios, such as grey goo, wherein nanobots multiplying out of control munch up all the matter on Earth.

Doug Lenat speaks about forces pushing us towards the Singularity. More pictures from SXSW 2011 are in my photo gallery.

Which is more likely -- that the Singularity will happen, or that some force will prevent it from happening? How dangerous will it be for us humans? How compatible will it be with our continued existence?

As one would expect from the president of a Singularity institute, Michael Vassar seems to think the Singularity is likely, and that we would get there much sooner if we planned technology more deliberately than we do. "The more you study history, the more you'll see that we don't do very much deliberation. And the little that we do really goes a very long way," he says. For millennia, technology evolved in a random, unplanned way, similar to biological evolution. About 300 years ago humans started thinking more deliberately. (I don't know where Vassar gets this number -- the Industrial Revolution started 200 rather than 300 years ago.) Automating the kind of human thought that machines can perform well, and combining it with the kind of thought that's not easy to automate, may lead us to very rapid technological acceleration. But to close the gap between machine and human intelligence, we need to build a very good understanding of human intelligence. At some point in history humanity discovered the scientific method, which embodies a very rudimentary understanding of how reasoning works. It allowed us to build institutions that will shape the future the way no other institutions have been able to, says Vassar.

As to whether we will be able to control whether nonhuman superintelligences help us or cause our extinction, Vassar is not too optimistic. "Ray Kurzweil thinks we can get emerging superhuman intelligences to slow down. But we humans don't have a good track record of getting potentially dangerous trends to slow down."

Michael Vassar, Doug Lenat, and Natasha Vita-More on the Singularity panel at SXSW 2011. More pictures from SXSW 2011 are in my photo gallery.

Every panel on the Singularity draws some people who understand that it may happen entirely outside human control, and others who view the Singularity only as a tool for progress, especially social progress, and have no interest in it otherwise. This was the case, for example, at the Singularity panel at ArmadilloCon 2003, when one writer said that if the Singularity isn't going to enforce social justice, it's not going to happen. I got the impression that Natasha Vita-More is in the second camp. She spoke about how advancing technologies need to solve aging, healthcare, and social problems, especially those that still needlessly exist in the third world, as if technology will only do what we need it to do. She did not address the possibility that the Singularity might take off without our control or influence.

She started by saying: "The Singularity is presumed to be an event that happens to us rather than an opportunity to boost human cognitive abilities. The very same technology that proposes to build superintelligences could also dramatically enhance human cognition. Rather than looking at the Singularity as a fait accompli birthing of superintelligences that might foster human extinction risk, an alternative theory forms an intervention between human and technology. [...] The Singularity needs smart design to solve problems." According to her, humans would achieve that by "evolving at the speed of technology", in other words, cyborgizing themselves.

Humans may have to deliberately redesign their brains and bodies to keep up or merge with the machines, but that still does not preclude the chance that the Singularity will come about outside our design. If nonhuman superintelligences evolve, what incentive would they have to merge with humans? Why carry around flesh bodies, even ones engineered for extra strength, resilience, or longevity? I'm reminded of what Bruce Sterling said on another occasion about trying to fit new technology into the conceptual framework of old technology: it would be like putting a papier-mâché horse head on the hood of your car.

Doug Lenat disagrees that integrating our physical bodies with machines is necessary or sufficient for the Singularity to happen. He would focus not on dramatic cyborgization but simply on information technology. Having information-processing appliances that amplify our brain power would change us the same way that, 100 years ago, electrical devices amplified our muscles. We travelled farther than our legs would carry us, we communicated farther than we could shout -- it changed our lives in fundamental ways and never changed back. Approaching the Singularity, we'll see appliances amplifying our minds the same way. Society will be amplified as well, become smarter in general, and will be able to solve the problems that Natasha Vita-More was talking about. At the same time, he doesn't think technology is a panacea. "When technology automated a number of things that were done manually before, social stratification only increased."

Michael Vassar goes even further: "We have technologies to solve most social problems today. But what we don't have is the ability to engage ourselves in solving the problems we don't care about."

Somebody in the audience asked: "Do you think a consciousness that exists outside the human body (e.g. in a machine) can be spontaneously generated?" Michael Vassar replied: "I don't know what you mean by spontaneously generated, but I think it's not likely. Consciousness would not be generated without a great deal of design." Doug Lenat thought the question was too vague. In a limited sense of consciousness, programs are already conscious: you can interrogate CYC (Lenat's AI project) programs about their goals or methods, so they do have some self-reflection built into them. But it's probably nothing like what a human observer would perceive as consciousness. To answer the question, a better definition of consciousness is needed.

Also, in the future we will each have many avatars doing many different things, says Doug Lenat. Mental aids will direct our attention to where it's most needed at the moment. In that sense, each person's consciousness will exist everywhere.

Another question from the audience. "To be truly creative, you have to unplug yourself from technology often enough. So how would uploaded brains do that? Would inability to do that kill their creativity?"

Michael Vassar: "If I were an uploaded or enhanced being, I would be able to unplug myself much better. I would not only unplug from my laptop or the internet, but even from my visual cortex."

And here is another take on the Singularity, in which the original popularizer of the concept, Vernor Vinge, discusses it with several science fiction writers.

Saturday, November 28, 2009

CYC-ology - Using AI to Organize Knowledge

Doug Lenat, an artificial intelligence researcher and CEO of Cycorp, a company that aims to build general artificial intelligence, gave a talk at the Center For Inquiry Austin. He examined why AI is so difficult to create, and how CYC is approaching this task.

Why haven't we been able to create a program that would pass the Turing test, i.e. converse in such a way as to be indistinguishable from a human? In large part it's because human thinking is faulty in ways that are very hard to approximate in software. Doug Lenat calls these idiosyncrasies of human thinking translogical behaviors: illogical but predictable decisions that most people make; incorrect but predictable answers to queries. Lenat listed some of these behaviors in his talk. He also addressed them in his article, "The Voice of the Turtle: Whatever Happened to AI?" (PDF). Here are some examples, compiled from both the article and the talk.

Flawed memory and arithmetic ability: while a human will correctly tell you what day of the week yesterday was, he or she will most likely be wrong about what day of the week April 7, 1996 was. Similarly, humans are likely to give wrong answers to math problems, but certain wrong answers are more "human" than others: answering 93 − 25 = 78 is far more understandable than answering 0 or 9998 (the correct answer is 68).
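For contrast, a machine answers both of these "hard" questions instantly and exactly, which is the asymmetry Lenat is pointing at. A two-line check in Python:

```python
from datetime import date

# The question humans flub: what day of the week was April 7, 1996?
print(date(1996, 4, 7).strftime("%A"))  # Sunday

# And the arithmetic humans get "humanly" wrong:
print(93 - 25)  # 68
```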

Conjunction fallacy: most people will judge "A and B" as more likely than A alone. For example, asked to decide which is more likely, "Fred S. just got lung cancer" or "Fred S. smokes and just got lung cancer," most people say the latter -- even though the conjunction can never be the more probable one.
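By the axioms of probability, P(A and B) can never exceed P(A). A tiny simulation makes the point concrete (the probabilities below are made up purely for illustration):

```python
import random

random.seed(42)

# Hypothetical, invented probabilities -- for illustration only.
P_SMOKES = 0.3                 # P(Fred smokes)
P_CANCER_GIVEN_SMOKES = 0.2
P_CANCER_GIVEN_NOT = 0.05

trials = 100_000
cancer = both = 0
for _ in range(trials):
    smokes = random.random() < P_SMOKES
    p_cancer = P_CANCER_GIVEN_SMOKES if smokes else P_CANCER_GIVEN_NOT
    has_cancer = random.random() < p_cancer
    cancer += has_cancer
    both += has_cancer and smokes

# "Smokes AND has cancer" is a subset of "has cancer",
# so its frequency can never exceed the single event's.
assert both <= cancer
print(f"P(cancer) ~ {cancer/trials:.3f}, P(smokes and cancer) ~ {both/trials:.3f}")
```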

Incorrectly estimating the probabilities of various events: people worry more about dying on a hijacked flight than about the drive to the airport.

Failure to discount sunk costs; also, a skewed perception of risk and reward: people estimate risks and rewards very differently when it means losing something they already have, as opposed to investing in something they don't yet have.

Reflection framing effect: say that before a certain public health program (e.g. medical screening) is adopted, 500 people a year die from whatever the program is supposed to prevent. If you market the program to the public as saving 200 lives a year, many more people will vote for it than if you say "300 people per year will die" -- even though the two statements describe the same outcome.

Another example: in two neighboring countries, organ donor rates are 85% and 15%. Lenat asked us to guess the cause of this drastic difference, considering that the two countries are very similar in their socio-political and economic situations. It turns out the only difference is the default: when you get a driver's license in country A, you have to opt in to being an organ donor by checking a box on a form; in country B, you have to opt OUT of it, also by checking a box. In both countries, about 85% of people simply don't bother to check the box either way, so the default wins: country A ends up with a 15% donor rate and country B with 85%. Who would have thought?

Failure to understand regression to the mean: this is the kind of translogical thinking I found the most poignant. Many parents punish a child after he or she gets an abnormally bad grade, and reward them after an abnormally good one. However, after an unusually bad grade, the next one is statistically likely to be better without any punishment or reward; similarly, after an unusually good grade, the next one is likely to be worse. So parents who react to grades with punishments and rewards come away with the impression that punishment works but rewards don't. Historically this explains a lot of cruelty among humans, says Lenat. In reality, he believes, neither punishment nor reward really has much effect.
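The grade effect falls out of pure chance. Here is a toy model, assuming each grade is a fixed "true ability" plus random noise (all numbers invented for illustration):

```python
import random

random.seed(0)

ABILITY = 75.0   # hypothetical student's true average grade
NOISE = 15.0     # random day-to-day variation, so grades span 60..90

grades = [ABILITY + random.uniform(-NOISE, NOISE) for _ in range(10_000)]

# The grade that FOLLOWS an unusually bad one (< 65),
# and the grade that follows an unusually good one (> 85).
after_bad = [grades[i + 1] for i in range(len(grades) - 1) if grades[i] < 65]
after_good = [grades[i + 1] for i in range(len(grades) - 1) if grades[i] > 85]

mean = lambda xs: sum(xs) / len(xs)

# With no punishment or reward at all, grades after a bad one bounce
# back up and grades after a good one drift down -- both revert to 75.
print(f"after a bad grade:  {mean(after_bad):.1f}")
print(f"after a good grade: {mean(after_good):.1f}")
```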

Doug Lenat with Center For Inquiry Austin folks

Scott, Steve Bratteng (center) and Doug Lenat (right) chat after Doug Lenat's lecture "CYC-ology - Using AI to Organize Knowledge" at the CFI Austin.

Despite these uniquely human weaknesses, people can easily make inferences about the world that computers can't. Even the best search engines today fall short of putting together the simplest facts about the world and drawing conclusions from them. If you ask Google "Is the Space Needle taller than the Eiffel Tower?", you'll get tons of pages that give the heights of those objects, but no page that tells you which one is taller. You also won't get an answer to "Who was U.S. president when Barack Obama was born?", because search engines still can't string together two facts: Obama's year of birth, and the identity of the president that year (John F. Kennedy). Today's search engines handle only syntactic search, while these queries are examples of semantic search.
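The president query shows exactly what a semantic engine must do: join two independently stored facts. A minimal sketch of such fact-chaining over a toy, hand-built knowledge base (nothing like a real search engine's index):

```python
# Toy knowledge base: two facts a syntactic engine would return
# on separate pages but never join.
birth_year = {"Barack Obama": 1961}

# (start_year, end_year_exclusive, president) -- a tiny excerpt,
# at whole-year granularity for simplicity.
presidencies = [
    (1953, 1961, "Dwight Eisenhower"),
    (1961, 1963, "John F. Kennedy"),
    (1963, 1969, "Lyndon Johnson"),
]

def president_when_born(person: str) -> str:
    """Chain fact 1 (birth year) with fact 2 (who held office that year)."""
    year = birth_year[person]
    for start, end, president in presidencies:
        if start <= year < end:
            return president
    raise LookupError(f"no president on record for {year}")

print(president_when_born("Barack Obama"))  # John F. Kennedy
```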

As we saw, human reasoning strengths and weaknesses are not the same as AI reasoning strengths and weaknesses. There is an opportunity for synergy here, says Lenat. What is the missing piece to bridge that chasm? CYC is ready to bridge it with (a) an ontology of terms, which includes over 1 million general terms and proper names; (b) a knowledge base of general knowledge, 10 times the size of the ontology, which includes such facts as unsupported objects fall; once you're dead, you stay dead; people sleep at night; wheeled vehicles slow down in mud; and (c) a fast inference engine. So, for example, if you query the CYC system for an image of "someone smiling", it will retrieve a picture with the caption "A man helping his daughter take her first step". The system achieves this by putting together the fact that when you are happy, you smile, with the fact that you become happy when someone you love accomplishes a milestone.
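The smiling-picture example boils down to chaining backward through a couple of rules. Here is a toy sketch of that idea; the rules and captions are invented for illustration, and CYC's actual ontology and inference engine are of course vastly larger:

```python
# Rules map a concept to the concepts that imply it.
rules = {
    "smiling": ["happy"],                      # when you are happy, you smile
    "happy": ["loved one reaches milestone"],  # a loved one's milestone makes you happy
}

# Each image caption is annotated with the concepts it depicts.
captions = {
    "A man helping his daughter take her first step":
        ["loved one reaches milestone"],
    "A truck stuck in mud":
        ["wheeled vehicle slowing in mud"],
}

def entails(query: str, tags: list[str]) -> bool:
    """Do the caption's tags entail the query, directly or via a rule chain?"""
    if query in tags:
        return True
    return any(entails(sub, tags) for sub in rules.get(query, []))

def image_search(query: str) -> list[str]:
    return [cap for cap, tags in captions.items() if entails(query, tags)]

print(image_search("smiling"))
# ['A man helping his daughter take her first step']
```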

(As an aside, I think this level of sophistication is easy to foil. Human emotions are more complicated than this example describes, and someone watching their child take first steps could easily have tears in their eyes. So an AI would have to know that there is such a thing as "tears of joy". But how would it tell between those, and tears of sadness? An AI would have a long, long way to go before it could recognize similar emotional nuances.)

So are we making any progress towards AI? Doug Lenat believes the current semantic web "craze, fad, or trend" (his words) is moving us in the right direction. Instead of the syntactic searching Google does now, within a small number of years we may start to see semantic searching. What would be the signs that our software is becoming more intelligent? Look for speech understanding systems, like Dragon NaturallySpeaking, to stop making dumb mistakes, says Lenat. When they no longer garble your words in ways a human would never misunderstand, that will be a sign that speech recognition programs have some semantic awareness.

Are we on the road to the Singularity, then? Nobody in the audience asked Lenat this question outright, but he admitted he believes it's only a matter of time before artificial intelligence surpasses human intelligence.