Doug Lenat, an artificial intelligence researcher and CEO of Cycorp, a company that aims to build general artificial intelligence, gave a talk at the Center for Inquiry Austin. He examined why AI is so difficult to create and how CYC is approaching the task.
Why haven't we been able to create a program that would pass the Turing test, i.e. converse in a way indistinguishable from a human? In large part it's because human thinking is faulty in ways that are very hard to approximate in software. Doug Lenat calls these idiosyncrasies of human thinking translogical behaviors: illogical but predictable decisions that most people make, and incorrect but predictable answers to queries. Lenat listed some of these behaviors in his talk. He also addressed them in his article, "The Voice of the Turtle: Whatever Happened to AI?" (PDF). Here are some examples, compiled from both the article and the talk.
Flawed memory and arithmetic ability: while a human will correctly tell you what day of the week yesterday was, he or she will most likely be wrong if asked what day of the week April 7, 1996 fell on. For the same reason, humans are likely to give wrong answers to math problems, but certain wrong answers are more "human" than others: answering 93 - 25 = 78 (instead of 68) is a far more understandable mistake than answering 0 or 9998.
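As a toy illustration (my own, not Lenat's): a columnwise subtraction routine that borrows from the next digit but "forgets" to pay the borrow back reproduces exactly this kind of human-looking error.

```python
def forgetful_subtract(a: int, b: int) -> int:
    """Columnwise subtraction that borrows but never decrements the next
    column -- a classic human slip, not an error a computer would make on its own."""
    result, place = 0, 1
    while a > 0 or b > 0:
        da, db = a % 10, b % 10
        if da < db:
            da += 10   # borrow from the next column...
            # ...but forget to pay it back (the "human" mistake)
        result += (da - db) * place
        place *= 10
        a //= 10
        b //= 10
    return result

print(forgetful_subtract(93, 25))  # 78, not 68 -- wrong, but recognizably human
```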
Conjunction Fallacy: most people will judge "A and B" to be more likely than A alone. For example, asked to decide which is more likely, "Fred S. just got lung cancer" or "Fred S. smokes and just got lung cancer," most people pick the latter.
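A quick sanity check on why the conjunction can never be the more likely option (the numbers below are hypothetical, just to make the inequality concrete):

```python
# P(A and B) = P(A) * P(B | A), and P(B | A) can never exceed 1,
# so the conjunction can never be more probable than A alone.
p_cancer = 0.01                # hypothetical base rate, for illustration only
p_smokes_given_cancer = 0.8    # hypothetical conditional probability
p_smokes_and_cancer = p_cancer * p_smokes_given_cancer
assert p_smokes_and_cancer <= p_cancer   # 0.008 <= 0.01: the "and" is the long shot
```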
Incorrectly estimating the probabilities of various events: people worry more about dying on a hijacked flight than on the drive to the airport, even though the drive is far more dangerous.
Failure to discount sunk costs; also, a skewed perception of risk and reward. People estimate risks and rewards very differently when it means losing something they already have, as opposed to investing in something they don't yet have.
Reflection (framing) effect. Say that before a certain public health program (e.g. medical screening) is adopted, 500 people a year die from whatever the program is supposed to prevent. If you market the program as saving 200 lives a year, many more people will support it than if you say "300 people per year will die", even though the two statements describe exactly the same outcome.
Another example: in two neighboring countries, organ donor rates are 85% and 15%. Lenat asked us to guess the cause of this drastic difference, given that the two countries are very similar socio-politically and economically. It turns out the only difference is that when you get a driver's license in country A, you have to opt in to being an organ donor by checking a box on a form; in country B, you have to opt OUT of it, also by checking a box. Whichever way the default goes, about 85% of people stick with it: they just don't bother to check the box either way. Who would have thought?
Failure to understand regression to the mean. This is the kind of translogical thinking I found the most poignant. Many parents punish their child after he or she gets an abnormally bad grade, and reward them after a good one. However, after an unusually bad grade, the next one is statistically likely to be better without any punishment or reward; similarly, after a good grade, the next one is likely to be worse. So parents who react to grades with punishments or rewards come away with the idea that punishment works, but rewards don't. Historically this explains a lot of cruelty among humans, says Lenat. In reality, he believes, neither the punishment nor the reward really had any effect.
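A minimal simulation (my own sketch, with made-up numbers: a "true" average of 75 plus random noise) shows how strong this effect is even when nothing is done at all:

```python
import random

random.seed(0)
ability = 75    # the student's hypothetical "true" average score
grades = [ability + random.gauss(0, 10) for _ in range(10_000)]

# After an unusually bad grade (below 60), how often is the very next grade better,
# with no punishment or reward in between?
pairs = [(grades[i], grades[i + 1]) for i in range(len(grades) - 1) if grades[i] < 60]
better = sum(1 for bad, nxt in pairs if nxt > bad)
print(f"{better / len(pairs):.0%} of the time the next grade improved on its own")
# Typically well over 90% -- pure regression to the mean, no intervention needed.
```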
Scott, Steve Bratteng (center) and Doug Lenat (right) chat after Doug Lenat's lecture "CYC-ology - Using AI to Organize Knowledge" at the CFI Austin.
Despite these uniquely human weaknesses, people can easily make inferences about the world that computers can't. Even the best search engines today fall short of putting together the simplest facts about the world and drawing conclusions from them. If you ask Google "Is the Space Needle taller than the Eiffel Tower?", you'll get tons of pages that give the heights of those objects, but no page that tells you which one is taller. You also won't get an answer to "who was U.S. president when Barack Obama was born?", because search engines still can't string together two facts: Obama's year of birth and the identity of the president that year (John F. Kennedy). Today's search engines handle only syntactic search, while these queries are examples of semantic search.
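The gap isn't about raw data; it's about chaining facts. Here's a toy sketch (my own, with a hand-built fact table, nothing like any real search engine's internals) of what answering such a query takes:

```python
# Tiny hand-built fact store: answering the query means chaining two lookups,
# something keyword (syntactic) matching never does.
facts = {
    ("Barack Obama", "born_in_year"): 1961,
    ("Space Needle", "height_m"): 184,     # approximate heights
    ("Eiffel Tower", "height_m"): 330,
}
presidents_by_year = {1961: "John F. Kennedy"}   # a one-row slice of a real table

def president_when_born(person: str) -> str:
    year = facts[(person, "born_in_year")]   # fact 1: the birth year
    return presidents_by_year[year]          # fact 2: who held office that year

print(president_when_born("Barack Obama"))   # John F. Kennedy
print(facts[("Eiffel Tower", "height_m")] > facts[("Space Needle", "height_m")])  # True
```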
As we saw, human reasoning strengths and weaknesses are not the same as AI reasoning strengths and weaknesses, so there is an opportunity for synergy here, says Lenat. What is the missing piece to bridge that chasm? CYC is ready to bridge it with (a) an ontology of terms, which includes more than a million general terms and proper names; (b) a knowledge base of general knowledge, ten times the size of the ontology, which includes such facts as "unsupported objects fall," "once you're dead, you stay dead," "people sleep at night," and "wheeled vehicles slow down in mud"; and (c) a fast inference engine. For example, if you query the CYC system for an image of "someone smiling", it will retrieve a picture captioned "A man helping his daughter take her first step". The system achieves this by putting together the fact that when you are happy, you smile with the fact that you become happy when someone you love accomplishes a milestone.
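In spirit (this is my own simplified sketch, not Cyc's actual representation language or inference engine), that kind of conclusion is just forward chaining over commonsense rules:

```python
# Two commonsense rules chained by a naive forward-chaining loop.
rules = [
    ({"loved_one_reaches_milestone"}, "is_happy"),   # a loved one's milestone makes you happy
    ({"is_happy"}, "is_smiling"),                    # happy people (usually) smile
]

def infer(known: set) -> set:
    changed = True
    while changed:            # keep firing rules until nothing new can be derived
        changed = False
        for premises, conclusion in rules:
            if premises <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

caption = {"loved_one_reaches_milestone"}   # "A man helping his daughter take her first step"
print("is_smiling" in infer(caption))       # True -> the image matches "someone smiling"
```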
(As an aside, I think this level of sophistication is easy to foil. Human emotions are more complicated than this example suggests, and someone watching their child take their first steps could easily have tears in their eyes. So an AI would have to know that there is such a thing as "tears of joy". But how would it tell those apart from tears of sadness? An AI would have a long, long way to go before it could recognize such emotional nuances.)
So are we making any progress towards AI? Doug Lenat believes that the current semantic web "craze, fad, or trend" (his words) is moving us in the right direction. Instead of the syntactic searching Google does now, within a few years we might see semantic searching. What would be the signs that our software is becoming more intelligent? Look for speech understanding systems, like Dragon NaturallySpeaking, to stop making dumb mistakes, says Lenat. When they no longer garble your words in ways that a human would never misunderstand, that will be the sign that speech recognition programs have some semantic awareness.
Are we on the road to the Singularity, then? Nobody in the audience asked Lenat this question outright, but he admitted he believes it's only a matter of time until artificial intelligence surpasses human intelligence.