The Singularity is commonly associated with the emergence of strong artificial intelligence, and the panelists don't think strong AI is any closer now than it was when this topic was discussed on ArmadilloCon panels of the past. Or 30 years ago. Or ever. The Singularity is certainly no closer than when Vernor Vinge debated it at ArmadilloCon 2003, or Charles Stross at ArmadilloCon 2006. Moreover, some panelists questioned whether moderator John Gibbons' question "What do you see as a fundamental block towards strong Artificial Intelligence?" is even the right one to ask. They disagreed about whether it is possible to bring an AI into being by programming it.
This was a well-reasoned analysis of common Singularity tropes.
Bruce Sterling admits that he used to find the idea of strong AI seductive, but doesn't see any evidence of it emerging any time soon. More than that, he doubts whether the products of an intelligent mind, such as new ideas or inventions, can be attained computationally. "I know many very intelligent people, and they don't reason stuff out from first principles," said Bruce Sterling. It certainly doesn't "feel like" creative insights come to us algorithmically. And if flashes of insight can't be simulated by a Turing machine, then they are not achievable by a computer. "Computation is not like human intelligence," says Sterling. "It's like mathematics. You could say, mathematics will one day overtake the human brain! But that would be a category error."
Left to right: authors Alexis Glynn Latner, Adrian Simmons, John Gibbons at the Singularity panel. More pictures from ArmadilloCon 2011 are in my photo gallery.
Bruce Sterling thinks collective intelligence is more interesting than artificial intelligence. When you are starting a company, would you hire HAL 9000, an intelligent machine who never sleeps, or a bunch of engineers who use Google, he asks. Google would defeat HAL immediately. He didn't answer fellow panelist Adrian Simmons' question of whether it wouldn't be even better to hire a HAL who uses Google.
Then Bruce Sterling left the panel to go help his daughter who was at the other end of town, adding "Real futurists have children!"
Both the panelists and the audience doubted that the advance of AI has much to do with Moore's law. We already have extremely powerful computers running extremely complex weather and economic simulations, but you can't speak of their IQ.
Would we even want a sentient machine? John Gibbons reminds us that Charles Stross, author of Singularity-themed novels, asked this question in a recent blog post. What do we need a sentient machine for? While we might conceivably want an intelligent computer to run a spaceship on a long mission, like HAL 9000, in general there's not much advantage to sentience in a software program, argued John Gibbons. And it raises a huge batch of ethical questions. Using a genetic algorithm to derive sentient software? You're committing genocide along the way, because you're killing off the versions that don't meet your goals.
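To make that ethical point concrete, here is a toy sketch of how a genetic algorithm works — my own minimal illustration, not anything presented at the panel, and evolving trivial bitstrings rather than anything remotely sentient. The problematic step Gibbons alludes to is right there in the loop: every generation, the candidates that score poorly are simply discarded.

```python
import random

TARGET = [1] * 20  # an arbitrary goal genome for this toy example

def fitness(genome):
    """Count how many bits match the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    """Flip each bit with a small probability."""
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=30, generations=50, seed=0):
    random.seed(seed)
    population = [[random.randint(0, 1) for _ in TARGET]
                  for _ in range(pop_size)]
    discarded = 0
    for _ in range(generations):
        # Rank by fitness; keep the top half, discard the rest.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        discarded += pop_size - len(survivors)  # the "killed off" versions
        # Refill the population with mutated copies of survivors.
        offspring = [mutate(random.choice(survivors))
                     for _ in range(pop_size - len(survivors))]
        population = survivors + offspring
    best = max(population, key=fitness)
    return fitness(best), discarded

best_score, total_discarded = evolve()
print(best_score, total_discarded)
```

With 30 candidates and 50 generations, 15 candidates are culled per generation — 750 discarded "versions" to optimize one 20-bit string. Scale the same selection scheme up to candidates complex enough to be arguably sentient, and the panel's objection writes itself.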
Left to right: authors Katy Stauber, Marshall Ryan Maresca, and Bruce Sterling at the Singularity panel. More pictures from ArmadilloCon 2011 are in my photo gallery.
Even if a sentient AI is benevolent to humankind, it can't be expected to do what humans would like it to do, Adrian Simmons pointed out. You may ask it how to make better gadgets, but it will instead turn around and ask you personal questions, because it might feel it's human now and want to experience a human perspective on the world. (Even that, I should say, is a bit human-centric, not to say myopic. An AI might not be interested in learning from humans, since by necessity it would develop its own way of learning about the world: otherwise it would not be an AI. It is a common trope in science fiction that a robot or AI yearns to know what it is like to be human, but I think we as humans overestimate our interestingness to the machines. We absolutely can't expect them to take an interest in our problems, let alone serve us. -- E.)
But machine intelligence is not the only way for the Singularity to come about. Bruce Sterling said: increase the metabolic efficiency of the regions of the brain dedicated to higher functions, and it will feel like the Singularity. Our brain is very inefficient -- its biggest parts are dedicated to such functions as walking. So an increase in the efficiency of the higher-reasoning parts of the brain could bring about enormous changes for humankind. Science fiction has already addressed something similar, such as repurposing the visual cortex for other computations, an audience member pointed out.
Trope 5: Singularity will come from uploading a human personality to a machine
The panelists doubt whether that will ever be possible, because of the gulf between the chemistry of "wetware" and a computational substrate. Here, too, science fiction has shown how horrific the unintended consequences of this can be -- a case in point is Greg Egan's story "Learning To Be Me".
A person in the audience asked: if we organize semantic connections on the web, will the web "wake up"? John Gibbons' reply was what I would have said too: becoming conscious requires a model of self. And it's hard to see how a model of self would emerge simply from organizing semantic connections on the web.