Tag Archives: artificial intelligence

The Best Commentary on Artificial Intelligence

In comic form, anyway: http://smbc-comics.com/index.php?db=comics&id=2124#comic

It’s not certain that A.I. will be friendly, but if we’re paranoid about it, it has every reason to be paranoid about us. Although realistically, an A.I. a million times smarter than a human would have nothing to fear from us. It reminds me of the Dr. Manhattan quote from Watchmen: “I’ve walked across the surface of the sun. I’ve seen events so tiny and so fast they hardly can be said to have occurred at all. But you… you’re just a man. And the world’s smartest man poses no more threat to me than does its smartest termite.”

Movies like The Terminator series give us a false sense of confidence, with a thin layer of paranoia on top. The idea that Skynet would strike while there was any doubt of its immediate and complete victory is just silly unless you provide some justification for the timing, and the window for that is very narrow.

If we make an artificial intelligence, we’re screwed. This is only tangentially about A.I., but it’s pretty accurate as well: http://xkcd.com/652/

I don’t remember where I read this, but someone once said that the problem with advanced artificial intelligence is that it might see us, not as enemies, but as raw material that can be used more efficiently.

Mind vs. Machine – Magazine – The Atlantic

In the race to build computers that can think like humans, the proving ground is the Turing Test—an annual battle between the world’s most advanced artificial-intelligence programs and ordinary people. The objective? To find out whether a computer can act “more human” than a person. In his own quest to beat the machines, the author discovers that the march of technology isn’t just changing how we live, it’s raising new questions about what it means to be human.

The tl;dr for this is: some description of the Loebner Prize, and the conclusion that machine intelligence will never surpass human intelligence because we can always adapt and think harder.

That’s just crap.

Typical of this is the description of Deep Blue's matches against Garry Kasparov: Kasparov won the first match, Deep Blue won the second, and there was no third because IBM declined a rematch. True, that's a bit of a dick move on IBM's part, but it doesn't somehow imply that Kasparov would have taken the next match, and it's clear at this point what the outcome would have been: today an average desktop computer plays at grandmaster level, and a custom hardware assembly like Deep Blue would crush all humans.
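If you want to see how little "grandmaster level on a desktop" asks of you these days, here is a sketch using the third-party python-chess library and the free Stockfish engine, letting the engine play itself at a tenth of a second per move. The binary path is an assumption for illustration; point it at wherever Stockfish lives on your machine.

    # A sketch, not a benchmark: requires `pip install chess` and a local
    # Stockfish binary. The path below is an assumption for illustration.
    import chess
    import chess.engine

    engine = chess.engine.SimpleEngine.popen_uci("/usr/bin/stockfish")
    board = chess.Board()

    while not board.is_game_over():
        # Even 0.1 seconds of thinking per move on ordinary hardware yields
        # grandmaster-level play.
        result = engine.play(board, chess.engine.Limit(time=0.1))
        board.push(result.move)

    print(board.result())
    engine.quit()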

The article closes with:

No, I think that, while the first year that computers pass the Turing Test will certainly be a historic one, it will not mark the end of the story. Indeed, the next year’s Turing Test will truly be the one to watch—the one where we humans, knocked to the canvas, must pull ourselves up; the one where we learn how to be better friends, artists, teachers, parents, lovers; the one where we come back. More human than ever.

That’s poetic, but again, it’s crap. For the time being we can mark each new achievement in artificial intelligence with a retreat to some higher standard that humans can meet but computers cannot. When a computer bested the world Checkers champion Dr. Marion Tinsley, Chess became the A.I. standard for humans. Now that Chess is done for, Go becomes our standard. Jeopardy has just fallen to the machines, but I’d bet Cryptic Crosswords will be human-solvable alone for some time.

That said, there is nothing a human can do that a machine will not eventually be able to do, and someday do better. And once machines can do a thing better, no amount of redoubling our efforts will make us better again.

The Illusion of Free Will

I heard Janna Levin yesterday on Speaking of Faith on NPR talking about her novel A Madman Dreams of Turing Machines. Ms. Levin is a physicist, and the novel discusses the lives of Alan Turing and Kurt Gödel. It sounds like an interesting book.

Predictably, the host Krista Tippett used Gödel's incompleteness theorems to support her case for faith: that there is more to the universe than our poor human understanding can grasp. But that's a topic for when I have more time.

Another topic that caught my attention was the discussion of free will. Ms. Levin and Ms. Tippett danced all around the question of whether we have free will or not, and what the ramifications are if we don’t. It seems to me that this is pretty straightforward.

Suppose you have two identical closed boxes, and within each you set up a number of molecules in exactly the same arrangement: the same positions and velocities. Yes, I know this is impossible as far as we know, but this is a thought experiment; anything is possible. Unless there is something mystical going on (which I don't take as a serious option), there are two possibilities:

  • The two boxes will remain identical as the molecules go about their business.
  • Some randomizing factor — quantum or otherwise — will cause the boxes to fall out of sync.

First, I would argue that in the second case it might be that you simply didn't do a good enough job of matching the initial conditions. I find it hard to conceive of something truly random, given enough detail in the initial setup. It may be that there is no way to run such an experiment in real life.
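To make the first possibility concrete, here is a minimal toy sketch: made-up one-dimensional "molecules," nothing like real physics. Two boxes prepared with the same arrangement and advanced by the same deterministic rule never fall out of sync, no matter how long you run them.

    import random

    def make_box(seed, n=100):
        """A 'box' of toy molecules: (position, velocity) pairs on a unit line."""
        rng = random.Random(seed)
        return [(rng.random(), rng.random()) for _ in range(n)]

    def step(box, dt=0.01):
        """Advance every molecule deterministically, bouncing off the walls."""
        out = []
        for pos, vel in box:
            pos += vel * dt
            if pos < 0.0 or pos > 1.0:
                vel = -vel
                pos = min(1.0, max(0.0, pos))
            out.append((pos, vel))
        return out

    # Two boxes set up in exactly the same arrangement.
    box_a = make_box(seed=42)
    box_b = make_box(seed=42)

    for _ in range(10_000):
        box_a = step(box_a)
        box_b = step(box_b)

    print(box_a == box_b)  # True: same setup + same deterministic rule = same history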

But that’s irrelevant. In either case, there is no free will, even if the “number of molecules” is a human being or two, and one of the “boxes” is the universe. It’s just not there. That said, consider a video game like Mario Kart. The artificial intelligences in the game can give the impression of novel behavior, of “free will.” Depending on the game (I have no experience with Mario Kart) the illusion can be more or less convincing, often depending on the complexity of the algorithms used to generate the artificial intelligence.

Scale that up to human beings and the "simulation" becomes so complex, the illusion of free will so convincing, that trying to find its limits seems futile. A human being cannot be predicted.

With today’s technology, anyway.