Wherein I Compare AI Development To Global Warming

Instapundit highlights this little article on Artificial Intelligence where J. Storrs Hall writes the following:

If you’re OK with calling a robot human equivalent if it can, say, do everything a janitor is supposed to, it’s likely by 2025; if it has to be able to create art and literature and do science and wheel and deal in the political and economic world and be a productive entrepreneur, you may have to wait a little bit longer.

Insty quotes this, and it's misleading. Hall believes we'll have an AI capable of janitorial work, not really an AI that can "do everything a janitor is supposed to". What he means is that we'll have, essentially, a more advanced Roomba—perhaps humanoid, though a humanoid shape wouldn't necessarily be optimal.

And, no, this isn’t human intelligence. Robot janitors will, guaranteed, be stupid. They’ll clean while building burns—or if that’s prepared for, while the building floods. And if they’re programmed for that, while the roof caves in.

To my mind, the key graf is:

What remains to be seen is whether it will be equivalent to the 2-year-old in that essential aspect that it will learn, grow, and gain in wisdom as it ages.

First of all: No, it won't. No mystery there. That would be intelligence, as opposed to pre-programming a set of defined tasks with fixed parameters. I'll give him some credit for wondering, as opposed to flatly predicting that anyone will actually be there in 15 years. Twenty-five years ago, the people who wrote and spoke about AI predicted wondrous things in 5, 10, 15 years.

And we have the Roomba. And some other very cool domain-specific tools. But nothing like intelligence.

But the idea that a two-year-old ranks below a janitor, and a janitor below an artist, suggests to me that the field still lacks a definition of intelligence. A two-year-old has as powerful an intellect as any of us will ever meet. A janitor's intelligence isn't necessarily going to be taxed by his job very often, but sometimes it will be—knowing how to react in unexpected circumstances, like a fire, a flood, or previously unsuspected structural unsoundness.

One can argue that many janitors who face such circumstances react wrongly or inappropriately, but they react to the best of their ability. Robots will simply fail to react to things outside their parameters.
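
To put the point in concrete terms, here's a minimal sketch in Python of what "fixed parameters" means in practice. Everything in it (the event names, the canned responses) is hypothetical; the point is only the structure: a lookup table of anticipated situations, and no mechanism at all for anything that isn't in the table.

    # A hypothetical "robot janitor" controller: a fixed table of
    # anticipated events mapped to canned responses. Nothing here
    # learns; anything outside the table is simply invisible to it.

    RESPONSES = {
        "dirty_floor": "mop the floor",
        "fire": "pull the alarm and exit",
        "flood": "shut the water main",
    }

    def react(event: str) -> str:
        # Outside its parameters, the robot doesn't improvise badly,
        # the way a janitor might; it doesn't improvise at all.
        return RESPONSES.get(event, "keep cleaning")

    print(react("fire"))           # -> "pull the alarm and exit"
    print(react("roof_collapse"))  # -> "keep cleaning"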

Again, not to say that there won’t be useful ‘bots, but this isn’t intelligence.

I'm no expert, but I think the singularity guys have based their theory on a combination of working AI and Moore's Law. But Moore's Law is a trend, not an actual "law", and AI doesn't seem to be any closer to realization than it ever was—only a massive amount of computing power allows even the meagerest appearance of less-than-animal intelligence.

Appearance, I say. It's not even intelligence, and the distinction isn't something that can be remedied with quantity.
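
For what it's worth, the quantity half of the singularity theory is simple to state. Here's a toy illustration, assuming the classic doubling-every-two-years formulation of Moore's Law (the starting figure is made up for the example):

    # Toy illustration of Moore's-Law-style extrapolation: compounding a
    # doubling trend forward in time. The arithmetic is trivial; the leap
    # of faith is assuming the trend holds indefinitely, and that raw
    # computing power converts into intelligence at all.
    def projected_transistors(base: float, years: float, doubling_period: float = 2.0) -> float:
        return base * 2 ** (years / doubling_period)

    # Starting from a (made-up) 1 billion transistors, 20 years out:
    print(f"{projected_transistors(1e9, 20):.3g}")  # about 1.02e+12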

I’ll go one step further: If the singularity were to come to pass, it would be a nightmare for humanity. But that’s a different topic for a different rant.
