A Capitalist’s Defense of Artificial Intelligence
Tyler Cowen sends us to an essay by Gary Marcus arguing that modern artificial intelligence (AI) has failed: not only because it has fallen short of its goals, but because it has the wrong goals. At the lowest common denominator – between computer scientists and everyone else – AI’s chief mandate is to pass the so-called “Turing Test”, wherein a human “judge” holds a natural-language conversation with a machine and cannot tell that it is not human. Modern AI agents, like Siri, Watson, or your neighborhood NSA, basically mine troves of data, trawling for correlations significant enough to act on. Marcus argues this is not “intelligence” because even the smartest and most powerful machines cannot answer simple, commonsensical questions like,
“Can an alligator run the hundred-metre hurdles?”
“Joan made sure to thank Susan for all the help she had given. Who had given the help?”
because billions of Google searches will not yield any pattern to that effect: nobody has asked these questions before. He also points out that most machines that “pass” a Turing Test do so through misdirection and deception, not through signs of innate brilliance.
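Marcus’s complaint can be caricatured in a few lines of code. A minimal sketch – the corpus, the names, and the co-occurrence scoring rule are all invented for illustration – of what a purely correlational approach does with his second question:

```python
from collections import Counter
from typing import Optional

# A toy "web corpus" of co-occurrence counts. For a sentence nobody
# has written before, the relevant counts are effectively zero.
corpus_counts = Counter()

def resolve_by_correlation(context_words: list, candidates: list) -> Optional[str]:
    """Pick the candidate name that co-occurs most with the context words.

    This is pure pattern-mining: with no matching data the scores are
    all zero and the method has nothing to say, whereas common sense
    tells us the person being *thanked* is the one who gave the help.
    """
    scores = {c: sum(corpus_counts[(c, w)] for w in context_words)
              for c in candidates}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

# "Joan made sure to thank Susan for all the help she had given."
answer = resolve_by_correlation(["thank", "help", "given"], ["Joan", "Susan"])
print(answer)  # no signal in the data, so no answer: prints None
```

The sketch only finds an answer if the data happens to contain one; with a never-before-asked question, it shrugs, which is exactly Marcus’s point.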
At a theoretical and conceptual level, maybe Marcus is right. There is something profoundly unintelligent about using “big data” to solve human problems. That is because, “theoretically” speaking, anything that fails to use theory derived from axiomatic laws is by definition not intelligent. It is why mathematical economists – even after their time has come and perhaps gone (with apologies to Al Roth, a mechanism designer who has used theory to profoundly improve our lives) – hold their noses high toward the empiricists. It is why theoretical computer scientists have, mid-career, forgotten how to actually program.
Theory is brilliant for some things. Thank god that we are not using randomized experiments and inductive reasoning to conclude that for a right-angled triangle the sum of the squares of the two shorter sides does indeed equal the square of the hypotenuse. But a practitioner, a capitalist, or your average Joe would look at Marcus’ critique another way. Who really cares that computers cannot answer questions that nobody has asked? Computers that are deeply commonsensical are, by definition, not targets for artificial intelligence.
I would normally not write about computer science. Contrary to my choice of major, I’m quite a bit more confident in my command of economics than of abstract philosophy or computer science. But I do understand the Turing Test. And any computer witty enough to “trick” humans is smart enough for me. I do not recall Alan Turing issuing an exception for remarkably sarcastic computers.
I also know a bit about capitalism and why it works. We may experience all sorts of “market failure”. Maybe it’s too cheap to pollute. Maybe money demand is too high in a liquidity trap. But, by and large, markets work. That means useless companies go out of business and good ones stay. It means Apple’s iPad brings in treasure chests of profit while Microsoft’s Surface does… I do not know what.
It means artificial intelligence answers questions people give a shit about. Private enterprise has done wonders for the tech world. And the tech world is busy fixing problems that substantially improve our lives.
Marcus does not consider the flip side of his claim. He is embittered by an industry that tries to “trick” humans (not in general, but only when its machines are specifically asked to), yet he is upset that computers cannot answer heroically contrived questions designed, in turn, to trick insightful algorithms by exploiting the failures of a non-formal language such as English. In fact, computers can understand basically every colloquially important part of English. It would take a computer scientist to design a question that computers cannot effectively answer.
Markets work. Artificial intelligence may not change the world as once did steam engines or double-entry bookkeeping. But it is answering questions profound to our social existence. Some, like Marcus, are upset that the artificial intelligentsia is keen on making programs only to trick human readers. He believes this does not take Turing’s argument in good faith, for he surely could not have foreseen the preponderance of big, evil data! Yet, at a theoretical level, Turing’s very insight demanded that machines trick minds, for if an algorithm convincing a judge that it is a human is not a “trick”, I do not know what is. More importantly, passing a silly test hardly defines the profession any more. It is about answering questions that generate mass profit, and hence mass welfare.
Contemporary AI has “forgotten” about the question of philosophical intelligence because it is not a well-defined phenomenon. An essay in the New Yorker a definition of intelligence does not make; indeed, there is far more philosophical elegance in the Turing Test than in querying a computer about the habits of alligators. And I have a theory for that.
I actually agree with all of that, aside from the minor fraud in what AI has at various times claimed to be versus what it has actually been.
An AI with “true common sense” would allow robots to do a whole host of tasks at which they are hopeless now. And it would be required to bring certain hot technologies to full fruition.
For example, “driverless cars” won’t actually lead to “driverless long-haul trucks” in the general case until the AIs controlling the trucks can deal with all the weird things that happen in the real world.
So instead of “Joan thanked Susan for all her help… who helped whom?” think of an AI system that watches an intersection and changes the lights to optimize traffic flow. A heavy truck appears, taking up two lanes on purpose so it can safely execute a left turn. Today’s automatic lights just cycle normally, and the truck driver depends on everybody around the truck cutting it some slack. A truly AI intersection controller might show a green light to ONLY the truck until it has cleared. Current systems can only deal with funky cases like that when somebody encounters them, remembers them, and programs them in.
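That last point can be made concrete. A minimal sketch – the phase names, the `Vehicle` fields, and the wide-truck rule are all hypothetical – of a controller where the funky case exists only because somebody hand-coded it:

```python
from dataclasses import dataclass

@dataclass
class Vehicle:
    lanes_occupied: int   # how many lanes the vehicle straddles
    turning_left: bool

def next_phase(current_phase: str, waiting: list) -> str:
    """Pick the next signal phase for a simple two-phase intersection.

    The wide-truck rule below is exactly the kind of special case a
    human had to anticipate and program in by hand; anything not
    listed here falls through to the normal fixed cycle.
    """
    # Hand-coded special case: a vehicle straddling two lanes to make
    # a left turn gets an exclusive green until it clears.
    for v in waiting:
        if v.lanes_occupied >= 2 and v.turning_left:
            return "exclusive-green-for-truck"

    # Default behavior: just cycle normally.
    return "east-west" if current_phase == "north-south" else "north-south"
```

A controller with “true common sense” would not need the special case enumerated at all; this one handles only what its programmer remembered to write down.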
I agree with your comment. There’s a lot a commonsensical AI could do that would be the stuff of dreams. To the extent that it is valuable, the computer science community has not forgotten about it, as the New Yorker piece would suggest. It is only the purely conceptual and rather useless “nobody” questions that are the focus of my article. This is a fair response, I believe, as Marcus specifically highlights “questions that nobody has asked” as a theme of real intelligence.
But we agree.