Philosophy, sociology and religion

Why artificial intelligence has brought scientists and philosophers together

05 Mar 2019 Robert P Crease
Taken from the March 2019 issue of Physics World, where it appeared under the headline "A frame of mind".

An ongoing debate in artificial-intelligence research reveals productive interactions between cognitive scientists and philosophers, says Robert P Crease

You better you bet: can a robot predict the outcome of a horse race better than a human expert? (Courtesy: iStock/ppart)

When the concept of artificial intelligence (AI) was developed in the 1950s, its founders thought they were on the verge of fully modelling human thought processes and intelligence. Indeed, the US economist Herbert Simon – a future Nobel laureate – was so confident of AI’s prospects that in 1965 he predicted machines would, by 1985, “be capable of doing any work a man can do”. As for Marvin Minsky, the Massachusetts Institute of Technology (MIT) cognitive scientist who co-founded its Artificial Intelligence Laboratory, he boldly announced that “within a generation…the problem of creating artificial intelligence will substantially be solved”.

Not everyone was so sure. Hubert Dreyfus, an MIT philosopher, argued that these ambitions were conceived in such a way as to be unachievable in practice and impossible in principle. AI research was doomed to fail because it was based on an incoherent rationalist philosophy. Intelligent human behaviour, he argued, is much richer than information processing. It requires responding to situations with “common-sense knowledge”, which is not amenable to programming.

Dreyfus outlined his thinking in a 1964 article entitled “Alchemy and artificial intelligence”, commissioned by the Rand Corporation (a policy think-tank). He elaborated the arguments in his 1972 book What Computers Can’t Do. Many of his arguments concerned what had already been dubbed the “frame problem”.

Get in the frame

Robots, Dreyfus thought, can be programmed to execute complex human tasks like walking over rough surfaces, smiling or placing reasonable bets. But to do so in a human way requires taking into account the specific situation in which these actions occur. To place a judicious bet on a racehorse, for instance, a human normally looks at the horse’s age, its record of wins, the jockey’s training and history, and so forth, all of which change with each race.

No problem, argued Minsky. A robot can be programmed to do that, turning these factors into information for the robot to process with a complete set of rules or “frame”. If you give the robot a frame, it’ll be able to bet on racehorses like a human – maybe better.
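Minsky’s “frame” can be caricatured as an explicit, fixed rule set that a program consults. The following Python sketch is purely illustrative — the factor names, weights and data are invented for this example, not drawn from any real betting system:

```python
# A toy "frame" in the GOFAI sense: a fixed list of factors and rules
# the program applies to every race. All names and weights are invented.

def betting_frame(horse):
    """Score a horse using a fixed, pre-programmed set of factors."""
    score = 0.0
    score += 2.0 * horse["wins"] / max(horse["races"], 1)  # record of wins
    score -= 0.5 * max(horse["age"] - 5, 0)                # penalise older horses
    score += 1.0 * horse["jockey_wins"] / 100              # jockey's record
    return score

horses = [
    {"name": "Comet", "wins": 6, "races": 10, "age": 4, "jockey_wins": 80},
    {"name": "Dobbin", "wins": 2, "races": 12, "age": 9, "jockey_wins": 20},
]
best = max(horses, key=betting_frame)
print(best["name"])  # the frame picks whichever horse its fixed rules favour
```

The point of the sketch is that every factor the program can weigh must be enumerated in advance — which is exactly where Dreyfus saw the trouble.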

But there’s more to it, Dreyfus argued in What Computers Can’t Do. Many other factors are present in racehorse betting, such as the horse’s allergies, the racetrack’s condition, and the jockey’s mood, all of which change from race to race. However, when computer programmers analyse the relevant factors, Dreyfus noted, they end up behaving more like novices and amateurs (who often get things wrong) and less like experts (who get things wrong much less often yet appear to rely on no set “programming” method at all). In this regard, a true expert more closely resembles someone exercising common sense than someone consciously analysing factors.

Some AI enthusiasts accused Dreyfus of using the frame problem to try to drive a stake through the heart of AI. To counter Dreyfus’s objection, they felt, one simply programmed the robot to identify these other factors if and when they are relevant, thus putting another frame around the first. Dreyfus responded that the computer would need still another frame to recognize when to apply this new one. “Any program using frames,” he wrote in his 1972 book, “was going to be caught in a regress of frames for recognizing relevant frames for recognizing relevant facts.” The common-sense knowledge storage and retrieval problem, in other words, “wasn’t just a problem; it was a sign that something was seriously wrong with the whole approach”.
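Dreyfus’s regress can likewise be caricatured in code: deciding whether a frame is relevant itself requires consulting a further frame, and so on without end. A hypothetical sketch:

```python
import sys

# Caricature of the regress of frames: applying frame `level` first
# requires frame `level + 1` to judge whether it is relevant — so the
# chain of frames never bottoms out. Purely illustrative.

def apply_frame(level):
    """Consult frame `level + 1` before frame `level` can be applied."""
    return 1 + apply_frame(level + 1)  # never terminates on its own

sys.setrecursionlimit(100)  # keep the inevitable failure small
try:
    apply_frame(0)
except RecursionError:
    print("regress: no frame of all frames")
```

The recursion error stands in for Dreyfus’s point: no finite stack of frames closes the regress.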

Two approaches

The frame debate turned on a fundamental philosophical difference between a “Cartesian” and a “Heideggerian” approach. In a Cartesian approach, the world in which humans live and cope is assumed to exist entirely apart from the minds that consciously try to “know” it. In this view, to be human is to cope with life using knowledge and beliefs to assess the meaningful factors that they confront. Programming a computer to respond to a situation humanly therefore means supplying it with knowledge and information-processing capacity. If those prove inadequate, the solution is to add in more of the right kind of knowledge or processing ability. This Cartesian-inspired strategy is known as “Good Old Fashioned AI”, or GOFAI.

The Heideggerian approach, in contrast, begins with a very different conception of the human–world relation. It says that the fundamental human experience of the world is not of an external realm of objects that we theorize about. Rather, humans are immersed in the world in a web of connections and practices that make the world familiar. Coping with the world is more like enacting common sense than employing theories and analysing them. Theorizing comes later, as a guide to some forms of deliberate action, where spontaneous employment of common sense is ineffective or otherwise insufficient.

Putting a frame around a situation transforms it from a situation in the world into an artificial world. That’s valuable for executing many tasks, such as playing chess or evaluating complex systems. But if the aim is to make a robot respond to all situations, adding more knowledge or computing capacity won’t do. There is no “frame of all frames” that would allow this. Heideggerian AI, Dreyfus writes, “doesn’t just ignore the frame problem nor solve it, but shows why it doesn’t occur”.

The critical point

AI research has come a long way in the last half-century, and no longer depends on the crude conceptions of intelligence taken for granted in GOFAI. In fact, the more sophisticated recent conceptions were developed in part by incorporating elements of Dreyfus’s initial critique. Mostly absent now, for instance, is the claim that AI will fully model human thinking and intelligence. Arguments about the frame problem have become quite technical, but it remains a core issue. It is one of the few areas where philosophers and scientists have managed to engage each other productively.

Copyright © 2025 by IOP Publishing Ltd and individual contributors