Modelling and simulation

Exploring the computational universe with Stephen Wolfram

04 Nov 2019 | Margaret Harris
This article first appeared in the 2019 Physics World Focus on Computing under the title "Inside the computational universe"

Stephen Wolfram – the physicist who created Mathematica – reflects on how this computational tool has changed in 30 years, and on the practical and conceptual role that computational thinking plays in physics

Power to the people: Stephen Wolfram created Mathematica so that physicists – himself included – could compute things for themselves. (Courtesy: Wolfram Research, Inc.)

Why did you create Mathematica?

Because I wanted to use it myself. I was interested in physics from a young age, and I started doing physics research when I was in my early teens, in the mid-1970s. I didn’t like doing all the mathematical calculations that were needed, and I thought it should be possible to automate them. I soon became the main user of the various experimental systems for doing mathematical computation that existed at the time, but by 1979 I had outgrown them, so I decided I had to build a system for myself.

The result was SMP, the first version of which was released in 1981. SMP ran on large computers and found users in quite a few areas, including physics. I started my first company to develop and market SMP. But quite quickly thereafter I went back to basic science, starting my explorations of cellular automata and the computational universe, and helping to found the field that’s now called complexity theory.

By 1986, though, I decided there was an opportunity to create a more powerful tool that would cover all the computation I would ever want to do. That was also a time when personal computers were beginning to be able to do serious computation. And I wanted to build a system that could bring computation to a wide audience. At the time, most physicists really didn’t use computers themselves. They would delegate computing to someone else. I was very pleased with the way that Mathematica changed that and let actual physicists compute things themselves. It was a very nice transition to watch.

How has the program changed over the past 30 years?

Ninety-five per cent of what’s in it now wasn’t there 30 years ago. The core principles of the system have stood the test of time extremely well, and I’m pleased to say that almost any version 1 program from 1988 will still run in version 12 today (something that is very rare in the computing world). The core symbolic programming paradigm of Mathematica was also already there 30 years ago, and was broadly applicable from the very beginning. But in the intervening years, we’ve dramatically broadened and deepened the coverage of mathematical computations. We’ve also expanded into a great many other areas, to the extent that mathematical computation is now perhaps only 10% of what the system does. We’re also dealing with multiparadigm data science, machine learning, all kinds of visualization, text computation, graphs and networks, image computation, geometry, audio computation, knowledge representation and so on. Another major thing is that the program incorporates a huge amount of built-in real-world data, about chemicals or particles or planets – or countries, movies, and companies. This is the same data that powers Wolfram|Alpha, and which, in turn, powers intelligent assistants like Apple’s Siri and Amazon’s Alexa.
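
As an illustrative sketch (not from the interview) of what querying that built-in data looks like in the Wolfram Language – the entities and properties below are arbitrary examples, and each call returns a Quantity with units attached:

    (* Query built-in curated data as symbolic entities *)
    Entity["Element", "Gold"]["MeltingPoint"]       (* a chemical element property *)
    Entity["Planet", "Jupiter"]["Mass"]             (* a planet property *)
    Entity["Country", "France"]["Population"]       (* a country property *)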

Thirty years ago, we had already invented our notebook interface. Today that interface is considerably more developed, and it also runs in the cloud, so people can publish computable documents directly on the web. We’ve done a lot of work over the past 30 years, and the applications of Mathematica have dramatically expanded. Whether it’s being used as an embedded part of some robot or experimental data system, or for physics education with real-world data, or for the latest high-performance computation, there are things routinely done with Mathematica today that wouldn’t have been thinkable 30 years ago.

You’re developing something called Wolfram Language, which you’ve described as a combination of natural languages, mathematical notation and computational language. What’s the rationale behind this?

The concept of Wolfram Language (which is a direct extension of my original vision for Mathematica) is to have a computational language that can describe things in the world – things people want to talk about – in computational terms. It’s common to take small pieces of natural language (like “density of tungsten”) and have our natural language understanding system turn them into symbolic representations from which we can do computation. In that sense, Wolfram Language is, as much as anything, a description of what Mathematica has become, recognizing that “mathematics” is no longer a central focus.
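
As a rough sketch of that pipeline, the Wolfram Language function SemanticInterpretation turns a snippet of natural language into a symbolic expression that can then be evaluated; the example below is illustrative rather than taken from the interview:

    (* Turn natural language into a symbolic Wolfram Language expression;
       the exact form returned depends on the interpretation *)
    SemanticInterpretation["density of tungsten"]

    (* The equivalent explicit symbolic form, which evaluates to a Quantity with units *)
    Entity["Element", "Tungsten"]["Density"]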

I think there’s an interesting analogy between our effort to create a computational language and the origins of mathematical notation. Four hundred years ago, mathematics had to be described in words and ordinary language. But then mathematical notation was invented, and it provided a streamlined way for people to represent mathematical ideas – opening up the development of algebra, calculus and our modern mathematical sciences. It’s the same story with our computational language. We’re providing a broad language for representing computational ideas, and it’s unlocking “computational x” for essentially all fields x.

Computational essays are an important concept in Wolfram Language. Today, people write papers, for example in physics, using a combination of human language and mathematical notation. But with our computational language, it’s possible to routinely represent computational ideas, in a form that not only computers, but also humans, can readily understand. Our computational language provides a new channel for communicating ideas, and it’s also immediately executable. That means the papers of the future can be computational essays where people can not just read, but also execute, what’s said. Underlying data can be brought in (for example from our Wolfram Data Repository). And people can immediately build on one piece of work to do more.
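
A minimal sketch of that workflow, assuming a dataset published in the Wolfram Data Repository (the resource name below is a hypothetical placeholder, not one mentioned in the interview):

    (* Pull a dataset from the Wolfram Data Repository into a computational essay *)
    data = ResourceData["Some Example Dataset"];    (* hypothetical resource name *)

    (* The result is a Dataset that readers can re-run and build on *)
    data[1 ;; 5]    (* first five rows *)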

How is computing different from programming?

The key to computational language is to find a way to express whatever one wants to talk about in a form that a computer can understand. Programming languages are about starting from the underlying operations in a computer and working out how to tell the computer which operations to perform. A programming language has concepts like arrays or pointers. Our computational language, in contrast, has concepts like differential equations, or galaxies, or chemical elements, or countries. A lot of what’s normally considered “programming” is completely automated when you’re using our computational language. You’re essentially operating at a much higher level, and we’re taking care of all the details of doing what you want to do as efficiently as possible. The people who are doing “computational x” are really interested in computational thinking, not in programming as such.
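
As a loose illustration of that difference in level (the particular equation and entities below are arbitrary choices, not examples from the interview):

    (* Solve a differential equation without specifying a numerical method;
       the method, step sizes and error control are chosen automatically *)
    sol = NDSolve[{y''[t] + y[t] == 0, y[0] == 1, y'[0] == 0}, y, {t, 0, 10}];
    Plot[Evaluate[y[t] /. sol], {t, 0, 10}]

    (* Compute directly with real-world concepts such as countries *)
    GeoDistance[Entity["Country", "France"], Entity["Country", "Japan"]]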

What are some examples of ways that thinking computationally, rather than mathematically, about a system can aid understanding?

For 300 years, mathematical equations were the dominant method used to make models of things. In just the last decade or so, there’s been a kind of silent revolution in modelling, and new models of almost any concept or system – whether physical, engineering, social or biological – have started to be based on computation (and, effectively, programs) rather than mathematical equations. It’s quite a paradigm shift, which is why I called my big book on the subject A New Kind of Science. [Update: Wolfram has also written a collection of essays on computation, Adventures of a Computational Explorer, published in October 2019.]
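
One of the simplest examples of such a program-based model is a one-dimensional cellular automaton; a minimal sketch (rule 30 is the classic case discussed in A New Kind of Science):

    (* Evolve the rule 30 cellular automaton for 100 steps from a single black cell;
       the "model" here is a short program rather than an equation *)
    ArrayPlot[CellularAutomaton[30, {{1}, 0}, 100]]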

Why do you think physicists, in particular, should consider framing problems in computational, rather than mathematical, terms?

Computation is a generalization of mathematics. Yes, there are plenty of systems that have traditionally been studied in physics that can be modelled mathematically, for example with differential equations. But there are a lot more systems (including plenty of physical ones) that need a generalization of the equations approach. There’s a lot of new physics that’s made possible by this.

What role do you think computation will play in the future of physics?

Physics was early in using computers to aid in working with its existing paradigms, and I would like to think that Mathematica helped with that. The biggest growth directions, I think, will be in the use of computation as a paradigm for physics. Part of this involves using computational models for physical systems. But part of it also involves understanding computational concepts like computational irreducibility, and seeing how they relate to phenomena in physics.

It’s hard to know what might crack the problem of finding a fundamental theory of physics, but perhaps it will be computation. Certainly, the intuition that we now have from exploring the computational universe of simple programs is something completely new – and it seems potentially highly relevant to questions of fundamental physics. I’ve been thinking about these kinds of things for a long time, and I’m finally about to mount a serious project to see whether there’s a computational way to approach fundamental physics that will get further than the quantum field theory and general relativity approaches that we’ve been trying for a hundred years. Of course, it may be the wrong century – or the wrong approach – to crack the problem. But there’s definitely a lot of interesting theoretical structure to investigate.

