When Eben Upton developed the Raspberry Pi in 2012, he expected the no-frills, single-board computer to appeal mainly to schoolchildren. Industry had other ideas. Margaret Harris spoke to him about how simple computers like the Raspberry Pi are becoming integral parts of the emerging industrial Internet of Things
The Internet of Things (IoT) means different things to different people. What’s your definition?
I think we’ve been done a bit of a disservice, because the earliest examples of IoT devices – the ones that people often think about when they hear the term “Internet of Things” – were consumer items such as Internet-connected light bulbs, smart switches and the smart thermostats made by a company called Nest. I have a Nest myself, but it’s misleading to think of the IoT in terms of stuff like that. Most of the potential for connected devices is on the industrial side, because there, the applications are not limited by human attention.
The number of IoT objects you will have in your life as a consumer is going to be countable on the fingers of one hand. You might have a smart thermostat. You might have a digital assistant-type object like Amazon’s Alexa. You might have some home automation to turn lights on and off and open your garage door. But fundamentally, you only have so much time to interact with connected objects. The industrial IoT is more interesting because it’s largely about machines talking to other machines, and there is no limit to the size of that market as long as there’s a return on investment.
The market for industrial IoT objects is big because of the amount of money you can save. The inefficiencies that abound in, say, manufacturing are so enormous and so well documented that you can put in lots of extra automation and monitoring equipment and it will quickly have a large and demonstrable payoff. So, for me, the IoT is much more about factories and making industrial processes run more smoothly than it is about consumer products.
What kind of industrial processes?
I’ll give you an example from the factory that makes Raspberry Pis. The factory is in South Wales and it’s absolutely vast; it’s owned by Sony and they make many other things there as well. The industrial equipment they use tends to have data ports on the back, usually Ethernet ports or serial ports, which spew out data about how the machines are performing – but almost always, historically, nobody’s been listening.
The reason is that to listen properly, you need something to connect to those data ports and do some pre-processing on the data. Then you need to shoot the processed data back over the network to a machine that can store it until you’re ready to analyse it. But until recently, there hasn’t been anything cheap, low-power and compact enough to plug into the data ports, and there’s been nowhere to store the resulting datasets if you wanted to back them up. We’ve also lacked the algorithmic expertise needed to turn that raw data into information that generates insights into how you can improve the performance of the machines.
Now, with the collapse in the cost of data storage, the rise of machine-learning techniques and the emergence of small computers like the Raspberry Pi, that’s all changed. You’ve got a thing you can plug into a piece of industrial equipment to capture that stream of data. You’ve got somewhere to put the data you capture. And once you’ve got it back over the network, you’ve got the tools you need to get information out of it. So what Sony has done is to put in a whole new layer of monitoring to observe the behaviour of the equipment and feed it into “big data” analytics operations – and it’s been done without disrupting the existing industrial control systems the firm has been using for years.
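To make that pipeline concrete, here is a minimal sketch, in Python, of the kind of program a Raspberry Pi might run at a machine’s data port: read telemetry over a serial link, pre-process it into summary statistics, and publish the result to a message broker on the network. The port name, data format, broker address and topic are illustrative assumptions, not details of Sony’s actual system.

```python
# A minimal, illustrative edge-monitoring loop for a Raspberry Pi.
# Assumed details: the serial port name, baud rate, one-number-per-line
# data format, MQTT broker address and topic are all hypothetical.
import json
import statistics

import serial                     # pip install pyserial
import paho.mqtt.client as mqtt   # pip install paho-mqtt

PORT = "/dev/ttyUSB0"             # the machine's serial data port (assumed)
BROKER = "factory-broker.local"   # hypothetical broker on the factory network

def main():
    link = serial.Serial(PORT, baudrate=9600, timeout=1)
    client = mqtt.Client()
    client.connect(BROKER)
    client.loop_start()           # handle network traffic in the background

    readings = []
    while True:
        raw = link.readline().decode("ascii", errors="ignore").strip()
        try:
            readings.append(float(raw))      # assume one number per line
        except ValueError:
            continue                         # skip blank or non-numeric lines
        if len(readings) >= 60:              # pre-process: publish a summary,
            payload = json.dumps({           # not the raw stream
                "mean": statistics.mean(readings),
                "max": max(readings),
                "n": len(readings),
            })
            client.publish("machines/press-1/telemetry", payload)
            readings.clear()

if __name__ == "__main__":
    main()
```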
How else could small devices be used?
There are lots of examples, but one of my favourites is that someone installed a Raspberry Pi with a microphone attached in their elevator and recorded the sound of the elevator running. Once they’d accumulated enough data, they were able to identify which sounds were bad, in the sense of predicting that the elevator would need maintenance in the near future. The bulk of Raspberry Pi deployments in industry are like that, in the sense that they involve monitoring rather than control, although that’s starting to change.
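As a rough illustration of the approach (not the original deployment), the sketch below builds a spectral “fingerprint” from recordings of a healthy elevator and flags new clips that stray too far from it. The clip length, threshold and random stand-in data are assumptions.

```python
# An illustrative take on the elevator idea: compare the spectrum of a new
# sound clip against a baseline built from known-healthy recordings.
import numpy as np

def fingerprint(clip):
    """Normalised magnitude spectrum of a fixed-length audio clip."""
    spectrum = np.abs(np.fft.rfft(clip))
    return spectrum / (np.linalg.norm(spectrum) + 1e-12)

def anomaly_score(clip, baseline):
    """Distance between a clip's fingerprint and the healthy baseline."""
    return float(np.linalg.norm(fingerprint(clip) - baseline))

# Baseline from clips recorded while the elevator was known to be healthy.
# A real deployment would load recorded audio; random noise stands in here.
healthy_clips = [np.random.randn(16000) for _ in range(10)]
baseline = np.mean([fingerprint(c) for c in healthy_clips], axis=0)

new_clip = np.random.randn(16000)            # stand-in for a fresh recording
if anomaly_score(new_clip, baseline) > 0.5:  # threshold tuned on real data
    print("Unusual sound detected: schedule a maintenance inspection")
```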
What makes small computers like the Raspberry Pi suited to these “machines talking to machines” applications?
Low cost and hackability are both important, because together, they reduce the cost of experimentation. We want people to be able to try this stuff, adding monitoring and automation to their production lines, using whatever discretionary budgets they have available to them, without getting locked into a particular vendor. That way they can experiment quickly, and if they get a negative outcome, well, that’s fine, they’ve only spent £50 on a Raspberry Pi. But if they get a positive outcome, the low unit costs of these computers enable them to scale up while maintaining a good return on investment (ROI). The payoffs in some industries are quite large, but you can still win a lot more ROI calculations with a £50 product than you can with a £500 product, which is where we used to be in terms of diagnostic equipment.
There’s also a secondary aspect to ROI calculations, which is the total cost of ownership. If you plug in a PC and leave it idling, it’ll consume tens of pounds’ worth of electricity a year. With a small computer like the Raspberry Pi, that drops to a handful of pounds, and over the years, the difference adds up. The lack of moving parts also makes a Raspberry Pi robust, so you don’t have to replace units as often. Building a robust machine was key for us because we were expecting kids to use them, and the metrics we used for that are useful for industry, too. Which is the tougher environment for a computer: an oil rig or a kid’s bedroom?
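For a sense of scale, a back-of-envelope comparison under assumed figures (an idle PC at ~50 W, a Raspberry Pi at ~3 W, electricity at ~£0.15/kWh) bears out the “tens of pounds versus a handful” claim:

```python
# Back-of-envelope annual running costs, using assumed figures: a desktop PC
# idling at ~50 W, a Raspberry Pi at ~3 W, electricity at ~£0.15 per kWh.
HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.15  # £ per kWh (assumed)

for name, watts in [("Idle PC", 50), ("Raspberry Pi", 3)]:
    kwh = watts * HOURS_PER_YEAR / 1000
    print(f"{name}: ~£{kwh * PRICE_PER_KWH:.0f} per year")

# Prints roughly: Idle PC: ~£66 per year; Raspberry Pi: ~£4 per year
```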
Security is another topic that often comes up in conversations about the IoT. How do you keep these machine-to-machine conversations secure?
Whenever you talk about a glorious “big data” universe in which devices are hoovering up large amounts of data, people get queasy about it. That’s not unreasonable. Someone made an IoT television a few years ago that turned out to be recording basically everything that went on in your house and shipping it to the data “cloud”, and people were rightly terrified.
I don’t have an Alexa-type device in my house. I talk about too much secret stuff; my wife works at Raspberry Pi, too, so we talk about the business a lot. But the same problem also surfaces in an industrial context, and I see local data processing as part of the solution. If you’re running a machine-learning-type application, you’re probably not going to be able to train your algorithms locally. But once you’ve got a pre-trained model, you can probably do local inference using the processor on a Raspberry Pi. You can imagine building a monitoring system composed of some cheap sensors connected via Bluetooth to a Raspberry Pi that aggregates data from these sensors, does some processing, and then sends a relatively small amount of relevant data over the network.
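One plausible shape for that local-inference step, sketched under assumptions (a hypothetical pre-trained TensorFlow Lite model, with a made-up input window standing in for the aggregated Bluetooth sensor data), is shown below; only the verdict, not the raw data, leaves the device.

```python
# A hedged sketch of local inference on a Raspberry Pi: run a pre-trained
# TensorFlow Lite model over aggregated sensor readings and transmit only
# the verdict. The model file, input shape and threshold are hypothetical.
import numpy as np
from tflite_runtime.interpreter import Interpreter  # pip install tflite-runtime

interpreter = Interpreter(model_path="machine_health.tflite")  # assumed model
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Stand-in for a window of readings aggregated from Bluetooth sensors.
window = np.random.rand(1, 128).astype(np.float32)

interpreter.set_tensor(inp["index"], window)
interpreter.invoke()
score = float(interpreter.get_tensor(out["index"])[0][0])

if score > 0.9:  # only the small, relevant result crosses the network
    print("Anomaly inferred locally; sending a short alert upstream")
```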
For much of the history of computing, progress has been about making faster, more powerful computers. Is that emphasis changing and, if so, what does that tell us about the future direction of the field?
I think the era of free returns in processor speeds is drawing to a close, because we’re running out of atoms. The smallest structures on silicon chips are now spaced around 7 nm apart, which is about 70 atoms, and at those distances both the physics and the economics of the system start to go awry. Our knowledge of the behaviour of semiconductors is based on a statistical model in which every thousand silicon atoms contains, on average, a certain number of dopant atoms. But of course, once you’re making silicon structures 70 atoms apart, doping is no longer a statistical process, so your assumptions start to break down on the physics side. At the same time, on the economic side, it’s becoming ruinously expensive to build faster chips.
Does that mean that Moore’s law no longer holds?
Moore’s law was only ever really an agreement between interested parties – chip designers, foundries and manufacturers of foundry equipment – that the number of transistors per unit area of silicon would advance along an exponential curve at a certain rate. It was kind of a consensus. But the trends that enabled that consensus are coming to an end, and that means we’re beginning to see a new focus on efficiency in software engineering. I’m excited by this because I’m still a software engineer at heart, and until recently it’s been very hard to argue for writing more efficient code because the doubling in computer power meant it wasn’t necessary. You just waited two years, and your code ran twice as fast.
Any other changes?
I’m seeing an increasing focus on communications, making it easier for computers to interact with the real world. There isn’t so much excitement anymore in doing lots and lots of maths really fast on one computer in isolation, and we actually see this on the educational side of our business.
When we built the first Raspberry Pi, I didn’t want to put input-output pins on it, because I thought kids would mainly be interested in using it to write programs. Of course, what children actually love doing with the Raspberry Pi is interacting with the real world, building weather stations and robot controllers and things like that. And maybe that was a harbinger of things to come, or the kids were attuned to the zeitgeist more than we were. The kinds of things they were interested in then are the things we’re all interested in now, which is working out what problems computers can solve for you. And now that the era of free returns is coming to an end, I think we can broaden that question out a little bit.
- Eben Upton is the co-founder of the Raspberry Pi Foundation and chief executive of its commercial arm, Raspberry Pi Trading, e-mail eben@raspberrypi.org
- Read more about Eben Upton and how physics has influenced his work in his “Once a physicist” interview (Physics World, November 2019 p61).