Education and outreach

A reading and writing revolution

10 Feb 2004

Merging pen and paper with digital media could change the way we learn and communicate


Wherever and whenever people need to work with information you will find paper. For most of us paper provides the best way to understand something, but the reasons why are not obvious. Research shows that there is a sophisticated interplay between people and paper. The simple act of reading a paper document, for example, is invariably a two-handed operation, which is not normally the case for e-books or PCs. It seems that the interaction between our hands and the page actually aids cognition in ways that are not fully understood.

We also use pens and other writing implements when working with paper, not just to annotate and mark documents but also to point and gesture, or even to tap on desks to the frustration of others. Furthermore, this is not something that is acquired during education – it is a natural human trait. These reading and writing practices may not be understood, but over a period of 2000 years they have evolved into a highly efficient communication medium.

Despite its ubiquity and strengths, however, the paper-and-pen approach is limited by today’s media standards. Paper is very good for communicating static images, but electronic media can incorporate motion and sound, and can thus communicate certain information more clearly. Digital media and the pen-and-paper paradigm have strong complementary functions, but melding them together could lead to even more efficient ways to communicate and comprehend. This reading and writing revolution is being pioneered by several companies worldwide and was the subject of a workshop held at King’s College, London, in December last year. The distinction between what is paper and what is a computer is about to become blurred.

Imagine filling out an address and ticking a box in a newspaper advert, only to receive the product a few days later without having to cut out or post a form. Or envisage paramedics being able to record a patient’s condition in the ambulance so that it is registered with the hospital computer system by the time they arrive. The more romantic among you could soon be able to send a personalized Valentine message, such as a doodle of a heart, to your partner’s mobile phone.

Paper displays

The most obvious step towards such technology is to make a computer-generated display on a paper-like material. Ideally we would like to use real paper for this, since it aids cognition and is also portable and versatile. This, however, is some way off. Today the best results on real paper are seven-segment displays built from chemicals printed on paper substrates, which can display simple numerical information. Several researchers have developed fully functional, pixel-based displays on glass, and today’s prototypes are being made on plastic substrates that are thinner, cheaper and more flexible.

Paper-thin display technologies require a “chromomorphic” component to effect changes in colour. This is normally achieved using an electric field or a small current, which is applied across the thickness of the display material. The driving electronics to address each pixel are generally built into the back of the display, and a second, counter electrode lies between the display material and the user. The ambition of many is to print the driving electronics on the display sheet, possibly using conductive organic materials, because this would enable thinner and more cost-effective displays to be produced. Researchers at Linköping University in Sweden and Plastic Logic in Cambridge are both working on such technologies.

Several different chromomorphic systems are being developed. Gyricon in Michigan in the US, for example, favours a sheet containing millions of bi-coloured microscopic spheres. Each sphere has an electrical dipole moment so that it can rotate between the two colours when an electrostatic field is applied. E-Ink in Massachusetts uses microspheres that contain dark and light particles with opposite charges, which undergo electrophoretic movement in a field. The Swedish company Acreo and NTera in Ireland, on the other hand, use different types of electrochromic chemicals that change colour when they are charged. Most recent on the scene are electro-wetting surfaces patented by Philips, which enable coloured oils to cover or expose white surfaces under the influence of an electric field. These devices all produce a two-colour display – often using blue and white – but with more complex driving electronics they are, in principle, capable of a full-colour output.

Few of these products have reached large-scale commercialization, but several test installations exist. Personal digital assistants (PDAs) are seen as a natural application for chromomorphic technologies, but the real market explosion will take place when the technology allows the production of TV screens that can be rolled up. As well as greater portability, such screens could be used to decorate walls or other flat surfaces. Moreover, the TVs of the future will offer more personalized content than today’s models, which means it is important that people can engage with the technology as they currently engage with paper.

Paper-like input

The desire for paper-like display devices is well recognized, but the concept of paper-like input devices is rather more obscure. The first paper-like input devices were graphic tablets that supported writing, which required a lot of electronics behind the “paper” to detect the position of a pen electromagnetically. Several decades ago companies such as Xerox began to experiment with video-capture systems, whereby a camera installed in the ceiling above a worker’s desk was used to track the position of the pen and paper. Today video-free graphic tablets exist that use ultrasonic triangulation to capture the motion of special pens on A4 pads, such as Ink Link, or on flip-charts such as Mimio.

These techniques are all clever and widely used, but the majority of them require some external device such as a video camera or an ultrasonic detector. Furthermore, the co-ordinate system that is used to detect position is set by the detector rather than the paper, which detracts from the portability and flexibility of paper. A direct pen-and-paper paradigm is therefore the ultimate goal in tracking the position of a pen on a writing surface.

There are currently two radical examples of such position-dependent detection paper. The first is from a small Swedish company called Anoto, and forms the basis of Nokia’s Digital Pen, Logitech’s Io and Sony Ericsson’s Chatpen. These devices allow users to convert handwriting into digital images that can be sent via e-mail or even posted automatically on web-diaries. The second example – which comes from a European Union consortium known as Paper++ – takes a different approach in which users interact with pre-printed documents.

The Anoto technology relies on an almost invisible pattern of pre-printed dots on the paper and a great deal of intelligence built into the pen. Instead of scanning and recognizing single lines of text, the Anoto pen uses a built-in CCD camera to view the infrared-absorbing dots, each of which is slightly displaced from a square array. The relative positions of the dots in a six-by-six array map to a unique x-y position in a vast address space. Images are recorded and analysed in real time to give up to 100 x-y positions per second, which is fast enough and of sufficient resolution to capture a good representation of all handwriting. The equivalent of several A4 pages can be recorded and stored in the pen before being transmitted to a PC.

Paper++

The aim of the Paper++ project, which is led by researchers at King’s College, London, is quite different. The idea is to annotate normal printed paper so that it can interface directly with all types of digital media through a cheap and simple pen-like device. Printed documents would be overprinted with an almost invisible pattern of conductive ink that uniquely encodes the x-y location on the document. This code can then be interpreted by a pen with concentric electrodes that is swiped over the paper in a tick-like motion. The pen converts the code into a frequency-modulated signal that is fed into the microphone socket of a PC, or of any other digital device. The sound is then translated into an x-y location. This location, in turn, is mapped to a resource: a sound bite, a video clip, a URL, a telephone number or any other resource that the user wishes to launch.
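The idea behind such position-encoding patterns can be sketched in a few lines of code. The real Anoto scheme is proprietary, so the model below is a hypothetical simplification: assume each dot in a six-by-six patch is nudged in one of four directions from its nominal grid point, so a patch carries 36 base-4 digits, giving an address space of 4^36 distinct patches, which is then folded into x-y coordinates on a huge virtual page.

```python
# Illustrative sketch only; the directions, patch size and page width
# are assumptions, not the actual (proprietary) Anoto encoding.

DIRECTIONS = ["up", "right", "down", "left"]  # hypothetical symbol order

def patch_to_index(patch):
    """Read a 6x6 grid of dot displacements as one 36-digit base-4 number."""
    index = 0
    for row in patch:
        for displacement in row:
            index = index * 4 + DIRECTIONS.index(displacement)
    return index

def index_to_xy(index, page_width=2**18):
    """Fold the patch index into an (x, y) cell on a vast virtual page."""
    return index % page_width, index // page_width

# A pen camera delivering ~100 frames per second would decode one patch
# per frame.  Here, every dot is nudged "up" (digit 0) except the last,
# which is nudged "right" (digit 1), so the whole patch encodes 1.
patch = [["up"] * 6 for _ in range(6)]
patch[5][5] = "right"
print(patch_to_index(patch))  # -> 1
print(index_to_xy(1))         # -> (1, 0)
```

Because every patch decodes to a globally unique position, the paper itself carries the coordinate system, which is exactly the property that distinguishes these pens from tablet- or camera-based tracking.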

Numerous companies worldwide have discovered that paper is a resource that will not be replaced by today’s computing resources. The paperless office, it seems, is indeed a myth. Industry is therefore investing heavily in closing the gap between the human functionality of pen and paper, and the technical competence of computing. The outcome is surely set to change our lives in the very near future.

Copyright © 2024 by IOP Publishing Ltd and individual contributors