A third gravitational wave has been detected by physicists working on the LIGO gravitational-wave detectors in the US. The wave was produced by two black holes that merged about 3 billion light-years from Earth. One black hole was 31 times as massive as the Sun and the other weighed in at 19 solar masses. The previous two sightings of gravitational waves were also produced by black-hole mergers, but LIGO researchers think this is the first event in which the spin of one of the merging black holes could have been pointing in the opposite direction to the orbital rotation of the black holes.
The latest gravitational-wave observation is also the furthest of the three black-hole mergers seen so far, with the first and second detections being 1.3 and 1.4 billion light-years away, respectively. The newly observed merger created a black hole of 49 solar masses, which sits nicely between the first detection (62 solar masses) and the second (21 solar masses). Before LIGO saw its first gravitational waves in 2015, astronomers had no idea that stellar black holes of such large masses existed in the universe.
The new event, dubbed GW170104, was observed on 4 January 2017 when signals lasting about one tenth of a second were recorded in LIGO’s two detectors in Washington and Louisiana. These are giant interferometers, each consisting of two perpendicular arms 4 km long. Laser light travels back and forth between mirrors at either end of the arms, and some of this light is sent to a detector, where interference occurs. When a gravitational wave passes through a LIGO detector, it can slightly stretch one arm and compress the other, thereby altering the measured interference and allowing the gravitational wave to be measured in real time.
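To get a feel for the scale involved, here is a back-of-the-envelope sketch (the strain value is a typical order-of-magnitude figure for LIGO events, not the measured amplitude of GW170104):

```python
# Illustrative estimate of how much a gravitational wave changes the
# length of a LIGO arm. Strain h is the fractional length change,
# so delta_L = h * L.

h = 1e-21   # dimensionless strain, typical order of magnitude at Earth
L = 4e3     # arm length in metres (4 km)

delta_L = h * L
print(f"Arm length change: {delta_L:.1e} m")
# ~4e-18 m – hundreds of times smaller than the diameter of a proton
```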
Death spiral
The newly observed event began as two black holes neared each other in a death spiral. As the system rotated, it broadcast gravitational waves that spread out across the cosmos – some reaching Earth. The signals at both detectors – Livingston in Louisiana and Hanford in Washington – had the characteristic “chirp” and “ringdown” seen in the previous two detections. Chirp describes the rapid increase in the frequency and amplitude of the gravitational wave that occurs just before the black holes merge. Ringdown is the gravitational wave that is emitted by the non-spherical merged black hole as it relaxes to become a sphere.
By studying both the chirp and the ringdown, LIGO physicists worked out the masses of the two initial black holes as well as the mass of the merged object. The merged black hole weighed 49 solar masses – roughly one solar mass less than the combined 50 solar masses of the initial pair – the difference being radiated away in the form of gravitational waves.
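Using the rounded masses quoted above (so this is an approximate, illustrative figure), the radiated energy follows from Einstein’s mass–energy relation:

```latex
E = \Delta m\,c^{2} \approx (1\,M_\odot)\,c^{2}
  \approx (2.0\times10^{30}\,\mathrm{kg})\,(3.0\times10^{8}\,\mathrm{m\,s^{-1}})^{2}
  \approx 1.8\times10^{47}\,\mathrm{J}
```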
Spinning around
Another key piece of information that can be gleaned from the gravitational wave is the alignment of the intrinsic angular momenta (or spins) of the black holes. As well as orbiting each other, each black hole can be spinning on its own axis – much like the Earth does. When the black holes coalesce, the total angular momentum of the merged black hole cannot exceed a certain upper limit. So if the spins of the two merging black holes point in the same direction as the orbital angular momentum, some of the orbital angular momentum must be discarded to meet this criterion before the merger can occur. This is done by emitting additional gravitational waves before the merger.
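The upper limit in question is the Kerr bound – a standard result of general relativity, stated here for context. A black hole of mass M cannot carry spin angular momentum greater than

```latex
J \le \frac{G M^{2}}{c}, \qquad \chi \equiv \frac{c\,J}{G M^{2}} \le 1,
```

where χ is the dimensionless spin that LIGO papers typically quote.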
Data from the first detected black-hole merger (GW150914), recorded in 2015, suggested that the spins of both black holes were aligned with the orbital angular momentum. In the second detection (GW151226), recorded last year, there is some evidence that the spin of one of the black holes could be at an angle to the orbital angular momentum – but still have a component in the direction of the orbital angular momentum. In GW170104, however, it is possible that the spin of at least one of the black holes is at an angle and has a component in the opposite direction to the orbital angular momentum.
According to Bangalore Sathyaprakash of Cardiff University in the UK, the relative orientations of the spin and orbital angular momenta of a binary black hole provide important information about how the system formed. If they are aligned, it is likely that the system developed in isolation as two large stars that then collapsed to create a binary black hole. Misalignment suggests that the black holes formed separately and then came together to create a binary system.
Real statistics
“We’re starting to gather real statistics on binary black-hole systems,” says Keita Kawabe of Caltech, who is based at the LIGO Hanford Observatory. “That’s interesting because some models of black-hole binary formation are somewhat favoured over the others even now and, in the future, we can further narrow this down.”
This third observation has also allowed LIGO scientists to put further limits on models that modify Albert Einstein’s general theory of relativity. Einstein’s theory predicts that the speed of a gravitational wave through empty space is independent of the frequency of the wave, so any deviation from this constancy would show up as a frequency-dependent distortion of gravitational-wave signals seen by LIGO. Given that this was not seen, despite the waves travelling such long distances through the cosmos, general relativity appears to be holding firm.
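One common way to parametrize such a deviation – included here as background, not as a detail from the new paper – is to give the graviton a small mass m_g, which makes the wave’s group velocity depend on its energy (and hence frequency):

```latex
\frac{v_g^{2}}{c^{2}} = 1 - \frac{m_g^{2} c^{4}}{E^{2}},
```

so lower-frequency components would lag behind higher-frequency ones over cosmological distances; seeing no such lag tightens the bound on m_g.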
“It looks like Einstein was right – even for this new event, which is about two times farther away than our first detection,” says LIGO physicist Laura Cadonati of Georgia Tech. “We can see no deviation from the predictions of general relativity, and this greater distance helps us to make that statement with more confidence.”
GW170104 is the first gravitational-wave detection to be announced since LIGO began its current observational run in November 2016. The LIGO team has also identified six other candidate events during this run, which they are now analysing. The observation is reported in Physical Review Letters.
The ability to observe gravitational waves heralds a new era of astronomy in which scientists will combine observations from a number of different instruments. You can read more in Multimessenger Astronomy by Imre Bartos and Marek Kowalski.
Over the past few years, 3D printing has become one of the most hyped topics in 21st-century technology. Research into the techniques used in 3D printing has already given scientists new freedom in the design of components and devices, while the ability to manufacture highly individualized parts in a cost-effective way is finding new applications all the time.
So far, much of the hype has centred on macroscopic objects such as 3D printed implants or prototype parts for the automotive industry. However, a logical extension of the technology would be to exploit the unique potential of 3D printing in the domain of microstructures. This is of particular interest for optical scientists who wish to enhance the functionality of micro-optical devices such as highly integrated cameras, or even make it possible to build entirely new categories of devices that cannot be manufactured via a combination of traditional techniques such as UV- or electron-beam lithography.
Macro vs micro
Macro-sized 3D printing relies on several well-established techniques, including the selective melting of metal powders; inkjet printing; stereolithography; and fused deposition modelling. In contrast, the realm of 3D microprinting is dominated by a single method: two-photon polymerization (2PP). In some ways this method is similar to conventional UV lithography, where a (usually) liquid photopolymer is illuminated with light at an appropriate wavelength; the polymer solidifies where the light is absorbed; and a combination of masks and subsequent removal of unexposed polymer is used to create highly sophisticated structures. The main difference is that in 2PP the trigger for solidification inside the focal volume is a femtosecond laser pulse.

The physical phenomenon that drives this process is known as two-photon absorption (TPA), and it can only occur if the laser light is very intense: if you focused all of the sunlight falling on the city of Würzburg, Germany, onto a single grain of sand, you would get an intensity equivalent to that used for 2PP. Under these extreme conditions, two simultaneously absorbed photons with wavelengths in the visible part of the spectrum (such as 515 nm) can trigger the same chemical reaction as a single UV photon. Because this reaction does not occur at lower light intensities, solidification is strongly confined to the focal volume, and the polymer does not interact with out-of-focus light at all. The site of solidification can then be scanned in three dimensions to generate a 3D microstructure, as with normal 3D printing.
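The confinement follows from how the absorption rate scales with intensity: single-photon absorption is proportional to I, while TPA is proportional to I². A minimal sketch of the consequence along the axis of a focused Gaussian beam (the parameter values are illustrative, not Fraunhofer ISC process settings):

```python
# Compare how one- and two-photon absorption fall off away from the focus
# of a Gaussian beam. On axis, I(z) = I0 / (1 + (z/z_R)^2), where z_R is
# the Rayleigh range; the two-photon rate goes as I^2, so it drops much
# faster, confining solidification to the focal volume.

z_R = 1.0  # Rayleigh range, arbitrary units (illustrative)

def on_axis_intensity(z, I0=1.0):
    return I0 / (1.0 + (z / z_R) ** 2)

for z in [0.0, 1.0, 2.0, 5.0]:
    I = on_axis_intensity(z)
    print(f"z = {z:3.1f} z_R:  1-photon rate ~ {I:.3f},  2-photon rate ~ {I**2:.4f}")

# At z = 5 z_R the single-photon rate is still ~4% of its peak value,
# but the two-photon rate is only ~0.15% – polymerization stays locked
# to the focus.
```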
The concept of 2PP has been well known for at least 20 years, and several research groups have used it to generate objects such as photonic crystals (which are essentially semiconductors for light), 3D structures for life-science applications and micro-optical devices. However, before the 2PP process can be exploited in commercial applications, some technological challenges need to be resolved. One constraint is that, as with many additive manufacturing processes, 2PP is limited to using a single focal volume to process the material. This “serial” process is inherently slow, comparable to painting an entire wall with a small brush instead of a roller. The second disadvantage is the limited availability of photopolymers that can be processed using 2PP while also providing superior functionality for 3D micro-optical devices. For example, some photopolymers have been optimized for the 2PP processing itself, but their chemical and physical properties leave something to be desired in terms of the performance of the completed device.
Industrial optimization
Within the optics and electronics department at the Fraunhofer Institute for Silicate Research (Fraunhofer ISC) in Würzburg, scientists are working to optimize 2PP processing with respect to industrial applications. A key aspect of this research is to develop hybrid polymers that combine inorganic or glass-like materials with substances that exhibit organic photochemistry (as in the reaction described above), and thereby unite the favourable mechanical properties of glass with the processability of purely organic polymers. These hybrid polymers, known commercially as Ormocers®, are liquid resins consisting of a [Si-O]n backbone with different organic side groups. The most important side groups are polymerizable components such as acrylate or epoxies, as they can be used to solidify the resin via photochemical processes. The choice of precursors and the processing conditions determine the conformation of the interlinked organic and inorganic networks, and thus the chemical and physical properties of the final substance. The main benefits of Ormocer chemistry are an excellent stability against temperature, resistance against chemical attack and superior mechanical properties (such as stiffness). Additionally, many properties can be tailored to meet the requirements of different applications.
Tight control: (a) In two-photon absorption, a pair of photons in the visible part of the spectrum (green lines) is absorbed simultaneously in place of a single UV photon (pink lines). This process requires high light intensities, so it can only occur inside the focal volume. (b) The two-photon polymerization (2PP) technique uses this to restrict polymer solidification reactions to the focal volume of the laser, making it possible to print features as small as 100 nm.
Recently, researchers at Fraunhofer ISC have worked to improve the optical properties and stability of parts that have been 3D microprinted using Ormocers. For example, we have demonstrated that newly synthesized high-refractive-index hybrid polymers do not turn yellow even after being exposed to high-intensity UV LED light and temperatures of 150 °C for 72 hours. Temperature stability has also been demonstrated by creating 2PP-printed microlenses that did not change their shape after being exposed to 200 °C for more than 1.5 hours. Such robustness is important because it means that microstructures fabricated via 2PP from Ormocer material can withstand the autoclaving procedures used to sterilize medical equipment, such as an endoscope equipped with sophisticated micro-optical lenses at the end of an optical fibre.
Another key benefit of these materials is their compatibility with biological matter. Ormocer composites are used as filling materials in dentistry, and many other material modifications are biocompatible with several human cell types. Some of these materials are even biodegradable, which is important when selecting appropriate materials for 3D-printed medical implants or scaffolds for tissue engineering. The micro-patterned hybrid polymer mimics the extracellular matrix, making it possible for human cells to be grown on it either in vitro or in vivo. Once this new tissue is formed, the scaffold material can then be reabsorbed by the human body.
Active interest
Despite their advantages over conventional polymers, hybrid polymers are still “passive” materials – meaning that their properties, defined by their chemical composition and processing, remain unchanged in response to external stimuli. To truly fulfil the potential of 2PP-written microstructures, scientists are also working to develop “active” materials. Such materials might, for example, act as optical gain media, convert one wavelength of light into another, or exhibit mechanical responses to external electrical or magnetic fields. Gain media have a number of applications, including the amplification of signals in optical data communication. Mechanical responses are important for actuators, for example in human–machine interfaces or miniaturized energy harvesters.
There are two main routes for achieving these properties. One is to incorporate active components during the chemical synthesis process itself, for example by linking optically active ions directly to the polymeric network. The other route is based on a “guest–host” approach, in which active nanoparticles are introduced into a (hybrid) polymer matrix. This route can be very straightforward as long as the guest nanoparticles “match” the host system; in other words, as long as they can be dispersed homogeneously and they maintain their active properties inside the matrix.
Foundations
Researchers at Fraunhofer ISC demonstrated recently that material systems of the latter type – so-called “nanocomposites” – can be 3D microprinted using 2PP. In a proof-of-principle experiment, they introduced silica nanoparticles 48 nm and 380 nm in diameter into an Ormocer matrix and studied the material’s behaviour when it was illuminated with femtosecond laser pulses. An example of the results is shown in the “Fine structure” figure, which reveals a complex 3D pattern created in the 48 nm nanocomposite (left image pair) and the institute’s initials in the system containing 380 nm particles (right image pair). The hope is that such experiments will be the foundation for creating structures with more sophisticated particles.
Tiny prisms: The Fraunhofer emblem shown at top is made up of 10 000 tiny prisms. Bottom left: part of the symbol on the left side of the emblem, magnified 10×. Bottom middle: The same area at 50× magnification, with individual prisms visible. Bottom right: a false-colour image (redder colours indicate increased height) of a single prism at 150× magnification. (Courtesy: Fraunhofer ISC)
In addition to new materials, another key area of development for 3D microprinting relates to the optics used to control the position of the laser focus during the printing process. The goal here is to set the position of the focus quickly as well as accurately. For this reason, “galvoscanner” mirrors have become increasingly popular, since they have significantly lower moving masses than high-accuracy linear stages, and thus make it possible to rapidly accelerate and position the focal spot in the focal plane.
The huge capability for rapid 3D micropatterning using galvoscanner technology is shown in the “Tiny prisms” image above. Here, the Fraunhofer ISC emblem is composed of 10 000 individual microprisms, each with a base area of 50 × 60 µm² and a height of 25 µm. Printing a single microprism takes only a few seconds, even without thoroughly optimizing the process in terms of applied photon doses and writing velocity. This result clearly indicates that areas of several square centimetres can easily be filled with micron-sized elements – meaning that we could, in principle, use these 3D-printed microprism arrays in displays and other commercial technologies that rely on redirecting light with low optical losses. Furthermore, the depicted arrangement can be replicated easily, as it is only a so-called “2.5-dimensional” pattern with no undercuts.
Forging ahead
The tremendous potential of 3D microprinting using two-photon polymerization will be realized when it is possible to employ materials that are not only compatible with the printing process, but that also exhibit new functional properties (either active or passive). Strategies for accelerating the printing process are also highly desirable, and galvoscanner mirror technology is already being implemented today. Both approaches – new materials and faster fabrication – will help ensure that 2PP is adopted on a larger scale in the future.
Mike Mandina, Optimax president: I lost my job so I had a choice: I could either go work for someone else or create something new. By chance, I met a couple of entrepreneurs who had partnered with some people who were moonlighting from day jobs at Kodak. They had set up the beginnings of an optics shop in the basement of a barn. It had a concrete floor, sawdust and dirt would fall down from the ceiling if it was windy outside, and the low ceilings limited what kind of machinery could go in there. But it had a lot of power, and they had found and refurbished a number of pieces of equipment, so it was enough to get started making rudimentary optics – the basic elements of traditional lens manufacturing. That’s what became Optimax.
What had your career been like before that?
MM I went to university to study sociology and psychology, but after one semester I transferred over to the optics programme at Monroe Community College, working on an associate’s degree in optics technology. I did just over a year of full-time college and then I got a job in the optics industry, grinding lenses on the second shift. Because I was working nights, I was able to continue my education during the day. Eventually, I earned a bachelor’s degree in applied physics from Empire State College and an executive MBA from the Rochester Institute of Technology (these are all in New York, by the way). I founded my first optics company with a partner back in 1976 and sold it to Melles Griot around five years later. It became Melles Griot Optical Systems and that was the company that let me go in 1990.
Rick, how did you get involved in Optimax?
Rick Plympton, Optimax CEO: I grew up in the Rochester area and my education was similar to Mike’s in that I went to the Florida Institute of Technology for a year, but I spent more time on the beach than I did in classes. So I came back and continued my education at Finger Lakes Community College in New York, US, before transferring to the optical engineering programme at the University of Rochester. The first time Mike hired me was in 1984, when he was the production manager for Melles Griot Optical Systems and I was a student. We worked together for three or four years and then I went off and chased my career, working at Melles Griot’s corporate headquarters in southern California, as a field sales engineer in the south-east US and eventually at Melles Griot in Europe. After two years there, though, I came back to Rochester because this is where family is. I’d kept in touch with Mike throughout, and when I came back in 1995 there were 10 guys here struggling to make weekly payroll. Manufacturing was fleeing the Rochester community at the time: Kodak, Bausch and Lomb, Xerox, all the big players were downsizing and sending their manufacturing offshore. But when I looked at what Mike and the guys at Optimax were doing, I saw they were leveraging computer-controlled machining technology to make optical components and that meant they could make prototype lenses 10 times faster than the industry standard. Normally it takes about 10 weeks to make a lens; these guys could do it in a couple of days. So we developed a marketing plan around one-week delivery of prototype optics and started growing the business.
How did you get funding to expand beyond the barn?
MM In the very early days some of the founders put in about $30,000 in cash. The rest was credit cards and leases, and if there wasn’t enough cash coming in, the principals didn’t get paid. Rick was one of several people willing to not exactly get paid a lot of money for believing in the future. A number of those people are still here because they hung on and ultimately we developed great careers for them. They helped build the company.
RP By the time I joined, I knew the optics industry all over the US and Europe, so I went out and bragged about the capability that Optimax had developed using this new machining technology. And we really focused on being the prototype guys. Most factories are set up to do production work, so that niche – high-quality prototype optics, quick delivery if you need it – gave us an advantage in the market. And from 1995 to now, we’ve grown the business by about 25% per year.
RP Workforce development. We cannot go out on the street and hire people who know how to do what we do – we have to train everybody. The best we can do is find people who have good foundational skills through their hobbies, education or work experience. But aside from that, what’s most important is that they’re the type of people who want to learn and contribute.
What kind of technical background do you look for?
MM As time has gone on, the easy manufacturing has moved to countries with lower costs, so the complexity and difficulty of the optics that are left to be manufactured in the US has grown tremendously. Some optics can cost up to $100 000 and take months to fabricate, and they require sophisticated equipment, instruments and technical skills. We have an interesting cross-section of people at Optimax – anything from PhDs to GEDs, which in the US is the equivalent of a high-school diploma that you earn through night school. Optics manufacturing is still a cross between technology and art, so we look for people who have both a technical and an artisan mindset. They may or may not have terrific academic credentials, but if they can be productive, if they’re good people who like working with others to create value, and if they’re prepared to rely on each other for their family’s future prosperity – then Optimax is a good place for them.
What was your most difficult moment?
MM Back in the early days I had about a dozen people working for six weeks to ship products to a particular customer. Unbeknown to me, this customer was developing their own in-house capability and suddenly they sent everything back, cancelled the order and refused to pay. And that was that! It was not very nice. Most people in the industry don’t behave that way, but the person who was running this firm was a certain type of – well, anyway, that’s how they behaved. It didn’t kill us but it was very stressful trying to keep our people employed and find replacement work. That was around the time that Plympton showed up, so that was good timing; we were able to patch things up and keep going. But that could have been a defining moment for us – it was a defining moment for us. Since that time, by the way, we’ve done other work with that customer. They do pay a premium.
What are your plans for the future?
MM We’ve got big plans. Real big plans. I mean, real big!
RP Okay, enough Trump jokes – let’s see if we can give a better answer. We’re hiring about 50 people a year and our core business of optical components could easily triple in the next five to seven years, but we’re looking at ways to grow beyond that. So Mike and I have been putting together a spin-out programme for employees who are entrepreneurial-minded and want to think about market challenges that are outside our core capabilities. We’ll look at their ideas and we may fund a few new businesses, so 10 or 15 years down the road we might look more like an Optimax family of companies, addressing a multitude of market needs.
What do you know now that you wish you’d known when you started?
MM Some of the stuff you learn in business school is fundamental and will always work. But other things, by the time they do the research and understand how they work, then society, culture, the whole world has changed and those models don’t necessarily hold up anymore. Breaking away from those traditions in a sensible way sometimes pays.
RP I’d say that business schools are maybe not teaching what you need to know to grow a progressive company. Here’s an example: we take 25% of our profit every month and share it with employees. We know if we sold this business to a multinational corporation run by a bunch of MBAs, one of the first things they’d do is throw this bonus plan out the window. But we believe it’s been key to our success through the years. We also do some things to keep communication going and maintain a family sort of environment, such as monthly parties and a fun committee. We run three shifts around the clock five days a week, and building camaraderie among the different shifts is a big challenge that we’ve wrestled with through the years.
MM Another example is that we’re trying to keep our organizational structure flat. We’re at 300 employees now, and at times we’ve got to a place where the market wanted us to grow, but we couldn’t grow profitably because whatever we were doing up to that point didn’t work anymore. Things you do to run a business of 25 people don’t work as well at 50 people, and when you figure out how to run things with 50 people, that doesn’t work as well at 100 people, and so on. You start having more overhead costs; communication suffers; you’re in a bigger space; you have more geographic distances between people; your employees are in departments, so you start generating silos – it’s been quite an education about what not to do.
Any advice for someone thinking of starting a new optics firm?
MM They should come see us. Maybe we’ll partner with them! But seriously, I’d say you have to have passion in what you’re doing because there’s going to be impediments along the way, and you can choose to cave in at any time. We’ve had our trying moments – it’s not all upward rising. Things like the Great Recession happen, or a problem can occur with a technology or a customer. You’ve got to suffer through it. I think most people who’ve run a business have had to do those gut checks along the way.
RP If you’re starting out, you want to make sure you’re developing a capability that will enable others to be successful. The more you can help other people be successful with their programmes, the better shot you have at being successful in the long run.
We’ve come a long way in the decades since the laser was invented, but in many ways the photonics industry is still nascent, reminiscent of where the electronics industry was in the 1960s. Lasers are nearly ubiquitous, but if you look inside the fancy consumer packaging you’ll still find a lot of duct tape holding things together (sometimes literally), and companies that operate out of the owner’s garage are effectively competing with giant international firms. In this environment, the inherently multidisciplinary nature of physics training means that people with a physics background have the opportunity to thrive. On any given day I may be called on to tackle problems in chemistry, biology, computer science, mechanical engineering, materials science or fluid dynamics – and even, on occasion, physics and optics.
Chemistry problems bedevil most laser systems. Photochemistry can lead to material changes (such as photodarkening in optical fibres) and ion mobility can cause colour centres and other localization problems, but the most common headaches are optical contamination and damage. The causes are impressively varied. In one particularly frustrating case we traced an ongoing optics contamination problem to sewer gas entering the lab from an unused floor drain. Volatile organics love to condense on optics, and the source of these compounds can range from oils or lotion on a user’s hands, to Scotch tape residue, to outgassing from wire insulation. We learned the hard way that optical coatings may be damaged by ozone, which is generated when UV light from the laser interacts with air; it turns out that purging closed laser systems to remove the air really is necessary at higher power levels. Then again, the nitrogen gas used to perform purges will interact with certain coatings, and some coatings behave differently in zero humidity, so you’d better be careful how you purge.
In the laser itself, material coatings such as anodization or paint may interact with different wavelengths of light, with outcomes ranging from discolouration to off-gassing and film deposition on optics. And finally, in water-cooled lasers one must consider the potential for corrosion, bimetallic or galvanic interactions between components, or carbon dioxide from the air forming carbonic acid and “eating” a bushing in the water pump (to list just a few examples).
Multiple obstacles
Speaking of cooling, though, remember that where there is water, there is life. Unfortunately, “life” in a laser cooling system means algae and biofilms, which cause various sorts of mischief. In a solid-state laser this slimy stuff may coat the flow tube or active element (such as the YAG/YLF rod), blocking the pump light and thereby reducing output energy. Good chemical knowledge will help you solve the problem – but remember that “simple” solutions such as bleach may damage seals and tubing, so you have to consider carefully and understand all of the materials used in the system and their chemical interactions.
Electrical engineering is everywhere in laser science, and sometimes it brings fascinating and unexpected challenges. Of course every laser designer needs to know some electrical engineering in order to design power and control electronics. But electrical engineering expertise is also necessary for understanding and mitigating electromagnetic interference (EMI), whether it comes from high voltages generated within the laser system itself by things like Pockels cells (which typically require a few kilovolts, with switching times of a few nanoseconds) or from the customer’s other equipment. For some customers this “external” EMI can be huge; spare a thought for the people designing electronics to operate near, say, the 350 TW pulse generator at Sandia National Laboratories in the US.
Another example of how I’ve put my electrical engineering know-how into action involved configuring ~20 kW mains for a laser power supply so that the laser can be manufactured in Europe, integrated into a large system in the US and finally installed in Israel. This task required us to take into account each country’s electrical standards. I’ve also learned about corona discharge in high-voltage flashlamp leads (4 kV) and how to mitigate this so that the discharge won’t ionize air and break down the leads’ rubber insulation.
Applying knowledge
As lasers move into mainstream applications, users demand computerized controls and diagnostics. This requires knowledge of software and computer engineering. With computerized control comes the need (or ability) for the laser to interface with other equipment – whether it be detectors in a spectroscopy system, motion control in a micromachining system, or something else entirely. New capabilities lead to new applications, and to conversations with customers that include the phrase “Great, but could you also…”. This, in turn, leads to further software and computer engineering challenges/opportunities (“opportunity” and “challenge” should always be viewed as synonymous).
The applications of mechanical engineering, materials science and fluid dynamics to laser science are probably most evident in thermal management problems, such as extracting heat from the laser and maintaining stability across varying ambient temperatures. If we want to answer questions like “Why did that YAG rod crack?” or “Why is one mirror mount more stable than another, and how can we make mounts even more stable?” or “How can we transfer heat from where it is generated to where it can be dumped safely?” then it is necessary to have a good understanding of thermal conductivity and thermal coefficients of expansion for various materials. Otherwise it will not be possible to build a laser system that is stable across a “reasonable” temperature range (where “reasonable” is defined by not-always-reasonable customer requirements).
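As a concrete, hedged illustration of why those expansion coefficients matter (handbook-style numbers, not figures from any particular product), even a modest swing in ambient temperature moves optics by far more than a wavelength of light:

```python
# Linear thermal expansion of a mount spacer: dL = alpha * L * dT.

alpha = 23e-6  # 1/K, typical handbook value for aluminium
L = 0.100      # m, length of the spacer (illustrative)
dT = 5.0       # K, ambient temperature swing (illustrative)

dL = alpha * L * dT
print(f"dL = {dL * 1e6:.1f} micrometres")
# ~11.5 um – many visible wavelengths, so uncompensated mounts drift badly
```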
That explains why mechanical engineering and materials science are important, but fluid dynamics? Well, when designing cooling systems one must consider the relationships between tubing diameter and flow resistance, and understand why turbulent flow is generally essential for efficient heat transfer from a surface to a cooling fluid. Building a stable laser is thus a fascinating interplay of mechanical engineering, materials science and fluid dynamics.
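A typical first check when sizing a cooling line is the Reynolds number, which indicates whether the flow will be turbulent. The numbers below are plausible illustrative values, not a real design:

```python
# Reynolds number for water flowing in a cooling tube:
# Re = rho * v * D / mu. Above roughly Re ~ 4000 the flow is turbulent,
# which is usually what you want for efficient heat transfer.

rho = 1000.0  # water density, kg/m^3
mu = 1.0e-3   # dynamic viscosity of water at ~20 C, Pa*s
D = 6.0e-3    # tube inner diameter, m (illustrative)
v = 2.0       # flow speed, m/s (illustrative)

Re = rho * v * D / mu
print(f"Re = {Re:.0f}")  # 12000 -> comfortably turbulent
```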
And finally, physics
Optics and physics are common to everything we do, but not always in the ways you might expect. Ray tracing and elementary optics principles are, obviously, employed for designing the laser, but they are also necessary for delivering the laser output beam to the target, often with rather complex geometries. Lenses and mirrors may seem simple enough, but once articulated arms and/or long path lengths are involved, the problem becomes difficult indeed.
Conservation of energy is a fundamental principle of physics and it is also fundamental to troubleshooting laser problems. Often the first manifestation of a laser problem is low (or no) energy at the output. So, is there energy going in? (Check mains power.) Is there pump light to the active element? (Measure the optical power at test points.) At each stage, if there is energy going in, then there must be energy coming out, with accommodation for normal losses.
Moving on to 20th-century physics, a working knowledge of the uncertainty principle provides a basis for understanding limits on frequency bandwidth and pulse duration (especially for pulses in the picosecond range and shorter). The Kerr effect, Raman effects and a host of related nonlinear effects make it possible to generate very short pulses (tens of femtoseconds) and are used in numerous spectroscopic applications, but they can also confound our efforts if they crop up when not wanted. For example, stimulated Brillouin scattering limits power transmission in fibre networks and nonlinear self-focusing limits peak power in laser amplifiers. Entropy is a staple of any physics education and manifests in various ways within laser systems – in particular, it explains why dust gets everywhere. And finally, there is Murphy’s law, which predicts that any speck of dust or other contamination will settle on (and damage) the most expensive optic in the system.
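For a transform-limited Gaussian pulse, that uncertainty-principle trade-off takes a standard textbook form (included here for illustration):

```latex
\Delta\nu\,\Delta t \;\ge\; \frac{2\ln 2}{\pi} \;\approx\; 0.441,
```

so a 100 fs pulse necessarily spans at least about 4.4 THz of optical bandwidth.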
In any customer-facing position, you are likely to face all of the above opportunities/challenges, plus more of the same, as you work with customers to understand how they want to use the laser, what challenges they face and what they really need to accomplish their work. A laser that stably produces light with the specified parameters is a good start, but true success only happens when the user achieves the desired effect – whether it be fabricating a component or making a measurement leading to new scientific discoveries.
On any given day
Lasers don’t taste good and aren’t much use as protection from the weather, so we must monetize our creations in order to have food and shelter. Here again, physics training can serve you well. As Milton Chang asserts in his book Toward Entrepreneurship, physicists’ skills are easily portable to the business environment. In part, this is because physics training incorporates the notion of “widgets” – sets of tools and principles that can be applied to various systems in various frames of reference. We’ve already seen that conservation of energy is useful for laser troubleshooting, but it is also essentially the same thing as accounting. Whether you are accounting for units of energy or counting units of money, the principles are the same. Multidisciplinary product knowledge, combined with a grasp of basic accounting, is a solid foundation on which to build success, whether you are leading a product line or an entire company.
On any given day, laser scientists may be using nano-, pico- and femtosecond pulse durations to reach giga-, tera- and petawatt peak powers to interact with materials on micron to nanometre distance scales or femtosecond and shorter time scales. I joke that our jobs should be subtitled “Rarely within nine orders of zero”! Physics students tend to work with the small and the large, from subatomic particles to supernovae, so fluidly moving across 20 orders of magnitude is “all in a day’s work”.
Of course there is room for specialization, particularly in larger companies, but there are still many positions within the laser industry that are inherently multidisciplinary. A physics training, multidisciplinary in nature, provides a solid basis for developing specialization in any number of fields, or being versatile enough to work on and solve a wide range of problems.
Physics World’s Laser at 60 coverage is supported by HÜBNER Photonics, a leading supplier of high performance laser products which meet the ever increasing opportunities for lasers in science and industry. Visit hubner-photonics.com to find out more.
Most stories about venture-funded companies are about winners. We’ve all heard them. They go a little like this: a brilliant founder had an idea, raised money, developed the product and – presto! – it was a terrific success. In reality the course of product development is rarely so simple or straightforward. There is always ambiguity about how much skill, and how much luck, figured in making any product a success. You also see a “survivor’s bias” to these stories, meaning that you hear much more about the successes than you do about the far more numerous failures.
Few entrepreneurs want to tell the story of their association with a failed start-up, yet the greatest lessons are often learned from these difficult experiences – lessons like how to assess development risk, how to set realistic development timelines, or the risks associated with jumping into an ever-evolving product market. Over the past 35 years the company I founded, Optikos, has worked with dozens of start-up and venture-funded companies in fields ranging from medical diagnostics and industrial instrumentation to consumer products. Our clients have included everyone from individuals with a good idea but no business experience to university spin-offs and well-funded corporate entities. What they have in common is that in most cases they have a core technology or product idea that is enabled by optics, but the optical aspect of it falls outside their core competence and expertise.
Making a difference
I’ve seen clients miss out on success for a whole range of reasons. Maybe they misjudged the market need, or entered the market too late. Sometimes they simply weren’t able to get their core technology to work well enough. But I’ve also seen cases where help from established specialty firms like ours has made the difference between a success story and a painful (though valuable) lesson.
Optikos started as an engineering services company, and within a few years we developed our own line of optical metrology products. Some 35 years later, more than half a billion optical components have been manufactured to our designs. We also provide optical product development services for a wide range of customers, and although our contributions are typically “unseen”, products that Optikos helped design and test can be found in millions of homes and businesses around the world.
Contributing expertise: Optikos helped develop manufacturing testing protocols for this gene-sequencing instrument. (Courtesy: Optikos)
Our team of mechanical, electrical, software and optical engineers has accumulated a lot of optical design experience, and our longevity in the industry also gives us “tribal knowledge” of suppliers from around the world. These assets put us in a good position to help start-ups fast-track the development of their products, quickly taking them from the laboratory to the market. We give these customers as much or as little as they need – from a short consultation or “sanity check” of something they’re developing, to connections to other resources such as manufacturing facilities. We also assist some of them as they transition from a pre-beta prototype to a manufacturable unit that can scale for high volume or off-shore production.
As an example, one of our clients has core technological expertise in biochemistry, and a related fluorescence measurement process that facilitates high-speed medical diagnostics. What they lacked was the optical technology and expertise needed to transform their laboratory demonstration into a manufacturable instrument as quickly as possible. Time-to-market is often the most important challenge faced by a new company, and accelerating that process depends directly on the start-up’s ability to identify and access technical resources. Before approaching us, many prospective clients have considered adding an in-house optical instrumentation capability, but quickly realized that they don’t have time to recruit, staff and develop a cohesive technical design team. Even with the means to quickly get a team of experienced optical instrumentation engineers in place, they would not be able to provide a long-term career trajectory for those highly specialized individuals and the team would dissolve after product launch.
With another client, our main contribution was to help them substantially reduce the cost of their product. This particular client was developing a fluid-based medical diagnostic system and we got involved at the product inception phase – all design issues were on the table. The most expensive component in their system was a cooled camera with a cost exceeding $10 000. We suggested that the cost of this camera could be reduced significantly because the camera, as used, was only a component in a much larger system, not a camera packaged for retail sale. We were able to engineer a low-cost cooled camera that they could assemble themselves, which reduced the component cost below $800. This enabled them to sell their product at a price point that would undercut their competitor’s offerings while still offering excellent performance.
Picking winners
Start-ups are generally viewed as risky investments, so venture-capital investors are only interested in firms that have the potential, if successful, to provide exceptional investment returns. At the earliest stage of development it’s often difficult to know whether a product will ultimately be a success, but the venture-capital model anticipates this because it assumes that most companies will not achieve the aspirational plans that initial funding was based on. After all, if you could predict success with high confidence at the earliest stages, then it wouldn’t truly be a risky investment deserving a high financial return.
Thus, sometimes the ideas with the greatest potential are ones that initially seem impractical, even odd, or too great a technical stretch. These are the products or technologies that can be exciting disruptors, capable of permanently changing the market and enjoying tremendous success. Our role is to inform and educate clients about potential engineering improvements, trade-offs and constraints in implementing their technology, while also helping them create realistic expectations about costs, timelines, tooling for high versus low production volumes and other important considerations, before a significant investment is made. But it’s never easy and we have seen extremely well-funded start-ups, particularly in the field of medical diagnostics, veer off course because they failed to achieve the diagnostic accuracy or speed requirements for commercial success.
Despite the high risk of failure, however, working with a start-up can be an exhilarating experience. We have worked with companies that were on their last legs, struggling to develop an initial prototype product – and had the satisfaction of seeing them go public two years later. The excitement of new technology, new markets, a breakneck pace of product development, and committed and impassioned people looking to change the world combine to create an environment unlike any other. Who wouldn’t want to get involved in that?
Can you tell what branch of physics is being described on the blackboard above? It’s one of six photographs taken by the communications folks at the Perimeter Institute for Theoretical Physics in Waterloo, Canada, where blackboards are an integral feature of the building’s design, appearing everywhere from the lifts to coffee areas.
In this quiz, your task is to study six blackboards and match them up with the physics topics they represent. There’s no prize, other than the satisfaction of having at least some inkling of what those clever theorists at the Perimeter are up to.
So here are the six topics:
• Accretion physics and general relativity
• Cosmology
• Neural networks and condensed matter
• Particle physics 1
• Particle physics 2
• Strings
And here are the six blackboards.
BLACKBOARD 1
BLACKBOARD 2
BLACKBOARD 3
BLACKBOARD 4
BLACKBOARD 5
BLACKBOARD 6
We’ll reveal the answers at the end of the month. In the meantime, please don’t spoil the quiz for others by revealing the answers in the comments.
You can find out more about the power of blackboards in a great feature in the June 2017 issue of Physics World by science writer Philip Ball, who reckons that the blackboard still retains an aura and usefulness for physicists that more advanced technologies can’t match.
Remember that if you are a member of the Institute of Physics, you can read Physics World magazine every month via our digital apps for iOS, Android and Web browsers.
Fermilab made its name with the Tevatron proton–antiproton collider, but neutrinos hold the key to the lab’s future, as Ben Still from Queen Mary University of London makes clear in a feature on the physics of these elusive particles.
You can also enjoy a cracking review of Tommaso Dorigo’s new warts-and-all account of life in the CDF collaboration at Fermilab, while Seyda Ipek from the lab pops up in Philip Ball’s homage to the blackboard – which you can also read on physicsworld.com.
Plus don’t miss this month’s Lateral Thoughts, which reveals how one physicist working in a Scottish call centre ended up chatting to Enrico Fermi’s daughter-in-law about her TV.
Do an online image search for Richard Feynman. Go on, try it now. What do you notice?
He’s a photogenic sort of guy, of course: that puckish smile, the twinkling eyes, the exuberant mane of hair. But what is most noticeable is that Feynman is often standing in front of a blackboard – usually adorned with squiggles that most physicists will identify as the notation of quantum mechanics.
While looking through images of famous physicists for a forthcoming book on quantum theory, I was struck by how often the blackboard is their backdrop. From Albert Einstein and Niels Bohr to Werner Heisenberg and Paul Dirac, all have their “blackboard portrait”. Sure, experimentalists are usually depicted surrounded by lab equipment, but it seems we have decided nothing announces “theoretical physicist” as clearly as the blackboard. What’s going on?
When thoughts become real: physicist Lauren Hayward Sierens captured as a real person in The Living Chalkboard artwork at the Perimeter Institute for Theoretical Physics in Waterloo, Canada. (Courtesy: Alexa Meade/Perimeter Institute)
Teaching tool
The profession-defining pose is an old idea. Back in the 19th and early 20th centuries, chemists – from Louis Pasteur to Marie Curie – were commonly photographed or painted holding aloft a flask and gazing nobly at its contents. It was a gesture that actually derives from a rather unheroic tradition: physicians in the late Middle Ages and the Renaissance would typically be depicted diagnosing their patients by a visual inspection of their urine.
Physics is a younger discipline, barely recognized in today’s sense until the 19th century. And theoretical physics is more recent still – Einstein’s generation was the first to make it a distinct endeavour. But by choosing the blackboard pose as the archetypal image of the physicist, we seem to be saying that physics is inherently cerebral, defined by abstract mathematical ideas inscribed in chalk.
That conception probably owes a great deal to Einstein himself. As the French literary theorist Roland Barthes explained in 1957: “The historic equation E = mc2, by its unexpected simplicity, almost embodies the pure idea of the key…opening with a wholly magical ease a door which had resisted the desperate efforts of centuries.” And popular imagery, Barthes continued, faithfully expressed that idea. “Photographs of Einstein,” he wrote, “show him standing next to a blackboard covered with mathematical signs of obvious complexity; but cartoons of Einstein…show him chalk still in hand, and having just written on an empty blackboard, as if without preparation, the magic formula of the world.”
The evocative power of the equation as a “magic formula”, as if it is some gnostic incantation to unlock the secrets of the universe, is an image with roots in the Renaissance tradition of natural magic. But why should writing it on a blackboard make it so potent?
Black to basics
The invention of the blackboard is popularly attributed to a Scottish schoolteacher named James Pillans, who early in the 19th century placed many slate tablets side by side so that the old practice of writing on them with chalk could convey more complex information and illustration. But these writing devices might be much older. “I have heard that blackboards originated in India”, says theoretical physicist Harsh Mathur of Case Western Reserve University in Cleveland, Ohio, who adds that the famous Persian traveller Al-Biruni wrote about their use in the 11th century.
Whatever their origin, by the mid-1800s these boards were made instead from wood coated with a thick black paint, which could be wiped clean with dry rags or felt erasers. And while the appeal of a cheap, erasable surface for displaying large words and diagrams in high-contrast markings might not seem particularly mysterious, anyone who has ever used a blackboard and chalk knows there is more to it than that.
A place to collaborate: the Perimeter Institute for Theoretical Physics was designed with blackboards everywhere, even in the lifts. (Courtesy: Gabriela Secara)
Make an error in your spelling or calculation and – swish! – it’s gone, as if you’d never made the slip at all. There are no electronics to malfunction or bulbs to burn out, as was often the case with the overhead projectors that once sought to usurp the blackboard’s role. It’s easy to edit the surface, leaving parts of what you’ve written while erasing others. And there’s no more satisfying way of starting afresh on a problem than wiping your earlier thoughts with a damp cloth to return to that light-absorbing void.
Sure, whiteboards don’t cloud you in dust, but nor do they capture the same aesthetic. Perhaps, given the white or pale walls of most academic environments, whiteboards don’t sufficiently demarcate a space for thinking from the distractions of the surroundings. Besides, the pens smell and dry up, they slip and slide on the shiny surface, and they’re easily smudged. Worse, you can never quite get the damned boards clean: there’s always a faint residue, the distracting whisper of someone else’s ideas.
Blackboard and chalk – like paper and ink – are a combination that modern technologies can’t improve or displace. You still see blackboards (and plenty of whiteboards too, it’s true) in physics research centres across the world. At the Perimeter Institute for Theoretical Physics in Waterloo, Canada, they’re an essential element of the design, being installed in the lifts and coffee areas of the original building. The Isaac Newton Institute for Mathematical Sciences in Cambridge, UK, even has blackboards in the toilets; you never know when insight might strike.
This ubiquity can create a sense of community and shared endeavour, as if the creative thoughts of one’s peers seep into the very walls. “The evidence of past conversations can be inspiring”, says Lauren Hayward Sierens, a condensed-matter physicist at the Perimeter Institute. “Often what you’ll see on a given blackboard at the Perimeter is a combination of many different conversations. I can rarely understand these past conversations if I wasn’t a part of them, but it’s inspiring nonetheless to be surrounded by so many ideas.”
Chalk and talk
For Seyda Ipek, a particle physicist at Fermilab, blackboards are such a part of her everyday life that talking about them is like discussing how one drinks water. “You don’t think about it until it is pointed out,” she says. “At Fermilab both offices and common areas are filled with blackboards and whiteboards. My previous institution, the University of Washington, also had blackboards everywhere, including hallways.”
Ipek says that these surfaces promote informal, impromptu communication and discussion. “We have a whiteboard in our coffee lounge. While we have our after-lunch coffee, we often use it. Someone asks ‘What’s new?’ and then someone else goes up to the board and says ‘I’ve been thinking about this lately. Let me show you.’”
Quite simply, the blackboard is a democratic space, where ideas can be easily shared. “Two people can’t bend over a notebook to discuss,” Ipek points out. “The board gives ample space and it is generally understood that anyone can go up to the board. Sometimes people do that to clarify their misunderstandings, or to challenge each other. Duelling with ideas at the blackboard is not uncommon.”
That kind of intellectual sparring might be hugely facilitated by this shared canvas for thinking on. If someone claims your idea is wrong, you might feel attacked and respond defensively. But if your ideas are chalked on a board, you and your colleagues can scrutinize them almost as an impersonal object of study. Over at Case Western, Mathur believes that the ease of erasure makes students less hesitant to put down answers on the board. “Perhaps it’s the impermanence of writing on a board that makes them feel less concerned about being judged negatively,” he says.
Using a blackboard also moderates the pace of a discussion or explanation. Blackboards help in teaching by slowing down the lecture and allowing the students to absorb information and knowledge at a more human rate. Students in Mathur’s first-year introductory physics class overwhelmingly favour the blackboard over PowerPoint as the primary means of communication.
The time and effort involved in using a blackboard can be good discipline for communication too. PowerPoint speakers who flash up slide after equation-packed slide would have to speak more slowly and think twice about what to include if forced to write everything out by hand. Blackboards, Ipek notes, regulate one’s talking speed and give the audience time to absorb the ideas and ask questions. “At Fermilab we have a journal club where each week one person gives a blackboard summary of an interesting paper. One week we had a talk with slides, and everyone complained.”
There’s also something about a blackboard that seems to fit with the way the mind works: sketching, erasing, supporting a free flow of ideas. “Many physicists like to do a back-of-the-envelope calculation before delving deeper into a computation, and blackboards are a great tool for that”, says Tibra Ali, another theorist at the Perimeter Institute. Indeed, there’s a trophy-like quality to a clever piece of work prominently displayed on a blackboard. Some physicists like transcribing a hard-won solution onto a blackboard to understand the full ramifications – and perhaps just to gloat.
The physics pose: Richard Feynman is just one of countless theorists caught on camera in front of a blackboard. (Courtesy: CERN)
If it ain’t broke…
Despite their low-tech nature, blackboards seem to coexist happily with new technologies. Ali, for example, says that he and his collaborator often do computations on a blackboard and photograph them with their mobile phones before erasing the writing and moving on to the next step. “Many a time,” Ali says, “the main idea or the main computation for a project that becomes a paper happens while we are doing these intense computations on a blackboard.”
Given that the blackboard appears to be an optimized technology, tampering with it might seem to be a bad idea. Designers of the new Stephen Hawking wing of the Perimeter Institute thought they knew better, installing a special glass in the discussion areas when it opened in 2011. Opaque but bright when viewed from the front, the glass becomes transparent from the side. “The idea was to have an open bright space with natural light but at the same time have the glass serve as whiteboards on which physicists can write with markers,” recalls Ali.
But the physicists didn’t bite – and eventually old-fashioned blackboards were placed in those discussion areas instead. Likewise, when shiny PVC blackboards, requiring special pens, were installed at the National Graphene Institute at the University of Manchester, UK, to eliminate “dangerous” chalk dust, they were barely used. Squiggles written on a visit by then British chancellor George Osborne were later accidentally wiped by an over-eager cleaner.
There are, then, plenty of practical reasons why blackboards are great tools for thinking, collaborating and communicating. But as Barthes hinted, their significance for physics goes beyond the pragmatic. Displayed at epic scale on walls, blackboards can’t help but exude power, authority and even artistry. With their imperfectly erased ghosts of equations past, they remind us of medieval palimpsests: documents on vellum that, too expensive to discard, were scraped almost clean for reuse while still carrying the tantalizing traces of other thoughts in other minds.
Like historical relics and works of art, blackboards may themselves become venerated objects, imbued with almost mystical significance. The blackboard used by Einstein when he gave three lectures on general relativity at the University of Oxford in 1931 has been preserved as a historical artefact at the Museum of the History of Science in Oxford. (There used to be two blackboards, but one disappeared “in mysterious circumstances”, according to former museum director Jim Bennett.) The board shows Einstein’s calculations of the age, size and density of the universe, and it has become the most famous object in the collection. “People come to the door of the museum and say ‘Where is Einstein’s blackboard?’,” says Bennett. “It’s become a sort of icon. People come and look at it as if it was almost a sort of quasi-religious object.”
We do that to other historical artefacts of science too, of course – Michael Faraday’s induction coils, Galileo’s wooden ramps, a first edition of Isaac Newton’s Principia. But a blackboard used by a legendary scientist has a unique aura, not just because the equations and diagrams were traced in perilously fragile chalk dust by their own hand but also because these markings seem like a trace of thought itself. Like thoughts, they can be fleeting, they can vanish at the stroke of a hand. Yet here they remain: the magic formulae of the world.
Feynman’s blackboard at the California Institute of Technology was photographed at the time of his death in 1988, and seems almost tailored to serve as an epitaph for the great scientist. “What I cannot create I do not understand,” he had written – followed by what might be seen as a corollary: “Know how to solve every problem that has been solved.” Feynman might just as well have written these thoughts in his notebook. But how much more mystique and pathos they acquire on a blackboard.
The art of the blackboard
Physics in action: blackboards photographed by Spanish artist Alejandro Guijarro at Stanford University (left) and the University of California, Berkeley (right). (Courtesy: Alejandro Guijarro)
It was the allusive quality of semi-erased blackboards that appealed to Alejandro Guijarro, a Spanish artist who has taken a series of photographs of physics blackboards that he found in lecture halls and researchers’ offices at CERN, and university institutes in Oxford, Cambridge (UK), Stanford and Berkeley. These images, Guijarro has explained, “are fragmented pieces of ideas, thoughts or explanations from which arises a level of randomness”. He admits that he didn’t understand any of the physics but selected the blackboards purely on aesthetic grounds. “I was interested in the action, the gestures and the marks on the surface” – which he looked at as one might the brushstrokes in an abstract painting.
Given their size and their public nature, blackboards can become almost a “performance space” for the physicist. The performative human traces left in blackboard inscriptions were the subject of an art installation commissioned by Canada’s Perimeter Institute for Theoretical Physics in 2015. Artist-in-residence Alexa Meade created a “room” that was one gigantic blackboard, in which not only its walls and armchairs but also two live researchers became the dark surfaces covered with inscribed lines and symbols and the expressionist smears of erased chalk. Meade made her blackboard “universe” after first immersing herself in the culture of the institute, attending lectures and talking to the scientists.
“I think Alexa’s work captured the fundamental connection between the blackboard and the theoretical physicist, illustrating how the blackboard allows a physicist to put his or her thoughts and ideas into a new form,” says Lauren Hayward Sierens, who was one of two Perimeter scientists enlisted by Meade as a human blackboard. It’s hard to imagine whiteboards having quite the same visual appeal for artists.
Watch artist Alexa Meade create The Living Chalkboard with Perimeter Institute physicists. For more about this project, see the Perimeter Institute website. (Video courtesy: Alexa Meade/Perimeter Institute)
A camera made by combining graphene with industrial semiconductor processing has been unveiled by researchers in Spain. Their device is sensitive to a wider spectrum of light than any commercial camera and the team says that the new process could also be used to create high-speed optical interconnects for communications networks.
Graphene is a sheet of carbon just one atom thick and this “wonder material” has a number of very useful electronic properties, such as an extraordinarily high electron mobility. As a result it has been used to create displays, loudspeakers, touchscreens and other electronic devices. However, most of these applications are in the early stages of development and researchers and companies are still working on how to integrate graphene into industrial-scale manufacturing processes.
Today’s electronics industry is dominated by the complementary metal-oxide semiconductor (CMOS) process, which combines silicon with metals and insulators on single wafers that can contain billions of transistors. Integrating other materials such as graphene into CMOS, however, poses a problem: the lattice mismatch between different materials usually makes it impossible to grow high-quality layers of other semiconductors on silicon. Indeed, when graphene electronic devices have been created, they have not been integrated into CMOS circuits.
Limited range
The inability to integrate other semiconductors puts restrictions on the performance of CMOS-based cameras. “The camera in your smartphone can only see visible light as silicon only absorbs visible light,” explains Frank Koppens of the Institute of Photonic Sciences in Barcelona. “If you want to detect infrared light you have to buy an indium gallium arsenide camera, for example. That will cost you around $40,000 or $50,000 because indium gallium arsenide is not monolithically integrated with CMOS, so they have a very complicated process to integrate the readout circuit with the photodetectors.”
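Koppens’ point about silicon can be checked on the back of an envelope: a photodetector material’s long-wavelength cut-off follows from its bandgap via λ = hc/E. Here is a minimal sketch in Python, using textbook bandgap figures that are our assumption rather than numbers from the article:

# Long-wavelength cut-off of a photodetector material from its bandgap.
# lambda_c = h*c / E_g; with energies in eV and wavelengths in nm, h*c is ~1240 eV nm.
HC_EV_NM = 1240.0

def cutoff_nm(bandgap_ev):
    """Longest wavelength (in nm) that a material with the given bandgap can absorb."""
    return HC_EV_NM / bandgap_ev

print(cutoff_nm(1.12))  # silicon (~1.12 eV): ~1107 nm, so visible light and little more
print(cutoff_nm(0.74))  # InGaAs (~0.74 eV): ~1676 nm, well into the short-wave infrared

The numbers make Koppens’ point: anything much beyond 1.1 μm is simply invisible to a bare silicon pixel.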
In 2011, Koppens and colleagues produced a high-sensitivity photodetector for both infrared and visible wavelengths by attaching two electrodes to a sheet of graphene covered with lead sulphide quantum dots. Photons absorbed in the quantum dots create electron-hole pairs. The electrons are retained in the quantum dots, while the holes move down into the graphene, dramatically increasing its electrical conductivity and producing a large increase in current. However, the researchers could not then go on to produce a camera. “A photodetector you can just wire up to an electronic board,” explains Koppens. “A camera needs to read out one million photodetectors at the same time, so you need a micro-electronic circuit.”
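The mechanism behind that large increase in current is known as photoconductive gain: each trapped electron keeps the graphene channel conducting for its whole trapping lifetime, during which the circulating holes transit the channel many times, so one absorbed photon is in effect counted over and over. A rough, hedged sketch – the lifetime, channel length, mobility and bias below are illustrative assumptions, not figures from the Koppens paper:

# Photoconductive gain: G ~ carrier trapping lifetime / channel transit time,
# where the transit time is t = L^2 / (mu * V) for channel length L,
# carrier mobility mu and bias voltage V.
def photoconductive_gain(lifetime_s, length_m, mobility_m2_Vs, bias_V):
    transit_s = length_m ** 2 / (mobility_m2_Vs * bias_V)
    return lifetime_s / transit_s

# Illustrative values only: 1 ms trapping time in the dots, a 2 um channel,
# mobility 0.1 m^2/Vs (1000 cm^2/Vs) and a 1 V bias.
print(photoconductive_gain(1e-3, 2e-6, 0.1, 1.0))  # ~2.5e7 carriers per absorbed photon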
In the new work, Koppens’ team transferred graphene epitaxially grown on copper foil onto the surface of a silicon CMOS chip. The chip was embedded with the circuitry to read out each camera pixel individually. They then patterned the graphene to define each pixel and deposited a layer of quantum dots on top. The resulting camera can detect wavelengths from 300 nm (near-ultraviolet) to 2000 nm (short-wave infrared). Even though the graphene is not used to absorb the light, its extraordinarily high electronic mobility produces a stronger signal, which allows it to detect infrared light above noise where other devices cannot. The researchers believe the device could find use in cameras for smartphones, security systems, vehicles, and food and pharmaceutical inspection systems. Crucially, its integrated CMOS production could make it no more expensive than current smartphone cameras.
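Converting the quoted band edges into photon energies shows why the quantum dots, rather than the silicon, must do the absorbing at the long-wavelength end. A one-line check (E = hc/λ):

# Photon energy E = h*c / lambda, with h*c ~1240 eV nm.
for wavelength_nm in (300, 2000):
    print(wavelength_nm, "nm ->", 1240.0 / wavelength_nm, "eV")  # ~4.13 eV and ~0.62 eV

A 2000 nm photon carries only about 0.62 eV, far below silicon’s ~1.1 eV bandgap, so only a narrow-gap absorber such as lead sulphide quantum dots can capture it.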
Unprecedented speeds
The researchers are also working to produce graphene-based optical interconnects, which could boost the capacity of optical communications networks and even lead to optical computers. Although, in the current design, the quantum dots limit the speed of the camera, graphene itself can absorb light – albeit much less effectively – at unprecedented speeds: “For data communications you need to integrate graphene with silicon photonics,” says Koppens. “That’s also a silicon CMOS-based technology.”
Andrea Ferrari of the University of Cambridge in the UK told Physics World, “The most important result [of the research] without any doubt is the first bona fide, large area graphene-CMOS integrated device”. Ferrari, who was not involved in the research, adds: “This is the last challenge when it comes to graphene optoelectronics.” He says one of the next big hurdles will be to develop a production process suitable for “fabs” – the billion-dollar production facilities that produce commercial CMOS chips. “If graphene-CMOS integration actually works properly in the fab, then we are done: we are looking at a major revolution, with optoelectronic devices in your phone, in data transmitter units for the internet of things – all based on graphene,” he says. “This is a major result.”
Photons in relatively weak beams of light could be made to interact with each other by shining them through a piece of silicon with a specific set of voids cut through it. That is the conclusion of Hyongrak Choi, Mikkel Heuck and Dirk Englund of the Massachusetts Institute of Technology in the US.
Their calculations suggest that a weak beam of light can create strong electric fields within a piece of silicon containing a precise arrangement of nanometre-sized voids – fields as much as 10,000 times stronger than those normally associated with such light. A field this strong would allow a photon to modify the index of refraction in the region that surrounds it, and a second photon travelling through this region would be affected by the change – the result being an interaction between the photons. Normally, extremely intense laser light is required to create this effect.
The ability to make photons in much weaker beams interact could lead to new types of switches and other devices for fast and energy-efficient optical communications networks that do not require electrical components. The effect is described in Physical Review Letters and could even be used to create devices for quantum computers in which information is encoded into photons.
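The significance of that 10,000-fold field enhancement is easier to see in terms of intensity: optical intensity scales as the square of the electric field, so the local intensity – and with it any Kerr-type index change (Δn = n2 × I) – is boosted by a factor of around 10^8. A rough sketch; the input intensity and the silicon Kerr coefficient below are order-of-magnitude assumptions, not values from the paper:

# A field enhancement F boosts intensity, and hence a Kerr index change
# dn = n2 * I, by a factor of F**2.
F = 1e4          # field-enhancement factor quoted in the article
n2_si = 5e-18    # m^2/W, order-of-magnitude Kerr coefficient of silicon (assumption)
I_in = 1e6       # W/m^2, illustrative weak-beam intensity (assumption)

print(n2_si * I_in)           # unenhanced index change: ~5e-12, utterly negligible
print(n2_si * I_in * F ** 2)  # enhanced: ~5e-4, enough to influence a passing photon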
Plasma drives high-gain laser amplifier
Vulcan laser target area at the Central Laser Facility showing the set-up for the plasma laser amplifier. (Courtesy: University of Strathclyde)
A plasma-based amplifier of laser light is described by its creators as having the highest ever gain. Built by an international team led by Dino Jaroszynski at the University of Strathclyde, the system takes picosecond-duration laser pulses carrying just a few picojoules of energy and boosts them to about 100 μJ – a gain of about 100 million.
The amplifier uses high-energy 100 J pulses from the Vulcan laser at the UK’s Central Laser Facility in Oxfordshire, fired at a jet of hydrogen gas, to create a plasma. The picojoule pulse to be amplified is then fired into the plasma, where it collides with the high-energy pulse. The collision produces a beat wave of light that drives the plasma electrons into a regular pattern mimicking the beat wave. This pattern sweeps up the energy of the high-energy pulse and transfers it to the low-energy pulse, amplifying it enormously. An important feature of the process is that the duration of the low-energy pulse is not increased significantly during amplification.
“Our results are very significant in that they demonstrate the flexibility of the plasma medium as a very high gain amplifier medium,” says Jaroszynski. “We also show that the efficiency of the amplifier can be quite large, at least 10%, which is unprecedented and can be increased further.” However, he points out that random fluctuations in the plasma are also amplified, which contributes to noise in the amplified pulse. The team believes that plasma-based amplifiers could play important roles in the development of the next generation of high-power lasers. The research is described in Scientific Reports.
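As a quick sanity check, the headline gain follows directly from the quoted pulse energies (taking “a few picojoules” as roughly 1 pJ):

# Amplifier gain = output pulse energy / seed pulse energy.
seed_J = 1e-12      # seed pulse: ~1 pJ ("a few picojoules")
output_J = 100e-6   # amplified pulse: ~100 uJ
print(output_J / seed_J)  # 1e8, i.e. a gain of about 100 million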