Founded in 2017, US-based Lightelligence develops optical chips for high-performance computing applications. Co-founders Yichen Shen and Spencer Powers talk to Physics World’s Hamish Johnston about how the company spun out of the Massachusetts Institute of Technology (MIT), its latest product and their plans for Lightelligence’s future
How did Lightelligence get going?
Spencer Powers: I was an MBA student at MIT in 2015 when I met Yichen, who was doing a PhD with Marin Soljačić – who is also a co-founder of Lightelligence. We met in a class where half the students were doing an MBA and the other half were doing PhDs.
Everything started in 2017 when Yichen and Marin published a paper in Nature Photonics called “Deep learning with coherent nanophotonic circuits”. This describes the basis of our technology, which was Yichen’s PhD project.
In 2018 we assembled a team of 15 employees and the company got its first round of funding. In 2019 Lightelligence produced its first optical artificial intelligence (AI) accelerator demo called COMET, which was able to accurately recognize handwritten digits.
By 2020 we had raised more than $100 million and today we have about 180 employees who have already created several chip designs for integrated optical computing.
Why is there a need for integrated optical computing systems?
Yichen Shen: Starting around 2012, AI moved into its “modern era” in terms of performance and processing requirements. The processing power needed to run the most advanced AI algorithms is doubling every 3–4 months. So in the past decade, requirements have increased by a factor of about 150 000 – and conventional computers cannot keep up with this. People started to use graphics processing units (GPUs) – but now even GPUs can’t keep up.
In about 2015, Moore’s law started to break down – transistors are still getting smaller, but the performance gains from the ongoing miniaturization are not as good as they used to be. What is more, the time it takes to develop the next generation of smaller devices is getting longer.
The AI industry has responded by moving to increasingly specialized technologies – from central processing unit (CPU) to GPU to field-programmable gate array (FPGA) to application-specific integrated circuit (ASIC) – but it is running out of fuel. So what we really need is a new technology to further expand the computing power needed for AI.
And that’s where optical processors come in?
YS: Yes, at Lightelligence we want to power the next generation of AI technology with integrated photonic ASICs. Light provides an ideal solution to AI computing for three key reasons. First, you can get a very high data throughput using photonics. An optical fibre can carry more than one terabit per second, which is about 100 times greater than a copper wire. Second, photonics is much more energy efficient than electronics because there is no ohmic heating as light travels through a circuit. Finally, light has lower latency because it travels much faster through circuits than electrons.
How does your technology solve the speed problem?
YS: We have invented an optical multiplication and accumulation (MAC) unit that uses nanoscale photonic devices to do matrix multiplications. We also have a separate network-on-chip (NoC) to carry and broadcast data to the electronic chips in the system.
We use a combination of optical and electrical devices because researchers have not yet solved the “optical memory problem”, so we must use electronic memory storage. Our solution is a hybrid and we believe that this type of architecture will remain for at least the next few decades.
Our optical circuits do not contain optical transistors. Instead, our chips carry out analogue optical computing. Light travels through the device via waveguides and the signals interfere with each other to perform the required computation. It’s a bit like light going through a lens, which does a Fourier transform on the light.
While this is a passive effect, we can actively tune how this interference occurs. We have created a programmable chip where we control what kind of matrix multiplication the light is doing. This is done by applying voltages to different parts of the chip.
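In architectures like the one described in the Nature Photonics paper, this kind of tunable interference is realized as a mesh of Mach–Zehnder interferometers (MZIs). Below is a minimal NumPy sketch of one 2 × 2 building block, with the applied voltages represented as phase shifts; the specific parameterization is an illustrative assumption, not Lightelligence’s actual design.

```python
import numpy as np

def beamsplitter():
    # 50:50 coupler: a fixed 2x2 unitary
    return np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)

def mzi(theta, phi):
    """2x2 Mach-Zehnder block: two couplers around a tunable internal
    phase shift theta, preceded by an external phase phi. The phases
    stand in for the voltages applied to the chip."""
    internal = np.diag([np.exp(1j * theta), 1.0])
    external = np.diag([np.exp(1j * phi), 1.0])
    return beamsplitter() @ internal @ beamsplitter() @ external

# "Programming" the chip means choosing the phases; a mesh of such
# blocks can realize a general matrix multiplication.
U = mzi(0.7, 1.3)
x = np.array([1.0, 0.0])   # input optical amplitudes
y = U @ x                  # interference performs the multiply
assert np.allclose(U @ U.conj().T, np.eye(2))  # lossless: U is unitary
```

Because the block is unitary, the optical power is conserved as the signal propagates, which is one way of seeing why there is no ohmic heating in the computation itself.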
What do you plan to do with this technology?
YS: Using this technology, we can build an AI processor that is much faster and consumes much less power than traditional digital electronics. Our potential markets include cloud AI, finance and smart retail. Digital assistants such as Siri and Alexa could benefit from better natural language processing – recognizing longer sentences and behaving more like real human beings. Other applications include computer vision for self-driving cars, optical character recognition, drug discovery and robotics.
In December 2021 you released a new product called PACE; what does it do?
YS: PACE, the Photonic Arithmetic Computing Engine, has been developed over the past two years. We believe that PACE is the first optical system that has been shown to be superior to a digital system. PACE is also our first system that is designed for applications beyond AI and deep learning. It searches for solutions to some of the hardest computational problems, including the Ising model, the graph max-cut problem and the graph min-cut problem. The Ising model is an NP-complete (nondeterministic polynomial) problem that is very difficult to solve – many quantum computing companies use this as a benchmark.
PACE solves these problems using an optimization algorithm. Real-world applications include bioinformatics, scheduling, circuit design and materials discovery.
The system combines a photonic integrated circuit (PIC) with an electronic integrated circuit (EIC). The EIC contains the memory for PACE, and it also does some non-linear operations using conventional logic gates. The EIC is interfaced with the PIC, which does the matrix multiplication. PACE runs a recurrent heuristic algorithm, which converges on the solution after thousands or tens of thousands of iterations.
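To make the structure of such a loop concrete, here is a toy single-spin-flip heuristic for the Ising problem. The matrix–vector product computing the local fields is the step a photonic MAC unit would accelerate; the rest of the loop is the kind of work the EIC handles. This is an illustrative stand-in, not PACE’s actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Ising instance: random symmetric couplings, zero diagonal
n = 32
J = np.triu(rng.normal(size=(n, n)), 1)
J = J + J.T

def energy(s):
    return s @ J @ s

s = rng.choice([-1.0, 1.0], size=n)   # random initial spin state
E0 = energy(s)

# Recurrent loop: the local fields h = J @ s are a matrix-vector
# product (the hot loop an optical MAC unit speeds up). Flipping
# spin i changes the energy by -4 * s[i] * h[i], so we flip
# whenever s[i] * h[i] > 0.
for _ in range(2000):
    h = J @ s
    i = rng.integers(n)
    if s[i] * h[i] > 0:
        s[i] = -s[i]

assert energy(s) <= E0   # greedy flips never raise the energy
```

A real heuristic would add noise or annealing to escape local minima, which is why convergence takes thousands of iterations rather than one pass per spin.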
How fast is PACE?
YS: The computing speed is dictated by how fast we can do the matrix multiplication. Our chip can finish a 64 × 64 matrix multiplication within 4 ns. In contrast, the best GPUs take hundreds of nanoseconds. This is how PACE achieves an orders-of-magnitude advantage over digital electronics.
PACE runs hundreds of times faster than high-end CPUs and GPUs, and 25 times faster than electronic systems that are specially designed to solve the Ising model.
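As a back-of-envelope check on those figures, assume the quoted 4 ns refers to one 64 × 64 matrix–vector product per optical pass – an assumption, since the interview does not specify matrix–vector versus matrix–matrix:

```python
# Assumption: 4 ns completes one 64 x 64 matrix-vector product
n = 64
macs_per_pass = n * n          # 4096 multiply-accumulate operations
t_pass = 4e-9                  # seconds per pass
rate = macs_per_pass / t_pass  # multiply-accumulates per second
print(f"{rate:.3e} MAC/s")     # prints 1.024e+12 MAC/s
```

On that reading, a single photonic pass sustains on the order of a trillion multiply-accumulates per second, before any parallelism across wavelengths or chips.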
What are the current technological challenges that you face in developing products like PACE?
YS: Today, the biggest limiting factor for our technology is the electronics. We can’t live without electronic memory – we need to use it. This means we must convert photonic signals to electronic signals and back again, and that limits our performance. Another challenge is device size: photonic devices are still much larger than electronic devices. But we are still better than pure electronics.
What is next for Lightelligence?
SP: We will continue to grow the company and we have several projects running internally now. Later this year we will be unveiling an AI accelerator pilot project that will involve pilot customers. It is not a production-scale product, but it is a commercial-grade AI accelerator that gets its advantage from photonics. This will be an important milestone for us.
Following that, we will release a production-level AI accelerator that will have high throughput and low latency compared to other AI accelerators.
Can your chips be mass produced?
SP: Yes, we can go to mass production volumes when needed – we are “lab-to-fab”. Our products do not use esoteric technologies and can be made using standard chip fabrication processes. With PACE, mass volume production is not needed. We are producing in the order of 10 units for demonstration purposes.