Our first product is a hardware co-processor called the Optical Processing Unit – or OPU. It is designed to boost some of the most compute-intensive tasks in Machine Learning. The OPU can simply be plugged into a standard server or workstation, and accessed through a simple toolbox that is seamlessly integrated within familiar programming environments. Full-scale OPU prototypes are already available to selected users through the LightOn Cloud. Registration is now open for researchers and data scientists interested in trying out our technology on our cloud. The sign-up page is here.
Matrix-vector multiplications are amongst the most important elementary computing blocks in Machine Learning. For instance, Deep Learning schemes essentially stack such matrix-vector multiplications with non-linearities.
An OPU does exactly that: it multiplies the input data by a fixed matrix, passes the result through an element-wise non-linearity, and outputs it. Because the OPU harnesses optics, it can perform this operation
- for very large data sizes
- with low power consumption
What makes each OPU device literally unique is the fixed random matrix at the core of its computations, well suited to the statistical learning at the heart of many Machine Learning / Artificial Intelligence schemes. More technical information on how our Optical Processing Unit performs its operations can be found here.
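As a rough software sketch of this pipeline, the OPU's operation y = f(Ax) can be simulated with an explicit random matrix. The complex Gaussian matrix and the squared-modulus non-linearity below are illustrative assumptions for this sketch; the physical device realises the matrix through light and never stores it explicitly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: input dimension d, number of output features m.
d, m = 128, 1024

# The fixed random matrix at the core of the device, drawn here as a
# complex Gaussian for illustration only.
A = rng.normal(size=(m, d)) + 1j * rng.normal(size=(m, d))

def simulated_opu(x):
    """y = f(Ax): random projection followed by an element-wise
    non-linearity (squared modulus, standing in for an intensity
    measurement)."""
    return np.abs(A @ x) ** 2

x = rng.normal(size=d)
y = simulated_opu(x)  # non-negative vector of length m
```

The key point is that the hardware performs the matrix multiplication at the speed of light, for matrix sizes that would be costly to store and multiply on silicon.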
Examples of successful use cases of the OPU technology include:
- Image and Video Classification
- Recommender Systems
- Anomaly Detection
- Natural Language Processing
Our current Optical Processing Unit performs Random Projections. Random projections have a long history in the analysis of large-scale data: they allow for oblivious dimensionality reduction thanks to the Johnson-Lindenstrauss lemma, and have proven useful in a wide variety of fields such as Compressed Sensing, Randomized Numerical Linear Algebra, and Streaming and Sketching algorithms. Findings using our technology are communicated to the scientific community through preprints, presentations at conferences, blog posts, workshops and meetups, examples using our API on LightOn Cloud, and publications.
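The distance-preserving property promised by the Johnson-Lindenstrauss lemma is easy to check numerically. In this sketch an explicit Gaussian matrix stands in for the optical one, and the dimensions are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

n, d, k = 20, 10_000, 1_000   # points, original dim, reduced dim
X = rng.normal(size=(n, d))

# Gaussian random projection, scaled so that Euclidean distances are
# preserved in expectation (Johnson-Lindenstrauss).
P = rng.normal(size=(d, k)) / np.sqrt(k)
Y = X @ P

# Pairwise distance between two points, before and after projection.
orig = np.linalg.norm(X[0] - X[1])
proj = np.linalg.norm(Y[0] - Y[1])
ratio = proj / orig           # close to 1 with high probability
```

The projection is "oblivious" in that the matrix P is drawn without looking at the data, which is exactly why a fixed random matrix baked into hardware can serve every dataset.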
Supervised Learning (Random Features kernels)
- Lifting Data… and Washing Machines: Kernel Computations from Optical Random Features (blog post by Ruben Ohana, March 17, 2020)
- Au Revoir Backprop! Bonjour Optical Transfer Learning! (blog post by Luca Tommasone), Feb 20, 2020.
- Beyond Overfitting and Beyond Silicon: The double descent curve (blog post by Alessandro Cappelli), Jan 15, 2020.
- Supervised Random Projections made Lighter (blog post by François Boniface), Dec 6, 2019
- Kernel computations from large-scale random features obtained by Optical Processing Units, Ruben Ohana, Jonas Wacker, Jonathan Dong, Sébastien Marmin, Florent Krzakala, Maurizio Filippone, Laurent Daudet. Dec 3, 2019, arXiv:1910.09880, Accepted at ICASSP 2020
- “Don’t take it lightly: Phasing optical random projections with unknown operators”, Sidharth Gupta, Rémi Gribonval, Laurent Daudet, Ivan Dokmanić, NeurIPS 2019
- Transfer Learning on the OPU (API example available on LightOn Cloud)
- Random Projections through multiple optical scattering: Approximating kernels at the speed of light, Alaa Saade, Francesco Caltagirone, Igor Carron, Laurent Daudet, Angélique Drémeau, Sylvain Gigan, Florent Krzakala, ICASSP 2016
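The "Random Features kernels" heading above refers to the idea, popularised by Rahimi and Recht, that random projections followed by a non-linearity approximate a kernel. A minimal sketch for the Gaussian (RBF) kernel follows; the feature count and data sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def rbf_kernel(x, z, gamma=0.5):
    """Exact Gaussian (RBF) kernel value exp(-gamma * ||x - z||^2)."""
    return np.exp(-gamma * np.sum((x - z) ** 2))

def random_fourier_features(X, n_features, gamma=0.5):
    """Map X so that dot products of features approximate the RBF kernel."""
    d = X.shape[1]
    # Frequencies drawn from the kernel's spectral density.
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

X = rng.normal(size=(2, 5))
Phi = random_fourier_features(X, n_features=10_000)
exact = rbf_kernel(X[0], X[1])
approx = float(Phi[0] @ Phi[1])  # converges to `exact` as features grow
```

The papers above study what kernels are obtained when the random projection and non-linearity are the ones an OPU physically implements, rather than this textbook cosine map.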
Unsupervised Learning / Randomized Numerical Linear Algebra
- Randomized SVD (API example available on LightOn Cloud)
- Accelerating SARS-CoV-2 Molecular Dynamics Studies with Optical Random Features (blog post by Amélie Chatelain, March 25, 2020)
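The randomized SVD referenced above can be sketched compactly following the random range-finder scheme of Halko, Martinsson and Tropp; the matrix sizes and oversampling parameter below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)

def randomized_svd(M, rank, n_oversample=10):
    """Approximate truncated SVD via a random range finder."""
    # Sketch the column space of M with a Gaussian test matrix.
    Omega = rng.normal(size=(M.shape[1], rank + n_oversample))
    Q, _ = np.linalg.qr(M @ Omega)
    # Project M onto that low-dimensional range, then take a small exact SVD.
    U_small, s, Vt = np.linalg.svd(Q.T @ M, full_matrices=False)
    return (Q @ U_small)[:, :rank], s[:rank], Vt[:rank]

# A matrix of exact rank 5, where the approximation is essentially exact.
M = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 300))
U, s, Vt = randomized_svd(M, rank=5)
rel_err = np.linalg.norm(M - (U * s) @ Vt) / np.linalg.norm(M)
```

The expensive step, multiplying M by a large random matrix, is precisely the operation an OPU accelerates.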
Natural Language Processing
Time Series Analysis
- High dimensional time series change detection
- “NEWMA: a new method for scalable model-free online change-point detection”, Nicolas Keriven, Damien Garreau, Iacopo Poli, Oct 8, 2018, arXiv:1805.08061
- Special Recurrent Neural Networks
- “Optical Reservoir Computing using multiple light scattering for chaotic systems prediction”, Jonathan Dong, Mushegh Rafayelyan, Florent Krzakala, Sylvain Gigan, August 27, 2019, arXiv:1907.00657
- “Scaling up Echo-State Networks with multiple light scattering”, Jonathan Dong, Sylvain Gigan, Florent Krzakala, Gilles Wainrib, IEEE Statistical Signal Processing Workshop (SSP), Freiburg, Germany, 2018, pp. 448-452
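The NEWMA change-detection statistic listed above can be sketched in a few lines: two exponential moving averages of random features, with different forgetting factors, drift apart when the data distribution changes. The feature map, dimensions, and the synthetic mean shift below are illustrative assumptions for this sketch.

```python
import numpy as np

rng = np.random.default_rng(4)

# A fixed random-feature map (the role the OPU plays at large scale).
d, m = 3, 2000
W = rng.normal(size=(d, m))
b = rng.uniform(0, 2 * np.pi, size=m)

def features(x):
    return np.cos(x @ W + b)

# Two exponentially weighted moving averages with different forgetting
# factors: the fast one tracks recent data, the slow one the older past.
lam_fast, lam_slow = 0.1, 0.01
ewma_fast = np.zeros(m)
ewma_slow = np.zeros(m)
stats = []
for t in range(400):
    # Synthetic stream whose mean shifts at t = 300.
    x = rng.normal(size=d) + (3.0 if t >= 300 else 0.0)
    z = features(x)
    ewma_fast = (1 - lam_fast) * ewma_fast + lam_fast * z
    ewma_slow = (1 - lam_slow) * ewma_slow + lam_slow * z
    # Detection statistic: distance between the two averages.
    stats.append(np.linalg.norm(ewma_fast - ewma_slow))
```

After the change point the statistic spikes, since the fast average adapts before the slow one; thresholding it gives an online, model-free detector that never stores past samples.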
LightOn’s Artificial Intelligence Research Workshops
- LightOn AI Fourth Research Workshop — FoRM #4: The Future of Random Matrices, December 19th, 2019.
- LightOn’s Third Research Workshop, May 24th, 2019
- LightOn’s Second Research Workshop: Mini-workshop: The Future of Random Projections II, May 2, 2018
- LightOn’s First Research Workshop: The Future of Random Projections: a mini-workshop, December 21, 2017
- Fast Optical System Identification by Numerical Interferometry by Sidharth Gupta, Rémi Gribonval, Laurent Daudet, Ivan Dokmanić, arXiv:1911.01006, 4 Nov 2019. Accepted at ICASSP 2020
- “Machine learning and the physical sciences”, Giuseppe Carleo, Ignacio Cirac, Kyle Cranmer, Laurent Daudet, Maria Schuld, Naftali Tishby, Leslie Vogt-Maranto, Lenka Zdeborová, arXiv:1903.10563, March 25, 2019. Rev. Mod. Phys. 91, 045002 (2019), Published December 6, 2019
- LightOn’s Summer Series #1 — Faith No Moore: Silicon Will Not Scale Indefinitely (blog post by Julien Launay), Aug 9, 2019
- LightOn’s Summer Series #2 — Optical Computing: a New Hope (blog post by Julien Launay), Aug 20, 2019
- LightOn’s Summer Series #3 — How I Learned to Stop Worrying and Love Random Projections (blog post by Julien Launay), Oct 28, 2019
- Random Projections at the Speed of Light: Full Ahead Mr. Sulu, Maximum Warp (blog post)