LightOn’s technology uses light to perform computations of interest to Machine Learning / Artificial Intelligence. Our analog computing devices harness natural physical processes to operate at unprecedented speed, scale, and power efficiency.

Our first product is a hardware co-processor called the Optical Processing Unit – or OPU. It is designed to boost some of the most compute-intensive tasks in Machine Learning. The OPU simply plugs into a standard server or workstation, and is accessed through a simple toolbox seamlessly integrated within familiar programming environments. Full-scale OPU prototypes are already available to selected users through the LightOn Cloud, and registration is now open for researchers and data scientists interested in trying out our technology on our cloud. The sign-up page is here.

What does it do?

Matrix-vector multiplications are amongst the most important elementary computing blocks in Machine Learning. For instance, Deep Learning schemes essentially stack such matrix-vector multiplications with non-linearities. 

An OPU does exactly that: it multiplies the input data by a fixed matrix, passes the result through an element-wise non-linearity, and outputs it. Because the OPU harnesses optics, it can perform this operation

  • at massive data size
  • very fast
  • at minimal power consumption

What makes each OPU device literally unique is the fixed random matrix at the core of its computations, well suited to the statistical learning at work in many Machine Learning / Artificial Intelligence schemes.
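For readers who prefer code, here is a minimal NumPy sketch of the operation an OPU performs, simulated numerically. The matrix sizes are placeholders and the squared-modulus non-linearity reflects the intensity measured by a camera in our first-generation prototype (see the proof-of-concept paper below); the physical device implements this transform optically, at much larger scale.

import numpy as np

rng = np.random.default_rng(0)

n_features = 1000      # input dimension (illustrative)
n_components = 5000    # output dimension (illustrative)

# Fixed complex Gaussian random matrix: the numerical stand-in for the
# scattering medium, drawn once and never updated.
R = rng.standard_normal((n_components, n_features)) \
    + 1j * rng.standard_normal((n_components, n_features))

def opu_transform(x):
    # Random projection followed by an element-wise non-linearity
    # (here the squared modulus, i.e. the intensity seen by the camera).
    return np.abs(R @ x) ** 2

x = rng.standard_normal(n_features)   # example input vector
y = opu_transform(x)                  # non-linear random features, size n_components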

Use cases

Examples of successful use cases of the OPU technology include:

  • Image and Video Classification
  • Recommender Systems
  • Anomaly Detection
  • Natural Language Processing

A deeper insight

We regularly communicate our findings to the scientific community through preprints, conference presentations, blog posts, and publications.

The proof of concept of our first-generation prototype can be found in:

       « Random Projections through multiple optical scattering: Approximating kernels at the speed of light », Alaa Saade, Francesco Caltagirone, Igor Carron, Laurent Daudet, Angélique Drémeau, Sylvain Gigan, Florent Krzakala, https://arxiv.org/abs/1510.06664

Our OPU can also be made to perform a linear operation, as shown in this preprint:

       « Don’t take it lightly: Phasing optical random projections with unknown operators », Sidharth Gupta, Rémi Gribonval, Laurent Daudet, Ivan Dokmanić, https://arxiv.org/abs/1907.01703

Two different uses of the OPU for time-series analysis have been published in three papers/preprints:

Time series prediction with Echo State Networks:
       « Scaling up Echo-State Networks with multiple light scattering », Jonathan Dong, Sylvain Gigan, Florent Krzakala, Gilles Wainrib, IEEE Statistical Signal Processing Workshop (SSP), Freiburg, Germany, 2018, pp. 448-452, https://arxiv.org/abs/1609.05204

and 

       « Optical Reservoir Computing using multiple light scattering for chaotic systems prediction », Jonathan Dong, Mushegh Rafayelyan, Florent Krzakala, Sylvain Gigan, https://arxiv.org/abs/1907.00657
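To make this first use concrete, here is a brief, illustrative echo-state-network loop in NumPy. It is not the exact architecture from the papers above: the update rule, the tanh non-linearity, and the sizes are placeholders. The point is that each time step multiplies the reservoir state by a large fixed random matrix, which is precisely the operation an OPU performs optically.

import numpy as np

rng = np.random.default_rng(0)
n_reservoir, n_input = 2000, 1        # sizes are placeholders
W_res = rng.standard_normal((n_reservoir, n_reservoir)) / np.sqrt(n_reservoir)
W_in = rng.standard_normal((n_reservoir, n_input))

def reservoir_step(state, u):
    # One reservoir update: fixed random projection + element-wise non-linearity.
    return np.tanh(W_res @ state + W_in @ u)

# Drive the reservoir with a toy input time series and collect its states.
T = 500
inputs = np.sin(0.1 * np.arange(T)).reshape(T, 1)
states = np.zeros((T, n_reservoir))
state = np.zeros(n_reservoir)
for t in range(T):
    state = reservoir_step(state, inputs[t])
    states[t] = state

# Only a linear readout (e.g. ridge regression) on the collected states is
# trained to predict the next value of the series; the random weights stay fixed.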

 

Online change-point detection in time series:
       « NEWMA: a new method for scalable model-free online change-point detection », Nicolas Keriven, Damien Garreau, Iacopo Poli, https://arxiv.org/abs/1805.08061
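Below is a hedged NumPy sketch of the NEWMA idea; see the preprint for the actual algorithm, in particular the principled choice of forgetting factors and threshold, which are set to arbitrary illustrative values here. Two exponentially weighted moving averages of random features of the stream, computed with different forgetting factors, are compared, and a change point is flagged when their distance exceeds a threshold. The fixed random projection inside the feature map is the step an OPU can take over.

import numpy as np

rng = np.random.default_rng(0)
d, m = 10, 500                       # input dimension, number of random features (illustrative)
W = rng.standard_normal((m, d))      # fixed random projection, the part an OPU provides

def features(x):
    # Random-feature map; the cosine non-linearity is one common choice.
    return np.cos(W @ x) / np.sqrt(m)

lam_fast, lam_slow = 0.2, 0.05       # illustrative forgetting factors
threshold = 0.1                      # illustrative fixed threshold

# Toy stream whose distribution shifts halfway through.
stream = np.vstack([rng.normal(0, 1, (300, d)), rng.normal(2, 1, (300, d))])

z_fast = np.zeros(m)
z_slow = np.zeros(m)
stats = []
for x in stream:
    phi = features(x)
    z_fast = (1 - lam_fast) * z_fast + lam_fast * phi
    z_slow = (1 - lam_slow) * z_slow + lam_slow * phi
    stats.append(np.linalg.norm(z_fast - z_slow))

alarms = [t for t, s in enumerate(stats) if s > threshold]  # candidate change points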

Together with fellow scientists, we discuss this initial concept within the general framework of the links between machine learning and physics (including quantum computing):


       « Machine learning and the physical sciences », Giuseppe Carleo, Ignacio Cirac, Kyle Cranmer, Laurent Daudet, Maria Schuld, Naftali Tishby, Leslie Vogt-Maranto, Lenka Zdeborová, https://arxiv.org/abs/1903.10563