LightOn FAQ

Common Questions and Answers about LightOn's Technology

What does LightOn do?
LightOn is redefining computing for some of today's largest challenges in AI and HPC. We develop new photonic computing hardware that is orders of magnitude more efficient than silicon chips. We advocate a hybrid approach in which standard chips and LightOn's photonic co-processors complement each other - eventually getting the best of both worlds. Whether seamlessly integrated into existing computing pipelines or paired with optimized algorithms, LightOn's technology makes it radically easier to process large-scale data. With massive models now more accessible, we provide unique tools to unlock the tremendous economic, societal, and scientific impact of Transformative AI.
LightOn is built around a community of AI / ML / HPC researchers and engineers in both academia and industry. We communicate with the scientific and tech community through papers, conference talks, preprints, blog posts, workshops, and our API documentation pages. We also maintain our own GitHub account, where we open-source most of the LightOn AI Research (LAIR) algorithms. We also organize monthly meetups with world-class guest researchers; you are welcome to join us at the next one!

Do I need to be an expert in optics or photonics to use LightOn's technology?
Not at all! LightOn's technology harnesses light-matter interaction to perform computations "at the speed of light" - but let us worry about that; you only need to import our libraries through a single line of Python code. We also have plenty of documentation, a user forum, and responsive support. Some users have told us that they were up and running in only half a day.

Are more products and features on the way?
Of course, we are only getting started! You can be among the first to learn about what we are working on by subscribing to our monthly newsletter.

Can LightOn help me analyze my specific use case?
We would be happy to speak with you about your needs! LightOn may be able to provide consulting services to perform this analysis for your specific use case.

I am an academic researcher - can I get access to LightOn's technology?
As a researcher, you are eligible for the LightOn Cloud Research program, which offers 20 free hours of LightOn Cloud usage.
If you are interested in using a LightOn Appliance for your research (for instance in an H2020/Horizon Europe project or a nationally-funded project), please contact us to discuss possible arrangements.

What is a Random Projection?
A Random Projection, performed by the current OPU family (Aurora), is a specific kind of matrix-vector multiplication: the multiplication of an input vector by a matrix of fixed random coefficients.

  • Random Projections have a long history in the analysis of large-scale data, since they achieve universal data compression. For example, you can use this tool to reduce the size of any type of data while keeping the important information. There are well-established mathematical guarantees on why this works, in particular thanks to the Johnson-Lindenstrauss lemma. Essentially, this lemma states that compression through RPs approximately preserves distances: if two items are similar in the original domain, their compressed versions will also be similar (see the sketch after this list). In a Neural Network framework, this operation is simply a fixed, fully-connected random layer, commonly used to reduce data dimension. In randomized Numerical Linear Algebra, Random Projections are one of the required tools to make computations tractable for data that is just too large to be handled by classical methods (for instance, in Randomized SVD).
  • Random Projections can also be used for data expansion, for instance when the classes are not easily separated in the original domain: expanding into a higher dimension makes linear separation easier - in the asymptotic regime, this operation actually approximates a well-defined kernel.
    Regardless of input/output dimensions, Random Projections also have some optimal data-mixing properties, used, for instance, for data pre-conditioning.
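
To make the distance-preservation point concrete, here is a minimal NumPy sketch (illustrative only - none of these names are part of LightOn's API): it compresses high-dimensional points with a fixed random matrix and checks that pairwise distances barely move.

    # Illustrative sketch of a Random Projection - not LightOn's API.
    import numpy as np
    from scipy.spatial.distance import pdist

    rng = np.random.default_rng(0)
    n, d, k = 100, 10_000, 1_000                   # 100 points, dimension 10,000 -> 1,000

    X = rng.standard_normal((n, d))                # the original high-dimensional data
    R = rng.standard_normal((d, k)) / np.sqrt(k)   # fixed random matrix; 1/sqrt(k) keeps scales
    Y = X @ R                                      # the Random Projection: one matrix product

    ratios = pdist(Y) / pdist(X)                   # pairwise-distance ratios, after vs. before
    print(f"mean ratio {ratios.mean():.3f}, std {ratios.std():.3f}")
    # Typically close to 1 with a small spread: similar items stay similar.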

Can Random Projections be applied to my type of data?
Yes. Random Projections are universal operators that can therefore be applied to any type of data: images and videos, time-series, scientific experimental data, HPC simulation results, text/tokenized text, graphs, audio/speech signals, financial data, or more abstract features … anywhere one is “drowned in data but starved for wisdom”.

Can I perform Random Projections on a CPU or GPU instead?
Yes, at small scales you can perform the multiplication by a random matrix on a CPU or GPU (or an FPGA …). At larger scales, however, an OPU allows you to perform the same operation much faster, with much lower power consumption, and without hitting memory limits - depending on your hardware configuration, the crossover may appear at dimensions from a few thousand to a few tens of thousands, not even mentioning the one-off cost of generating the random matrix.
We even have a simulated OPU mode in our API, so that a single line of code lets you run the same Random Projections on an OPU or on a CPU/GPU. The sketch below shows one way to probe the crossover on your own hardware.
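
A rough, hedged way to see where a CPU starts to struggle (plain NumPy only; we do not reproduce the exact API of our library here): time a dense random matrix-vector product at growing sizes, and watch both the runtime and the memory taken by the matrix itself.

    # Illustrative CPU benchmark - not LightOn's API.
    import time
    import numpy as np

    rng = np.random.default_rng(0)
    for d in (1_000, 5_000, 10_000, 20_000):
        R = rng.standard_normal((d, d)).astype(np.float32)  # one-off generation cost
        x = rng.standard_normal(d).astype(np.float32)
        t0 = time.perf_counter()
        for _ in range(10):
            y = R @ x
        dt = (time.perf_counter() - t0) / 10
        print(f"d={d:>6}: {dt * 1e3:6.2f} ms per projection, "
              f"matrix alone takes {R.nbytes / 1e9:.1f} GB of RAM")
    # At d = 1,000,000 (OPU scale) the matrix alone would need ~4 TB:
    # memory limits are hit long before arithmetic ones.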

Is the OPU an analog or a digital device?
Short Answer: The OPU works internally in the analog domain, but has digital inputs/outputs.


Long Answer: The computations inside the OPU are performed in an analog fashion, following a non-von Neumann computing paradigm. Such low-precision computing is now standard in Machine Learning, which is fundamentally about statistics on noisy data. In practice, we have repeatedly observed that the end result (e.g. in terms of classification accuracy) is not affected as compared to the same computations performed digitally at 32-bit precision. A toy illustration of this robustness follows.
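
As a hedged toy illustration (our own NumPy sketch, not a model of the OPU's actual noise), one can quantize a Random Projection's output to 8 bits and check that nearest-neighbour relations survive:

    # Illustrative low-precision experiment - not a model of the OPU.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 2_000))
    R = rng.standard_normal((2_000, 500)) / np.sqrt(500)
    Y = X @ R                                    # full-precision projection

    # Crude 8-bit uniform quantization of the projected features.
    Yq = np.round((Y - Y.min()) / (Y.max() - Y.min()) * 255)

    def nearest(feats):
        """Index of each point's nearest neighbour (Euclidean)."""
        sq = (feats ** 2).sum(1)
        d2 = sq[:, None] + sq[None, :] - 2 * feats @ feats.T
        np.fill_diagonal(d2, np.inf)
        return d2.argmin(1)

    agree = (nearest(Y) == nearest(Yq)).mean()
    print(f"nearest-neighbour agreement after 8-bit quantization: {agree:.0%}")
    # Typically at or near 100%: statistics-level results tolerate low precision.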

Can I use the OPU as a Random Number Generator?
OPUs use fixed random matrices at extremely large sizes, with more than 10^12 (one trillion) random parameters whose statistical distributions are guaranteed by physics. OPUs use these matrices for super-fast matrix-vector multiplications, without having to explicitly identify the random coefficients. Extracting these random numbers is possible but would be slow (without even mentioning storage issues), so definitely not as efficient as a (True) Random Number Generator - the sketch below shows why.
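
Here is a hedged sketch of why extraction is slow (a NumPy stand-in with a linear transform; the real optical transform and scales differ): each one-hot probe reveals only a single column, so reading out the whole matrix costs one projection per input dimension.

    # Illustrative extraction-by-probing sketch - not LightOn's API.
    import numpy as np

    rng = np.random.default_rng(0)
    d, k = 1_000, 1_000
    R_hidden = rng.standard_normal((k, d))   # stand-in for the fixed optical matrix
    project = lambda x: R_hidden @ x         # stand-in for one OPU call

    recovered = np.empty((k, d))
    for i in range(d):                       # one projection per input dimension
        e = np.zeros(d)
        e[i] = 1.0
        recovered[:, i] = project(e)         # one-hot probe reveals column i

    assert np.allclose(recovered, R_hidden)
    # At OPU scale (~10^12 coefficients), this means ~10^6 projections and
    # terabytes of storage - no match for a dedicated (T)RNG.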

Common Questions and Answers about LightOn's Hardware Products

Will the OPU speed up my application?
Short Answer: It depends on the algorithm and the specific implementation.
Long Answer: LightOn’s products feature an OPU, designed as a specialized co-processor, that speeds up some of the large-scale matrix-vector multiplications useful in AI and HPC. We have not examined all the potential algorithms ourselves, so we may not know whether there is potential for a speedup in your specific use case. You can evaluate the OPU for this purpose by accessing it directly through LightOn Cloud (cloud.lighton.ai). LightOn’s team has written a few blog posts about some common use cases; see also the publications section (https://lighton.ai/publications/) gathering work by our team and user community.

LightOn can also provide consulting services (possibly under NDA) to help you evaluate the potential of the OPU technology for a specific use case.

Does the LightOn Appliance need a GPU or a CPU?
  • The LightOn Appliance does not need a GPU, but it does need a CPU: the Appliance must be connected to a compatible server. You can find the minimum requirements in the spec sheet.
  • Keep in mind that the LightOn Appliance is not a general-purpose processor: it performs specific computations of interest to ML and HPC. Therefore, in the majority of cases, the OPU improves the performance of an ML pipeline by working in conjunction with a GPU.

What is the difference between LightOn Appliance and LightOn Cloud?
LightOn Appliance is our on-premises product for production-grade Machine Learning. It can be seamlessly integrated into your own data center and is available for pre-order. Learn more at https://lighton.ai/lighton-appliance/
LightOn Cloud is our cloud platform, through which you can access LightOn OPUs without the need to host a LightOn Appliance in your own data center. Learn more at https://cloud.lighton.ai
Both products use the same underlying OPU technology. A typical path is to start on LightOn Cloud to test your algorithms at a small scale, before leasing an Appliance to move to production.

OPUs, Photonic Cores, LightOn Appliance... how is everything connected?
  • Both LightOn Appliance and LightOn Cloud are based on the same underlying Optical Processing Unit (OPU) technology, the first large-scale hybrid digital/analog computing platform for AI.
    Inside the OPU lies the Photonic Core, its brain: the Photonic Core implements specific operations in a massively parallel fashion.
  • Different Photonic Core families perform different operations. For example, the Nitro Photonic Cores natively perform Random Projections followed by an element-wise quadratic non-linearity, while the Lazuli Photonic Cores perform linear Random Projections (see the sketch after this list).
  • Improvements in OPU or Photonic Core generations that do not change the performed operation are marked by a gen# progression.
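
As a hedged NumPy sketch of the two native operations (our toy model; the true physical transforms involve scalings and noise not captured here):

    # Toy models of the two Photonic Core families' native operations.
    import numpy as np

    rng = np.random.default_rng(0)
    d, k = 1_000, 2_000
    x = rng.standard_normal(d)

    # Lazuli-style core: a linear Random Projection.
    R_lin = rng.standard_normal((k, d))
    y_lazuli = R_lin @ x

    # Nitro-style core: a Random Projection followed by an element-wise
    # quadratic non-linearity (here modeled as |.|^2 of a complex projection).
    R_cplx = rng.standard_normal((k, d)) + 1j * rng.standard_normal((k, d))
    y_nitro = np.abs(R_cplx @ x) ** 2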