Lighting: The Unsung Hero of RGB, Multispectral, and Hyperspectral Imaging for Machine Learning

When it comes to imaging, whether you’re working with monochrome, colour (RGB), multispectral, or hyperspectral (HSI) cameras for machine vision or machine learning, lighting is often the most overlooked factor.

Yet, it’s the foundation of accurate, reliable data. Cameras don’t magically “see” everything; they only capture the light in the scene. If your illumination isn’t right, you might miss something important. In other words, lighting for hyperspectral imaging and machine vision isn’t a nice-to-have; it’s a core design parameter. 

Every imaging system depends on reflected or emitted light. If certain wavelengths aren’t present in your illumination, they won’t appear in your data. For example, many standard LED lights look perfectly white to the human eye but have almost no power beyond 700 nm (red), and they often contain a large amount of blue light relative to the other colours. This means that spectral features in the near-infrared are lost and that features at the blue end of the spectrum can be over-emphasised. For multispectral and HSI applications, this can be a deal-breaker.

At Living Optics, we design VNIR hyperspectral cameras and software for machine vision, computer vision, and machine learning workflows. Our hyperspectral technology maximises light throughput and works well in both sunlight and indoor setups with supplemental illumination, but well-chosen, well-controlled lighting is still essential for consistent, reliable results.

Key Lighting Principles for RGB, Multispectral, and Hyperspectral Imaging 

When choosing a lighting solution for a machine vision or computer vision application, especially if you’re training machine learning models, you need to consider a few key principles. 

First, you need to control the geometry of the illumination and the effect of ambient light. Lighting angles matter. Direct reflections can distort measurements, so avoid placing lights head-on. A common setup is 0/45° geometry, where lights are angled to minimise glare. For non-flat subjects, multiple lights can reduce shadows and produce a more even illumination. Ambient lighting from outside your setup (e.g. unwanted sunlight or overhead LEDs) can introduce variability and should be minimised with shades, covers, or careful alignment of the camera system.

Uneven illumination creates hot spots and dark corners, which skew results and make it more difficult for your analysis algorithms or machine learning models to perform well. As a rule, you should aim for consistent brightness across the entire field of view. If possible, check uniformity visually or by viewing the image histogram within the camera software: a well-lit scene should sit within the histogram bounds without large peaks at either end.
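As an illustration, the uniformity check can also be scripted. The sketch below (the function name, tile count, and pass threshold are our own, not from any camera SDK) splits a greyscale frame into patches and compares the darkest patch to the brightest:

```python
import numpy as np

def illumination_uniformity(image: np.ndarray, tiles: int = 4) -> float:
    """Split a greyscale image into tiles x tiles patches and return the
    ratio of the darkest patch mean to the brightest (1.0 = perfectly even)."""
    h, w = image.shape[:2]
    means = []
    for i in range(tiles):
        for j in range(tiles):
            patch = image[i * h // tiles:(i + 1) * h // tiles,
                          j * w // tiles:(j + 1) * w // tiles]
            means.append(patch.mean())
    return min(means) / max(means)

# An evenly lit frame scores 1.0; a hot spot or dark corner drags it down.
flat = np.full((400, 400), 180.0)
print(illumination_uniformity(flat))  # 1.0
```

A practical rule of thumb (again, our own) is to aim for a ratio above roughly 0.8 before collecting training data.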

Finally, you should choose the right spectral profile for your lights. This matters less when you are using cameras with fewer spectral channels or low IR sensitivity (a restricted spectrum may even be beneficial there), but it becomes critical as the number of spectral channels increases or when you want to look for features in the infrared region.

  • RGB cameras need visible light (roughly 400–700 nm). 
  • VNIR systems (400–1000 nm) require broadband sources like halogen lamps or custom broadband LED systems. 
  • Longer wavelengths may need specialized lamps with an enhanced profile in the infrared. 
  • Fluorescence imaging generally requires a narrow-linewidth excitation source that can be filtered out at the camera to allow the (weaker) fluorescence signal to be seen. 

From a machine learning perspective, the spectral profile of your illumination also defines what information is even available to your models. If a wavelength isn’t present in the lighting, it won’t be in your training data, and your model can’t learn from something it never sees. 
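To make this concrete, here is a rough sketch of how you might sanity-check a measured illuminant spectrum against the bands you care about. The spectrum, band limits, and 5% threshold below are all illustrative assumptions, not measured data:

```python
import numpy as np

# Hypothetical illuminant spectrum: wavelengths (nm) and relative power.
# This mimics a "white" LED: a strong blue peak, a phosphor hump, little NIR.
wavelengths = np.arange(400, 1001, 10)
power = (np.exp(-((wavelengths - 450) / 20) ** 2)
         + 0.6 * np.exp(-((wavelengths - 560) / 60) ** 2))

def band_fraction(lo: float, hi: float) -> float:
    """Fraction of total illuminant power falling in [lo, hi] nm."""
    mask = (wavelengths >= lo) & (wavelengths <= hi)
    return float(power[mask].sum() / power.sum())

for name, lo, hi in [("blue 400-500 nm", 400, 500),
                     ("red 600-700 nm", 600, 700),
                     ("NIR 700-1000 nm", 700, 1000)]:
    frac = band_fraction(lo, hi)
    flag = "OK" if frac > 0.05 else "WEAK - features here will be lost"
    print(f"{name}: {frac:.1%} {flag}")
```

Run against this LED-like profile, the NIR band comes back weak, which is exactly the failure mode described above: anything your application needs beyond 700 nm would simply be absent from the training data.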

Simple Lighting Checks for Machine Vision and Hyperspectral Imaging 

How can you know that your lighting is good enough? Here are some simple checks that you can perform: 

  • White card test: Place a neutral white card in the scene. Does it look evenly lit? Is there any colour cast? Is the image too bright and blown out? 
  • Histogram check: A good exposure fills most of the histogram without large peaks at either side (which would indicate highlights or shadows). 
  • Lux meter (optional): For indoor setups, aim for roughly 3000–6000 lux for static subjects. Moving subjects might need more light so that you can use shorter exposures to freeze motion.  
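The white card and histogram checks above can be approximated in a few lines of code. In this sketch the clipping and colour-cast thresholds are our own illustrative choices, not vendor-specified values:

```python
import numpy as np

def exposure_report(image: np.ndarray, max_value: int = 255) -> dict:
    """Flag clipped shadows/highlights and a colour cast on an 8-bit RGB frame.
    Thresholds are illustrative rules of thumb, not vendor specifications."""
    flat = image.reshape(-1, image.shape[-1]).astype(float)
    shadow_frac = float((flat.max(axis=1) <= 5).mean())              # near-black pixels
    highlight_frac = float((flat.min(axis=1) >= max_value - 5).mean())  # blown pixels
    channel_means = flat.mean(axis=0)
    cast = float(channel_means.max() / channel_means.min())          # 1.0 = neutral
    return {
        "shadow_fraction": shadow_frac,
        "highlight_fraction": highlight_frac,
        "colour_cast_ratio": cast,
        "ok": shadow_frac < 0.01 and highlight_frac < 0.01 and cast < 1.15,
    }

# A neutral, mid-grey "white card" frame should pass all three checks.
card = np.full((100, 100, 3), 140, dtype=np.uint8)
print(exposure_report(card)["ok"])  # True
```

A frame of a white card that is blown out, or that shows a strong channel imbalance, would fail the same report, telling you to fix the lighting before touching the camera settings.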

These simple checks not only improve image quality on screen but also help reduce lighting-induced noise and unwanted variability in the datasets you use to train and validate your models.

Lighting and Data Quality: The Bottom Line for Machine Vision and Machine Learning 

Lighting isn’t an afterthought for teams building machine learning or computer vision systems on RGB, multispectral, or hyperspectral data; it’s the foundation of good quality imaging. The right spectrum, geometry, and uniformity make the difference between unusable data and data you can rely on, which means less time spent debugging models and re-collecting datasets. Before you tweak your camera settings, check your lights first.

If you’re developing machine vision or machine learning workflows with RGB and find yourself burning time on image acquisition and annotation, consider adding hyperspectral to your solution. Living Optics can deliver a low-cost, real-time system that reduces development effort and iteration cycles—shortening your time to market while giving you a machine-vision ecosystem ready for your next challenge.

We would love to hear from you.