Transforming Computer Vision Capabilities with AI-Integrated Hyperspectral Imaging

AI is changing how we view and analyse images. Trained on vast datasets, AI learns to understand visual information, spotting patterns and correlations that earlier analysis methods missed. This allows machines to derive new information and insights from images, matching and even surpassing human capabilities.

AI computer vision capabilities now include:

  • Image segmentation: Divide an image into different sets of pixels for object detection or other related tasks.
  • Object detection and classification: Identify objects and understand them within the specific context of the image.
  • Scene understanding: Interpret the scene as a whole based on the combination and arrangement of the objects within it.

Taking AI to the Edge

Similar to other branches of AI, early AI computer vision systems required significant compute and long inference times to return meaningful information. As AI algorithms and training methods have improved, tasks can now be completed faster using lighter-weight, more efficient models.

Combining these software advances with hardware upgrades and the ability to pack more compute into smaller volumes with lower power requirements, computer vision has moved from centralised resource-intensive environments to the network edge.

This allows AI to be deployed in the field, capturing visual data and delivering real-time inference locally. Examples include artificial intelligence in robotics, analysing visual data to understand the environment and make decisions on how to perform tasks optimally.

Using AI to develop smart fleets of robots that can analyse, report, and respond to visual information enables new use cases across agriculture, environmental monitoring, food inspection, and other industries.

This is all possible by developing and training AI models on standard colour images or simple RGB inputs. But what could these algorithms do if they had over ten times more information per pixel? What is possible if we combine AI analysis with hyperspectral imaging data?

Enhancing Perception through AI-Integrated Hyperspectral Imaging

While a standard colour image captures light using three spectral bands (red, green, and blue), a hyperspectral imaging system uses many more to see and analyse the world in far greater detail. This enables in-depth spectral analysis and the identification of signatures that reveal new information related to the physical and chemical properties of the materials present in the image.
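To make the difference in information density concrete, here is a minimal sketch using NumPy arrays. The image dimensions and the 96-band count are illustrative assumptions for this example, not Living Optics specifications:

```python
import numpy as np

# An RGB frame: three values per pixel (red, green, blue).
rgb = np.zeros((480, 640, 3), dtype=np.float32)

# A hyperspectral cube: here, 96 narrow spectral bands per pixel
# (the band count is chosen purely for illustration).
cube = np.zeros((480, 640, 96), dtype=np.float32)

# Each pixel now carries a full spectrum rather than a colour triple.
spectrum = cube[240, 320, :]  # 96-element spectral vector for one pixel
print(rgb[240, 320].size, spectrum.size)  # 3 vs 96 values per pixel
```

That per-pixel spectrum is what enables the material-level analysis described below: two objects with identical RGB colour can still have distinct spectral signatures.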

Integrating AI analysis with hyperspectral data could:

  • Improve the performance of computer vision systems, delivering next-generation systems based on spectral analysis
  • Simplify the analysis required to complete computer vision tasks, utilising spectral data to enable the use of lightweight AI models

Rather than analysing and finding patterns based on simple RGB pixel values, AI algorithms could be trained to unlock new capabilities by integrating hyperspectral imaging analysis methodologies. For example, they could identify subtle spectral variations to understand images based on the chemical properties of the materials present, as opposed to just colour and shape.

Instead of deploying large and sophisticated AI models trained on vast datasets to identify and classify objects in an image, simpler alternatives could be used to achieve the same results based on the spectral signatures present in the image. This can reduce development time and inference computational costs for standard computer vision applications.
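As a hedged illustration of how a simpler, spectrum-based approach might work, the sketch below uses the spectral angle mapper, a classical hyperspectral technique that labels a pixel by the reference signature its spectrum most closely matches. The four-band signatures and material names are invented for this example:

```python
import numpy as np

def spectral_angle(pixel: np.ndarray, reference: np.ndarray) -> float:
    """Angle (radians) between a pixel spectrum and a reference signature."""
    cos = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def classify(pixel: np.ndarray, library: dict) -> str:
    """Label a pixel with the material whose signature it most closely matches."""
    return min(library, key=lambda name: spectral_angle(pixel, library[name]))

# Toy 4-band signature library (values are illustrative, not measured data).
library = {
    "healthy_leaf": np.array([0.05, 0.10, 0.45, 0.50]),
    "stressed_leaf": np.array([0.08, 0.15, 0.25, 0.30]),
}
pixel = np.array([0.06, 0.11, 0.43, 0.48])
print(classify(pixel, library))  # closest signature: healthy_leaf
```

A lookup like this runs in microseconds per pixel with no trained model at all, which is the sense in which spectral data can stand in for heavier learned feature extraction in some tasks.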

AI-integrated hyperspectral imaging has the potential to increase the efficiency and capabilities of computer vision systems. But to make this a reality and bring it to the real world, we need a new approach to hyperspectral imaging.

The Challenges of Taking AI-integrated Hyperspectral Imaging to the Edge

Just as AI underwent technological advances for use in the field, so must hyperspectral imaging. Previously, hyperspectral imaging systems have been characterised by slow frame rates, delicate instrumentation, and complex setups, blocking mass adoption of the technique.

Capturing meaningful data was often only possible in the lab under controlled parameters. Hyperspectral imaging systems needed lengthy setups under fixed lighting conditions and long capture times, leading to low frame rates. With far more data to output than an RGB image, hyperspectral imaging requires a more sophisticated approach to capturing and recording light.

The most popular approach in the past was line-scanning hyperspectral imaging. This technique pans across the scene, gradually building up a high-spectral-resolution image line by line. This gradual approach makes real-time frame rates impossible, and it also depends on delicate optical elements.
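A back-of-envelope calculation shows why line scanning limits frame rate. The line count and per-line exposure time below are illustrative assumptions, not figures for any particular system:

```python
# Line-scan capture: the scene is recorded one spatial line at a time.
lines_per_frame = 640    # spatial lines in one full image (illustrative)
line_exposure_s = 0.005  # 5 ms per line (illustrative)

frame_time_s = lines_per_frame * line_exposure_s
fps = 1.0 / frame_time_s
print(f"{frame_time_s:.1f} s per frame, {fps:.2f} fps")  # 3.2 s per frame, 0.31 fps

# A snapshot system captures the whole scene in a single exposure, so its
# frame rate is bounded by one exposure plus readout rather than by the
# number of lines, which is what makes video rates achievable.
```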

A different method is needed to build a robust and fast hyperspectral imaging system for use in the field.

The Living Optics Camera: Video-Rate Hyperspectral Imaging in the Real World

Powered by snapshot technology that captures all the data it needs simultaneously, the Living Optics camera delivers video-rate (30Hz) hyperspectral imaging in a small, portable, and easy-to-use device.

The camera uses advanced sensors to capture spectral and spatial data at video frame rates.

The Living Optics camera and development kit are the perfect solution for proof-of-concept AI-integrated hyperspectral imaging systems.

The snapshot hyperspectral camera incorporates high-power embedded compute to enable real-time analysis and decision-making for real-world applications. The Living Optics development kit gives users the flexibility to implement their own analysis, and users can run AI models locally as part of the camera’s application layer.

The simple, portable, lightweight hardware can then be mounted on sensing platforms in the field, including mobile or robotic systems. Example use cases for different industries include:

  • Agriculture: Artificial intelligence in robotics with access to hyperspectral data to monitor crop health and identify stress factors in real-time.
  • Environmental Monitoring: Sensing platforms to track pollutants and deliver actionable insights from AI-powered spectral analysis.
  • Food Inspection: Guarantee the integrity of food packaging or automatically identify the quality of food products using hyperspectral imaging and advanced computer vision models trained for these specific use cases.

The Living Optics team has already published AI-integrated hyperspectral imaging results using our camera link to paper. This includes integrating AI analysis with hyperspectral imaging datasets taken in the field for detecting, segmenting, counting, and sizing fruit in orchards to estimate fruit ripeness and potential crop yields.

Photo kindly provided by The National Robotarium, the UK's Centre for Robotics and Artificial Intelligence, at Heriot-Watt University, Edinburgh.

Find Out More About the Living Optics Camera 

Want to learn more about the Living Optics Camera and how you could integrate hyperspectral data into your computer vision workflows? Fill out a form, and our technical sales team will get back to you with all the information you need.

We would love to hear from you.