Real-Time Hyperspectral Imaging for Autonomous Navigation and Ground Reconnaissance


Militaries around the world are developing Autonomous Unmanned Ground Vehicles (AUGVs) for a variety of applications. However, deploying truly autonomous ground vehicles in real-world scenarios requires significant research to enhance both hardware (sensing technologies) and software (AI models) capabilities.

Consider the challenges that self-driving cars face in implementing safe autonomous navigation on modern roadways. Then consider overcoming these challenges when the vehicle is traversing highly variable terrain in uncontrolled environments, identifying hazards, and responding to the actions of enemy combatants.

AUGVs require autonomous navigation and ground reconnaissance technology that can understand the environment in real-time and immediately determine the best course of action for a given scenario.

An active area of research aimed at delivering these capabilities is the trialing of different sensing techniques for terrain identification and analysis. While several methods are under investigation, including machine vision, 3D mapping, and deep learning, a potential solution that offers advanced imaging datasets in real-time is snapshot Hyperspectral Imaging (HSI).

With HSI, modern models can be trained to analyse an environment based on detailed spectral information. This can significantly improve the accuracy and reliability of their outputs, and with real-time snapshot hyperspectral imaging, that gain in performance does not come at the cost of speed.

Before going into HSI in more detail and the early field test of the technology, let’s first discuss the development of AUGVs and the challenges they present.

Developing Autonomous Unmanned Ground Vehicles (AUGVs)

AUGVs are ground-based unmanned vehicles that operate autonomously. They are being developed and deployed for a range of civilian and military use cases where it is impractical, dangerous, or too expensive to rely on human operators.

Within the military, this includes various applications that replace manned vehicles and remove soldiers from dangerous situations. Examples include:

  • Handling explosives and disabling bombs
  • Surveillance, reconnaissance, and target identification
  • Drawing first fire from enemy combatants

One of the biggest challenges in the deployment of autonomous vehicles is the ability to identify and characterise terrain in real-time while traveling across uncontrolled environments. Any solution needs to deliver fast and reliable data, then perform accurate analysis for real-time decision-making.

The Challenges of Autonomous Navigation for Ground Vehicles

Upgrading from an unmanned vehicle to an autonomous unmanned vehicle requires replacing a remote human controller with an autonomous control system. This system utilises sensors to collect and input environmental data into an AI model. The model then interprets the surrounding landscape to determine the best actions to take for moving across uncontrolled environments.

Most existing autonomous vehicles operate within controlled environments where they travel over fixed terrain. For example, current self-driving cars like Waymo navigate within fixed geofences that contain known street layouts and traffic control devices. While there are variables and hazards they must react to, they do this within defined roadways.

In contrast, AUGVs must navigate uncontrolled environments with highly variable terrain. This requires a much broader analysis of the environment, based on a suite of real-time sensors and advanced AI models. All this while also considering the practical requirements of being mounted on an AUGV.

One of the most active areas of research for AUGVs is the testing of different sensing technologies to solve two key challenges:

  • Terrain classification
  • Hazard identification, for example locating landmines

Any solution must be able to return real-time, accurate, and reliable analysis of the environment for optimal decision-making in the field.

Technologies under investigation to solve these challenges include machine vision, 3D mapping, and deep learning for terrain classification, as well as conventional machine vision, Ground Penetrating Radar (GPR), and metal detection for landmine identification.

An exciting imaging technique currently under investigation, and one showing significant potential, is real-time Hyperspectral Imaging (HSI). With hyperspectral analysis, autonomous control systems can gain a deeper understanding of their surroundings, thereby improving navigation capabilities in complex environments.

Overcoming the Challenges of AUGV Deployment with Hyperspectral Imaging

Hyperspectral imaging records light using tens of spectral bands instead of just three (red, green, and blue). Therefore, hyperspectral imagers can identify spectral signatures and features across an image, revealing a wealth of information on material composition and environmental factors that was previously invisible. Feeding this information into AI models enables the development of a clearer, high-resolution (spectral and spatial) understanding of a vehicle’s surroundings.
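As an illustration, a hyperspectral frame can be thought of as a data cube with two spatial axes and one spectral axis, where each pixel holds a full spectrum rather than three broad colour values. The band count and resolution below are hypothetical, chosen only to contrast with a three-band RGB image, and do not reflect any specific camera's specifications:

```python
import numpy as np

# Hypothetical sensor dimensions (illustrative only).
height, width = 480, 640
rgb_bands, hsi_bands = 3, 96  # standard RGB camera vs. a tens-of-bands imager

rgb_frame = np.zeros((height, width, rgb_bands), dtype=np.float32)
hsi_cube = np.zeros((height, width, hsi_bands), dtype=np.float32)

# Each pixel of the hyperspectral cube is a full spectrum, so material
# signatures can be compared directly against reference spectra.
pixel_spectrum = hsi_cube[240, 320, :]
print(pixel_spectrum.shape)  # (96,)
```

This is why HSI pairs naturally with AI models: each pixel becomes a feature vector of spectral samples, not just a colour.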

Traditional hyperspectral cameras based on scanning techniques (push broom, whisk broom, etc.) are limited due to their cost, fragility, and low frame rates. However, new snapshot devices represent a potential solution to AUGV deployment, offering real-time analysis in smaller, more robust, and portable hardware.

While traditional push-broom hyperspectral cameras are already used across defence for mapping applications via satellites and aerial reconnaissance, snapshot cameras bring hyperspectral imaging down to earth for a range of national security applications, including ground-based vehicles and tactical units.

Snapshot hyperspectral imagers capture both spatial and spectral information in a single frame, or snapshot, to enable video-rate readout. This is crucial to enable real-time, autonomous decision-making.
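To see why video-rate readout is demanding, consider a rough back-of-envelope data-rate estimate. The cube dimensions and bit depth below are hypothetical assumptions for illustration, not the specifications of the Living Optics Camera:

```python
# Hypothetical snapshot cube: 480 x 640 pixels, 96 bands, 12-bit samples.
height, width, bands = 480, 640, 96
bits_per_sample = 12
fps = 30  # video-rate capture

bits_per_frame = height * width * bands * bits_per_sample
megabytes_per_second = bits_per_frame * fps / 8 / 1e6
print(f"{megabytes_per_second:.0f} MB/s")  # ~1327 MB/s
```

Even at these modest assumed dimensions, the raw stream exceeds a gigabyte per second, which is why real-time, on-platform processing is central to deploying HSI on an AUGV.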

Developing an AUGV that relies on hyperspectral data requires fast, accurate, and reliable snapshot technology housed in a robust device. The Living Optics Camera, based on next-generation snapshot hyperspectral technology, provides just that, and the device is already being assessed in field tests for terrain classification. Below, we present preliminary field tests using the Living Optics Camera to demonstrate its capabilities for classifying different terrain types in near real time.

Terrain Classification Tests with the Living Optics Camera

The Living Optics team used few-shot supervised learning to develop a classification model that can differentiate between three types of terrain (track, mud, and vegetation) based on spectral radiance. Videos were captured with a manually operated mobile platform on a sunny winter’s day in an open field in the United Kingdom.

The experimental setup was designed to recreate the movement of a ground vehicle while the camera captured video at 30 frames per second. The data was split 70/30 into training and testing sets. Various models were trialled, with the best results (an F1-score of 89.5%) achieved using a multi-layer perceptron model.
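A minimal sketch of this kind of per-pixel spectral classifier, using synthetic spectra in place of the field data and scikit-learn's MLPClassifier. The band count, class spectra, and network size here are illustrative assumptions, not the parameters used in the actual Living Optics tests:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
n_bands, n_per_class = 96, 300
classes = ["track", "mud", "vegetation"]

# Synthetic stand-in for labelled pixel spectra: each terrain class gets a
# distinct mean radiance level with noise added on top.
X = np.vstack([
    rng.normal(loc=mean, scale=0.1, size=(n_per_class, n_bands))
    for mean in (0.2, 0.5, 0.8)
])
y = np.repeat(np.arange(len(classes)), n_per_class)

# 70/30 train/test split, mirroring the field-test protocol.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.7, random_state=0, stratify=y
)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
score = f1_score(y_test, clf.predict(X_test), average="macro")
print(f"macro F1: {score:.3f}")
```

Because the synthetic classes are cleanly separated, this toy example scores near-perfectly; the real field data is far harder, which is what makes the reported 89.5% F1-score under dynamic conditions meaningful.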


The results obtained using the Living Optics Camera demonstrate that it can reliably distinguish between these three terrain classes under dynamic conditions, including shifts in lighting and moisture levels that can impact performance. Further tests are planned to assess performance in variable weather conditions and across different terrain transitions.

Additional testing has been performed on identifying landmines at range using the Living Optics Camera. However, achieving a field-deployable landmine detection algorithm will require additional datasets captured under varying operational conditions.

The Living Optics Hyperspectral Imaging Camera for Defence Applications

For an in-depth review of the Living Optics Camera terrain classification and landmine detection tests, read our paper that was presented at SPIE Defense and Commercial Sensing in April 2025.

Alternatively, you can fill out our contact form, and a member of our team will get back to you to discuss your hyperspectral defence use case and the value the Living Optics Camera can bring to it.

FAQs 

What is hyperspectral imaging, and how is it different from standard cameras?

Hyperspectral imaging captures a large number of narrow wavelength bands to reveal spectral information of objects and materials throughout the image. This allows it to identify spectral signatures invisible to the naked eye or standard cameras that rely on three broad wavelength bands: red, green, and blue.

Why is hyperspectral imaging valuable for Autonomous Unmanned Ground Vehicles (AUGVs)?

Hyperspectral imaging provides detailed spectral data to train more advanced AI models that could improve AUGV operations. Hyperspectral data enables precise material identification, terrain classification, and the detection of hidden objects or hazards that would be difficult or impossible to achieve with traditional computer vision systems. Spectral detection results do not depend on the shape or texture of objects, only on their surface material signatures.

What are the main challenges in deploying hyperspectral imaging on AUGVs?

Hyperspectral imaging cameras must be able to operate in variable conditions and consistently provide accurate analysis. Additionally, hyperspectral cameras must handle the large volumes of data generated, analysing images in real time for immediate autonomous decision-making.
