Fusing Video and Radar Tracks in Multi-Sensor Military Security

Insights Article

Effective fusion of multiple sensors, such as radar video and cameras, is the key to presenting a situational display in military security applications that informs the operator and supports critical decision-making.

However, while a track display with track fusion offers the benefit of simplifying the presentation based on an assessment of threat, it is only as effective as the rules used to process, filter and select the information. Complementing the processed display with the capability to show primary sensor data allows complex information to be presented simply where there is confidence in the data interpretation, whilst still permitting the operator to observe raw sensor data for manual interpretation, verification or simple reassurance.

A complex military security system uses multiple overlapping sensors to provide coverage of an area of interest. Sensors include radars and cameras, which may be co-located and combined in range to provide near, medium and long-range detection, or which may be at different locations to enlarge the geographical coverage.
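To make that geometry concrete, the short sketch below uses a hypothetical data model (not a Cambridge Pixel API) in which each sensor is described by a position and a nominal circular detection range, so the set of sensors currently covering a target position can be listed. The sensor names, ranges and the circular-coverage assumption are illustrative only.

import math
from dataclasses import dataclass

@dataclass
class Sensor:
    name: str
    lat: float          # sensor latitude, degrees
    lon: float          # sensor longitude, degrees
    max_range_m: float  # nominal detection range, metres

def ground_range_m(lat1, lon1, lat2, lon2):
    """Approximate ground range between two positions (equirectangular)."""
    r_earth = 6_371_000.0
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return r_earth * math.hypot(x, y)

def sensors_covering(target_lat, target_lon, sensors):
    """Return the sensors whose coverage includes the target position."""
    return [s for s in sensors
            if ground_range_m(s.lat, s.lon, target_lat, target_lon) <= s.max_range_m]

# Example: a co-located short/long-range radar pair plus a remote camera site.
site = [Sensor("short_range_radar", 52.20, 0.12, 5_000),
        Sensor("long_range_radar", 52.20, 0.12, 40_000),
        Sensor("remote_camera", 52.35, 0.05, 3_000)]
print([s.name for s in sensors_covering(52.22, 0.13, site)])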

A moving target that is acquired by one sensor may then be tracked, with continuity of identity, across multiple sensors, ensuring the operator is presented with a consistent view of the target moving through the coverage of those sensors. The challenge is to present sensor and processed data in a way that supports the operator's interpretation of the situation, with neither so much data that there is a risk of confusion, nor so little that critical information may be missing.
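One way to picture continuity of identity is track-to-track association: a sensor report is attached to the nearest existing fused (system) track within a distance gate, so the fused track's identity is preserved as reports arrive from different sensors. The sketch below is a minimal nearest-neighbour illustration under that assumption; the names, the flat x/y coordinates and the simple gate are not a specific fusion product's algorithm.

import itertools
import math

_next_id = itertools.count(1)

def associate(fused_tracks, sensor_report, gate_m=200.0):
    """Attach a sensor report to the nearest fused track within the gate,
    otherwise start a new fused track; the fused track ID is preserved."""
    best, best_d = None, gate_m
    for trk in fused_tracks:
        d = math.hypot(trk["x"] - sensor_report["x"], trk["y"] - sensor_report["y"])
        if d < best_d:
            best, best_d = trk, d
    if best is None:
        best = {"id": next(_next_id), "x": 0.0, "y": 0.0, "sources": set()}
        fused_tracks.append(best)
    best["x"], best["y"] = sensor_report["x"], sensor_report["y"]
    best["sources"].add(sensor_report["sensor"])
    return best["id"]

# The same target reported first by radar A, then by radar B, keeps one ID.
fused = []
print(associate(fused, {"sensor": "radar_A", "x": 1000.0, "y": 500.0}))
print(associate(fused, {"sensor": "radar_B", "x": 1050.0, "y": 520.0}))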

Automatic identification of targets from sensor data, and subsequent fusion of those tracks across overlapping sensors, is the key to presenting a high-level interpretation of the scene. Removing the raw sensor data and presenting processed, filtered and prioritised reports simplifies the display and ensures that the operator is presented with relevant information. What matters here is getting the processing right, so that there is not too much rejection of real targets of interest (probability of detection is maximised) and there is not too little rejection...
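The trade-off can be illustrated with a single confidence threshold on the reports that reach the display: lower the threshold and more genuine targets survive (higher probability of detection) but more clutter passes; raise it and clutter is rejected at the risk of dropping weak genuine targets. The scores and threshold values below are illustrative assumptions only, not measured figures.

def filter_reports(reports, threshold):
    """Pass only plot/track reports whose confidence meets the threshold."""
    return [r for r in reports if r["confidence"] >= threshold]

reports = [
    {"id": "boat",    "confidence": 0.85},  # genuine target, strong return
    {"id": "swimmer", "confidence": 0.55},  # genuine target, weak return
    {"id": "clutter", "confidence": 0.40},  # sea clutter / false alarm
]

for threshold in (0.3, 0.5, 0.7):
    kept = [r["id"] for r in filter_reports(reports, threshold)]
    print(f"threshold {threshold}: {kept}")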
