Upload any video or photo to the cloud or to the edge, including metadata such as position, and add any number of annotations or tags to create or enrich an incident.

The component synchronizes metadata of any kind with the frames that compose a piece of footage. This metadata comes from two sources: the UAV's telemetry system, i.e. the measurements produced by the various flight-support sensors, and the image-analysis systems, e.g. bounding boxes around objects recognized in the image or metadata associated with an identified event. Data can also be entered manually through a visual interface.
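As a minimal sketch, the per-frame result of merging the three sources described above (telemetry, image-analysis output, manual annotations) could be modelled as follows. All type and field names here are illustrative assumptions, not the component's actual API.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Telemetry:
    timestamp: float   # seconds since start of recording
    lat: float         # degrees
    lon: float         # degrees
    alt: float         # metres above takeoff
    roll: float        # rad
    pitch: float       # rad
    yaw: float         # rad

@dataclass
class DetectionBox:
    label: str                        # e.g. "vehicle"
    x: int                            # pixel bounding box
    y: int
    w: int
    h: int
    score: float                      # detector confidence

@dataclass
class FrameRecord:
    frame_index: int
    timestamp: float
    telemetry: Optional[Telemetry] = None
    detections: List[DetectionBox] = field(default_factory=list)
    annotations: List[str] = field(default_factory=list)  # manual tags

# One synchronized frame combining all three metadata sources
record = FrameRecord(
    frame_index=120,
    timestamp=4.0,
    telemetry=Telemetry(4.0, 40.4168, -3.7038, 55.0, 0.01, 0.02, 1.57),
    detections=[DetectionBox("vehicle", 310, 180, 64, 40, 0.91)],
    annotations=["incident-start"],
)
```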



Synchronization can be carried out using various techniques, depending on the situation:

  1. Using a common time base shared by the video channel and the metadata tagging.
  2. Analyzing the motion flow in the video and correlating this signal with the motion described by telemetry variables such as angular velocities or geographic position.
  3. Generating events that are included as auxiliary signals in the metadata channel and that mark the beginning and end of the video recording.
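The second technique above can be sketched as a cross-correlation: a per-frame motion signal extracted from the video (e.g. mean optical-flow magnitude) is slid against the angular-rate magnitude reported by telemetry, and the lag that maximizes their correlation gives the time offset. The signals below are synthetic and both are assumed to be resampled to a common rate; this is an illustration of the alignment step, not the component's implementation.

```python
def cross_correlation_offset(video_motion, telemetry_rate, max_lag):
    """Return the lag (in samples) that best aligns the two signals."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        # Dot product of the overlapping portion at this lag
        score = 0.0
        for i, v in enumerate(video_motion):
            j = i + lag
            if 0 <= j < len(telemetry_rate):
                score += v * telemetry_rate[j]
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# Synthetic example: the telemetry signal is the video motion signal
# delayed by exactly 3 samples, so the estimated offset should be 3.
video = [0, 0, 1, 4, 9, 4, 1, 0, 0, 0, 0, 0]
telem = [0, 0, 0, 0, 0, 1, 4, 9, 4, 1, 0, 0]
offset = cross_correlation_offset(video, telem, max_lag=5)  # → 3
```

In practice the video-side signal would come from an optical-flow estimator and the telemetry-side signal from the gyroscope channel, but the offset-search logic is the same.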


Tagging components
  1. Video Editor: component for manual and automatic editing of the frames that compose the video, allowing any kind of information to be overlaid on the image.
  2. Metadata Decoding: component that decodes the UAV's telemetry metadata channel, in MAVLink or KLV (Key-Length-Value) format.
  3. Fusion Module: component that synchronizes metadata with frames.
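A minimal sketch of the Metadata Decoding and Fusion steps is given below. Real airborne KLV (e.g. MISB ST 0601) uses 16-byte universal keys and BER-encoded lengths; the simplified 1-byte key and 1-byte length used here are an assumption made to keep the parse loop readable, and the tag numbers and the nearest-timestamp matching rule are illustrative, not the component's actual behaviour.

```python
import struct

def parse_klv(buf):
    """Yield (key, value) pairs from a simplified KLV byte stream
    (1-byte key, 1-byte length, raw value bytes)."""
    i = 0
    while i < len(buf):
        key = buf[i]
        length = buf[i + 1]
        yield key, buf[i + 2:i + 2 + length]
        i += 2 + length

def fuse(frames, telemetry):
    """Pair each (timestamp, frame) with the telemetry sample whose
    timestamp is closest — the core matching rule of a fusion step."""
    out = []
    for ts, frame in frames:
        nearest = min(telemetry, key=lambda t: abs(t[0] - ts))
        out.append((frame, nearest[1]))
    return out

# Decoding: assume (hypothetically) tag 0x02 carries a big-endian
# float altitude in metres.
packet = bytes([0x02, 0x04]) + struct.pack(">f", 55.0)
decoded = dict(parse_klv(packet))
alt = struct.unpack(">f", decoded[0x02])[0]   # → 55.0

# Fusion: match two frame timestamps against two telemetry samples.
pairs = fuse([(0.03, "frame0"), (0.07, "frame1")],
             [(0.0, {"alt": 50}), (0.066, {"alt": 55})])
# pairs[0] → ("frame0", {"alt": 50}); pairs[1] → ("frame1", {"alt": 55})
```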

Execution and deployment

Cloud:
  • Amazon Web Services
  • Microsoft Azure
  • Google Cloud
Ground Control Station:
  • Atos Bull Sequana
On-Board Edge device:
  • NVIDIA Jetson Nano
  • Raspberry Pi 4 – 2 GB RAM
  • Raspberry Pi 3 B+