Terrain 3D

This service takes as input video or photos from a drone flight (live or recorded), together with telemetry metadata in KLV or MAVLink format, and generates a 3D overflight scenario of the same route across different time periods.

Starting from an azimuthal video of the area under analysis, each frame is synchronized with the latitude and longitude measured by the UAV's GPS. Once the frames are labelled in this way, a georeferenced orthophoto of the scene is generated. These georeferenced images are then used for automatic terrain comparisons, revealing changes in structure, vegetation and other alteration phenomena.
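The frame/GPS synchronization step can be sketched as a simple time-based interpolation: each frame's capture time is matched against the UAV's GPS track and a position is interpolated between the two nearest fixes. This is a minimal illustration, not the service's actual implementation; the `gps_track` samples and the `annotate_frame` helper are hypothetical.

```python
from bisect import bisect_left

# Hypothetical GPS track from the UAV log: (timestamp_s, lat, lon, alt).
gps_track = [
    (0.0, 41.900, 12.500, 120.0),
    (1.0, 41.901, 12.501, 121.0),
    (2.0, 41.902, 12.502, 122.0),
]

def annotate_frame(t, track):
    """Linearly interpolate (lat, lon, alt) for a frame captured at time t."""
    times = [p[0] for p in track]
    i = bisect_left(times, t)
    if i == 0:                     # before the first fix: clamp
        return track[0][1:]
    if i >= len(track):            # after the last fix: clamp
        return track[-1][1:]
    t0, *p0 = track[i - 1]
    t1, *p1 = track[i]
    w = (t - t0) / (t1 - t0)       # interpolation weight between the two fixes
    return tuple(a + w * (b - a) for a, b in zip(p0, p1))

lat, lon, alt = annotate_frame(0.5, gps_track)
```

A frame captured halfway between two fixes receives the midpoint position, which is usually adequate at typical UAV telemetry rates.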


Terrain 3D components
  1. Video Splitter: Separates the video into frames at a specified number of frames per second.
  2. Metadata Decoding: Decodes the UAV's telemetry metadata channel, in MAVLink or KLV (key-length-value) format.
  3. Geo Annotation: Labels each frame with the latitude, longitude and altitude of the camera that captured it.
  4. Georeferenced Orthorectified Imagery: Stitches the frames of the overflown scenario together, creating a georeferenced orthophoto.
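The first three components above can be outlined as a small pipeline: split the video into timestamped frames, decode the telemetry into an ordered track, then tag each frame with the nearest telemetry sample. This is a hedged sketch only; the `Frame` class and all function names are hypothetical, and the real service decodes binary KLV/MAVLink streams rather than the pre-parsed tuples used here.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    index: int
    timestamp_s: float
    lat: float = None
    lon: float = None
    alt: float = None

def split_video(duration_s, fps):
    """Step 1 (sketch): emit one Frame per sample at the requested frame rate."""
    return [Frame(i, i / fps) for i in range(int(duration_s * fps))]

def decode_metadata(raw_records):
    """Step 2 (sketch): stand-in for a KLV/MAVLink decoder; here the records
    are already (timestamp_s, lat, lon, alt) tuples, sorted by time."""
    return sorted(raw_records)

def geo_annotate(frames, track):
    """Step 3 (sketch): tag each frame with its nearest telemetry sample."""
    for f in frames:
        t, lat, lon, alt = min(track, key=lambda p: abs(p[0] - f.timestamp_s))
        f.lat, f.lon, f.alt = lat, lon, alt
    return frames

track = decode_metadata([(1.0, 41.901, 12.501, 121.0), (0.0, 41.900, 12.500, 120.0)])
frames = geo_annotate(split_video(2.0, 1), track)
```

Step 4 (orthophoto stitching) is omitted here, as it depends on photogrammetry tooling outside the scope of a short sketch.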

Execution and deployment

  • Amazon Web Services
  • Microsoft Azure
  • Google Cloud

Ground Control Station:
  • Atos Bull Sequana