Create orthomosaic from classified images based on originals

Hi there,
I wondered if there is a way to do the following in WebODM:

  1. Load drone images and see how they fit together
  2. [outside WebODM: Classify the images]
  3. Stitch together the classified images instead of the originals to generate the orthomosaic (here, pixel values should not be averaged but selected via a nearest-neighbor / most-frequent-value approach)

This would be a really powerful tool when applying semantic segmentation using fully convolutional neural networks, since the original images contain more detailed information than a mosaic and they show objects from various angles.
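For step 2, here's a minimal sketch of what that classification pass might look like in Python. The `segment` function is a stand-in for a real fully convolutional network (here just a greenness threshold so the script runs end to end), and the directory names are assumptions:

```python
from pathlib import Path

import numpy as np
from PIL import Image

# Hypothetical project layout -- adjust to taste.
ORIGINALS = Path("project/images")
CLASSIFIED = Path("project/images_classified")


def segment(rgb: np.ndarray) -> np.ndarray:
    """Stand-in for a real FCN: return a per-pixel class index map.

    This fake version labels 'greenish' pixels as class 1 and everything
    else as class 0, just so the script runs end to end.
    """
    r, g, b = (rgb[..., i].astype(int) for i in range(3))
    return ((g > r + 10) & (g > b + 10)).astype(np.uint8)


CLASSIFIED.mkdir(parents=True, exist_ok=True)
for path in sorted(ORIGINALS.glob("*.JPG")):
    rgb = np.asarray(Image.open(path).convert("RGB"))
    classes = segment(rgb)
    # Keep the original filename (extension aside) so each classified
    # image can be matched back to its source later.
    Image.fromarray(classes, mode="L").save(CLASSIFIED / (path.stem + ".png"))
```

Keeping the original filenames makes it straightforward to swap the classified versions in for the originals in step 3.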

Welcome!

Really interesting workflow and feature request.

I’m not sure it could work, but it’d be interesting to try. We have a number of parameters to tweak the texturing process, but I don’t think we expose controls to that level.

Have you thought about making that mosaic post-classification, using QGIS or whatever tools you're doing the classification with?
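For instance, if the classified outputs end up georeferenced (e.g., per-image orthos exported from the classification step), a rasterio sketch like the following could mosaic them without averaging. `method="first"` selects one source value per pixel instead of blending, and the paths are assumptions:

```python
from pathlib import Path

import rasterio
from rasterio.enums import Resampling
from rasterio.merge import merge

# Hypothetical inputs: classified rasters that are already georeferenced.
sources = [rasterio.open(p) for p in sorted(Path("classified_tiles").glob("*.tif"))]

# method="first" takes the value of the first raster covering each pixel --
# a selection, not an average -- and nearest-neighbor resampling avoids
# inventing class values that don't exist in the inputs.
mosaic, transform = merge(sources, method="first", resampling=Resampling.nearest)

profile = sources[0].profile
profile.update(height=mosaic.shape[1], width=mosaic.shape[2],
               count=mosaic.shape[0], transform=transform)
with rasterio.open("classified_mosaic.tif", "w", **profile) as dst:
    dst.write(mosaic)

for src in sources:
    src.close()
```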

Semantic segmentation will be done in Python. However, I don’t have the classified images ready yet, and I haven’t tried creating the orthomosaic in QGIS. In Agisoft Metashape, it might be possible to generate the which-pixel-ends-up-where mapping first and then swap the drone images for their segmented versions. I am not sure about the available interpolation methods, though. Ideally, all interpolation in this use case would use a nearest-neighbor approach…
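In case someone wants to try that swap with ODM as-is, an untested sketch (all paths and stage names are assumptions): process the originals once, then overwrite the inputs with same-named segmented versions, carrying over the original EXIF so camera/GPS metadata survives, and re-run from the texturing stage:

```python
from pathlib import Path

from PIL import Image

# Hypothetical layout for an already-processed ODM dataset.
# NOTE: this overwrites the originals -- work on a copy of the project!
originals = Path("datasets/project/images")
segmented = Path("datasets/project/images_segmented")  # same filenames

for orig_path in sorted(originals.glob("*.JPG")):
    exif = Image.open(orig_path).info.get("exif")  # raw EXIF bytes, if any
    seg = Image.open(segmented / orig_path.name).convert("RGB")
    # Use well-separated class colors: JPEG compression is lossy, so
    # near-identical colors could bleed into each other.
    save_kwargs = {"format": "JPEG", "quality": 100}
    if exif:
        save_kwargs["exif"] = exif
    seg.save(orig_path, **save_kwargs)

# Then re-run only the later stages (stage/flag names depend on your
# ODM version), e.g.:
#   docker run --rm -v /home/user/datasets:/datasets opendronemap/odm \
#       --project-path /datasets project --rerun-from mvs_texturing
```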

Mvs-texturing (the library used in OpenDroneMap) generates an indexed image recording which source image each pixel comes from, and I’m pretty sure we expose settings to turn off blending (thus giving an NN-like approach).
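For reference, the blending-related knobs are exposed as texturing options. The exact flag names vary between ODM/WebODM versions, so treat the ones below as assumptions to verify with `--help`:

```python
import subprocess

# Assumed flag names -- verify with --help on your ODM version; in WebODM
# they surface as task options. Skipping seam leveling keeps pixel values
# from being blended across the boundaries between source images.
subprocess.run(
    [
        "docker", "run", "--rm",
        "-v", "/home/user/datasets:/datasets",
        "opendronemap/odm",
        "--project-path", "/datasets", "project",
        "--texturing-skip-global-seam-leveling",
        "--texturing-skip-local-seam-leveling",
    ],
    check=True,
)
```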
