I wondered if there is a way to do the following in WebODM:
1. Load the drone images and see how they fit together
2. [outside WebODM: classify the images]
3. Put the classified images together instead of the originals to generate the orthomosaic (here, pixel values should not be averaged but selected via nearest neighbor / most frequent value, since class labels are categorical)
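To make the last step concrete: averaging class IDs is meaningless (the average of "tree" and "road" could land on "water"), so overlapping pixels would need a majority vote instead. A rough numpy sketch of what I mean — `mode_blend` and the `-1` no-data convention are my own invention, not anything WebODM provides:

```python
import numpy as np

def mode_blend(stack):
    """Per-pixel most frequent class across overlapping classified images.

    stack: (n_images, h, w) integer array of class IDs; -1 marks pixels
    an image does not cover (hypothetical no-data convention).
    """
    h, w = stack.shape[1:]
    out = np.full((h, w), -1, dtype=stack.dtype)
    for i in range(h):
        for j in range(w):
            vals = stack[:, i, j]
            vals = vals[vals >= 0]          # drop no-data contributions
            if vals.size:
                # bincount + argmax = most frequent class ID
                # (ties resolve to the lowest ID)
                out[i, j] = np.bincount(vals).argmax()
    return out

# Two overlapping "classified images" of the same 2x2 patch:
a = np.array([[1, 1], [3, -1]])
b = np.array([[1, 3], [3, 3]])
print(mode_blend(np.stack([a, b])))   # -> [[1 1]
                                      #     [3 3]]
```

The per-pixel loop is only for clarity; a real implementation would vectorize it, but the point is that the blend is a vote over integer labels, never an average.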
This would be a really powerful workflow for semantic segmentation with fully convolutional neural networks, since the original images contain more detail than a mosaic and show objects from various angles.
The semantic segmentation will be done in Python. However, I don't have the classified images ready yet, and I haven't tried creating the orthomosaic in QGIS either. In Agisoft Metashape it might be possible to compute the which-pixel-ends-up-where mapping first and then swap the drone images for their segmented versions. I am not sure which interpolation methods are available, though; ideally, all interpolation would be nearest neighbor in this use case…
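I don't know Metashape's internals, but conceptually the nearest-neighbor step I have in mind is just the following (a pure-numpy sketch; `warp_nearest` and the coordinate array are hypothetical stand-ins for the mapping an SfM tool would produce):

```python
import numpy as np

def warp_nearest(src, src_coords):
    """Sample a classified raster using nearest-neighbor lookup only.

    src:        (h, w) integer array of class IDs.
    src_coords: (H, W, 2) float array giving, for every output pixel,
                the fractional (row, col) it maps from in `src`. In a
                real pipeline this mapping would come from the SfM tool;
                here it is just an illustrative placeholder.
    """
    rows = np.clip(np.rint(src_coords[..., 0]).astype(int), 0, src.shape[0] - 1)
    cols = np.clip(np.rint(src_coords[..., 1]).astype(int), 0, src.shape[1] - 1)
    # Plain integer indexing: class IDs are copied, never interpolated.
    return src[rows, cols]

classes = np.array([[1, 1],
                    [3, 2]])
# Output pixels land at slightly offset source positions:
coords = np.array([[[0.1, 0.2], [0.1, 0.9]],
                   [[0.9, 0.1], [1.2, 1.1]]])
print(warp_nearest(classes, coords))   # -> [[1 1]
                                       #     [3 2]]
```

Rounding the source coordinates and indexing directly means every output pixel carries an actual class ID from one input image, which is exactly why bilinear or cubic resampling must be avoided for label rasters.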