Hi there, I appreciate the work on ODM and have had pretty good results processing small/medium datasets of different sites with a few dozen to a few hundred photos. This still performs well on my single desktop node.
But I keep asking myself what would be necessary to support large (city-wide) datasets with thousands of photos.
My use case would be the city of Schwerin / Rostock, where my flight plan would result in >2000 images covering an area of ~200 km².
With the current version, I guess it would not be possible to process this amount of data, as it always processes the whole area and runs every step. That is a lot of complexity and costs a lot of CPU and RAM.
I’m not sure my idea is the right one, but I would suggest some changes to support processing such large datasets:
- split the image preprocessing steps (metadata analysis, finding tie points, local alignment, global alignment, …) from generating products like orthophotos (DOPs), 3D meshes, …
- the user can add a large dataset, add GCPs, and trigger only the preprocessing
- the user can later request individual products for the whole area or for a smaller boundary
- these jobs can be split up across different nodes while sharing the preprocessing data to avoid redundant calculations (see the sketch below)
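To make the idea more concrete, here is a minimal Python sketch of how such a two-phase workflow could look from the user's side. All names here (`Project`, `preprocess`, `generate_product`, the `boundary` parameter) are hypothetical and not part of any existing ODM/NodeODM API; it only illustrates the split between preprocessing a dataset once and deriving products on demand.

```python
# Hypothetical API sketch -- none of these classes/functions exist in ODM today.
from dataclasses import dataclass, field


@dataclass
class Project:
    """A large dataset that is preprocessed once and queried many times."""
    images: list[str]
    gcp_file: str | None = None
    _preprocessed: bool = field(default=False, init=False)

    def preprocess(self, nodes: int = 1) -> None:
        # Phase 1: metadata analysis, tie point matching, local/global
        # alignment. Could be distributed over `nodes` workers, with the
        # results (camera poses, sparse cloud) stored alongside the project.
        self._preprocessed = True

    def generate_product(self, kind: str, boundary: str | None = None) -> str:
        # Phase 2: derive a single product (e.g. "orthophoto", "dsm", "mesh")
        # for the whole area or only for the given boundary polygon,
        # reusing the stored preprocessing results instead of recomputing them.
        assert self._preprocessed, "run preprocess() first"
        return f"/results/{kind}.tif"


# Usage: preprocess the whole city once, then request products on demand.
city = Project(images=["img_0001.jpg", "img_0002.jpg"], gcp_file="gcp_list.txt")
city.preprocess(nodes=4)
ortho = city.generate_product("orthophoto", boundary="district_mitte.geojson")
```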
But I’m not very familiar with the internals and steps within ODM, so I don’t know whether this is a suitable solution or whether I have missed some hard constraints or bottlenecks.
In the end, this would also allow municipal GIS teams to create up-to-date aerial imagery using e.g. BVLOS drones.