Split preprocessing and product generation? - Support large scale datasets

Hi there, I appreciate the ODM work and have gotten pretty good results processing small/medium datasets of different sites with a few dozen to a few hundred photos. That still performs pretty well on my single desktop node.

But I ask myself what would be necessary to support large (city-wide) datasets with thousands of photos?

My use case would be the city of Schwerin / Rostock, where my flight plan would result in >2000 images covering an area of ~200 km².

With the current version, I guess it would not be possible to process this amount of data, as it would always process the whole area and run all steps. That is pretty high complexity and costs a lot of CPU and RAM.

I'm not sure if my idea is the right one, but I suggest some changes to support processing such large datasets:

  • split the image preprocessing steps (metadata analysis, finding tie points, local alignment, global alignment, …) from generating products like DOPs, 3D meshes, …

  • the user can add a large dataset, add GCPs and trigger only the preprocessing

  • the user can later request individual products, either for the whole area or for a smaller boundary

  • these jobs can be split up across different nodes, while sharing the preprocessing data to avoid redundant calculations (rough sketch below)
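To make the idea more concrete, here is a rough Python sketch of the job model I have in mind. All class and function names are hypothetical, not existing ODM API; it only illustrates splitting the pipeline into a one-time preprocessing stage and later boundary-based product requests that can be distributed across nodes while reusing the shared preprocessing artifacts.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional


class Product(Enum):
    ORTHOPHOTO = "orthophoto"   # DOP
    POINT_CLOUD = "point_cloud"
    MESH_3D = "mesh_3d"


@dataclass
class PreprocessJob:
    """Stage 1: run once per dataset, independent of any product.
    Produces reusable artifacts (camera poses, tie points, sparse
    reconstruction) kept in shared storage for all worker nodes."""
    dataset_id: str
    image_paths: List[str]
    gcp_file: Optional[str] = None


@dataclass
class ProductJob:
    """Stage 2: requested later, possibly many times, per boundary.
    Consumes only the stage-1 artifacts plus the images inside the
    requested boundary, so no node re-runs alignment for the whole city."""
    dataset_id: str
    product: Product
    boundary_wkt: Optional[str] = None  # None = whole area
    resolution_cm: float = 5.0


@dataclass
class Scheduler:
    """Distributes product jobs across nodes; a real implementation
    would POST each job to the chosen node's API."""
    nodes: List[str] = field(default_factory=list)

    def submit(self, job: ProductJob) -> str:
        node = self.nodes[hash((job.dataset_id, job.product.value)) % len(self.nodes)]
        return f"queued {job.product.value} for {job.dataset_id} on {node}"


if __name__ == "__main__":
    sched = Scheduler(nodes=["node-a:3000", "node-b:3000"])
    # Preprocessing is triggered once for the whole ~200 km² dataset,
    # then individual products are requested for smaller boundaries:
    print(sched.submit(ProductJob("schwerin", Product.ORTHOPHOTO,
                                  boundary_wkt="POLYGON((...))")))
```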

But I'm not very familiar with the internals and steps within ODM, so I don't know whether this is a suitable solution, or whether I have missed some hard constraints or bottlenecks.

In the end, this would also allow municipal GIS teams to create up-to-date aerial imagery using e.g. BVLOS drones.

2 Likes

Really great stuff!

Reminds me a bit of the multi-stage workflow in Pix4D.

Having Areas of Interest / Processing Extents is also something we see people ask for fairly often.

Do you think your Municipality would be able to assist in developing and/or funding this work?

1 Like

That would be very low resolution, flying very high! I'm currently working through a 15 km² job, and I will need well over 20,000 images to cover it at 2 cm GSD.

In any case, 2000 images is no big deal; my computer is currently working on an ortho covering a small part of this job with 5000 images, although I have resized by 0.25, so close to 10 cm effective GSD.
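To put rough numbers on that: the sensor and overlap values below are just assumptions (a ~20 MP camera at 80% front / 70% side overlap), not the actual parameters of my job, but they show why 15 km² at 2 cm GSD lands well above 20,000 images, and why ~2000 images over 200 km² implies a much coarser GSD:

```python
# Back-of-the-envelope image count: each photo's ground footprint, reduced
# by front/side overlap, is the "new" area it contributes to the survey.
def images_needed(area_km2, gsd_m, img_w_px=5472, img_h_px=3648,
                  front_overlap=0.80, side_overlap=0.70):
    footprint_m2 = (img_w_px * gsd_m) * (img_h_px * gsd_m)
    effective_m2 = footprint_m2 * (1 - front_overlap) * (1 - side_overlap)
    return area_km2 * 1e6 / effective_m2

print(round(images_needed(15, 0.02)))   # ~31,000 images: 15 km² at 2 cm GSD
print(round(images_needed(200, 0.05)))  # ~67,000 images: 200 km² at 5 cm GSD
```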

2 Likes

https://docs.opendronemap.org/large/
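If it helps, the split-merge workflow described there can also be driven programmatically. Here is a minimal sketch using the pyodm client against a NodeODM (or ClusterODM) endpoint; the host/port, image folder and option values are only placeholders for illustration:

```python
from glob import glob
from pyodm import Node

# Point this at a NodeODM or ClusterODM instance (host/port are examples).
node = Node("localhost", 3000)

images = glob("schwerin_flight/*.JPG")

task = node.create_task(images, {
    "split": 400,                 # target number of images per submodel
    "split-overlap": 150,         # overlap between submodels, in meters
    "orthophoto-resolution": 5,   # cm/pixel
    "dsm": True,
})

task.wait_for_completion()
task.download_assets("./schwerin_results")
```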

4 Likes

Currently I'm just playing around with flight planning using QGIS and the apmplanner2 GCS to determine some key numbers. I'm aiming for a 5 cm GSD and enough overlap for 3D point cloud reconstruction. I'm trying different heights >100 m and <300 m, as BVLOS operation in Germany is difficult but seems to be open for official partners.
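As a sanity check for the flight height I use the standard GSD relation; the camera parameters below (13.2 mm sensor width, 8.8 mm focal length, 5472 px image width, i.e. a typical 1-inch 20 MP sensor) are just an assumed example:

```python
# Altitude needed for a target GSD:
#   GSD = (sensor_width * altitude) / (focal_length * image_width_px)
def altitude_for_gsd(gsd_m, sensor_width_m=0.0132, focal_length_m=0.0088,
                     image_width_px=5472):
    return gsd_m * focal_length_m * image_width_px / sensor_width_m

print(round(altitude_for_gsd(0.05)))  # ~182 m for 5 cm GSD with this camera
```

With such a camera, 5 cm GSD would mean flying at roughly 180 m, which sits inside that >100 m, <300 m range.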

@sajin_naib I will try to dive into this possibility but can't promise anything. Is there any information on how the project would need to receive the sponsorship? (It's difficult to handle unscheduled expenses, especially for international companies, etc.)

3 Likes

No promises needed!

That’s a great question, and I’m not sure exactly how it would work. I’d imagine we would be as flexible as possible.

If small/local government there is anything like it is here, I know what you mean about funding being difficult to secure and plan :grimacing:

3 Likes

Thanks for sharing these docs and also @gordons experience. But I can hardly believe that ODM can currently manage a city-wide point cloud? I learned there is a split-up, and yes, the (time-consuming) tasks work fairly locally. But is it really possible to extract a merged 3D point cloud that is tons of GB in size? :astonished:

2 Likes

90,000 images over 80 square km is the largest dataset I’ve seen processed. It scales just fine…

5 Likes