Largest Dataset Processed

Hi All,

I am working on processing a 92,000-image dataset using split-merge, with 72 cores and 1.5 TB of RAM across 6 hosts (although half the compute capacity is on one host alone). Needless to say, I am learning a lot.

As I am doing this, there are some peculiarities to the process: it’s predominantly not RTK data, it comes from 4 different cameras (which is super helpful for lens correction), and we are working toward one seamless product, both elevation model and orthophoto.
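
For anyone curious about the mechanics, here is a minimal sketch of how a split-merge job like this can be submitted to a NodeODM/ClusterODM endpoint through PyODM; the hostname, image path, and option values are placeholders for illustration, not the actual settings from this run:

```python
from glob import glob

from pyodm import Node

# Point at the processing endpoint. With ClusterODM proxying several NodeODM
# instances, the submodels produced by split-merge can be distributed across
# hosts. "clusterodm.local" and port 3000 are placeholder values.
node = Node("clusterodm.local", 3000)

# Collect the imagery (placeholder path).
images = glob("/data/project/images/*.JPG")

# A single task; split/split-overlap turn on the split-merge pipeline, and
# dsm + orthophoto outputs give the elevation model and mosaic. The numbers
# here are illustrative, not the values used for the 92k-image run.
task = node.create_task(
    images,
    {
        "split": 1000,          # target number of images per submodel
        "split-overlap": 150,   # overlap between submodels, in meters
        "dsm": True,
        "orthophoto-resolution": 5,
    },
    name="large-area-split-merge",
)

# Block until processing finishes, then pull down the results.
task.wait_for_completion()
task.download_assets("./results")
```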

I have been searching for equivalent projects. DroneDeploy proudly announced a while back the capacity to process 10k-image datasets. Searching the Pix4D forums, I find some references to 30k-image datasets (a threshold I have crossed on a previous project), but I am not seeing anything larger than 30k.

So, my question to all of you: have you seen anything larger than 92k images (roughly 1,800 gigapixels of imagery), regardless of software (Pix4D, Agisoft, etc.), used to produce a seamless product?

And if not: what’s the largest you have encountered?

BTW: yes, I will be sharing what I learn from this project in the docs. :smiley:

5 Likes

I think DroneDeploy did a story on processing 70k images, but I don’t think the result was merged into a single product. They mention that “The next day the data was delivered in 75 maps”. https://medium.com/aerial-acuity/supporting-the-relief-efforts-of-californias-most-destructive-wildfire-from-above-7c50c1128108

So it seems like they just ran ~1,000 images per batch (70k images across 75 maps works out to roughly 930 per map), maybe due to timing constraints, given the emergency situation.
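
If that reading is right, the batch-per-map approach is straightforward to reproduce with any engine: chunk the image list and submit each chunk as its own task, giving one map per chunk instead of a single merged product. Here is a hedged sketch of that idea using PyODM against a placeholder NodeODM endpoint (not DroneDeploy’s actual pipeline; the path, hostname, and batch size are assumptions for illustration):

```python
from glob import glob

from pyodm import Node

# Placeholder endpoint and image path, for illustration only.
node = Node("nodeodm.local", 3000)
images = sorted(glob("/data/fire-response/images/*.JPG"))

BATCH_SIZE = 1000  # roughly 70k images / 75 maps

# Submit one independent task per ~1,000-image chunk. Each chunk yields its
# own orthophoto, so the output is many separate maps rather than one
# seamless, merged product.
tasks = []
for i in range(0, len(images), BATCH_SIZE):
    batch = images[i:i + BATCH_SIZE]
    tasks.append(
        node.create_task(
            batch,
            {"fast-orthophoto": True},
            name=f"batch-{i // BATCH_SIZE:03d}",
        )
    )

# Collect results as each batch finishes.
for idx, task in enumerate(tasks):
    task.wait_for_completion()
    task.download_assets(f"./maps/batch-{idx:03d}")
```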

2 Likes

Well, it certainly would scale horizontally very nicely that way. Also, I think we were the first to use a split-merge-type approach: even Agisoft and Pix4D came out with split-merge solutions after us.

But this is helpful, and pretty similar to our approach aside from the merging portion. Classic crisis mapping, too: 75 maps was good enough and faster to produce, so just get it done.

2 Likes

How in the world did you get 1.5 TB, even across six hosts?

When you buy instead of rent, you can do a lot more…

2 Likes

No, I’m wondering how you physically address that much memory, even across 6 hosts. Must be a new motherboard I’ve not heard of.

1.5 TB / 6 hosts = 256 GB per host (treating 1.5 TB as 1,536 GB).

I think that’s well within normal server specs.

2 Likes

I did not switch mental gears into server territory. Silly me.
Back to the drawing board…

2 Likes

Yeah, getting that in a laptop or desktop would be unusual, but server motherboards commonly support that much RAM and more.

When you get to those system levels, desktop systems seem puny. Three of my workstations (my NodeODM nodes) max out at 1 TB of RAM (that would take a lot of money!), and my WebODM (soon to be ClusterODM) workstation maxes out at 2 TB. These are older, used systems that are outdated by today’s standards.

2 Likes

The differences from the desktop world are quite stark these days.
