Help a newbie with how to process 3000-5000 images?

Hi Guys,
I am using WebODM Lightning with great results to process image sets of up to 1,500 images. It's been working great!
My question is that I now have an image dataset from a P4P v2 with 3,200 images, and even the upgrade to Business is limited to 3,000 images. So how can I process this with WebODM Lightning?
In the past I used DroneDeploy in a previous business to process datasets of up to 8,000 images from the P4P v2, but it's just miles out of my price range while I am starting a very small business.

Is there any way at all to get a Lightning monthly account that could do up to 5,000 images?
I'm happy to pay/support the project a bit more for the months where I have the bigger datasets, and I can build that into the client costs so long as it's reasonable.

Most of my datasets are less than 3,000 images, but maybe 4-6 times a year they will be between 3,000 and 8,000 images. I would really like to stay with Lightning processing if I can.

I am hesitant to try local processing, as my laptop is only an i7 with 32 GB RAM and a GeForce card, and I am not sure it will cope with a local WebODM install and 5,000 or more images.

I am totally happy to buy/support the local WebODM setup if it might work on my hardware, but I don't really want to tie up my laptop for long periods if I can avoid it.

Are there any people running services with ODM or DroneDeploy who could process the bigger datasets for me, on the odd occasion that I do get them?

Thanks
Brian

Welcome!

We do not have any Service Tiers above Business for Lightning, so currently 3,000-image Tasks are our maximum.

32 GB RAM will be insufficient for 3,000-8,000 image Tasks; figure on 128 GB at a bare minimum (which is what we have for our Lightning nodes).

Some folks self-host WebODM on things like Azure, Amazon Web Services, Google Cloud Platform, DigitalOcean, Hetzner, etc. to be able to size their VMs as needed.
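
If you go the self-hosting route, you also don't have to drive big jobs through the web UI by hand. Here is a minimal sketch using the pyodm client against a NodeODM instance on a cloud VM; the hostname, port, image path, and option values are placeholders, not recommendations:

```python
# Minimal sketch: push a large dataset to a self-hosted NodeODM/WebODM
# processing node on a cloud VM and pull the results back down.
# Hostname, port, paths, and options below are placeholders.
from glob import glob

from pyodm import Node

node = Node("my-cloud-vm.example.com", 3000)   # hypothetical VM address/port
images = glob("/path/to/dataset/*.JPG")        # the 3,000-8,000 photos

task = node.create_task(images, {
    "dsm": True,                   # example options only; tune per job
    "orthophoto-resolution": 4,
})
task.wait_for_completion()
task.download_assets("./odm_results")
```

That way the VM can be sized up for the occasional big dataset and shut down again afterwards.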

Thanks Saijin,
Do you think 32 GB RAM will handle 3,200 images?

What about if I went to 64 GB RAM?

The amount of memory required for a run depends on more variables than just the number of images. I've posted a few threads where I set up Grafana and Prometheus to try to identify the impact of some of the options available when starting a job, and it was harder than it sounds.
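
For anyone curious, the core of what I was collecting is just memory samples pulled from Prometheus' HTTP API for the window of a run. A minimal sketch (the server address, metric, and time window are placeholders):

```python
# Minimal sketch: pull available-memory samples for the duration of a
# processing run from a Prometheus server that is scraping node_exporter.
# The Prometheus URL and the start/end timestamps are placeholders.
import requests

PROM = "http://localhost:9090"
resp = requests.get(f"{PROM}/api/v1/query_range", params={
    "query": "node_memory_MemAvailable_bytes",
    "start": "2024-01-01T00:00:00Z",   # start of the run (placeholder)
    "end": "2024-01-01T12:00:00Z",     # end of the run (placeholder)
    "step": "60s",
})
samples = resp.json()["data"]["result"][0]["values"]

# The lowest available-memory sample marks the point of peak usage.
lowest_available = min(float(value) for _, value in samples)
print(f"Lowest available memory during the run: {lowest_available / 1e9:.1f} GB")
```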

My current theory is that, given a static configuration (i.e., ultra/ultra but no other changes), the factors with the largest impact are the number of images and something accounting for photo density/overlap/etc., but I have not had time to refine that theory. Fingers crossed that others with more spare time can come up with additional arguments and data to support them!
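
If anyone wants to take a crack at it, even a crude fit of peak RAM against image count from logged runs would be a useful start. A minimal sketch, where the numbers are made-up placeholders rather than real measurements:

```python
# Minimal sketch: fit peak RAM usage against image count from logged runs.
# The arrays below are made-up placeholder values, not real measurements.
import numpy as np

image_counts = np.array([500, 1000, 1500, 2500])   # hypothetical past runs
peak_ram_gb = np.array([16.0, 24.0, 34.0, 58.0])   # hypothetical peaks

slope, intercept = np.polyfit(image_counts, peak_ram_gb, 1)
print(f"Roughly {slope * 1000:.0f} GB per 1,000 images, plus {intercept:.0f} GB base")
print(f"Naive prediction for 5,000 images: {slope * 5000 + intercept:.0f} GB")
```

A second feature for overlap/density would be the obvious next thing to add to the fit.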
