Unable to complete large datasets

I have been trying to create an ortho map from 790 input drone images. The project keeps running out of memory. I used the following options on the last run: split: 280, fast-orthophoto: true, rerun-from: dataset
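For reference, I believe these map directly to ODM's command-line flags; a rough sketch of an equivalent run against the ODM Docker image (the dataset path and project name below are placeholders, not from my actual setup):

```bash
# Rough equivalent of the task options above, run directly against
# the opendronemap/odm image; /my/datasets and "project" are placeholders.
docker run -ti --rm -v /my/datasets:/datasets opendronemap/odm \
  --project-path /datasets project \
  --fast-orthophoto \
  --split 280 \
  --rerun-from dataset
```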

Docker resources are as follows: 6 of 8 CPUs, 20 GB of 32 GB memory, a 1.5 GB page file, and 104 GB of disk.
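In case it helps anyone reproduce this: these are the kinds of caps set in Docker Desktop's resource settings, or with flags when launching a container directly, along these lines (illustrative values only):

```bash
# Illustrative only: capping a container at roughly the resources above
# (6 CPUs, 20 GB RAM); the swap/page-file cap is usually set alongside
# these in Docker Desktop's settings.
docker run -ti --rm --cpus 6 --memory 20g \
  -v /my/datasets:/datasets opendronemap/odm \
  --project-path /datasets project
```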

The program ran for 7 hours 48 minutes. Needless to say, that is very disappointing and unusable for anything requiring a timely result.

The drone images are RGB images of crop cover.

WebODM has been successful with default settings and 350 images.


What resolution are you requesting?

Related, you might want to help us try to fund better feature filtering:
https://fund.webodm.org/#/fund/1


Thanks for the speedy response.

I’m using the default settings. I’m not sure which parameter controls the resolution you’re referring to.

I ran again last night with only fast-orthophoto and split=200 instead of split=280. It completed successfully in 9 hours 42 minutes. The output looks good.

I only use the ortho output, so I guess that solves this problem for now.

Thanks


Glad you were able to find settings that work comfortably with your rig.

The split value is a little confusing, and your successful run with a smaller split reveals the likely issue: you ran with split=280, assuming that since 350 images had processed successfully, 280 was a safe number. The tricky part is that, depending on the overlap between submodels, a split of 280 can yield submodels much larger than 280 images: after the images are grouped into clusters, any neighboring image within the overlap radius is added to the cluster as well, which can easily push a nominal 280-image submodel past the 350 images that worked before.
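If memory stays tight, the knob that governs that overlap is ODM's --split-overlap (a radius in meters, default 150): shrinking it shrinks the effective submodel size, at the cost of weaker alignment between submodels at merge time. A sketch, again with placeholder paths:

```bash
# Smaller overlap radius -> fewer borrowed neighbor images per submodel,
# hence lower peak memory; too small and submodels may align poorly at merge.
docker run -ti --rm -v /my/datasets:/datasets opendronemap/odm \
  --project-path /datasets project \
  --fast-orthophoto \
  --split 200 \
  --split-overlap 75
```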

I am glad you got this working, and I will add this to the troubleshooting recommendations for split-merge.


Thank you for replying

Later
