Failed image processing, more of a general question

I saw the giant notice, but this is more of a general set of questions:

I have 589 photos taken with a Phantom 4 Pro V2. I resized them to 3072 because I expected memory issues, but the task still failed with the “node went offline or ran out of memory” error. I'm doing another run with resize set to 2048.

I have 16 cores and 76GB of RAM, and I would really like to not resize these images at all… I'm running this as an Ubuntu Server Linux box in a Hyper-V instance.

I know that splitting the dataset works, but as far as I can tell it only works with a cluster.
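For reference, dataset splitting maps to ODM's split-merge options, and as far as I understand it can also run on a single node by processing submodels sequentially. A rough sketch; the path and the numbers are illustrative assumptions, not from this thread:

```bash
# Split into submodels of roughly 200 images each, with 100 m of
# overlap between neighboring submodels (both values illustrative).
docker run -ti --rm -v /path/to/datasets:/datasets \
    opendronemap/odm --project-path /datasets project \
    --split 200 --split-overlap 100
```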

I feel like sub-600 photos should be processable without failures and without resizing, but maybe I'm mistaken. Any insight?

I could, crudely, use Photoshop's Photomerge to stitch all of these into one inaccurate map of the site with no memory issues: full-size images, processed on my main computer with 32GB of RAM, in about 5 hours.

Is there any secret to using WebODM that maybe makes the process take longer, but can handle 1k+ images without resizing?

Add swap, maybe 2x what you have for RAM.
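In case it helps anyone else, creating a swapfile on Ubuntu looks roughly like this. A minimal sketch: the 152G size assumes 2x the 76GB of RAM mentioned above, and /swapfile is just a conventional path; adjust both for your machine.

```bash
# Allocate a 152G swapfile (2x 76GB RAM). On filesystems where
# fallocate can't back swap (e.g. btrfs), use dd instead.
sudo fallocate -l 152G /swapfile
sudo chmod 600 /swapfile    # swap must only be readable by root
sudo mkswap /swapfile       # format the file as swap space
sudo swapon /swapfile       # enable it immediately

# Persist across reboots:
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```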

I just came across a thread mentioning swap, so thank you for affirming this.

For now I've added swap equal to 1x RAM, since it was previously only 8GB. That still might not be enough, but this is a small enough dataset to confirm either way.
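For anyone checking their own setup, swap status can be confirmed (and watched during a run) with the standard util-linux tools:

```bash
swapon --show        # list active swap files/devices and sizes
free -h              # current RAM and swap usage, human-readable
watch -n 5 free -h   # refresh usage every 5 s while the task runs
```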

I'll report back.

1x is quite likely going to help. Cheers.

What feature quality are you using? Ultra?

I'm using the built-in High Resolution preset (the underlying options are sketched below), and it's still processing the matching stage of these images.

EDIT: As of the time of this edit, it has finished the matching stage and is finally reconstructing. I think something in my VM settings is slowing things down a bit.
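For anyone mapping the WebODM preset names to the underlying options, the knobs discussed in this thread correspond, as far as I know, to these ODM flags (verify against the docs for your version; the path is an illustrative assumption):

```bash
# --feature-quality {ultra,high,medium,low,lowest}
#     quality of feature extraction (the presets set this for you)
# --resize-to N
#     resize images to N px on the largest side for feature
#     extraction only; -1 disables resizing entirely
docker run -ti --rm -v /path/to/datasets:/datasets \
    opendronemap/odm --project-path /datasets project \
    --feature-quality high --resize-to -1
```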

So it went well! I didn’t have a failure this time with a larger swap, and no resize needed!

w00t! Awesome.
