Exit code 137 on cluster-processed task

I'm trying to process a 10,104-image dataset on a cluster of 8 Intel NUCs, running WebODM installed via POSM. The task terminated with a "Process exited with code 137" message. I was wondering if this is an issue with a submodel, and something that can be solved by reducing the split parameter? Or does it have to do with a later step in the overall task? (Note that split was set to 450 for the task, and I had edited it in preparation for a rerun before taking the screenshot.) The full log is in this gist.
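For context, exit code 137 is 128 + 9, i.e. the process was killed with SIGKILL; on Linux that's usually the kernel's OOM killer terminating a process that exhausted memory. A minimal sketch of how shell-style exit codes decode:

```python
import signal

def describe_exit(code):
    # By shell convention, exit codes above 128 mean the process
    # was terminated by signal number (code - 128).
    if code > 128:
        return f"killed by {signal.Signals(code - 128).name}"
    return f"exited with status {code}"

print(describe_exit(137))  # killed by SIGKILL (137 - 128 = 9)
```

SIGKILL can't be caught by the process, which is why the ODM log just stops rather than showing a Python traceback.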

The nodes in the POSM setup have --parallel_queue_processing set to 1.
I’m realizing that --max-concurrency is a different setting. Is that what I might try adjusting down (to something below 8) for this issue?

AFAIK --max-concurrency will reduce memory usage for some steps, but not necessarily all processing steps. I would recommend changing your split size instead.
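To make the split-size tradeoff concrete, here's a rough back-of-the-envelope sketch; the actual partitioning depends on image positions, since split-merge clusters images geographically, so treat this as an estimate only:

```python
import math

# --split is roughly the target number of images per submodel.
# Smaller submodels need less memory each, but produce more
# pieces to reconstruct and merge.
images = 10104   # dataset size from the original post
split = 450      # the --split value used for the task

submodels = math.ceil(images / split)
print(submodels)  # roughly 23 submodels
```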

That’s a memory problem in entwine. You can reduce memory consumption in exchange for longer runtime by using entwine’s --subset option (see the entwine documentation for an example).

Note that we’re moving away from entwine as a point cloud merging mechanism, so in a future release the process will finish (but the point cloud might not display in WebODM if it’s too large to be merged by entwine).

There’s nothing you can do in the current release without making code changes, aside from choosing a master node with more RAM. A workaround is to lower --depthmap-resolution to 320, which will produce fewer points to merge.
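Why that helps: a depthmap is a 2D grid, so the number of reconstructed points scales roughly with the square of --depthmap-resolution. A quick sketch, assuming the default resolution is 640 (check your task options for the actual value):

```python
# Assumed default --depthmap-resolution (verify against your task options).
default_res = 640
reduced_res = 320

# Depthmaps are 2D grids, so point counts scale roughly with
# resolution squared.
ratio = (reduced_res / default_res) ** 2
print(ratio)  # 0.25 -- roughly a quarter of the points to merge
```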

So --subset isn’t something I can adjust for a task from the WebODM interface. I’m only interested in the GeoTIFF, so I’ll try lowering --depthmap-resolution and see what happens. Thank you for the tips and suggestions!


If I set --merge to just orthophoto, will that help?

Setting --merge to orthophoto will skip the point cloud merging, so yes. Note this will also skip the DEM merging, but if all you need is the orthophoto, you’re good.
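If you were driving ODM directly rather than through WebODM, the equivalent invocation would look something like the following (the project path and run script name are illustrative placeholders, not taken from the thread):

```shell
# Hypothetical split-merge run. --merge accepts all, pointcloud,
# orthophoto, or dem; "orthophoto" skips both the point cloud and
# DEM merge steps on the master node.
./run.sh --project-path /datasets myproject \
         --split 450 \
         --merge orthophoto
```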
