Changing Number of Projects Running Concurrently in WebODM?


I’m running WebODM via Hyper-V in Docker on Windows 10 Pro on a fairly beefy machine. I want to change the number of projects that can run concurrently on the same local node (not split-merge). Currently, only a single project runs at a time when several are queued up. Can someone help me figure out the easiest way to make this change?


Oof… This is a good question.

Let me dig a bit and see if I can find an answer.


Mm, probably by modifying WebODM/docker-compose.nodeodm.yml at master · OpenDroneMap/WebODM · GitHub and adding an entrypoint override to the node-odm service:

entrypoint: ["/usr/bin/node", "/var/www/index.js", "-q", "4"]

This will launch a node that processes up to 4 queued tasks in parallel.
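In context, the override might look something like this (a minimal sketch; only the entrypoint line is the addition, and everything else should stay as it ships in the stock docker-compose.nodeodm.yml):

```yaml
# Sketch of the override: only the entrypoint line is added ("-q 4"
# sets the parallel queue size). Keep all other keys in
# docker-compose.nodeodm.yml as they are in the repository.
services:
  node-odm:
    entrypoint: ["/usr/bin/node", "/var/www/index.js", "-q", "4"]
```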


Or launch WebODM with:

./ restart --default-nodes 4

This will add 4 nodes to WebODM. So long as you select “Auto” from the processing node list when creating tasks, tasks will be distributed to each node.


Yup! One more way:

You can spin up WebODM without any nodes, ala:

./ down && ./ update && ./ start --default-nodes 0&

And then manually add a few nodes, splitting your number of cores evenly between them:

docker run -p 3000:3000 opendronemap/nodeodm --max_concurrency 10 --max_images 1000 --parallel_queue_processing 1&
docker run -p 3001:3000 opendronemap/nodeodm --max_concurrency 10 --max_images 1000 --parallel_queue_processing 1&
docker run -p 3002:3000 opendronemap/nodeodm --max_concurrency 10 --max_images 1000 --parallel_queue_processing 1&
docker run -p 3003:3000 opendronemap/nodeodm --max_concurrency 10 --max_images 1000 --parallel_queue_processing 1&

Then balance memory vs. core use with --max_images and --max_concurrency; between those two flags and the node count, you can tune the heck out of memory use, cores, and concurrent projects.

You’ll need to manually add those nodes to WebODM as a one-time operation, and you might want to mount a host directory into each container so the data persists, ala
docker run -p 3000:3000 -v /data/node0:/var/www/data opendronemap/nodeodm&
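The manual spin-up plus persistent volumes could be scripted in one go. A sketch, assuming an even split of threads across nodes (the node count, thread count, and /data paths are example values, not prescriptions; the leading "echo" prints each command for review instead of running it):

```shell
#!/bin/sh
# Sketch: start N NodeODM containers on sequential ports, each with a
# persistent data volume and an even share of the CPU threads.
# NODES, THREADS, and the /data paths are example values.
NODES=4
THREADS=56
PER_NODE=$((THREADS / NODES))   # 14 threads per node in this example

i=0
while [ "$i" -lt "$NODES" ]; do
  port=$((3000 + i))
  # Drop the leading "echo" to actually launch the containers.
  echo docker run -d -p "$port:3000" -v "/data/node$i:/var/www/data" \
    opendronemap/nodeodm --max_concurrency "$PER_NODE" --max_images 1000
  i=$((i + 1))
done
```

Each node then gets registered in WebODM once (localhost:3000 through localhost:3003 here), and "Auto" will distribute tasks among them.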

edit: BTW, whichever route you take, I’ll be curious about your results and decision-making.


Wowzas! Thanks for the responses everyone, I honestly wasn’t expecting as many options!

I’ve tried and tested Piero’s second suggestion involving the restart with flags, as it was the simplest and quickest method to implement, and it’s working great so far. I’ll probably give his first suggestion a crack later, as it seems like it would be a bit more streamlined in the long run.

The reason I went with the second suggestion: my workstation has dual CPUs (E5-2697 v3s) with 28 cores/56 threads and 256 GB of RAM. I’ve been doing more small projects, which aren’t as RAM intensive, and noticed that a significant portion of the workflow involves single-threaded processing that doesn’t use the CPUs as effectively as I’d hope. The --default-nodes flag lets me process projects concurrently on a single system, with multiple single-threaded workflows running simultaneously, while still leaving full access to all CPU threads and RAM when the work calls for it.

I actually really like your solution, smathermather-cm, for some potential future node restructuring. I’m going to save that info and will likely try it out down the road, since it looks like it would give more control and less room for error if I had other users loading up projects. I’ve been wanting to put together a more intricate multi-system ClusterODM-driven setup for a while now.

