I am new to all this, so please bear with me. I have 2 Ubuntu machines with 32 GB of RAM and 128 GB of storage each. I have installed docker, python, git, and pip on both machines. I need to implement ClusterODM, but before I begin and create any issues/troubles, I have a couple of questions:
Do I install WebODM, NodeODM and ClusterODM on both machines,
OR
do I need all 3 (WebODM, NodeODM and ClusterODM) on the 1st, and just NodeODM and WebODM on the other?
I need to use NodeODM from the command line. How do I upload the images to these machines (is it via the NodeODM API)? Do the images have to be uploaded separately to both machines, or can they be uploaded to a single machine, with the --split parameter provided at processing time to automatically split the image load across both machines?
Good questions. We can use this as some structure to extend the docs. The easiest (and a good enough) way is the following:
Install WebODM on your primary machine. As both machines are the same size, choose your favorite. You will run WebODM without a node, which gives you a little more flexibility:
./webodm.sh down && ./webodm.sh update && ./webodm.sh restart --default-nodes 0
Then run a NodeODM node separately on each machine. You'll set your max_concurrency to the number of cores in your machine.
For max images, on the primary machine, you will set this value arbitrarily high (this node receives the full image set before it is split into submodels):
docker run -p 3001:3000 opendronemap/nodeodm --max_concurrency 8 --max_images 1000000 &
On the secondary node, for 32GB of RAM (assuming you have 32GB of swap), you can set this value as high as 1500 images:
docker run -p 3001:3000 opendronemap/nodeodm --max_concurrency 8 --max_images 1500
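One step the commands above don't spell out is starting ClusterODM itself on the primary machine and registering both NodeODM instances with it. A minimal sketch, assuming the default ClusterODM ports (3000 for the proxy, 8080 for the telnet CLI, 10000 for the admin web interface); the machine addresses are placeholders you'd replace with your own:

# On the primary machine: start ClusterODM
docker run -d -p 3000:3000 -p 8080:8080 -p 10000:10000 opendronemap/clusterodm

# Register both NodeODM instances through the ClusterODM telnet CLI
telnet localhost 8080
NODE ADD <primary-machine-ip> 3001
NODE ADD <secondary-machine-ip> 3001
NODE LIST
QUIT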
Finally, you need to add your ClusterODM node to WebODM, under Processing Nodes -> Add New (pointing it at the ClusterODM proxy port, 3000 in the sketch above).
Now, if you process through WebODM, it will act as a load balancer for the jobs on the two machines. If you add a split parameter, it will distribute the submodels to each of the nodes (see the sketch after this list). And it will do so as follows:
1. Preparatory step
2. Send data to the nodes for feature extraction and matching
3. Bring data back to the primary node to perform alignment and finalization of the structure-from-motion steps
4. Send data back to the nodes for meshing, texturing, orthophoto, elevation models, etc.
5. Bring data back to the primary node for merging of point clouds, orthophotos, and elevation models. Meshes are not merged.
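To tie this back to your upload question: you upload the images once, to the ClusterODM proxy on the primary machine, and the splitting happens automatically; nothing needs to be uploaded to the secondary machine. A hedged sketch of a command-line upload using the NodeODM API (which ClusterODM speaks), run from the primary machine; the filenames and the split values here are made-up examples:

curl -X POST http://localhost:3000/task/new \
  -F "images=@IMG_0001.JPG" \
  -F "images=@IMG_0002.JPG" \
  -F 'options=[{"name":"split","value":400},{"name":"split-overlap","value":150}]'

Here split is the target number of images per submodel and split-overlap is the overlap between submodels, in meters.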
Give it a run-through, and feel free to expand docs.opendronemap.org as you feel confident in the steps. Your working through the above will give my memory of the process a good test run, and make for a better set of docs. Good luck!
You said this command:
./webodm.sh down && ./webodm.sh update && ./webodm.sh restart --default-nodes 0
What will this do?
And when you said "Then run a NodeODM node separately on each machine", what did this mean?
I also want to know: what is the difference between ./webodm.sh start and the docker command? For instance, if I run
docker run -p 3001:3000 opendronemap/nodeodm
does the ./webodm.sh start command have to be executed previously, or are the two exclusive and independent?
Normally, when you run WebODM, it looks like this:
WebODM <---> NodeODM
So when we run:
./webodm.sh down && ./webodm.sh update && ./webodm.sh restart --default-nodes 0
we are starting WebODM on its own, with no processing node. We do this because we want our processing node to be a ClusterODM instance, so that ClusterODM can be a proxy for the different NodeODM instances.
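As for ./webodm.sh start versus docker run: ./webodm.sh is a wrapper script that brings up the whole WebODM stack (the web app, its database, and, by default, a processing node) via docker-compose, while docker run starts a single standalone container. The two are independent, so to contrast:

# Full WebODM stack, managed through docker-compose:
./webodm.sh start

# One standalone NodeODM container; does not require WebODM to be running first:
docker run -p 3001:3000 opendronemap/nodeodm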
So to complete the cluster, you need one machine with WebODM, ClusterODM, and NodeODM running, and another machine (since you have two) that just has NodeODM running on it.
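Pictorially, the complete cluster looks like this:

WebODM <---> ClusterODM <---> NodeODM (machine 1)
                        \---> NodeODM (machine 2)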
Thanks @smathermather-cm for the information. Your answers have helped me a lot.
However, I am sticking to WebODM for now (to figure out the processing options), but will quickly move to the Docker aspect of it after reading the missing guide.
Excellent. Yes, the WebODM deployment process is really simple and usable, and is a good place to start while you are getting up to speed. Good luck! And thanks for the conversation, I have some ideas on how to improve the documentation now.