This is basically two questions in one post. I recently got ClusterODM working across two local nodes. When testing split-merge across them, the non-local node (the one not running ClusterODM or the controlling WebODM instance) appeared to be getting the bulk of the work, even though it was the weaker machine. I thought that was because the stronger machine was node #2, but I swapped them in ClusterODM and I don't think that fixed the issue (I'm still testing to be sure). Which brings me to my first question:
How does ClusterODM prioritize nodes for processing? Is it strictly the node # in ClusterODM, the available queue, and the size of each node's --max_images parameter? Or is there more to it?
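To make the question concrete, here's my current mental model of the selection logic as a sketch. This is purely a guess, not ClusterODM's actual code; the class, function names, and the "least-loaded eligible node" heuristic are all my assumptions, and I'd love to know how far off they are:

```python
# HYPOTHETICAL sketch of node selection -- my guess, not ClusterODM's real logic.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    max_images: int    # NodeODM's --max_images limit
    queue_count: int   # tasks currently queued on the node
    max_parallel: int  # how many tasks the node will accept at once (assumed)

def pick_node(nodes, image_count):
    # Assumption 1: only nodes whose --max_images can fit the (sub)task qualify.
    eligible = [n for n in nodes
                if n.max_images >= image_count
                and n.queue_count < n.max_parallel]
    if not eligible:
        return None
    # Assumption 2: among eligible nodes, prefer the least-loaded one,
    # rather than strictly going by node # order.
    return min(eligible, key=lambda n: n.queue_count)

nodes = [
    Node("node1", max_images=400, queue_count=0, max_parallel=2),
    Node("node2", max_images=450, queue_count=1, max_parallel=2),
]
print(pick_node(nodes, 420).name)  # only node2 can fit 420 images -> "node2"
```

Under this model, a 420-image submodel could only ever land on node #2, regardless of node order, which would explain the lopsided load I'm seeing if my stronger machine also had the larger --max_images.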
Second, while going through each program's parameters, I noticed that WebODM exposes a split-merge image count (the --split option), while NodeODM has its own --max_images parameter. How do these two parameters interact?
If I set up a project with 1000 images and --split set to 500, but a node's --max_images is 400, what happens?
If a 500-image project is submitted to ClusterODM with node #1 at --max_images 400 and node #2 at --max_images 450, will the project start on node #1 or #2?
Is there anything else to be aware of when setting up distributed processing across multiple nodes with these parameters? I'm running a test project or two at the moment to experiment, but I'd love to hear from someone with more expertise than myself to help clarify the inner workings.
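For context, this is roughly how I'm launching and registering the two nodes (ports and hostnames are just my local setup, so treat them as placeholders):

```shell
# Start two NodeODM instances with different --max_images limits
docker run -d -p 3001:3000 opendronemap/nodeodm --max_images 400   # node #1
docker run -d -p 3002:3000 opendronemap/nodeodm --max_images 450   # node #2

# Register them with ClusterODM over its admin telnet interface (default port 8080)
telnet localhost 8080
# > NODE ADD localhost 3001
# > NODE ADD localhost 3002
# > NODE LIST
```

The WebODM instance then points at ClusterODM's proxy port as if it were a single processing node.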