IaC build of WebODM, ClusterODM, and NodeODM using Terraform, cloud-init, and GitHub Actions

TLDR: How does one incorporate autoscaling with containers?

I’ve posted two repositories that will provision a WebODM setup, including ClusterODM and at least one NodeODM instance on a separate server: kendrickcc/odm-aws-wf1 and kendrickcc/odm-DO-wf1 (OpenDroneMap in DigitalOcean using Terraform and cloud-init). The repos use GitHub Actions to provision directly into AWS and DigitalOcean, respectively, and can be modified to adjust the server instance size and the number of servers needed for the project. Once keys and tokens are entered into the repo, initiate the “Apply” action, and about 5 minutes later a fully functional environment is ready. There is no need to log into the servers (SSH over port 22) to configure anything. Access the two web consoles, add the nodes, then add the cluster, and you should be good to start processing data. While there may not be much call for it, I am also working on a build for Azure; that work is at kendrickcc/odm-azure-wf1.

With this approach I can provision servers for a specific project, and because each build is isolated, the cost of running each project is easy to track. Once the data are processed, I download and post to DroneDB or archive them, then destroy the environment. This provides a fresh environment for every run.

The builds use Docker containers. I could not follow how to manually install ClusterODM (Node.js 14.x, then npm install) in a way that left me with a working build without tying up a terminal running ClusterODM. So I am wondering how one adds the appropriate autoscaling setup when using only containers. I believe I have a method of passing the edited JSON configuration file into the build, but I don’t understand how it gets added to the ClusterODM server.
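For what it’s worth, one common pattern with the containerized build is to bind-mount the edited JSON file into the container and point ClusterODM at it with its `--asr` flag. A sketch, assuming ClusterODM’s default ports and that the image’s working directory is `/var/www` (the file name and host path are placeholders):

```shell
# Bind-mount a local autoscaler (ASR) configuration into the ClusterODM
# container and pass it via --asr. config-aws.json is a placeholder name;
# 3000 is the proxy port, 8080/10000 the admin interfaces by default.
docker run -d --name clusterodm \
  -p 3000:3000 -p 8080:8080 -p 10000:10000 \
  -v "$(pwd)/config-aws.json:/var/www/config-aws.json" \
  opendronemap/clusterodm \
  --asr /var/www/config-aws.json
```

Arguments placed after the image name are appended to the container’s entrypoint, so this should end up equivalent to running `node index.js --asr config-aws.json` in a manual install.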

If autoscaling does require a manual install of ClusterODM, has anyone set up the server to run it as a service or put it into the background?
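In case it helps, with a manual (non-Docker) install you can hand the process to systemd rather than keeping a terminal tied up. A sketch, assuming ClusterODM was cloned to /opt/ClusterODM and runs as an `odm` user (both paths and the user are assumptions):

```shell
# Hypothetical systemd unit for a manually installed ClusterODM.
sudo tee /etc/systemd/system/clusterodm.service > /dev/null <<'EOF'
[Unit]
Description=ClusterODM
After=network.target

[Service]
User=odm
WorkingDirectory=/opt/ClusterODM
ExecStart=/usr/bin/node index.js --asr /opt/ClusterODM/config-aws.json
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now clusterodm
```

With `Restart=on-failure`, systemd also brings the process back if it crashes, which a backgrounded terminal session would not.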

Also, are there any options to supply the node IPs and ports from the command line when kicking off the container?
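On the node question: ClusterODM exposes a telnet admin interface (port 8080 by default, settable with `--admin-cli-port`), so if I recall the command set correctly, the node list can be scripted at boot rather than typed into the web console. A sketch; the IPs and ports are placeholders:

```shell
# Register NodeODM instances against a running ClusterODM by piping
# NODE ADD commands into the telnet admin interface.
{
  echo "NODE ADD 10.0.0.11 3000"
  echo "NODE ADD 10.0.0.12 3000"
  echo "NODE LIST"
  echo "QUIT"
} | nc localhost 8080
```

Something like this could be dropped into cloud-init so the nodes are registered as soon as the cluster is up.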



Thanks for sharing your repos and the associated GitHub Actions workflows. Seems like a very powerful but user-friendly setup thus far.

Someone with more expertise in this area of the ODM ecosystem should hopefully drop by shortly to help with your implementation questions.


+1 for interest in your Azure build. I will check it out!


Just finished the Azure build. Let me know if any questions. I still need to edit the README.



May I ask how you would envision deploying this in Azure, and what made you choose it over other hosts?

Right now the deployment is the same method as for DigitalOcean and AWS. My preference is Azure, as I’m currently using cold archive storage there for my photos, and I have more experience using Terraform with Azure. I suppose I imagined a workflow where my data would go to Azure and then be imported into ODM for processing. This is not working yet, of course.

I understand the autoscaling setup is not there yet for Azure. Hopefully that will come.

I find that in terms of service offerings, AWS and Azure are better than DigitalOcean. I’ve not tried anything in Google yet. Going forward I will probably process in Azure or AWS; no hard data, I just like those service offerings better. Building the DigitalOcean environment was a challenge; there are not a lot of code examples to lean on.


I use Azure for other workloads, so in my case the choice is convenience.

My ideal workflow would upload photos and GCPs to a data disk, then have a script that starts a high-performance VM and ODM to process the job, copies the results to the data disk, and then shuts down the VM when complete. For me there’s no need to destroy the resource, but I also don’t see any harm in doing so.

Does such a workflow exist?


I’m not aware of one but that is sort of where I’m headed since cloud import isn’t really available outside of GitHub and Piwigo. I’m thinking of using rclone to copy to a data disk and doing what you suggested, and likely forgoing the WebODM interface.


If you’re feeling bold, we’re always accepting PRs to expand plugin functionality, and clearly there’s a desire to have direct import/sideload of assets via Cloud Hosting services, which seems like a neat addition to the existing sources of GitHub and Piwigo.

I’ve looked and it is over my head. I wish I knew where to start with it.


Wish I could help there!

If nothing else, the wonderful framework you’re establishing here will make further explorations in this space easier for everyone, so again, thank you!


@summitbri I may have stumbled onto a solution. I have a new repo started at kendrickcc/odm-azure-wf2 on GitHub that uses blobfuse, Microsoft’s FUSE connector to Azure Storage. I’m able to boot a VM with a scripted connection that maps a storage container to a folder on the VM. I’ve yet to complete it, but it should now be simple to launch ODM from the command line and have it process the data. Will keep you posted.

I may also try a solution that simply uses rclone to copy the data set onto the VM. With rclone it can work against a number of hosts, not just Azure, so the build could also be duplicated onto AWS.


Sounds promising! Keep us posted!


I think I’ve taken kendrickcc/odm-azure-wf2 as far as I can for now, or at least I need to step away from it to get some other projects going. I am able to use rclone to copy over data from a number of sources and map a drive back to the VM for images, then run ODM from the command line to process. I believe that with rclone I can duplicate this build onto AWS and DigitalOcean if needed. Similar to you, I lean towards Azure, as that is where my photo archive is, and with rclone I can even tap into that archive (after I rehydrate the data).
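For anyone following along, the copy-in/process/copy-out loop described above can be sketched roughly like this (the `azure:` remote name, paths, and project name are placeholders; `opendronemap/odm` is the published ODM image, which expects images under `<project-path>/<project>/images`):

```shell
PROJECT=project1

# Pull the images from storage onto the VM's data disk.
rclone copy "azure:photos/${PROJECT}" "/datasets/${PROJECT}/images"

# Process with ODM from the command line.
docker run -ti --rm \
  -v /datasets:/datasets \
  opendronemap/odm \
  --project-path /datasets "${PROJECT}"

# Push the results back to storage.
rclone copy "/datasets/${PROJECT}/odm_orthophoto" "azure:results/${PROJECT}/odm_orthophoto"
```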

I’m not sure if I will edit the AWS or DO projects, but the main difference is in the template, so should be simple enough to copy that over if needed.

Best wishes.


Awesome, I will check it out. I haven’t used GitHub Actions before, so I have some learning to do…


@CKen, thanks for the work here! I think I have it mostly figured out.

I’ve been able to get the Terraform workflow to provision and then destroy the expensive VM resources.

I’ve been able to get rclone working to mount a OneDrive directory on a cheap ‘always-on’ VM.

I haven’t yet figured out how blobfuse plays into this. I set up the fuse container in the storage account, but I guess I’m missing something in the workflow. Is that container intended to store the ODM output? And how do the ODM compute resources connect back to my rclone folder? Is the idea to initiate the remote node processing from the always-on VM? I guess that would make sense.

Thanks again.


@summitbri I was just starting to connect blobfuse then quickly switched to the Rclone route.

With blobfuse, it mounts the storage container to something under /mnt/resource, if I recall, and then you make another mount somewhere on the VM. I never quite understood why there are two mount locations, but that is what the documentation describes.

My direction would have been to set this up like a regular ODM project, i.e. datasets/project/images in the storage container; blobfuse would mount it and ODM would read/write to that location. However, I became concerned with the access speed (a little laggy, perhaps), which would only increase the processing time. So I decided to just go with rclone and copy data back and forth.
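For reference, the two locations come from how blobfuse v1 works: `--tmp-path` is a local file cache (the Azure temporary disk under /mnt/resource is the usual choice), while the second path is the actual FUSE mount point where the container’s contents appear. A sketch; the paths and config file location are assumptions:

```shell
# Mount an Azure Storage container with blobfuse v1.
# /mnt/resource/blobfusetmp is only a cache, not a mount;
# /datasets is where the container's contents actually appear.
sudo mkdir -p /mnt/resource/blobfusetmp /datasets
sudo blobfuse /datasets \
  --tmp-path=/mnt/resource/blobfusetmp \
  --config-file=/etc/blobfuse/fuse_connection.cfg \
  -o allow_other
```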


@CKen Oh! Well, that makes sense. I decided to just upload the images to the blob with Azure Storage Explorer, as transfer speed doesn’t seem to be throttled the way OneDrive is. An rclone copy from an Azure blob is super fast.

It would be great to figure out how to automate the rclone.conf file creation; then this would be dialed in!
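Since rclone.conf is plain text, one way to script its creation without the interactive `rclone config` wizard is simply to write the file. A sketch, where the remote name `odmblob`, the account name, and the key are placeholders:

```shell
# Generate a minimal rclone.conf for an Azure Blob remote non-interactively.
RCLONE_CONF="${HOME}/.config/rclone/rclone.conf"
mkdir -p "$(dirname "$RCLONE_CONF")"
cat > "$RCLONE_CONF" <<'EOF'
[odmblob]
type = azureblob
account = mystorageaccount
key = REPLACE_WITH_STORAGE_KEY
EOF
chmod 600 "$RCLONE_CONF"   # the file holds a secret, so lock it down
```

rclone also ships a non-interactive `rclone config create` subcommand that accomplishes much the same thing if you would rather not template the file yourself.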


Automating the config file is simple enough, but best not done through GitHub. The build file will have to change to use remote-exec so it can read a local file from your computer. I can try to code that up in a couple of days.

The reason for not doing it through GitHub is that the secrets needed for the config get complex, and it is difficult, for me at least, to cover all the possibilities.

This will require having Terraform installed locally.


That makes sense. No issues with running locally for me!
