Storage Location - Using external source still fills up Docker VHD

I’m currently using "./ start --media-dir D:\WebODM\data" so I can keep my outputs in plain view on my HDD and not have to worry about VHD size, but the Docker VHD still seems to be filling up. I’m going to assume this is due to files created during processing, which are stored in the database location and not the "media-dir" location?

Is this correct? I’ve been caught off guard and had projects sitting idle for hours when WebODM ran out of space because the VHD filled up, even though my HDD (media-dir) still had plenty of space left.

You might have to (manually) map the /var/www/data folder of the NodeODM container (see docker-compose.nodeodm.yml) to a disk location also. See Recommended volumes for nodeodm and webodm · Issue #776 · OpenDroneMap/WebODM · GitHub
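For reference, a minimal sketch of what that mapping might look like in docker-compose.nodeodm.yml. This is an assumption, not WebODM's shipped file: the host path is a placeholder you'd replace with your own folder, and the service name should be matched to whatever the service is called in your copy of the file.

```yaml
# Sketch only: map NodeODM's working data out of the Docker VHD.
# "D:/odm-node-data" is a placeholder host folder, not a real default.
node-odm:
  image: opendronemap/nodeodm
  volumes:
    - 'D:/odm-node-data:/var/www/data'
```

The left side of the mapping is the host path, the right side is the path inside the container that NodeODM writes its task data to.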


I’ll check this out thanks.

Thought I’d just come back and give an update on my experience with this.

I’ve had luck using the following.

Under the nodeodm service in docker-compose.nodeodm.yml:

```yaml
image: opendronemap/nodeodm
volumes:
  - /host_mnt/DriveLetter/Windows/Folder:/var/www/data
```

Under webapp in docker-compose.yml:

```yaml
volumes:
  - /host_mnt/DriveLetter/Windows/Folder:/webodm/app/media
```

And under worker:

```yaml
volumes:
  - /host_mnt/DriveLetter/Windows/Folder:/webodm/app/media
```

I had to remove the :z from the end of the default volume option, as it caused errors. Not sure what it does, but a quick google said to remove it (apparently it’s an SELinux relabeling option, which doesn’t apply on Windows).

Still need to test volume sizes etc. after running a few projects, but at least it passed the installation part haha


Looks like this is working very nicely!

I’ve run one job, then re-run it with different parameters to produce different results, and there’s been no change to the Docker VM size. The Node and Data folders I created in my Windows environment have grown in size, though.


Have you tried pruning Docker?
You can use:

```shell
docker system prune -a
```

to prune Docker and remove:

  • all stopped containers
  • all networks not used by at least one container
  • all images without at least one container associated to them
  • all build cache

It wasn’t a solution for me, but thanks anyway. I wanted a long-term solution with minimal maintenance and files accessible within Windows’ own file system; my reply above solved this.


I ran into this same issue, and didn’t realize until there were 187 GB of projects that I want to keep. Any tips on how to modify the Docker configuration files and get the server working again, with the root volume that stores the Docker containers cleared out, but with the linkages kept to the media stored on the mounted storage volume?


  • Issuing ./ stop gave an error of [562652] INTERNAL ERROR: cannot create temporary directory! - really no space left.
  • So I used docker ps to get the IDs of the docker containers and issued docker stop CONTAINER_ID commands to stop each one in turn.
  • I then modified the docker configuration files as noted above.
  • Ran docker image prune.
  • Ran a git pull for good measure.
  • Ran ./ start --default-nodes 0 --media-dir /home/my/path/webodm_data --ssl --hostname
  • Everything seems to be good?

This is fantastic. It might make a good addition to the docs.


So either the actual problem, or possibly an additional problem, was ClusterODM filling a folder with folders of images. But before I could figure that out, I was messing around with Docker some more and accidentally wiped all the application data. So starting WebODM now takes me to the prompt to create a user, and the project dashboard is empty.
I still have the media folder…

```shell
$ ls external/webodm_data/
CACHE  plugins  project  settings  tmp
$ ls external/webodm_data/project/
1  10  11  12  13  14  15  16  17  18  19  20  21  8  9
```

Any way to get those projects read back in and showing up on the dashboard again?


You could hack a script that inserts the proper Task objects in the database (look at app/models/), but there’s nothing built-in in WebODM that will do it for you, unfortunately!
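As a first step toward such a script, here is a hedged sketch (not WebODM code) that enumerates the project IDs still present on disk, based on the media-dir layout shown in the listing above (media_dir/project/<numeric id>/). Re-creating the database rows themselves would still have to be done with Django's ORM against the models in app/models/, whose exact fields should be checked in your WebODM version.

```python
from pathlib import Path


def orphaned_project_ids(media_dir):
    """List numeric project IDs found under <media_dir>/project/.

    These are the directories WebODM left on disk; each one would need
    a matching Project (and Task) row recreated in the database.
    """
    project_root = Path(media_dir) / "project"
    if not project_root.is_dir():
        return []
    return sorted(
        int(p.name)
        for p in project_root.iterdir()
        if p.is_dir() and p.name.isdigit()
    )
```

With the IDs in hand, a Django shell inside the webapp container (python manage.py shell) could be used to create matching Project and Task objects, but again, the required fields are best read directly from app/models/.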


I’ve been trying to follow this method with no success.
Currently I am using the $MEDIA_DIR option to remap the storage locations specified in docker-compose.yml, which is working well.

When attempting to do something similar for docker-compose.nodeodm.yml, the folder I specify gets created, and if I drop a test file inside, it is visible from within the container, but when I try to process data, the task fails after “Uploading to Processing Node” with an Error: “ENOTDIR: not a directory, scandir ‘data/86f779ce-f4c3-494c-b2cc-21f122d11829/images’”

Could someone suggest the correct format for mapping the volume in docker-compose.nodeodm.yml? I have plenty of external HDD space, so would prefer to save the intermediary steps rather than having to reprocess.


This is the line that I tried adding to docker-compose.nodeodm.yml (running Docker on a Windows machine):

```yaml
image: opendronemap/nodeodm
volumes:
  - /mnt/d/nodeodm_data:/var/www/data
```

When inspecting the containers in Docker and opening the CLI for each instance, I noticed that the root directory for webapp is /webodm (with access to app/media), while the default root directory for my nodeodm instance is /var/www (with access to data).
Maybe this difference in working directory is why the process fails when I attempt to remap the storage for nodeodm?
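One other thing worth checking, going by the earlier post in this thread: that poster's working mapping used a /host_mnt/<drive>/... prefix, which is how Docker Desktop on Windows typically exposes host drives, whereas /mnt/d/... is the WSL convention. This is an assumption about your setup, but the mapping might need to look like:

```yaml
# Sketch only: /host_mnt/<drive> is the Docker Desktop (Windows) style
# of host path, as used in the working example earlier in this thread.
volumes:
  - /host_mnt/d/nodeodm_data:/var/www/data
```

If the host side of the mapping doesn't resolve to a real directory, the container can end up with an empty or file-typed mount point, which would be consistent with the ENOTDIR error you saw.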


Sorry for bumping this thread; I appreciate your answer.

Your suggestion works on my computer too. The previous solution on the WebODM GitHub wiki,

./ restart --media-dir /home/user/webodm_data

still kept increasing my root partition, where Docker is installed. Modifying docker-compose.nodeodm.yml (or nodeodm.gpu.nvidia.yml) and docker-compose.yml as you wrote solved my problem.


In my experience, WebODM still stores intermediate files in your home directory despite moving media-dir somewhere else. If you check "optimize disk space" when selecting options for processing a project, this data is deleted from the WebODM directory after the project is processed.
