Docker and GCP files


From @dakotabenjamin on Wed Dec 07 2016 19:03:58 GMT+0000 (UTC)

as per #426

When running the following Docker command and intending to use GCPs, ODM fails at georeferencing:

docker run -it --user root\
     -v $(pwd)/images:/code/images\
     -v $(pwd)/odm_orthophoto:/code/odm_orthophoto\
     -v $(pwd)/odm_texturing:/code/odm_texturing\
     --rm odm_image --odm_georeferencing-useGcp

The reason is that, by default, ODM looks for the GCP file in the project root (in this example, $(pwd)), which is not shared into the container via the -v option.

There is a workaround for now: move the gcp_list.txt file into $(pwd)/images, then pass --odm_georeferencing-gcpFile /code/images/gcp_list.txt. I don’t think this is a long-term fix, so I’m looking for suggestions on how best to handle this situation.
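Spelled out end to end, the workaround looks like this (the command is built into a variable and echoed so each piece is easy to inspect; run it directly once it looks right):

```shell
# The workaround: copy the GCP file into the already-shared images volume,
# then point ODM at its in-container path.
if [ -f gcp_list.txt ]; then cp gcp_list.txt images/; fi

cmd="docker run -it --user root \
 -v $(pwd)/images:/code/images \
 -v $(pwd)/odm_orthophoto:/code/odm_orthophoto \
 -v $(pwd)/odm_texturing:/code/odm_texturing \
 --rm odm_image \
 --odm_georeferencing-useGcp \
 --odm_georeferencing-gcpFile /code/images/gcp_list.txt"
echo "$cmd"
```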

Copied from original issue:


From @dakotabenjamin on Wed Dec 07 2016 20:02:04 GMT+0000 (UTC)

@clercj this temporary solution worked for me


From @clercj on Wed Dec 07 2016 21:06:13 GMT+0000 (UTC)

Well, your solution worked for me also! I was also thinking that the GCP file wasn’t uploading correctly.
Regarding georeferencing, that’s probably a sign to add more GCPs. The more the better! But for testing purposes, I’ve used the minimal solution.
Thanks a lot for your help!


From @pierotofy on Wed Dec 07 2016 22:21:43 GMT+0000 (UTC)

In my opinion, perhaps a “convention over configuration” approach could simplify ODM usage. Allow the user to specify a --project-path as usual, but then implement logic in the Python code that:

  • Checks for the presence of a /images subfolder.
  • If there’s no /images subfolder, but there are JPEGs, it means the user forgot to create the images directory. Perhaps create the images/ subfolder and move the JPEGs there.
  • If there’s a gcp_list.txt file in the project directory, automatically set --odm_georeferencing-gcpFile (unless the user overrides it, perhaps) and turn on --odm_georeferencing-useGcp.

This is what I currently do with node-odm.

In the end, it would be easier to mount a single volume (the project path) instead of multiple volumes (one for images, one for orthophoto outputs, one for texturing outputs, etc.).

But perhaps this should just be implemented by a layer of abstraction (node-odm, webodm), there’s nothing wrong with things as they are.
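The gcp_list.txt auto-detection described above could be sketched like this (build_odm_args is a hypothetical helper name; the flag names are the ones ODM already uses):

```shell
# Minimal sketch of convention-over-configuration for GCPs: given a project
# path, emit the ODM arguments, adding the GCP flags only when gcp_list.txt
# exists and the user has not supplied an explicit override.
build_odm_args() {
    project="$1"
    user_gcp="$2"   # explicit --odm_georeferencing-gcpFile value, if any
    args="--project-path $project"
    if [ -n "$user_gcp" ]; then
        args="$args --odm_georeferencing-useGcp --odm_georeferencing-gcpFile $user_gcp"
    elif [ -f "$project/gcp_list.txt" ]; then
        args="$args --odm_georeferencing-useGcp --odm_georeferencing-gcpFile $project/gcp_list.txt"
    fi
    echo "$args"
}
```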


From @dakotabenjamin on Thu Dec 08 2016 16:44:21 GMT+0000 (UTC)

Admittedly I’m not that familiar with “convention over configuration”, but I think I’ve been considering this idea unknowingly for a while when planning input UI changes. I generally do not like the number of characters it takes to run ODM, either natively or in Docker, and have been talking about an input overhaul.

Rather than specifying a project path, there should be a universal default “projects” folder that is set in some configuration file. The user then runs ODM by specifying the images folder and any other inputs (GCP file, etc.), which are copied over into a new project, so that the application sets up the project rather than the user. Along with that, we should simplify all the parameter names (removing the “odm_” prefix, etc.).
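As a rough sketch of that flow (everything here, including the PROJECTS_ROOT default and the new_project helper, is hypothetical and only meant to make the idea concrete):

```shell
# Hypothetical sketch of the proposed flow: a default projects root (which
# would come from a configuration file) plus a helper that sets up a new
# project from the user's inputs. All names are illustrative.
PROJECTS_ROOT="${PROJECTS_ROOT:-$HOME/odm_projects}"

new_project() {
    name="$1"; images_src="$2"; gcp="$3"
    mkdir -p "$PROJECTS_ROOT/$name/images"
    cp "$images_src"/* "$PROJECTS_ROOT/$name/images/"    # copy input images
    if [ -n "$gcp" ]; then
        cp "$gcp" "$PROJECTS_ROOT/$name/gcp_list.txt"    # optional GCP file
    fi
    echo "$PROJECTS_ROOT/$name"                          # the new project path
}
```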

Also, what’s preventing us from mounting a single volume for docker run?


From @pierotofy on Thu Dec 08 2016 17:59:39 GMT+0000 (UTC)

Nothing really:

docker run -v $(pwd)/images:/project/images --rm odm_image --project-path /project

One nice thing to note: for new users, all of these commands:

git clone
cd OpenDroneMap
docker build -t packages -f packages.Dockerfile .
docker build -t odm_image .
docker run -it --user root\
     -v $(pwd)/images:/code/images\
     -v $(pwd)/odm_orthophoto:/code/odm_orthophoto\
     -v $(pwd)/odm_texturing:/code/odm_texturing\
     --rm odm_image 

Are not needed, since we now have an up-to-date, portable nightly build of opendronemap on

All a user needs to type after installing docker:

docker run -ti -v ABSPATHTOIMAGES:/project/images --rm opendronemap/opendronemap --project-path /project

At that point you could probably just create a .sh script that runs the command:

./ ABSPATHTOIMAGES <other options>

Having convention over configuration would let people worry less about command-line parameters (for example, specifying a GCP path, which is now relative to the Docker container, something people have a hard time wrapping their heads around).


From @pierotofy on Thu Dec 08 2016 18:01:57 GMT+0000 (UTC)

Uh, actually there’s a problem with mounting a single volume the way I wrote above: if you map only the /images directory, the results won’t be accessible.

A user needs to have a project directory in place:

/project
    /images
        1.jpg
        2.jpg

Then needs to run docker with:

docker run -v $(pwd)/project:/project --rm odm_image --project-path /project

But I can see this causing some confusion.


From @pierotofy on Thu Dec 08 2016 18:06:23 GMT+0000 (UTC)

Perhaps a .sh script can just do a sanity check for the project directory, so that when you run:

./ PATH <other options>

  1. PATH is converted to an absolute path
  2. Check for the presence of an images directory. If there isn’t one, create it and move all .JPG and .JPEG files into it (confirming with the user before doing so).
  3. Run docker with the proper parameters.
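A sketch of those three steps (prepare_project is a hypothetical name; the final echo stands in for the real docker call, which a real wrapper would exec instead of printing):

```shell
# Sketch of the proposed sanity-check wrapper.
prepare_project() {
    project="$(cd "$1" && pwd)" || return 1   # 1. convert to an absolute path
    shift

    # 2. no images/ subfolder? create it and move loose JPEGs in
    #    (a real script would ask the user to confirm first)
    if [ ! -d "$project/images" ]; then
        mkdir "$project/images"
        for f in "$project"/*.JPG "$project"/*.JPEG "$project"/*.jpg "$project"/*.jpeg; do
            if [ -e "$f" ]; then mv "$f" "$project/images/"; fi
        done
    fi

    # 3. run docker with the proper parameters (echoed here for inspection)
    echo docker run -it --rm -v "$project:/project" odm_image --project-path /project "$@"
}
```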


From @dakotabenjamin on Fri Dec 09 2016 14:57:07 GMT+0000 (UTC)

I’ve mocked up what I have in mind for loading images:

Still need to work on GCP though.


From @dakotabenjamin on Fri Dec 09 2016 15:21:08 GMT+0000 (UTC)

I did this because I don’t think creating yet another script is going to make the process any simpler for users; it has to be implemented in the code itself. We should make any parameter and big UI changes early, so that we aren’t creating breakages in the API or in external things like WebODM once they are out of development. Ergo, now is the time to mess with it, and I think it’s due.


From @pierotofy on Fri Dec 09 2016 16:45:05 GMT+0000 (UTC)

Sounds good to me!


From @dakotabenjamin on Wed Dec 14 2016 19:06:37 GMT+0000 (UTC)

@pierotofy I completed my work on that branch except for the Dockerfile. If you look through the commits you will see some big changes, but mostly relevant to this discussion is handling GCP files.

  • If a file called gcp_list.txt exists in the project directory, then it will be used. That requires no changes to the Dockerfile, I think.
  • If a user does specify the gcp_list.txt AND it is outside the shared volume, how do we make sure it gets over into the Docker container?
  • The same question applies when using the new -i <path/to/images> flag.


From @pierotofy on Wed Dec 14 2016 19:19:15 GMT+0000 (UTC)

The changes on the branch look good! Nice work.

  • Correct
  • The user needs to mount the gcp_list.txt file in the docker run command, for example: docker run ... -v /path/to/gcp_list.txt:/code/gcp_list.txt ... (yes, you can mount single files as volumes in Docker).
  • The path/to/images needs to be mounted as well: docker run ... -v /home/user/images:/images ... -i /images
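Putting the two answers together in a single invocation (built as a string so the pieces are visible; mounts and flags are exactly those from the bullets above):

```shell
# Both cases at once: mount the GCP file and the images directory into the
# container, then reference their in-container paths on the command line.
cmd="docker run -it --rm \
 -v /path/to/gcp_list.txt:/code/gcp_list.txt \
 -v /home/user/images:/images \
 odm_image -i /images \
 --odm_georeferencing-gcpFile /code/gcp_list.txt"
echo "$cmd"
```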


From @dakotabenjamin on Mon Dec 19 2016 16:12:47 GMT+0000 (UTC)

How much are these changes going to affect the API if at all?


From @pierotofy on Mon Dec 19 2016 16:23:26 GMT+0000 (UTC)

I don’t think these changes are going to affect the API at all.


Is this still the method for gcp_list?

docker run -it --rm \
-v "$(pwd)/images:/code/images" \
-v "$(pwd)/gcp_list.txt:/code/gcp_list.txt" \
-v "$(pwd)/odm_orthophoto:/code/odm_orthophoto" \
-v "$(pwd)/odm_texturing:/code/odm_texturing" \
opendronemap/opendronemap --rerun-all \
--use-25dmesh --orthophoto-bigtiff=YES \
--orthophoto-resolution=200 --build-overviews

So there’s no need to use --gcp /gcp_list.txt as a parameter? Just mount the file as a volume? If so, why hasn’t the documentation been updated?


Unfortunately sometimes the documentation doesn’t get updated when changes are made. Can you point out where the outdated info is so I can fix it?


Hi Benjamin,

This is Dean. We met at FOSS4G this week. You showed me the solution. I lost your card, so I sent an email to your Gmail account.



(FYI, his first name is Dakota). Cheers!