Large dataset (3300 photos)


I have a huge dataset, about 3000 pictures. I have no problems running it… but it took 48 hours on an m4.10xlarge AWS instance.

I just need a few of the generated files (odm_orthophoto.original.png, odm_orthophoto.png, odm_orthophoto_log.txt, odm_georeferencing_model_geo.txt).

I have already seen some arguments to use, like --fast-orthophoto or --skip-3dmodel.

--fast-orthophoto results in a different orthomosaic, and --skip-3dmodel does not work.

I just need the PNG files and those two TXT files. Can anyone help me figure out how to process it faster?


If you want an ugly process that has you bouncing between checked-out versions of ODM in order to use split-merge, I can provide that. Otherwise, wait a month or so for @dakotabenjamin to finish integrating split-merge into the ODM-OAM workflow; that will be a cleaner process.


Well, I am not into an ugly process, but I can try… how can I do it?

Just to run a few tests… and then when @dakotabenjamin finishes the process, I'll try it again.

Thank you for the answer!


@accferronato the two options that come to mind are --use-hybrid-bundle-adjustment and --use-opensfm-dense. The second will change your point cloud and perhaps the orthophoto; the first should improve runtime without much change in output.

Please note that your orthophoto will never look the same between two runs, even with the same options; the texturing step is non-deterministic.

This is the help text for --use-hybrid-bundle-adjustment:

                      Run local bundle adjustment for every image added to
                      the reconstruction and a global adjustment every 100
                      images. Speeds up reconstruction for very large
                      datasets.
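For example, a run combining both flags might look like the following. The project path and entry point are placeholders; substitute however you normally invoke ODM:

```shell
# Placeholder invocation; adjust the entry point and path to your install.
python run.py --project-path ~/ODMProjects/projectname \
  --use-hybrid-bundle-adjustment \
  --use-opensfm-dense
```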

If you let us know the runtime after trying these options, we could perhaps make further suggestions.


If the options Piero suggested don't work, the process Stephen is referring to is documented here:


Yes, but it’s broken on the current master, and has been since July. So to make it work, you need to do the following:

See post below instead (edited)


Thank you guys, I’ll try it out!! And I’ll post the results here!


Somehow this doesn’t work. I am working on writing a new version. Standby.


Try this instead:

Comment out the final two lines in run_all, like so:

#!/usr/bin/env bash

RUNPATH="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"/../..
export PYTHONPATH=$RUNPATH:$RUNPATH/SuperBuild/install/lib/python2.7/dist-packages:$RUNPATH/SuperBuild/src/opensfm:$PYTHONPATH
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$RUNPATH/SuperBuild/install/lib

set -e

DIR=$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )

python $DIR/ "$@"
python $DIR/ $1
python $DIR/ $1
python $DIR/ $1
python $DIR/ $1
#python $DIR/ $1
#python $DIR/ $1

Now when we run this, it will run the structure-from-motion portion but not the multi-view stereo.

# Run split-merge 
cd ~/OpenDroneMap/scripts/metadataset
./ ~/ODMProjects/projectname/

Now that this is complete, we need to clean up some paths in list.txt in our mve subdirectories:

find ~/ODMProjects/projectname/submodels/submodel_????/opensfm/mve -name 'list.txt' -exec rpl "../images" "../../images" {} +
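If rpl isn't installed on your machine, sed can perform the same substitution (pass `-exec sed -i 's|\.\./images|../../images|g' {} +` to the find above instead). Here is a minimal sketch of the rewrite on a throwaway file, assuming GNU sed:

```shell
tmp=$(mktemp)
echo "../images/IMG_0001.JPG" > "$tmp"
# Same rewrite as the rpl call: "../images" becomes "../../images"
sed -i 's|\.\./images|../../images|' "$tmp"
cat "$tmp"   # ../../images/IMG_0001.JPG
rm -f "$tmp"
```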

Now we can finish running through our dense point clouds:

for i in {0..9}
  do python submodel_000$i --project-path ~/ODMProjects/projectname/submodels
done

for i in {10..99}
  do python submodel_00$i --rerun-from smvs --project-path ~/ODMProjects/projectname/submodels
done

for i in {100..999}
  do python submodel_0$i --rerun-from smvs --project-path ~/ODMProjects/projectname/submodels
done


The last step can also be rewritten as:

for i in {0..999}
  do python submodel_$(printf "%04d" $i) --project-path ~/ODMProjects/projectname/submodels
done
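The submodel directory names are zero-padded to four digits (submodel_0000, submodel_0001, …), and printf handles the padding. A quick sketch on a few sample indices:

```shell
# printf zero-pads the loop index to four digits:
for i in 0 7 42 999; do
  printf 'submodel_%04d\n' "$i"
done
# submodel_0000
# submodel_0007
# submodel_0042
# submodel_0999
```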


Haha. Of course. V. nice.


Here’s a simpler solution that I will propose to OpenSfM:

Edit SuperBuild/src/opensfm/opensfm/large/ at lines 155-156:

                    # dst_relpath = os.path.relpath(dst, submodel_path)
                    txtfile.write(dst + "\n")

Then you can run as normal. This changes the relative paths to absolute paths in the images text file.
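For context, here is a small sketch of what that edit changes. The paths below are made-up examples, not from an actual project: the commented-out line would have written a path relative to the submodel directory, while the patched line writes the absolute path in dst.

```python
import os

# Hypothetical example paths for illustration only:
submodel_path = "/data/project/submodels/submodel_0000"
dst = "/data/project/images/IMG_0001.JPG"

# What the commented-out line would have written (submodel-relative):
rel = os.path.relpath(dst, submodel_path)
print(rel)  # ../../images/IMG_0001.JPG

# What the patched line writes instead (absolute):
print(dst)  # /data/project/images/IMG_0001.JPG
```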