Hello everyone!
I want to process big datasets, e.g. 5000+ images. Since this amount of data will probably require a lot of resources and time, I would like to know the most efficient way to process it.
I was thinking about using this command:
docker run -ti --rm -v $(pwd):/datasets/code opendronemap/opendronemap --project-path /datasets --time --verbose --min-num-features 2000
Is there a way in ODM to split the dataset into smaller subsets, process those first, and then combine all the sub-results into one big result at the end?
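To make the question more concrete, here is roughly what I imagine such an invocation could look like. The option names --split and --split-overlap are just my guesses at what a split-merge feature might be called, and the values are placeholders, so please correct me if this exists in a different form:

docker run -ti --rm -v $(pwd):/datasets/code opendronemap/opendronemap --project-path /datasets --min-num-features 2000 --split 400 --split-overlap 150

The idea would be that each chunk contains a few hundred images, with some overlap between neighbouring chunks so the partial reconstructions can be aligned and merged into one result at the end.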
Any advice would be appreciated!