Suggestions to match Dronedeploy orthophoto map?

Hi all!

I’m new to ODM. I’m running it in Docker on a server with a 128-core EPYC CPU and 315 GB of RAM. I care about capturing lots of photo detail at low altitude (10 meters). I did a field test using a Mavic 2 Pro with the Hasselblad L1D-20c camera, with all updates applied on the vehicle. I did the photo gathering using DroneDeploy (DD); I also uploaded the same images to their server and got the final ortho as a JPG the next day.

This is the DD result of the flight:

DD jpg

And this is the best result I have been able to produce using ODM:

ODM tif

The settings for the ODM run were:
resize-to: 3000, end-with: odm_orthophoto, rerun-from: dataset, feature-quality: ultra, depthmap-resolution: 1000, ignore-gsd: true, mesh-size: 600000, mesh-octree-depth: 10, crop: 0, texturing-data-term: area, texturing-nadir-weight: 6, dsm: true, orthophoto-resolution: 2, verbose: true, debug: true
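
For reference, those same options expressed as a docker invocation might look like this (a sketch assuming the standard opendronemap/odm image; the dataset path and project name are placeholders, adjust to your setup):

```shell
docker run -ti --rm -v /path/to/datasets:/datasets opendronemap/odm \
  --project-path /datasets project \
  --resize-to 3000 --end-with odm_orthophoto --rerun-from dataset \
  --feature-quality ultra --depthmap-resolution 1000 --ignore-gsd \
  --mesh-size 600000 --mesh-octree-depth 10 --crop 0 \
  --texturing-data-term area --texturing-nadir-weight 6 \
  --dsm --orthophoto-resolution 2 --verbose --debug
```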

First of all, I’m not able to do a resize to -1 (to get the full resolution of 5472x3648), as the process always exits with an error code. Checking the server, the RAM fills up (all 315 GB) and all of the threads are at 100%…
[Screenshot: htop output]

So the generated TIFF uses the settings above, but if you zoom into the DD image, the detail is much, much better than in the TIFF.

What settings do you recommend for my task so I can generate an ortho similar in quality to DD’s?

Thanks a lot!!

Hola Aldo,

Maybe try splitting the dataset so you can process with --resize-to -1.

[Screenshot]

What’s your use case?

Why do you need such a detailed orthophoto?

Hola Israel!

Thanks for the suggestion. I tried using split: 10, split-overlap: 100, but I still got “Process exited with code 1”. The log is as follows:

2020-10-15 15:03:38,372 DEBUG: Computing sift with threshold 0.1
2020-10-15 15:03:38,953 INFO: Extracting ROOT_SIFT features for image DJI_0184.JPG
Traceback (most recent call last):
File "/code/SuperBuild/src/opensfm/bin/opensfm", line 34, in <module>
command.run(args)
File "/code/SuperBuild/src/opensfm/opensfm/commands/detect_features.py", line 31, in run
parallel_map(detect, arguments, processes, 1)
File "/code/SuperBuild/src/opensfm/opensfm/context.py", line 41, in parallel_map
return Parallel(batch_size=batch_size)(delayed(func)(arg) for arg in args)
File "/usr/local/lib/python3.6/dist-packages/joblib/parallel.py", line 934, in __call__
self.retrieve()
File "/usr/local/lib/python3.6/dist-packages/joblib/parallel.py", line 833, in retrieve
self._output.extend(job.get(timeout=self.timeout))
File "/usr/local/lib/python3.6/dist-packages/joblib/_parallel_backends.py", line 521, in wrap_future_result
return future.result(timeout=timeout)
File "/usr/lib/python3.6/concurrent/futures/_base.py", line 432, in result
return self.__get_result()
File "/usr/lib/python3.6/concurrent/futures/_base.py", line 384, in __get_result
raise self._exception
File "/usr/local/lib/python3.6/dist-packages/joblib/externals/loky/_base.py", line 625, in _invoke_callbacks
callback(self)
File "/usr/local/lib/python3.6/dist-packages/joblib/parallel.py", line 309, in __call__
self.parallel.dispatch_next()
File "/usr/local/lib/python3.6/dist-packages/joblib/parallel.py", line 731, in dispatch_next
if not self.dispatch_one_batch(self._original_iterator):
File "/usr/local/lib/python3.6/dist-packages/joblib/parallel.py", line 759, in dispatch_one_batch
self._dispatch(tasks)
File "/usr/local/lib/python3.6/dist-packages/joblib/parallel.py", line 716, in _dispatch
job = self._backend.apply_async(batch, callback=cb)
File "/usr/local/lib/python3.6/dist-packages/joblib/_parallel_backends.py", line 510, in apply_async
future = self._workers.submit(SafeFunction(func))
File "/usr/local/lib/python3.6/dist-packages/joblib/externals/loky/reusable_executor.py", line 151, in submit
fn, *args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/joblib/externals/loky/process_executor.py", line 1022, in submit
raise self._flags.broken
joblib.externals.loky.process_executor.TerminatedWorkerError: A worker process managed by the executor was unexpectedly terminated. This could be caused by a segmentation fault while calling the function or by an excessive memory usage causing the Operating System to kill the worker. The exit codes of the workers are {SIGKILL(-9), SIGKILL(-9), SIGKILL(-9), SIGKILL(-9), SIGKILL(-9), SIGKILL(-9), SIGKILL(-9)}
Traceback (most recent call last):
File "/code/run.py", line 68, in <module>
app.execute()
File "/code/stages/odm_app.py", line 95, in execute
self.first_stage.run()
File "/code/opendm/types.py", line 356, in run
self.next_stage.run(outputs)
File "/code/opendm/types.py", line 337, in run
self.process(self.args, outputs)
File "/code/stages/splitmerge.py", line 59, in process
octx.feature_matching(self.rerun())
File "/code/opendm/osfm.py", line 272, in feature_matching
self.run('detect_features')
File "/code/opendm/osfm.py", line 22, in run
(context.opensfm_path, command, self.opensfm_project_path))
File "/code/opendm/system.py", line 79, in run
raise Exception("Child returned {}".format(retcode))
Exception: Child returned 1

I basically need a super high resolution, super clear, super big image of the field… that’s why I’m flying at 10 meters… I’m considering even flying at 7 meters. I still cannot get the same result as DroneDeploy :pensive:

Careful with going too low. You’re going to hit a point where the processing pipelines won’t be able to reliably extract tiepoints, especially when you’re dealing with surfaces that are mostly homogeneous, like flat fields.

agreed! will stay safe at 10m then…

This is the problem right now: I cannot get the same resolution quality as the map DroneDeploy produces. Check the zoom I did in both images on the water pipe:

[Screenshot: zoomed comparison of the water pipe]

Do you think it’s directly related to the image resizing I’m forced to do?

Almost certainly.

Locally, without resizing, I’ve found OpenDroneMap’s orthomosaic output to be just as sharp as Pix4D’s, or sometimes better.

Now I feel a bit confused about the usage of split and split-overlap.

As far as I know, split takes the number of images per subset (a positive integer); maybe start with 60% of the total dataset. split-overlap takes the required overlap distance. I think the overlap distance depends on the image footprint, and in this case you could start with 15 or 20 meters.
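
For a rough sanity check on those numbers, assuming the L1D-20c’s published specs (13.2 mm sensor width, 10.26 mm focal length; treat these as ballpark values), the across-track footprint of a single image at 10 m AGL works out to:

```shell
# Single-image ground footprint (across-track) at 10 m AGL:
# footprint_m = altitude_m * sensor_width_mm / focal_length_mm
awk 'BEGIN { printf "footprint: %.1f m\n", 10 * 13.2 / 10.26 }'
# prints: footprint: 12.9 m
```

So a 15–20 m overlap spans more than one full image footprint at this altitude.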

What’s the number of images in the dataset?

oh wow, so split-overlap is in meters?

my dataset is 198 images…

Sure, split-overlap is in meters.
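
With 198 images, that guidance would translate into something like the following (hypothetical values: ~60% of the dataset per submodel and a 20 m overlap; the image name and paths are placeholders):

```shell
docker run -ti --rm -v /path/to/datasets:/datasets opendronemap/odm \
  --project-path /datasets project \
  --resize-to -1 --split 120 --split-overlap 20 --dsm
```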

I can’t see how 198 images coming from a 1 inch sensor can make this machine run out of memory (315GB). What’s the HDD and RAM allocation for Docker?

Note that orthophoto-resolution was set to 2 cm/pixel, and for this case you want it to match the GSD. Let’s say orthophoto-resolution should be somewhere near 0.4.
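
A back-of-the-envelope GSD check supports that, again assuming the L1D-20c’s 13.2 mm sensor width and 10.26 mm focal length (ballpark values):

```shell
# GSD (cm/px) = sensor_width_mm * altitude_m * 100 / (focal_mm * image_width_px)
awk 'BEGIN { printf "full 5472 px: %.2f cm/px\n", 13.2 * 10 * 100 / (10.26 * 5472) }'
awk 'BEGIN { printf "resized 3000 px: %.2f cm/px\n", 13.2 * 10 * 100 / (10.26 * 3000) }'
```

So at full resolution the GSD is roughly 0.24 cm/px, and after resizing to 3000 px it’s roughly 0.43 cm/px; either way, far finer than the 2 cm/px that was requested.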

The HDD is 1 TB with plenty of free space, and Docker can use all resources, as you saw in my htop screenshot.

I’m still unable to successfully create higher resolution maps… currently I’m trying a default run as everything was failing… will let you know how it goes.

I also can’t understand how 198 images from a 1-inch sensor can make this server crash, hehe. For sure I’m not configuring something properly…

Yeah, this shouldn’t be crashing. Out of curiosity: are you running Windows?

No, of course not; Ubuntu 18.04…

OK, here’s the likely issue: you have a lot of cores and, relatively speaking, a modest amount of RAM. It’s still a lot of RAM in absolute terms, but I generally recommend 4 GB per core.

So either try reducing --max-concurrency to 78, or add 315 to 630 GB of swap.
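
The arithmetic behind that suggestion (a rule of thumb, not an exact requirement):

```shell
# ~4 GB of RAM per worker process: 315 GB total / 4 GB per core
awk 'BEGIN { printf "max-concurrency: %d\n", int(315 / 4) }'
# prints: max-concurrency: 78
```

For the swap route, the usual approach on Ubuntu is a swap file created with fallocate, mkswap, and swapon.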

I got good results, similar to DroneDeploy’s. My settings for the orthomap were:

Options: rerun-from: dataset, dsm: true, orthophoto-resolution: 1.0

You can see it here

The issue I see is that some sections have “bad” areas; check the photo:

[Screenshot: problem areas in the orthophoto]

but yes, in general, those options made it very close to DD!
