Less sharp, distorted orthomosaic. Difference in results

I recently processed the same data in WebODM and DroneDeploy, and the difference in results was very surprising. I read a few articles and tweaked a few of the parameters:

dsm: true, dtm: true, min-num-features: 50000, optimize-disk-space: true, pc-geometric: true, pc-quality: high, use-3dmesh: true

I feel that WebODM has the capability to do much better. Maybe I have gone wrong with the parameters. Can someone help me with the parameters?

Welcome!

You can try increasing --pc-quality and --feature-quality, as well as increasing --mesh-size and possibly --mesh-octree-depth.
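Bundled together, those suggestions might look like the sketch below, which builds an option map and renders it as CLI flags. This is only an illustration: the `to_flags` helper is hypothetical, and the `mesh-size` value of 300000 is one possible choice, not a recommendation from the docs.

```python
# Options mirroring the suggestions above: bump point-cloud and feature
# quality, and enlarge the mesh. Keys match ODM flag names.
options = {
    "dsm": True,
    "dtm": True,
    "pc-quality": "ultra",       # up from "high"
    "feature-quality": "ultra",  # up from "high"
    "mesh-octree-depth": 12,     # documented maximum
    "mesh-size": 300000,         # illustrative value
}

def to_flags(opts):
    """Render an option map as ODM command-line flags (hypothetical helper)."""
    parts = []
    for key, value in opts.items():
        if value is True:
            parts.append(f"--{key}")        # boolean flags take no value
        else:
            parts.append(f"--{key} {value}")
    return " ".join(parts)

print(to_flags(options))
```

The same key/value pairs can be pasted into WebODM's task options dialog instead of being passed on the command line.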

Are you able to share that dataset somewhere like dronedb.app (or another cloud service provider) so we can work with it as well?

Thanks!

  • --pc-quality and --feature-quality were set to High. Should I set them to Ultra?
  • --mesh-octree-depth was set to 11 (the default). Should I set it to 12? 12 is the maximum value possible according to the documentation.
  • What value should I set for --mesh-size?

Drone dataset link:
https://drive.google.com/drive/folders/1tWtO7y6bqN_-dlEoWEn1MqSNkQuRB30_?usp=sharing


Should I try the following settings?

dsm: true, dtm: true, feature-quality: ultra, mesh-octree-depth: 14, mesh-size: 1000000, min-num-features: 50000, optimize-disk-space: true, pc-geometric: true, pc-quality: ultra, use-3dmesh: true


I don’t think you need to push --min-num-features that high, but the rest looks good to try, sure!

Also, do not set --use-3dmesh. Orthophotos (typically) look a lot better when you use the 2.5D mesh (the default).
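Applied to the proposed option set, that advice amounts to two changes, sketched below: drop --use-3dmesh (the default 2.5D mesh usually yields cleaner orthophotos) and relax --min-num-features back toward a more typical value (16000, the ODM default, is used here as an assumption).

```python
# The option set proposed earlier in the thread.
proposed = {
    "dsm": True, "dtm": True, "feature-quality": "ultra",
    "mesh-octree-depth": 14, "mesh-size": 1000000,
    "min-num-features": 50000, "optimize-disk-space": True,
    "pc-geometric": True, "pc-quality": "ultra", "use-3dmesh": True,
}

revised = dict(proposed)
revised.pop("use-3dmesh")            # prefer the default 2.5D mesh
revised["min-num-features"] = 16000  # relaxed value (assumption)

print(revised)
```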


Got the following error:

===== Dumping Info for Geeks (developers need this to fix bugs) =====
Child returned 1
Traceback (most recent call last):
  File "/code/stages/odm_app.py", line 94, in execute
    self.first_stage.run()
  File "/code/opendm/types.py", line 341, in run
    self.next_stage.run(outputs)
  File "/code/opendm/types.py", line 341, in run
    self.next_stage.run(outputs)
  File "/code/opendm/types.py", line 341, in run
    self.next_stage.run(outputs)
  [Previous line repeated 3 more times]
  File "/code/opendm/types.py", line 322, in run
    self.process(self.args, outputs)
  File "/code/stages/odm_meshing.py", line 24, in process
    mesh.screened_poisson_reconstruction(tree.filtered_point_cloud,
  File "/code/opendm/mesh.py", line 207, in screened_poisson_reconstruction
    system.run('"{reconstructmesh}" -i "{infile}" '
  File "/code/opendm/system.py", line 106, in run
    raise SubprocessException("Child returned {}".format(retcode), retcode)
opendm.system.SubprocessException: Child returned 1

Here’s the full console:
https://drive.google.com/file/d/1qZ7U4ussJHNlcKqqZZGavCJIIbqMYzTc/view?usp=sharing

Created on: 9/30/2021, 2:35:40 PM
Processing Node: Lightning - UAV4Geo (manual)
Options: cog: true, crop: 0, debug: true, dem-gapfill-steps: 4, dem-resolution: 1, dsm: true, dtm: true, matcher-neighbors: 16, mesh-size: 300000, min-num-features: 16000, orthophoto-resolution: 1, pc-classify: true, pc-ept: true, pc-filter: 1, pc-geometric: true, pc-quality: high, use-3dmesh: true, verbose: true
Average GSD: 1.47 cm
Area: 41,949.24 m²
Reconstructed Points: 31,490,227


Much closer!

Looking at your flight plan, I would say a bit more “overfly” over those buildings likely would have helped a lot. The density of images here is a bit variable.

In looking at your images, I’m seeing that they are slightly over-exposed if your focal point was the buildings rather than the field/forest around them. You’re getting highly-specular surfaces, which are hard to reconstruct.

Even if I duck everything, you’re still 100% clipped to white on a LOT of the building features.

Are you able to adjust your exposure parameters a bit?

Much closer I should say. :grinning:

Yes, I am able to adjust the exposure parameters of the camera and will take care of it next time. I will also make sure to overfly the buildings more.

I have a few doubts about some of the parameters you set:

  1. Do we really need to set --dem-resolution and --orthophoto-resolution to 1? Or does WebODM estimate the resolution correctly on its own?
  2. As suggested by @pierotofy, do we need to set --use-3dmesh?

What is the general rule of thumb for processing urban-area datasets that mostly contain buildings and rooftops?

  1. It does an excellent job in general, yes. I have it in my preset for edge cases.

  2. I tend to like the output better with it, but Piero does not recommend it for building DSMs as it can make edges messy.

I would say push feature and point-cloud quality to high, as a start.
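That rule of thumb might translate into a starting preset like the sketch below. Only the two quality flags come from the thread; everything else (including the `bump` helper and the escalation idea) is this sketch's assumption.

```python
# Hedged starting preset for urban, building-heavy datasets: raise feature
# and point-cloud quality first, keep the default 2.5D mesh (use-3dmesh
# left unset). Only the two quality flags come from the advice above.
urban_preset = {
    "feature-quality": "high",
    "pc-quality": "high",
    "dsm": True,  # assumption: usually wanted for rooftop work
}

# ODM's documented quality tiers, in ascending order.
QUALITY_TIERS = ["lowest", "low", "medium", "high", "ultra"]

def bump(preset, key):
    """Return a copy of the preset with `key` raised one quality tier
    (hypothetical helper for escalating when results are still soft)."""
    out = dict(preset)
    i = QUALITY_TIERS.index(out[key])
    out[key] = QUALITY_TIERS[min(i + 1, len(QUALITY_TIERS) - 1)]
    return out

print(bump(urban_preset, "pc-quality"))
```

If high still looks soft, bumping one flag at a time makes it easier to see which setting actually helped.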

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.