DSM quality optimisation for very large images from an airplane

We are trying to use ODM for producing DSMs and true ortho imagery from stereo images taken from an airplane.
The images are taken with a Vexcel UltraCam Eagle Mark 3 from a height of ~4500 m; each image is 26460 x 17004 pixels.

A previous topic covered getting ODM to work with this camera and its large images, which we managed to do with your help (https://community.opendronemap.org/t/dsm-are-always-not-realistic-very-high-values-output-projection-always-utm/9192/6).

Currently, we are trying to improve the output quality using ODM options and flags, as the default settings do not come close to what commercial software (SimActive’s Correlator3D) can produce.

There are many options; we would like to know which are the most appropriate to experiment with. Here are screenshots of a reference DSM generated using Correlator3D:
[screenshot: reference DSM generated with Correlator3D]

and the generated DSM using ODM:
[screenshot: DSM generated with ODM]

I have the feeling it has to do with the relatively large images (450 megapixels).
The options I have identified so far that might need tweaking:

  • pc-quality
  • mesh-octree-depth
  • mesh-size
  • pc-geometric

Am I missing any? Do you have recommendations based on the image / sensor properties?
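For context, this is roughly how I assemble these options into command-line flags before a run. The flag names above are real ODM options; the helper function, the chosen values, and the boolean handling are my own illustrative sketch, not part of ODM:

```python
# Hypothetical helper: turn the option names discussed above into ODM
# command-line flags. Flag names are real ODM options; the values and
# the helper itself are illustrative only.
def odm_flags(**options):
    flags = []
    for name, value in options.items():
        flag = "--" + name.replace("_", "-")
        if value is True:  # boolean switches such as --pc-geometric take no value
            flags.append(flag)
        else:
            flags.extend([flag, str(value)])
    return flags

print(odm_flags(pc_quality="high",
                mesh_octree_depth=12,
                mesh_size=200000,
                pc_geometric=True))
# -> ['--pc-quality', 'high', '--mesh-octree-depth', '12',
#     '--mesh-size', '200000', '--pc-geometric']
```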


feature-quality may help, and it looks like you may want to tweak dem-resolution and/or orthophoto-resolution


Thanks!
The resolution of the DSMs shown above is actually the same: 15 cm/pixel.
I will run some experiments and report here later.


The DSM resolution does appear to be the same, but not the depthmap resolution.

I agree. I would start with:

  • feature-quality ultra
  • pc-quality ultra
  • mesh-octree-depth 14


Intermediate update: memory challenges

I am testing a dataset of 16 images on an AWS EC2 instance with 190 GB RAM and running out of memory during OpenSfM's detect_features step with max-concurrency 4. Reducing max-concurrency to 2 has helped so far.
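For a rough sense of scale, my own back-of-envelope arithmetic (not from the ODM docs) suggests that a single decoded 450 MP frame already occupies more than a gigabyte before any feature extraction begins:

```python
# Back-of-envelope memory estimate for one decoded UltraCam Eagle Mark 3
# frame, assuming 8-bit RGB in memory. Numbers are approximate.
width, height, channels = 26460, 17004, 3
bytes_per_image = width * height * channels   # 1,349,777,520 bytes
gb = bytes_per_image / 1024**3
print(f"one decoded frame: {gb:.2f} GiB")     # one decoded frame: 1.26 GiB
```

With max-concurrency 4, four frames in flight plus the extractor's own working copies (grayscale conversion, scale-space pyramid, etc.) multiply this several times over, which seems consistent with exhausting even 190 GB.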

The memory capping in OpenSfM does not seem to work entirely as intended:

[INFO]    running /code/SuperBuild/install/bin/opensfm/bin/opensfm detect_features "/datasets/project/opensfm"
2021-11-23 19:38:40,306 INFO: Capping memory usage to ~ 94390.828125 MB
2021-11-23 19:38:40,307 INFO: Expecting to process 54 images.
2021-11-23 19:38:40,309 INFO: Reading data for image 295900.tif (queue-size=0)
2021-11-23 19:38:48,840 INFO: Reading data for image 295901.tif (queue-size=1)
2021-11-23 19:38:48,841 INFO: Extracting ROOT_SIFT features for image 295900.tif
2021-11-23 19:38:52,157 DEBUG: Computing sift with threshold 0.066
2021-11-23 19:38:56,958 INFO: Reading data for image 295902.tif (queue-size=1)
2021-11-23 19:39:05,058 INFO: Reading data for image 295903.tif (queue-size=2)
2021-11-23 19:39:13,154 INFO: Reading data for image 295904.tif (queue-size=3)
2021-11-23 19:39:21,266 INFO: Reading data for image 295905.tif (queue-size=4)
2021-11-23 19:39:29,375 INFO: Reading data for image 295906.tif (queue-size=5)
2021-11-23 19:39:37,464 INFO: Reading data for image 295907.tif (queue-size=6)
2021-11-23 19:39:45,566 INFO: Reading data for image 305832.tif (queue-size=7)
2021-11-23 19:39:53,666 INFO: Reading data for image 305833.tif (queue-size=8)
2021-11-23 19:40:01,788 INFO: Reading data for image 305834.tif (queue-size=9)
2021-11-23 19:40:09,899 INFO: Reading data for image 305835.tif (queue-size=10)
2021-11-23 19:40:17,981 INFO: Reading data for image 305836.tif (queue-size=11)
2021-11-23 19:40:26,057 INFO: Reading data for image 305837.tif (queue-size=12)
2021-11-23 19:40:34,144 INFO: Reading data for image 305838.tif (queue-size=13)
2021-11-23 19:40:42,235 INFO: Reading data for image 305839.tif (queue-size=14)

Memory Usage: 174485/191298MB (91.21%)

To me it appears that the “Reading data for image …” step, which fills the queue, might add to the problem, since the images it reads are very large. There is currently no way to control the queue length: it is calculated automatically (https://github.com/OpenDroneMap/OpenSfM/blob/265/opensfm/actions/detect_features.py#L25).

Edit: I think the cap on memory usage (calculated as 50% of total RAM?) is applied to the reading of images into the queue, not to the feature extraction itself. In my case it is a bit of a waste to load all images up front, as feature extraction is far too slow to keep up, and I need all the RAM I can get for the extraction itself.
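If I read the linked code correctly, the queue length is derived from the memory cap and an estimated per-image footprint. A simplified sketch of that idea (my own reconstruction, not the actual OpenSfM source; the function name and numbers are illustrative):

```python
# Hypothetical reconstruction of how a reader-queue length could be
# derived from a memory cap and a per-image footprint estimate.
def queue_length(memory_cap_mb, per_image_mb, num_processes):
    # Buffer enough images to keep all workers fed, but never let the
    # buffered images alone exceed the memory cap.
    by_memory = int(memory_cap_mb // per_image_mb)
    return max(1, min(by_memory, 2 * num_processes))

# ~1.3 GB per decoded frame, ~94 GB cap, 2 extraction processes:
print(queue_length(94390, 1300, 2))   # -> 4
```

With a per-image estimate this far below the real working-set size of a 450 MP frame, the queue can grow much larger than the extractors can actually afford, which would match the queue-size=14 I see in the log above.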


The reconstruction stage in OpenSfM fails. Based on the output of the previous steps I don’t see any issues, but maybe you do?
Running the same dataset with feature-quality high, pc-quality medium, and mesh-octree-depth 11 worked fine.

Could it again be a memory issue?

2021-11-23 20:32:49,604 DEBUG: Matching 305837.tif and 305838.tif.  Matcher: FLANN (symmetric) T-desc: 22.924 T-robust: 0.291 T-total: 23.378 Matches: 151398 Robust: 150182 Success: True
2021-11-23 20:33:16,821 DEBUG: Matching 295902.tif and 295904.tif.  Matcher: FLANN (symmetric) T-desc: 45.607 T-robust: 0.151 T-total: 45.868 Matches: 90150 Robust: 87935 Success: True
2021-11-23 20:33:18,412 DEBUG: Matching 295906.tif and 295902.tif.  Matcher: FLANN (symmetric) T-desc: 28.754 T-robust: 0.033 T-total: 28.807 Matches: 22118 Robust: 19592 Success: True
2021-11-23 20:33:39,472 DEBUG: Matching 305833.tif and 305834.tif.  Matcher: FLANN (symmetric) T-desc: 20.658 T-robust: 0.265 T-total: 21.059 Matches: 139685 Robust: 138790 Success: True
2021-11-23 20:33:53,417 DEBUG: Matching 305837.tif and 305835.tif.  Matcher: FLANN (symmetric) T-desc: 36.293 T-robust: 0.177 T-total: 36.595 Matches: 112550 Robust: 111011 Success: True
2021-11-23 20:33:53,418 INFO: Matched 67 pairs (brown-brown: 67) in 858.59511862 seconds (12.814852524582095 seconds/pair).
[INFO]    running /code/SuperBuild/install/bin/opensfm/bin/opensfm create_tracks "/datasets/project/opensfm"
2021-11-23 20:34:27,708 INFO: reading features
2021-11-23 20:34:53,331 DEBUG: Merging features onto tracks
2021-11-23 20:35:21,268 DEBUG: Good tracks: 1527330
[INFO]    running /code/SuperBuild/install/bin/opensfm/bin/opensfm reconstruct "/datasets/project/opensfm"
2021-11-23 20:36:11,170 INFO: Starting incremental reconstruction
2021-11-23 20:36:42,129 INFO: 0 partial reconstructions in total.
[ERROR]   The program could not process this dataset using the current settings. Check that the images have enough overlap, that there are enough recognizable features and that the images are in focus. You could also try to increase the --min-num-features parameter. The program will now exit.

The reconstruction.json file in the opensfm/reports folder has the following:

{
    "num_candidate_image_pairs": 0,
    "reconstructions": [],
    "wall_times": {
        "compute_image_pairs": 30.957974203999584,
        "compute_reconstructions": 0.00018982900019182125
    },
    "not_reconstructed_images": [
        "295905.tif",
        "305832.tif",
        "295902.tif",
        "295907.tif",
        "295904.tif",
        "295901.tif",
        "305833.tif",
        "305835.tif",
        "305836.tif",
        "305839.tif",
        "305834.tif",
        "305838.tif",
        "295906.tif",
        "305837.tif",
        "295900.tif",
        "295903.tif"
    ]
}

It seems to indicate that no candidate image pairs came out of the matching?
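For what it's worth, the report can also be checked programmatically: a num_candidate_image_pairs of 0 together with an empty reconstructions list means reconstruction never received any pairs to start from. A small sketch, with the JSON from above inlined for illustration (truncated to two image names):

```python
import json

# Inspect an OpenSfM reconstruction report. Field names match the
# report snippet above; the inlined JSON is an abbreviated example.
report = json.loads("""{
    "num_candidate_image_pairs": 0,
    "reconstructions": [],
    "not_reconstructed_images": ["295900.tif", "295901.tif"]
}""")

if report["num_candidate_image_pairs"] == 0 and not report["reconstructions"]:
    print(f"no candidate pairs; {len(report['not_reconstructed_images'])} "
          "images left unreconstructed")
```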


Same issue as the one I mentioned in the other thread: you need to use the small tweak I made on the fix-adaptive-rotation-threshold branch.
