Why is the survey data always red in the quality report?

I have a small square region roughly 120 meters on a side over which I flew three back-to-back flights. This region is relatively flat and rural but includes two ponds, several buildings, some fencing, and a cell phone tower.

All flights were double-grid with front overlap 85% and side overlap 75%.

Mission #1: 230 feet, survey direction 0, camera pitch -90.
Mission #2: 245 feet, survey direction 20, camera pitch -85.
Mission #3: 245 feet, survey direction 160, camera pitch -85.
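
For context on why an "always 2" overlap reading is surprising, here is a back-of-the-envelope estimate (my own illustration, not from ODM; idealized, since real counts fall off at the survey edges and over water):

```python
# Back-of-the-envelope image count per ground point for a grid survey:
# with front overlap f, each photo advances (1 - f) of a frame forward;
# with side overlap s, each flight line advances (1 - s) of a frame
# sideways, so roughly 1 / ((1 - f) * (1 - s)) photos see an interior point.
def images_per_point(front=0.85, side=0.75, grids=1):
    return grids / ((1 - front) * (1 - side))

print(round(images_per_point()))           # ~27 for a single grid pass
print(round(images_per_point(grids=6)))    # ~160 for three double-grid missions
```

Even allowing for generous real-world losses, interior points should be seen far more than twice.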

I used a DJI Mavic Air 2 set to 48MP mode (images were 8000x6000) flying at 2.5 mph with auto-focus on. Cursory visual inspection of all 278 photos shows nothing out of the ordinary (i.e., no horizon shots, etc.) but some minor changes in lighting were observed due to clouds blocking the sun at different points in time. The images themselves have been uploaded to DroneDB: mathuin/20220414-center-area for the curious, and I’m happy to upload any other product that will help with the issue.

The images were processed with ODM 2.8.2 (tip of master branch as of April 5), compiled locally on a MacBook Pro (M1 Max, macOS 12.3.1) running Docker Desktop 4.7.0 with 8 processors and 48 GB of RAM allocated to Docker. The command line arguments were: --rerun-all --pc-quality high --ignore-gsd --pc-csv --pc-las --pc-ept --pc-tile --pc-classify --pc-geometric --dtm --dsm --dem-euclidean-map --orthophoto-png --orthophoto-kmz --tiles --build-overviews --time --pc-rectify. The ODM processing took a reasonable amount of time; here is the benchmark file:

dataset runtime: 5.299881 seconds
split runtime: 2.4e-05 seconds
merge runtime: 2e-05 seconds
opensfm runtime: 2709.533958 seconds
openmvs runtime: 2948.958996 seconds
odm_filterpoints runtime: 53.467494 seconds
odm_meshing runtime: 223.551807 seconds
mvs_texturing runtime: 1439.270936 seconds
odm_georeferencing runtime: 351.206476 seconds
odm_dem runtime: 400.56708 seconds
odm_orthophoto runtime: 41.066727 seconds
odm_report runtime: 60.617345 seconds
odm_postprocess runtime: 6.5e-05 seconds

The quality report shows all three missions on top of each other, as expected. The previews look reasonable: the orthophoto is nice, the DSM includes the trees and buildings while the DTM does not, and the big pond’s surface looks odd, which always seems to happen.

The survey data section is a big red blot, with empty spots for the big pond and the tops of trees. After looking at the source code (specifically ./stages/odm_report.py), I found that the survey data image is from ./opensfm/stats/overlap.png, which results from a gdaldem run against ./opensfm/stats/overlap.tif, which itself is generated from a pdal translate command using the point cloud file as input – and that’s where the trail stops.
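
To make that last step concrete, here is a toy numpy stand-in for what I understand the rasterization to be doing; the per-point view count and the max-per-cell rule are my guesses for illustration, not ODM’s actual code:

```python
import numpy as np

# Toy stand-in for the overlap.tif rasterization: assume each point in
# the cloud carries a "seen by N cameras" count, bin the points onto a
# grid, and keep the max count per cell.
def overlap_raster(xs, ys, counts, cell=1.0, size=4):
    grid = np.zeros((size, size), dtype=np.int32)
    for x, y, c in zip(xs, ys, counts):
        i, j = int(y // cell), int(x // cell)
        grid[i, j] = max(grid[i, j], c)
    return grid

# Three points, two sharing a cell: that cell keeps the larger count.
# If every point reported a count of 2, the whole raster (and the
# gdaldem color relief built from it) would come out red.
g = overlap_raster([0.2, 0.4, 2.5], [0.3, 0.6, 3.1], [2, 7, 5])
print(g[0, 0], g[3, 2])  # 7 5
```

Whatever the real pipeline does, the symptom is consistent with every point carrying a count of 2 before rasterization.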

Why is the value always 2? Shouldn’t three double-grid overflights result in mostly 5+ values? What am I doing wrong, and is it on the processing side or the data collection side?


What feature type or matcher type did you use?

I get 2 matches only when using binary features (AKAZE, ORB) and BRUTEFORCE matchers against any feature type, which seems correct, but it may be a bug, as you’ve been poking at.


I suspect it is, as all my recent drone imaging tasks using SIFT, many with better than 90% overlap, always show as red over the whole scene, i.e. a value of only 2.

A recent use of BRUTEFORCE for a 3D model of my cycling shoes, however, was nearly all 5+ (green) across the scene.


So what matcher should I try, and what’s the right command-line argument?



I can’t seem to replicate this on v2.8.5 and current WebODM Docker. Can anyone else please test/confirm that this problem no longer occurs?

SIFT, HAHOG, ORB, and AKAZE all show normal/proper Survey Data maps (and consistent between each other).


I tried again with the only significant differences being v2.8.5 (compiled from tip of master as of this morning) and Rancher Desktop with dockerd (Moby) instead of Docker Desktop. Still with 8 CPUs and 48GB of RAM, still with the same images and command-line arguments.

Still the same red splotch.


And you’re positive this data has sufficient overlap to get 3+ ratings?

Can you test with known-working data like Brighton Beach?


They’re the exact same files as at the top of the post. I will set up a job to run the Brighton Beach files with those same arguments and will post the survey data.


Still the red blotch!

What are you seeing in the report with these command-line options?

Alternatively, if you are getting good results and seeing more than red, what are your command-line options?


I don’t always get the red coverage map, but here, with tons of overlap, I did. This run was done only a week ago with build 61.


Here’s one instance:

auto-boundary: true, debug: true, feature-quality: ultra, min-num-features: 15000, pc-filter: 0, pc-quality: ultra, skip-3dmodel: true, skip-orthophoto: true, verbose: true

[screenshot: survey data map]

Here’s another with pretty poor survey + binary features/bruteforce:

auto-boundary: true, debug: true, feature-quality: ultra, feature-type: orb, min-num-features: 15000, pc-quality: ultra, skip-3dmodel: true, skip-orthophoto: true, verbose: true

[screenshot: survey data map]

So, now what I’m wondering is if it only occurs with excessively high overlap/counts per pixel, like an overflow or something…
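
A toy demonstration of that failure mode (pure speculation about the actual cause, and nothing ODM-specific):

```python
import numpy as np

# If a per-cell overlap count were ever squeezed into an unsigned
# 8-bit band, counts above 255 would wrap around modulo 256 and could
# land back in the "red" (low-overlap) range of the color relief.
counts = np.array([3, 30, 258], dtype=np.int32)
wrapped = counts.astype(np.uint8)
print(wrapped.tolist())  # [3, 30, 2]
```

Note that 258 wraps to exactly 2, which is at least suggestive given the symptom.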


Ah hah! Success! CLI args: --pc-quality high --min-num-features 15000 --feature-quality high --pc-filter 0 --auto-boundary

Now to figure out what’s different between them.
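
One quick way to start narrowing it down is a set diff of the two command lines from this thread (a throwaway sketch; flags() is just a hypothetical helper, not ODM tooling):

```python
# Compare the flag set of the original all-red run against the run that
# came out green, ignoring flag values, to see what only the failing
# run used.
def flags(cmdline):
    """Return the set of '--flag' tokens, ignoring their values."""
    return {tok for tok in cmdline.split() if tok.startswith("--")}

red = flags("--rerun-all --pc-quality high --ignore-gsd --pc-csv --pc-las "
            "--pc-ept --pc-tile --pc-classify --pc-geometric --dtm --dsm "
            "--dem-euclidean-map --orthophoto-png --orthophoto-kmz --tiles "
            "--build-overviews --time --pc-rectify")
ok = flags("--pc-quality high --min-num-features 15000 --feature-quality high "
           "--pc-filter 0 --auto-boundary")

# Flags unique to the failing run; the list includes --pc-classify and
# --pc-rectify, among others.
print(sorted(red - ok))
```

Disabling the unique flags one at a time (or in halves) would isolate the culprit.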


I’m finding that the ortho quality is good no matter whether the entire survey data map is red or green, so I suspect it is a problem with the generation of that file rather than a real problem with the data. If the map is green with red patches, then there is a genuine issue in the red areas: almost always where the trees are relatively dense, or at the edges where there genuinely is less image overlap.


In the example I posted, the bit with yellow and red is the surface of a pond, and still water doesn’t seem to translate well. The rest is the edges, and I try to make sure my flights include enough of a buffer so the majority of the area of interest is green.


Just had this crop up on a recent installation from the Windows binary, ODM ver 2.8.8, WebODM 1.9.15.


I wonder if it’s a bug caused by the --pc-classify flag.


Recently I have experienced similar results on jobs at two different sites: the survey area is totally red, even with near-perfect flight patterns. However, the overall quality of the ortho and other outputs is great.

I didn’t investigate further, but I notice this seems to happen when the point cloud is denser, i.e. when I increase parameters like min-num-features, point cloud quality, or the (obsolete?) depthmap resolution. It’s as if WebODM creates more points but can’t match those points across many of the pictures…
This hypothesis makes sense when we look at the count of “tracks”; however, the track statistics in the report could be wrong, because they don’t match my own calculations…
report-1-min.pdf (439.8 KB)


Task Output:

[INFO] DTM is turned on, automatically turning on point cloud classification
[INFO] Initializing ODM 2.8.7 - Mon Jul 18 23:46:08 2022
[INFO] ==============
[INFO] 3d_tiles: False
[INFO] auto_boundary: False
[INFO] boundary: {}
[INFO] build_overviews: False
[INFO] camera_lens: brown
[INFO] cameras: {}
[INFO] cog: True
[INFO] copy_to: None
[INFO] crop: 3
[INFO] debug: False
[INFO] dem_decimation: 1
[INFO] dem_euclidean_map: False
[INFO] dem_gapfill_steps: 3
[INFO] dem_resolution: 10.0
[INFO] depthmap_resolution: 640
[INFO] dsm: True
[INFO] dtm: True
[INFO] end_with: odm_postprocess
[INFO] fast_orthophoto: False
[INFO] feature_quality: high
[INFO] feature_type: sift
[INFO] force_gps: False
[INFO] gcp: None
[INFO] geo: /var/www/data/563b712d-7a8d-49d8-83bb-90a60a508ca9/gcp/geo.txt
[INFO] gps_accuracy: 2.0
[INFO] ignore_gsd: False
[INFO] matcher_neighbors: 0
[INFO] matcher_type: flann
[INFO] max_concurrency: 16
[INFO] merge: all
[INFO] mesh_octree_depth: 12
[INFO] mesh_size: 1000000
[INFO] min_num_features: 10000
[INFO] name: 563b712d-7a8d-49d8-83bb-90a60a508ca9
[INFO] no_gpu: True
[INFO] optimize_disk_space: False
[INFO] orthophoto_compression: DEFLATE
[INFO] orthophoto_cutline: False
[INFO] orthophoto_kmz: False
[INFO] orthophoto_no_tiled: False
[INFO] orthophoto_png: False
[INFO] orthophoto_resolution: 2.0
[INFO] pc_classify: True
[INFO] pc_copc: False
[INFO] pc_csv: False
[INFO] pc_ept: True
[INFO] pc_filter: 2.5
[INFO] pc_geometric: False
[INFO] pc_las: False
[INFO] pc_quality: medium
[INFO] pc_rectify: False
[INFO] pc_sample: 0
[INFO] pc_tile: False
[INFO] primary_band: auto
[INFO] project_path: /var/www/data
[INFO] radiometric_calibration: none
[INFO] rerun: None
[INFO] rerun_all: False
[INFO] rerun_from: None
[INFO] resize_to: -1
[INFO] rolling_shutter: False
[INFO] rolling_shutter_readout: 0
[INFO] sfm_algorithm: incremental
[INFO] skip_3dmodel: False
[INFO] skip_band_alignment: False
[INFO] skip_orthophoto: False
[INFO] skip_report: False
[INFO] sm_cluster: None
[INFO] smrf_scalar: 1.25
[INFO] smrf_slope: 0.15
[INFO] smrf_threshold: 0.5
[INFO] smrf_window: 18.0
[INFO] split: 999999
[INFO] split_image_groups: None
[INFO] split_overlap: 150
[INFO] texturing_data_term: gmi
[INFO] texturing_keep_unseen_faces: False
[INFO] texturing_outlier_removal_type: gauss_clamping
[INFO] texturing_skip_global_seam_leveling: False
[INFO] texturing_skip_local_seam_leveling: False
[INFO] texturing_tone_mapping: none
[INFO] tiles: False
[INFO] time: False
[INFO] use_3dmesh: False
[INFO] use_exif: False
[INFO] use_fixed_camera_params: False
[INFO] use_hybrid_bundle_adjustment: False
[INFO] verbose: False
[INFO] ==============


I also experienced this problem recently. In my testing, the --pc-rectify flag seems to cause the report’s overlap preview to show all red.
