Image pre-processing

Is anyone doing image pre-processing and/or upscaling? What can be expected from pre-processing?

Please share your experience and advice on software, settings, etc…


Right now I’m using XnConvert with these settings:
[screenshot: XnConvert settings]

On one dataset, I used Affinity Photo to batch-correct a wildly changing white balance.
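For anyone without Affinity Photo, a crude gray-world correction can be scripted with ImageMagick. A rough sketch (untested here, and not what Affinity does internally; the DJI_*.jpg pattern and wb_ prefix are just placeholders):

for f in DJI_*.jpg; do
  # per-channel means and overall mean, as fractions 0..1 (ImageMagick 7; use "convert" on IM6)
  read m r g b <<< "$(magick "$f" -format "%[fx:mean] %[fx:mean.r] %[fx:mean.g] %[fx:mean.b]" info:)"
  # scale each channel so its mean matches the overall mean ("gray world"); crude, and can clip highlights
  magick "$f" \
    -channel R -evaluate multiply "$(awk "BEGIN{print $m/$r}")" \
    -channel G -evaluate multiply "$(awk "BEGIN{print $m/$g}")" \
    -channel B -evaluate multiply "$(awk "BEGIN{print $m/$b}")" \
    +channel "wb_$f"
done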

So far, it seems to help pretty significantly with troublesome datasets, but it certainly isn’t a panacea. In a few rare cases, it actually leads to FEWER images being used in the reconstruction, though the images chosen fall in more contiguous blocks with fewer gaps overall; when a gap does occur, it’s slightly larger.


Are you using the default settings?

As far as I understand, best results come from increasing dynamic range and contrast. Is that right?

How are results impacted?

It seems contrast causes a bigger impact, but mostly on the sparse point cloud.

[screenshot: sparse point cloud comparison]

I have created two datasets, one with increased dynamic range and the other with increased contrast. I deliberately exaggerated these settings to see if they would produce dramatic changes.

[image: original vs. enhanced comparison]
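If anyone wants to reproduce this kind of A/B test, something like the following ImageMagick batch commands would do it (the values are illustrative, not the exact settings I used):

mkdir -p stretched contrast
# variant 1: stretch levels to the full range (the "dynamic range" variant)
magick mogrify -path stretched/ -auto-level *.jpg
# variant 2: deliberately exaggerated mid-tone contrast boost
magick mogrify -path contrast/ -sigmoidal-contrast 8x50% *.jpg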

I need a problematic dataset to do more experimentation.


These particular adjustments in XnConvert aren’t tunable, so yeah, defaults.

I think it comes down to having a pretty well-equalized histogram and good local contrast, so levels/contrast and sharpness/focus adjustments.
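In ImageMagick terms, that combination might look something like this (a sketch with untuned values; -clahe needs a fairly recent ImageMagick 7):

magick in.jpg -clahe 50x50%+128+3 out_clahe.jpg    # local contrast (tile size, bins, clip limit)
magick in.jpg -equalize out_equalized.jpg          # global histogram equalization
magick in.jpg -unsharp 0x2+0.5+0 out_sharp.jpg     # mild sharpening for edges/focus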

With JPEG you’re not going to be able to do much with dynamic range expansion beyond clipping black/white and lowering overall scene contrast. There simply isn’t enough information in the input files, unless they were captured in something like ProTune or another log-gamma colorspace meant to maximize dynamic range within JPEG’s lossy 8-bits-per-channel format.

With 12-bit+ TIFF or PNG… Who knows.
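To make the JPEG-safe moves above concrete, this is roughly what I mean in ImageMagick terms (illustrative values only):

magick in.jpg -contrast-stretch 2%x1% out_clipped.jpg   # clip darkest 2% / brightest 1%, stretch the rest
magick in.jpg +sigmoidal-contrast 4x50% out_flat.jpg    # inverse sigmoid: lowers overall scene contrast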

I believe that even with RAW, increasing dynamic range will reduce contrast and negatively impact the resulting point cloud.

Could you share one of those problematic datasets?


Here’s one:


Error: Unauthorized 😔


I can share my limited experience. I am running on a fairly primitive system: a decent CPU but only 16 GB of RAM. I have been experimenting with one dataset acquired with a P3A drone.
The dataset details are here:
415 photos
Processing Node: node-odm-1 (auto)
Options: fast-orthophoto: true
Average GSD: 1.87 cm
Area: 243,875.85 m²
Reconstructed Points: 675,095
This runs in just under 60 minutes.
I have uploaded the result here:
https://tiles.openaerialmap.org/6139216418b2fb000574fb47/0/6139216418b2fb000574fb48/{z}/{x}/{y}

The original pictures look a bit overexposed and flat, so I wanted to enhance the colours before processing them into an orthophoto.
The method I use is the redist script with ImageMagick. The exact command I use is:
redist -s gaussian 30,30,30 DJI_0594.jpg DJI_0594_gaussian.jpg

The photo looks sharp and nice on the display afterwards. I then ran this on the whole set of 415 images and uploaded to WebODM with exactly the same settings, and ran out of memory after 5 hours of number crunching. Obviously the images have lost some necessary information.

As I am not an expert (or even a novice, for that matter) on image processing, I have probably murdered the dataset with this pre-processing. Is there a “proper way” to boost colours and balance before processing with WebODM? I am processing each image individually; should I, for instance, run through the whole dataset to detect some kind of normalization factor that I can apply to all images to allow the dataset to “hang together”?
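Something like this is what I have in mind: compute one brightness target for the whole set, then pull each image toward it so the set stays consistent (an untested sketch, assuming ImageMagick 7 and awk):

# dataset-wide mean brightness: one number for the whole set
target=$(magick DJI_*.jpg -format "%[fx:mean]\n" info: | awk '{s+=$1; n++} END{print s/n}')
for f in DJI_*.jpg; do
  m=$(magick "$f" -format "%[fx:mean]" info:)
  # scale each image so its mean brightness matches the shared target
  magick "$f" -evaluate multiply "$(awk "BEGIN{print $target/$m}")" "norm_$f"
done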

Hope this info is useful for others!

Thanks

Stupid, stupid, stupid! I just realized that the conversion strips the images of their EXIF information, including the location, causing the whole thing to melt down! Experimentation continues!
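In case it saves someone else the same mistake: the metadata (GPS included) can be copied back from the originals with exiftool. A sketch, assuming matching filenames in originals/ and processed/ (hypothetical directory names):

for f in processed/*.jpg; do
  # copy every tag group from the untouched original back into the processed image
  exiftool -TagsFromFile "originals/$(basename "$f")" -all:all -overwrite_original "$f"
done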


So far it seems that sharpness/focus/local contrast are most important for enhancing matches. Color helps, but seems secondary.

Overall image contrast/levels also seem important (likely because good levels/contrast adjustments should increase local contrast).

Color doesn’t matter much because OpenSfM only uses one band for matching.
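A quick way to eyeball roughly what the matcher is working with (not necessarily OpenSfM’s exact conversion) is to preview a grayscale version:

magick in.jpg -colorspace Gray preview_gray.jpg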

I’ve long thought that folding an IHS color-space rotation into the SfM feature-extraction step would really help with more difficult datasets, but it would require some careful thought about how to handle the duplicate/transformed images.
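As a cheap way to play with the idea outside the pipeline, the IHS bands can be split into separate images and run as their own reconstructions (untested sketch; with ImageMagick’s HSI channel ordering, band_2.png should be the intensity channel):

magick in.jpg -colorspace HSI -separate band_%d.png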
