RAW photo preprocessing (to PNG or JPG) to maintain a high level of detail

Hello,

I have a dataset of 235 images captured with a Zenmuse P1. I processed the JPG files and was not very impressed with the quality of the resulting orthomosaic; I was OK with the DSM/DTM result, however.

I understand that RAW photo files need to be preprocessed into PNG, and I did so using an application called Luminar Neo. Each PNG file ranges from roughly 67,000 KB to 102,000 KB (67 MB to 102 MB).
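
For anyone who would rather script this conversion step than use Luminar Neo, here is a minimal sketch using rawpy and Pillow (both are my own assumptions, not what Luminar Neo does; folder names are placeholders). Note the EXIF caveat at the end, since ODM relies on the embedded GPS tags:

```python
# Minimal sketch: batch-convert P1 DNG files to 8-bit JPEGs for ODM.
# Assumes rawpy and Pillow are installed (pip install rawpy pillow);
# this is just one scripted alternative, not what Luminar Neo does.
from pathlib import Path
import rawpy
from PIL import Image

src = Path("raw_dng")    # placeholder: folder of DNG files
dst = Path("jpeg_out")   # placeholder: output folder for ODM
dst.mkdir(exist_ok=True)

for dng in sorted(src.glob("*.DNG")):
    with rawpy.imread(str(dng)) as raw:
        # Demosaic with the camera white balance, 8 bits per channel
        rgb = raw.postprocess(use_camera_wb=True, output_bps=8)
    Image.fromarray(rgb).save(dst / (dng.stem + ".jpg"), quality=95)

# Caveat: this drops the EXIF/GPS tags ODM needs for georeferencing.
# exiftool can copy them back from the originals, for example:
#   exiftool -tagsfromfile raw_dng/%f.DNG -gps:all -exif:all jpeg_out
```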

ODM seems to be pushing through and adding the PNG files to the reconstruction; however, I am approaching 12 hours of processing, and this is not an ideal workflow for me. I'm sure I missed something here, so I'm just hoping for some guidance on preprocessing RAW files. I would love to be able to print a large orthomosaic for a client that shows off the capabilities of the P1 camera at 45 megapixels and my estimated GSD of <1 cm.

Thanks in advance

Just an update: processing failed at 12 hrs 9 min. Pretty disappointing, but I'm sure it's just a learning curve. I'd appreciate any feedback.
Thanks

I've processed 48 MP JPGs converted from Mini 3 Pro RAW without issues.

What are your settings?

[INFO] DTM is turned on, automatically turning on point cloud classification
[INFO] Initializing ODM 3.0.1 - Sun May 07 12:35:51 2023
[INFO] ==============
[INFO] 3d_tiles: False
[INFO] auto_boundary: True
[INFO] auto_boundary_distance: 0
[INFO] bg_removal: False
[INFO] boundary: {}
[INFO] build_overviews: False
[INFO] camera_lens: brown
[INFO] cameras: {}
[INFO] cog: True
[INFO] copy_to: None
[INFO] crop: 3
[INFO] dem_decimation: 1
[INFO] dem_euclidean_map: False
[INFO] dem_gapfill_steps: 3
[INFO] dem_resolution: 1.0
[INFO] dsm: True
[INFO] dtm: True
[INFO] end_with: odm_postprocess
[INFO] fast_orthophoto: False
[INFO] feature_quality: ultra
[INFO] feature_type: sift
[INFO] force_gps: False
[INFO] gcp: None
[INFO] geo: None
[INFO] gps_accuracy: 10
[INFO] ignore_gsd: False
[INFO] matcher_neighbors: 0
[INFO] matcher_type: flann
[INFO] max_concurrency: 20
[INFO] merge: all
[INFO] mesh_octree_depth: 11
[INFO] mesh_size: 300000
[INFO] min_num_features: 10000
[INFO] name: 8bf99c30-b90f-4bbf-ab1d-cfc691e74a93
[INFO] no_gpu: False
[INFO] optimize_disk_space: False
[INFO] orthophoto_compression: DEFLATE
[INFO] orthophoto_cutline: False
[INFO] orthophoto_kmz: False
[INFO] orthophoto_no_tiled: False
[INFO] orthophoto_png: False
[INFO] orthophoto_resolution: 1.0
[INFO] pc_classify: True
[INFO] pc_copc: False
[INFO] pc_csv: False
[INFO] pc_ept: True
[INFO] pc_filter: 2.5
[INFO] pc_las: False
[INFO] pc_quality: ultra
[INFO] pc_rectify: False
[INFO] pc_sample: 0
[INFO] pc_tile: False
[INFO] primary_band: auto
[INFO] project_path: C:\WebODM\resources\app\apps\NodeODM\data
[INFO] radiometric_calibration: none
[INFO] rerun: None
[INFO] rerun_all: False
[INFO] rerun_from: None
[INFO] rolling_shutter: False
[INFO] rolling_shutter_readout: 0
[INFO] sfm_algorithm: incremental
[INFO] skip_3dmodel: True
[INFO] skip_band_alignment: False
[INFO] skip_orthophoto: False
[INFO] skip_report: False
[INFO] sky_removal: False
[INFO] sm_cluster: None
[INFO] sm_no_align: False
[INFO] smrf_scalar: 1.25
[INFO] smrf_slope: 0.15
[INFO] smrf_threshold: 0.5
[INFO] smrf_window: 18.0
[INFO] split: 999999
[INFO] split_image_groups: None
[INFO] split_overlap: 150
[INFO] texturing_keep_unseen_faces: False
[INFO] texturing_skip_global_seam_leveling: False
[INFO] texturing_skip_local_seam_leveling: False
[INFO] tiles: False
[INFO] use_3dmesh: False
[INFO] use_exif: False
[INFO] use_fixed_camera_params: False
[INFO] use_hybrid_bundle_adjustment: False
[INFO] ==============
[INFO] Running dataset stage
[INFO] Loading dataset from: C:\WebODM\resources\app\apps\NodeODM\data\8bf99c30-b90f-4bbf-ab1d-cfc691e74a93\images
[INFO] Loading 235 images
[INFO] Wrote images database: C:\WebODM\resources\app\apps\NodeODM\data\8bf99c30-b90f-4bbf-ab1d-cfc691e74a93\images.json
[INFO] Found 235 usable images
[INFO] Parsing SRS header: WGS84 UTM 11N
[INFO] Finished dataset stage
[INFO] Running split stage
[INFO] Normal dataset, will process all at once.
[INFO] Finished split stage
[INFO] Running merge stage
[INFO] Normal dataset, nothing to merge.
[INFO] Finished merge stage
[INFO] Running opensfm stage
[INFO] Maximum photo dimensions: 8192px
[INFO] Photo dimensions for feature extraction: 8192px
[INFO] CUDA drivers detected
[WARNING] Image size (8192x5460px) would not fit in GPU memory, falling back to CPU
[INFO] Altitude data detected, enabling it for GPS alignment
[INFO] ['use_exif_size: no', 'flann_algorithm: KDTREE', 'feature_process_size: 8192', 'feature_min_frames: 10000', 'processes: 20',

Also, you said that you processed 48 MP photos as JPGs. I was attempting to process PNG files…

Maybe that's my issue? I read that ODM supports PNG files, but maybe there is a file size limit I missed.

I am exporting the DNG files to JPEG now and will try again.

Try that, and I would use HAHOG instead of SIFT, as HAHOG is much faster and just as accurate.

If you captured the pictures as JPEG, there is not much point in converting to PNG or anything else afterwards.
When I read the title, I thought you meant preprocessing like white balance, color curves, etc.
I have had cases where adjusting the color curves, or even just the contrast, helped with the reconstruction. I often use XnView for that; it has very powerful batch functions to process tons of images with just a few clicks.
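
If you prefer scripting that kind of batch adjustment, a rough Pillow equivalent of a simple contrast tweak might look like this (folder names and the 1.15 factor are placeholders; XnView's batch converter does this sort of thing through its GUI):

```python
# Rough sketch of a batch contrast tweak, similar in spirit to XnView's batch
# converter. Assumes Pillow; folder names and the contrast factor are placeholders.
from pathlib import Path
from PIL import Image, ImageEnhance

src = Path("jpeg_in")
dst = Path("jpeg_contrast")
dst.mkdir(exist_ok=True)

for jpg in sorted(src.glob("*.jpg")):
    img = Image.open(jpg)
    exif = img.info.get("exif")  # raw EXIF bytes, if present
    img = ImageEnhance.Contrast(img).enhance(1.15)  # +15% contrast, tune to taste
    save_kwargs = {"quality": 95}
    if exif:
        save_kwargs["exif"] = exif  # keep the GPS/EXIF tags ODM relies on
    img.save(dst / jpg.name, **save_kwargs)
```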

Otherwise, be aware that 40 MP images can swallow huge amounts of memory during processing. If you did a good job acquiring the images, then at that resolution, and depending on your flight height, you should also not need to set pc_quality to ultra.
You did not mention your machine specs, but be prepared to have upwards of 150 GB of memory (physical plus pagefile) available for such a job.
Processing time also increases significantly. Maybe start with the default settings and, if something comes out quirky, step up the processing parameters.
With a well-captured dataset at that resolution, the default settings should already give you impressive results.

Maybe also run an update; you are still using a slightly older version of ODM.
The developers are pushing the software to new heights these days :cowboy_hat_face:

Hi Shiva, again, I captured the images in both RAW and JPEG. My goal was to convert the RAW images to PNG once I read that ODM can handle PNG files; I was under the impression that the larger file size would yield a higher-quality orthomosaic after processing.

I just updated my ODM version; appreciate the heads-up on that one! I didn't realize the developers were still at it. Glad my contributions are helping, haha.

Maybe you can steer me in the right direction here. I am aiming for a high-quality orthomosaic and, more importantly, a high relative-accuracy DEM/DSM. The P1 camera offers a smart oblique image capture function as well as nadir image capture with an elevation optimization route at the end (this flies the drone on a single flight path diagonal to the mapped routes). Would you suggest that one will provide better elevation data than the other? I assume the smart oblique feature would produce a better 3D point cloud, but I'm looking to hear from others while I work through it myself.

Thanks

Thanks APOS80, I don't have much knowledge of HAHOG vs. SIFT. Did you figure that out over time through trial and error?
I attached a screenshot comparing the results of processing the original JPEGs vs. processing JPEGs converted from the RAW DNG files, in case you're curious.
Just trying to build a solid workflow here that gets undistorted, high-quality results.

Yes, I've tried many settings and cameras.

I use HAHOG with FLANN, 24 neighbours, 15,000 features, …

SIFT is slow but very good; ORB is fast but unreliable; and HAHOG is as good as SIFT from what I have noticed, and much faster than SIFT, though not as fast as ORB.

Sweet, I’ll give that a go and compare on my end. Thank you.

Try changing the orthophoto resolution from 1.0 to 0.5 or lower.

Personally, I'd use the original JPEGs (along with the ortho resolution setting) before trying other things.

By now I fly all of my missions oblique, with a 70-85° angle.
Orthophotos turn out great, and the lower the angle, the more elevation/height detail is visible and reconstructed in the model.
Only when I fly multiple missions over the same object do I use nadir images, usually for the higher-elevation flights, with additional oblique missions at lower heights. But I only do that when I want to create a detailed 3D model of a house or something.

You did not describe exactly what your concern with the orthophoto/orthomosaic was.
Straight lines especially, like roof edges, can easily distort when there is not enough overlap. Flying too low can also cause trouble with the stitching.

If you have some screenshots that display the areas of concern, it might help figure out what to improve.

But again, watch out with those settings. You are using high-resolution images, and setting something to high or ultra can easily overwhelm even powerful computers or cause processes to run for days.
When I started out with WebODM and played around with the settings, I sometimes waited four or more days, just to find out that I didn't understand the setting I was tampering with :sweat_smile:

APOS80 and Shiva:

Thanks for the heads-up and for helping me out on this. These photos were captured at 100 ft altitude, mapped with nadir images and elevation optimization turned on, using the M300. I ended up using JPEGs converted from the RAW DNG files; I use Luminar Neo to do this.
These are the settings I used in ODM:
DSM/DTM: true
Feature type: hahog
Force GPS: true
Matcher neighbors: 24
Mesh size: 300,000
Min. # features: 15,000
Orthophoto resolution: 0.5
PC quality: high
Skip 3D model: true
The result gave me an average GSD of 1.22 cm and around 66,000,000 reconstructed points.
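
For reference, those settings map onto standard ODM task options; here is a minimal sketch of submitting them to a local NodeODM instance with the PyODM client (the host, port, and image glob are assumptions):

```python
# Minimal sketch: the settings above expressed as NodeODM task options via
# PyODM (pip install pyodm). Host/port and the image path are assumptions.
from glob import glob
from pyodm import Node

node = Node("localhost", 3000)
images = glob("jpeg_out/*.jpg")

task = node.create_task(images, {
    "dsm": True,
    "dtm": True,
    "feature-type": "hahog",
    "force-gps": True,
    "matcher-neighbors": 24,
    "mesh-size": 300000,
    "min-num-features": 15000,
    "orthophoto-resolution": 0.5,  # cm/px
    "pc-quality": "high",
    "skip-3dmodel": True,
})
task.wait_for_completion()
task.download_assets("./results")
```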

I found that the roof lines have been straightened, and overall the map is a closer match to the satellite image. I still feel confused about how to choose the correct settings in ODM to obtain a perfect result. Shouldn't my result match the satellite layer better, since I'm using RTK?

For the next round, I will be using a smart oblique dataset, also captured at 100 ft altitude, with the same settings recommended to me earlier.

The satellite imagery isn't very accurate in placement; your output is much more accurate.

You can also try making an orthophoto from the point cloud in CloudCompare. There you can also choose to show more of the ground relative to the trees by basing the colour on the lowest Z point.

To get higher resolution, I would suggest flying lower. When I aim for a 10 mm GSD, I fly lower than strictly needed, just to be sure.
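
For planning purposes, the theoretical GSD follows directly from flight height, sensor width, focal length, and image width. Here is a quick sanity-check sketch, assuming a P1 with the 35 mm lens (full-frame sensor, roughly 35.9 mm wide, 8192 px across); swap in your own lens and sensor numbers:

```python
# GSD = flight_height * sensor_width / (focal_length * image_width)
# Defaults assume a P1 with the 35 mm lens (full-frame, ~35.9 mm wide, 8192 px).
def gsd_cm(height_m, sensor_width_mm=35.9, focal_mm=35.0, image_width_px=8192):
    return height_m * 100.0 * sensor_width_mm / (focal_mm * image_width_px)

def height_for_gsd_m(target_gsd_cm, sensor_width_mm=35.9, focal_mm=35.0, image_width_px=8192):
    return target_gsd_cm * focal_mm * image_width_px / (sensor_width_mm * 100.0)

print(gsd_cm(30.5))           # ~0.38 cm/px at 100 ft (theoretical, nadir)
print(height_for_gsd_m(1.0))  # ~80 m flight height for a 1 cm theoretical GSD
```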

I feel like 100 ft is the lowest I'm comfortable with in neighborhoods with an M300. There was a palm tree on the edge of my map that I barely cleared by 15 ft, too. That's just how it is in Los Angeles. I do want to try the smart oblique feature with a 75-80 degree angle, though; I typically leave it at 45 degrees.
Thanks for the tip to use CloudCompare, I'll try that as well.

Hey, I just wanted to share the quality report that ODM produced for me and get your feedback. Take into consideration that I was only using a D-RTK 2 base station and the M300, with no ground control points. Surveyors are beginning to ask me accuracy questions, and I'm not sure how to justify my deliverables. The boundary points I request from them are intended to be placed on the topography report I am creating. How can I guarantee that their boundary points will align with my results?

Quality report for review

I think the best approach is to compare measured points (GNSS/robotic total station) with your output.
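
A minimal sketch of that comparison, assuming the surveyed check points and the coordinates you read off the orthomosaic/DSM are already in the same projected CRS (the numbers below are placeholders):

```python
# Check-point comparison: RMSE between surveyed coordinates and the same points
# measured on the ODM orthomosaic/DSM. Placeholder numbers; both sets must be
# paired point-for-point and in the same projected CRS.
import numpy as np

surveyed = np.array([[372110.42, 3768220.15, 101.32],   # E, N, Z from GNSS/total station
                     [372145.88, 3768251.07, 101.88]])
measured = np.array([[372110.45, 3768220.10, 101.40],   # same points read from the outputs
                     [372145.80, 3768251.11, 101.95]])

diff = measured - surveyed
rmse_h = np.sqrt(np.mean(np.sum(diff[:, :2] ** 2, axis=1)))  # horizontal RMSE
rmse_v = np.sqrt(np.mean(diff[:, 2] ** 2))                   # vertical RMSE
print(f"horizontal RMSE: {rmse_h:.3f} m, vertical RMSE: {rmse_v:.3f} m")
```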

I have found it necessary to understand coordinate systems and which one is conventionally used in my area. Then you need to understand which coordinate system your base station and drone are configured to use. If necessary, transform the data into the correct coordinate system, and then reference that coordinate system in your deliverables.
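
If that transformation ends up in a script, a minimal pyproj sketch is below; the EPSG codes are only examples (WGS84 geographic to the WGS84 UTM 11N zone from the log above), so substitute whatever CRS and vertical datum your deliverables require:

```python
# Minimal reprojection sketch with pyproj (pip install pyproj). EPSG codes and
# the test point are examples only; use the CRS your surveyor specifies.
from pyproj import Transformer

transformer = Transformer.from_crs("EPSG:4326", "EPSG:32611", always_xy=True)
lon, lat = -118.2437, 34.0522                  # placeholder point (Los Angeles)
easting, northing = transformer.transform(lon, lat)
print(f"{easting:.2f} E, {northing:.2f} N (UTM 11N)")
```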

The horizontal and vertical components of coordinate systems often vary depending on where you live. There has been a lot of modernization of coordinate systems in recent decades. In my province, there are two different vertical datums in use, depending on client needs.

And yes, confirmation with a few check points is usually essential unless your required tolerances are fairly loose.
