I have a dataset of 235 images captured with a Zenmuse P1. I processed the JPG files and was not very impressed with the quality of the resulting orthomosaic. I was OK with the DSM/DTM results, however.
I understand that RAW photo files need to be preprocessed to PNG, and I did so using an application called Luminar Neo. The file size of a single PNG ranges from 67,000 KB to 102,000 KB (67 MB to 102 MB).
ODM seems to be pushing through and adding the PNG files to the reconstruction, but I am approaching 12 hours of processing and this is not an ideal workflow for me. I'm sure I missed something here; just hoping for some guidance on preprocessing RAW files. I would love to print a large orthomosaic for a client that shows off the capabilities of the P1 camera at 45 megapixels and my estimated GSD of <1 cm.
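As a sanity check on a GSD target like that, the standard formula can be sketched in a few lines of Python. The P1 numbers used below (35.9 mm full-frame sensor width, 8192 px image width, 35 mm lens) are illustrative assumptions, not values from this thread; substitute your own lens and flight height.

```python
# Standard GSD (ground sample distance) formula:
#   GSD = (sensor_width * flight_height) / (focal_length * image_width)
# Sensor and lens numbers below are assumptions for illustration.

def gsd_cm(sensor_width_mm, flight_height_m, focal_length_mm, image_width_px):
    """Ground sample distance in centimetres per pixel."""
    gsd_m = (sensor_width_mm / 1000.0) * flight_height_m / (
        (focal_length_mm / 1000.0) * image_width_px
    )
    return gsd_m * 100.0

# e.g. 100 ft (~30.5 m) flight height with an assumed 35 mm lens:
print(round(gsd_cm(35.9, 30.5, 35.0, 8192), 2))  # prints 0.38
```

So under these assumptions, a sub-centimetre GSD at 100 ft is plausible; the GSD actually reported by ODM will differ with the real lens and terrain.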
If you captured the pictures as JPEG, there is not much point in converting them to PNG (or anything else) afterwards; the detail lost to JPEG compression cannot be recovered.
When I read the title, I thought you meant preprocessing like white balance, color curves, etc.
I have had cases where adjusting the color curves, or even just the contrast, helped with the reconstruction. I often use XnView for that; it has very powerful batch functions for processing tons of images with just a few clicks.
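For anyone who prefers a scripted pipeline over XnView's GUI, the idea behind a batch contrast adjustment can be sketched as a simple linear contrast stretch. This is a conceptual, stdlib-only sketch that operates on a flat list of 8-bit values; a real pipeline would apply the same mapping per channel to decoded image data (e.g. via Pillow or OpenCV).

```python
# Conceptual sketch of a linear contrast stretch, the kind of batch
# adjustment an image tool applies.  Input: flat list of 8-bit pixel
# values; output: values remapped so the darkest becomes out_min and
# the brightest becomes out_max.

def contrast_stretch(pixels, out_min=0, out_max=255):
    lo, hi = min(pixels), max(pixels)
    if hi == lo:                      # flat image: nothing to stretch
        return list(pixels)
    scale = (out_max - out_min) / (hi - lo)
    return [round(out_min + (p - lo) * scale) for p in pixels]

print(contrast_stretch([50, 100, 150, 200]))  # prints [0, 85, 170, 255]
```

Note that any such tonal edit should be applied identically to every image in the set, so the photos stay radiometrically consistent for matching.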
Otherwise, be aware that 40 MP images can swallow huge amounts of memory while processing. If you did a good job acquiring the images, then at that resolution, and depending on your flight height, you should also not need to set pc-quality to ultra.
You did not mention your machine specs, but be prepared to have upwards of 150 GB of memory (physical plus pagefile) available for such a job.
Processing time also increases significantly. Maybe start with the default settings, and if something comes out quirky, step up the processing parameters.
With a well-taken dataset at that resolution, the default settings should already give you impressive results.
Maybe also run an update; you are still using a slightly older version of ODM.
The developers are pushing the software to new heights these days.
Hi Shiva, again, I captured images in both RAW and JPEG. My goal was to convert the RAW images to PNG once I read that ODM can handle PNG files. I was under the impression that the larger file size would enable a higher-quality orthomosaic after processing.
I just updated my ODM version, appreciate the heads up on that one! Didn’t realize the developers were still at it. Glad my contributions are helping haha.
Maybe you can steer me in the right direction here. I am aiming for a high-quality orthomosaic and, more importantly, a high relative-accuracy DEM/DSM. The P1 camera provides a smart oblique image-capture function as well as nadir image capture with an elevation-optimization route at the end (this flies the drone on a single flight path diagonal to the mapped routes). Would you suggest that one will provide better elevation data than the other? I assume the smart oblique feature would produce a better 3D point cloud, but I'm looking to hear from others while I work through it myself.
Thanks APOS80, I don't have much knowledge of HAHOG vs. SIFT. Did you figure that out over time through trial and error?
I attached a screenshot comparing the results of processing the original JPEGs vs. JPEGs converted from the RAW DNG files, in case you're curious.
Just trying to create a solid workflow here that gets undistorted and high quality results.
By now I fly all of my missions oblique, at a 70–85° angle.
Orthophotos turn out great, and the lower the angle, the more elevation/height detail is visible and reconstructed in the model.
Only when I fly multiple missions over the same object do I use nadir images, usually for the higher-elevation flights, with additional oblique missions at lower heights. But I only do that when I want to create a detailed 3D model of a house or something.
You did not describe exactly what your concern with the orthophoto/orthomosaic was.
Straight lines especially, like roof edges, can easily distort when there is not enough overlap. Flying too low can also cause trouble with the stitching.
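As a rough pre-flight check on overlap, the image footprint and forward overlap can be estimated from the flight parameters. The numbers below (24 mm sensor height, 35 mm lens, photo every 4 m along track) are example assumptions, not values from this thread.

```python
# Pre-flight overlap estimate from flight geometry.  Sensor and lens
# numbers are illustrative assumptions; plug in your own.

def footprint_m(sensor_dim_mm, focal_mm, height_m):
    """Ground footprint of one image dimension, in metres."""
    return sensor_dim_mm / focal_mm * height_m

def forward_overlap(sensor_height_mm, focal_mm, height_m, spacing_m):
    """Fraction of forward overlap given shutter spacing along track."""
    fp = footprint_m(sensor_height_mm, focal_mm, height_m)
    return max(0.0, 1.0 - spacing_m / fp)

# Assumed example: 24 mm sensor height, 35 mm lens, 30.5 m AGL (~100 ft),
# one photo every 4 m along track:
print(round(forward_overlap(24.0, 35.0, 30.5, 4.0), 2))  # prints 0.81
```

The same formula with the sensor width and the lane spacing gives the side overlap; for reconstruction work, values around 80 %/70 % front/side are a common starting point.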
If you have some screenshots that display the areas of concern, it might help figure out what to improve.
But again, watch out with those settings. You are using high-resolution images, and setting something to high or ultra can easily overwhelm even powerful computers or cause processes to run for days.
When I started out with WebODM and played around with the settings, I sometimes waited four or more days, only to find that I didn't understand the setting I was tampering with.
Thanks for the heads up and for helping me out on this. These photos were captured at 100 ft altitude, mapped with nadir images and elevation optimization turned on, using the M300. I ended up using JPEGs converted from the RAW DNG files; I use Luminar Neo to do this.
These are the settings I used in ODM:
DSM/DTM: true
Feature type: hahog
Force GPS: true
Matcher neighbors: 24
Mesh size: 300,000
Min. # features: 15,000
Orthophoto resolution: 0.5
PC quality: high
Skip 3D model: true
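For reference, those settings map onto ODM's command-line flags roughly as follows. This is just a sketch that assembles the flag list; the actual invocation (docker image, project path) depends on your setup and is shown here with placeholders.

```python
# Sketch: the settings above expressed as ODM command-line flags.
# Flag names follow the ODM CLI; the docker invocation printed at the
# end is a placeholder, not a tested command.
import shlex

odm_flags = {
    "--dsm": None,                     # flags with value None are bare switches
    "--dtm": None,
    "--feature-type": "hahog",
    "--force-gps": None,
    "--matcher-neighbors": 24,
    "--mesh-size": 300000,
    "--min-num-features": 15000,
    "--orthophoto-resolution": 0.5,
    "--pc-quality": "high",
    "--skip-3dmodel": None,
}

args = []
for flag, value in odm_flags.items():
    args.append(flag)
    if value is not None:
        args.append(str(value))

print("docker run -ti --rm opendronemap/odm " + shlex.join(args))
```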
The result gave me an average GSD of 1.22 cm and around 66,000,000 reconstructed points.
I found that the roof lines have been straightened, and overall the map is a closer match to the satellite imagery. I'm still confused about how to choose the correct settings in ODM to obtain a perfect result, though. Shouldn't my result match the satellite layer better, since I'm using RTK?
For the next round I will be using a smart oblique dataset, also captured at 100 ft altitude, with the same settings recommended to me earlier.
100 ft is the lowest I feel comfortable flying in neighborhoods with an M300. There was a palm tree on the edge of my map that I barely cleared by 15 ft, too; that's just how it is in Los Angeles. I do want to try the smart oblique feature with a 75–80° angle, though; I typically leave it at 45°.
Thanks for the tip to use CloudCompare, I'll try that as well.
Hey, I just wanted to share the quality report that ODM produced for me and get your feedback. Take into consideration that I was only using a D-RTK 2 base station and an M300, with no ground control points. Surveyors are beginning to ask me accuracy questions, and I'm not sure how to justify my deliverables. The boundary points I request from them are intended to be placed on the topography report I am creating. How can I guarantee that their boundary points will align with my results?
I have found it necessary to understand coordinate systems and which one is conventionally used in my area. Then you need to understand which coordinate system your base station and drone are configured to use. If necessary, transform the data into the correct coordinate system, and then reference that coordinate system in your deliverables.
The horizontal and vertical components of coordinate systems often vary depending on where you live. There has been a lot of modernization of coordinate systems in recent decades; in my province there are two different vertical datums, used depending on client needs.
And yes, confirmation with a few check points is usually essential unless your required tolerances are fairly loose.
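Once you have check points, the usual way to back up an accuracy claim is a horizontal and vertical RMSE against the surveyed coordinates. The sketch below uses made-up illustration values; feed it your own surveyed vs. model-derived coordinates (same coordinate system and units for both).

```python
# Sketch: horizontal and vertical RMSE against surveyed check points.
# Points are (easting, northing, elevation) tuples in metres; all
# numbers below are made-up illustration values.
import math

def rmse_check(surveyed, measured):
    """Return (horizontal_rmse, vertical_rmse) in the input units."""
    n = len(surveyed)
    h_sq = v_sq = 0.0
    for (e1, n1, z1), (e2, n2, z2) in zip(surveyed, measured):
        h_sq += (e1 - e2) ** 2 + (n1 - n2) ** 2
        v_sq += (z1 - z2) ** 2
    return math.sqrt(h_sq / n), math.sqrt(v_sq / n)

surveyed = [(1000.00, 2000.00, 50.00), (1010.00, 2005.00, 51.00)]
measured = [(1000.02, 1999.99, 50.04), (1009.97, 2005.01, 50.95)]
h, v = rmse_check(surveyed, measured)
print(round(h, 3), round(v, 3))  # prints 0.027 0.045
```

Reporting these two numbers alongside the datum and coordinate system gives surveyors exactly the justification they are asking for.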