I am trying to derive changes in sedimentation and erosion between two timesteps using point clouds. For the image acquisition I used a Mavic Pro (flight height 80 m). Reading some posts in this forum, I figured I need to calibrate my images first, since I did not fly the additional perpendicular flight paths for self-calibration.
Here, I used lensfun with the python wrapper lensfunpy.
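In case it helps to see the step, the undistortion looked roughly like this (a minimal sketch following the lensfunpy README; the DJI/FC220 strings, file names, and shooting parameters are placeholders that would need to match the actual EXIF):

```python
import cv2
import lensfunpy

# Look up the camera and lens in the lensfun database
# (maker/model strings are assumptions and must match what lensfun knows)
db = lensfunpy.Database()
cam = db.find_cameras('DJI', 'FC220')[0]
lens = db.find_lenses(cam)[0]

# Approximate shooting parameters (assumed values)
focal_length = 4.7   # mm
aperture = 2.2
distance = 80.0      # subject distance in metres (flight height)

img = cv2.imread('DJI_0001.JPG')
height, width = img.shape[:2]

mod = lensfunpy.Modifier(lens, cam.crop_factor, width, height)
mod.initialize(focal_length, aperture, distance)

# Remap the image to remove the geometric distortion predicted by lensfun
undist_coords = mod.apply_geometry_distortion()
img_undistorted = cv2.remap(img, undist_coords, None, cv2.INTER_LANCZOS4)
cv2.imwrite('DJI_0001_undistorted.JPG', img_undistorted)
```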
However, I also did a parallel run without calibrating the pictures. While the latter (image 1) now shows promising results concerning point cloud density (~67 000 000 points), it also shows the anticipated doming effect. The run with the undistorted images, however, outputs a point cloud that is much smaller (for comparison: it is that tiny blue box in image 3) and sparser (~280 000 points). Also, the elevation values and the orientation (see image 2) don't match at all. What am I missing here?
For the image upload I used WebODM Lightning (version 1.1.0) on a 64-bit Windows 10 Pro machine. However, I ran the analyses on the Lightning node (Pro plan), since my machine doesn't have enough capacity for the amount of pictures (~500). I used the settings for high-resolution imagery (--depthmap-resolution: 1000, --dem-resolution: 2.0, --orthophoto-resolution: 2.0) and fine-tuned the SMRF parameters for the flat, grassy area with little occurrence of bushes (as read in the ODM documentation): --smrf-threshold: 0.2, --smrf-window: 10.5, --texturing-nadir-weight: 6
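For reference, the same options can also be sent to a processing node programmatically (a sketch using pyodm; the host/port, file list, and the extra `dsm` flag are placeholders/assumptions, and the option names mirror the CLI flags without the leading `--`):

```python
from glob import glob
from pyodm import Node

# Connect to a NodeODM processing node (host/port are placeholders)
node = Node('localhost', 3000)

images = glob('nadir_flight/*.JPG')

task = node.create_task(images, {
    'depthmap-resolution': 1000,
    'dem-resolution': 2.0,
    'orthophoto-resolution': 2.0,
    'smrf-threshold': 0.2,
    'smrf-window': 10.5,
    'texturing-nadir-weight': 6,
    'dsm': True,  # assumed: a DSM is wanted for the change detection
})
task.wait_for_completion()
task.download_assets('./results')
```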
You can find the task output here:
At this point I am at my wits' end. Any help is highly appreciated!
Cheers
My guess would be that the undistortion done with lensfun didn't correctly undistort the images.
You could try to fly another dataset with the same camera, this time making sure to capture various angles, different altitudes, etc., then export the camera calibration model (cameras.json) and set the --cameras option using the exported camera calibration model.
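For example, reusing the exported calibration could look roughly like this (a sketch, not a definitive workflow; the pyodm submission, host/port, and file paths are placeholders, and it relies on ODM's --cameras option accepting either a path to cameras.json or its JSON contents):

```python
import json
from glob import glob
from pyodm import Node

# cameras.json exported from the calibration dataset
with open('cameras.json') as f:
    cameras = json.load(f)

node = Node('localhost', 3000)  # placeholder host/port

# Pass the calibration as a JSON string via the "cameras" option,
# so the new run reuses these parameters instead of self-calibrating.
task = node.create_task(glob('survey/*.JPG'), {
    'cameras': json.dumps(cameras),
    'dsm': True,
})
```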
Yes. Lensfun has a general calibration, but it's not good enough (not specific enough to your camera) for photogrammetry. You might be better off either doing as Piero suggested, or using no calibration fixes at all; the automated fixes will probably be enough. If not, add an orbit to all of your flights: for smaller flights, you don't have to go too crazy with calibration flights. Just adding an orbit is adequate.
Thank you both for the quick reply. Unfortunately I don't have the means to do the flights again, since the area is thousands of kilometres away and not easily accessible. I will keep it in mind for next time, though.
Do you think I could do a "manual" calibration using a chessboard pattern and use the derived values within lensfunpy? Maybe some of you had to do a similar workaround due to a lack of data?
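For context, what I have in mind is the standard OpenCV chessboard routine (a rough sketch; the 9×6 inner-corner count and the folder name are assumptions), which would give me intrinsics and distortion coefficients to feed into a custom lens profile:

```python
import glob
import cv2
import numpy as np

# Inner-corner count of the printed chessboard (assumed 9x6)
pattern = (9, 6)

# 3D reference points for one view of the board (z = 0 plane)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob('chessboard_shots/*.JPG'):  # placeholder folder
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Estimate the camera matrix and distortion coefficients (k1, k2, p1, p2, k3)
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print(K, dist)
```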
Checkerboard patterns are notoriously fraught for a variety of reasons. Instead, I would recommend flying the recommended pattern over a small area and doing as Piero recommended: export the cameras.json and use that when reprocessing your dataset.
Excuse me for chiming in with my limited experience. I would think that you should look for an area with very similar (if not identical) characteristics, such as slope of the land, vegetation, construction or other characteristic elements, etc. Matching the configuration of the flight you made and the flight time or natural lighting is also important, to try to recreate the textures on the elements. If I'm wrong, please don't hesitate to correct me 🙂
Ah no: to put together a comparable camera calibration, you would want to match the temperature as much as possible and match the flight height to ensure the same focus settings (part of why a checkerboard pattern doesn't work well), and then hope there hasn't been too much drift in the camera optics since the flights. Fortunately, this isn't a belly-landing fixed wing that beats up the camera with each landing, so there's a good chance you can replicate the conditions pretty well.
Thank you for all your suggestions. I indeed found some suitable images for a self-calibration like you suggested:
It was a POI flight at a different height and angle, circling a small part of the study area. I ran the nadir images together with the POI ones and downloaded the cameras.json file to apply to the whole area (again using the default high-resolution settings on Lightning). The result looks very good!
Since that worked out, I did the same (choosing the same area extent for generating the cameras.json file) for the imagery of the following year (since I captured nadir and that POI flight again), to finally do my change detection analysis. However, the results are way off:
Also, the TangentialDistortParam1 (p1) in the cameras.json file is completely off. Since I guess this might happen due to environment and vibration changes etc., the resulting model doesn't represent the landscape.
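To illustrate, this is roughly how I compared the two exported calibrations (a small sketch; the file names are placeholders, and I'm assuming each cameras.json holds a single camera entry with keys such as 'p1' and 'p2'):

```python
import json

def load_params(path):
    """Return the parameter dict of the (single) camera in a cameras.json file."""
    with open(path) as f:
        cams = json.load(f)
    # Assumption: one camera model per file
    return next(iter(cams.values()))

old = load_params('cameras_year1.json')  # placeholder file names
new = load_params('cameras_year2.json')

# Print the distortion parameters side by side to spot outliers such as p1
for key in ('k1', 'k2', 'k3', 'p1', 'p2'):
    print(f"{key}: {old.get(key)} vs {new.get(key)}")
```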
Using the "old" cameras file on the current timestep didn't work either.