Point cloud density decrease after calibration with lensfun

Dear all,

I am trying to derive changes in sedimentation and erosion between two timesteps using point clouds. For the image acquisition I used a Mavic Pro (flight height 80 m). Reading some posts in this forum, I figured I needed to calibrate my images first, since I did not fly the additional perpendicular flight paths for self-calibration.
Here, I used lensfun with the Python wrapper lensfunpy.
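The undistortion step looked roughly like this (a minimal sketch; the lensfun maker/model lookup strings and the focal length, aperture, and focus distance below are assumed values for the Mavic Pro, not confirmed ones):

```python
# A minimal sketch of the undistortion step; the lookup strings and lens
# parameters below are assumptions, not verified values.
import cv2
import lensfunpy

db = lensfunpy.Database()
cam = db.find_cameras('DJI', 'FC220')[0]  # Mavic Pro camera, if present in the db
lens = db.find_lenses(cam)[0]

img = cv2.imread('DJI_0001.JPG')
height, width = img.shape[:2]

focal_length = 4.73  # mm (assumed)
aperture = 2.2       # fixed f/2.2 (assumed)
distance = 80.0      # focus distance in m, roughly the flight height

mod = lensfunpy.Modifier(lens, cam.crop_factor, width, height)
mod.initialize(focal_length, aperture, distance)

# remap each image onto the undistorted coordinate grid
undist_coords = mod.apply_geometry_distortion()
img_undistorted = cv2.remap(img, undist_coords, None, cv2.INTER_LANCZOS4)
cv2.imwrite('DJI_0001_undistorted.JPG', img_undistorted)
```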

However, I also did a parallel run without calibrating the pictures. While the latter (image 1) shows promising results in terms of point cloud density (~67,000,000 points), it also shows the anticipated doming effect. The run with the undistorted images, however, outputs a point cloud that is much smaller (for comparison: it is the tiny blue box in image 3) and far sparser (~280,000 points). The elevation values and the orientation (see image 2) don’t match at all either. What am I missing here?


For the image upload I used WebODM Lightning (version 1.1.0) on a 64-bit Windows 10 Pro machine. However, I ran the analyses on the Lightning node (Pro plan), since my machine doesn’t have enough capacity for the number of pictures (~500). I used the settings for high-resolution imagery (--depthmap-resolution: 1000, --dem-resolution: 2.0, --orthophoto-resolution: 2.0) and fine-tuned the SMRF parameters for the flat, grassy area with little occurrence of bushes (as read in the ODM documentation): --smrf-threshold: 0.2, --smrf-window: 10.5, --texturing-nadir-weight: 6

You can find the task output here:

At this point I am at my wits’ end. Any help is highly appreciated!
Cheers


My guess would be that lensfun didn’t undistort the images correctly.

You could try to fly another dataset with the same camera, this time making sure to capture various angles, different altitudes, etc., then export the camera calibration model (cameras.json) and set the --cameras option to the exported camera calibration model.
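For example, a re-run reusing the exported calibration could look roughly like this (a minimal sketch via the ODM docker image; the paths and project name are placeholders, and on Lightning/WebODM you would set the cameras option in the task settings instead):

```python
# A minimal sketch: re-run ODM on the main dataset, reusing the camera
# calibration exported from the calibration flight (all paths are placeholders).
import subprocess

subprocess.run([
    "docker", "run", "--rm",
    "-v", "/my/datasets:/datasets",  # mount the folder holding the project
    "opendronemap/odm",
    "--project-path", "/datasets", "main_area",
    "--cameras", "/datasets/calibration_flight/cameras.json",
], check=True)
```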


Yes. Lensfun has a general calibration, but it’s not good enough (not specific enough to your individual camera) for photogrammetry. You might be better off either doing as Piero suggested, or using no calibration fixes at all; the automated fixes will probably be enough. If not, add an orbit to all of your flights: for smaller flights you don’t have to go too crazy with calibration flights. Just adding an orbit is adequate.


Thank you both for the quick reply. Unfortunately I don’t have the means to do flights again since the area is thousands of kilometres away and not easily accessible. I will keep it in mind for the next time though.

Do you think I could do a “manual” calibration using a chessboard pattern and use the derived values within lensfunpy (something along the lines of the sketch below)? Maybe some of you have had to do a similar workaround due to a lack of data?
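I mean something roughly like this (a rough sketch using OpenCV’s standard checkerboard calibration; the board size and image paths are assumptions):

```python
# A rough sketch of classic checkerboard calibration with OpenCV
# (board size and image paths are assumptions).
import glob
import cv2
import numpy as np

board = (9, 6)  # inner corners per row/column (assumed)
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2)

objpoints, imgpoints = [], []
for path in glob.glob("calib/*.JPG"):
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, board)
    if found:
        objpoints.append(objp)
        imgpoints.append(corners)

# intrinsics K and distortion coefficients (k1, k2, p1, p2, k3)
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    objpoints, imgpoints, gray.shape[::-1], None, None)
print("RMS reprojection error:", ret)
print("K:", K)
print("dist:", dist)
```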


Checkerboard patterns are notoriously fraught for a variety of reasons. Instead, I would recommend flying the recommended pattern over a small area and doing as Piero recommended: export the cameras.json and use that when reprocessing your dataset.


Great. If that works with a different area, I will definitely do that. Thank you for your help!


Hello @fraukaiser and all.

Excuse me for meddling, given my limited experience. I would think you should look for an area with very similar (if not identical) characteristics, such as the slope of the land, vegetation, buildings, or other characteristic elements. Matching the configuration of the flight you made and the time of day or natural lighting is also important, to try to recreate the textures on those elements. If I’m wrong, please correct me 😅.

Regards.


Ah no: for putting together similar camera calibration, you would want to match temperature as much as possible and match the flight height to ensure the same focus settings (part of why a checkerboard pattern doesn’t work well), and then hope there hasn’t been too much drift in the camera optics since the flights. Fortunately, this isn’t a belly-landing fixed wing that beats up on the camera with each landing, so there’s a good chance you can replicate the conditions pretty well.


Thank you for the correction and clarification, @smathermather.


Thank you for all your suggestions. I indeed found some suitable images for a self-calibration like you suggested:
image
It was a POI flight at a different height and angle, circling a small part of the study area. I ran the nadir images together with the POI ones and downloaded the cameras.json file to apply it to the whole area (again using the default high-resolution settings on Lightning). The result looks very good!

The cameras.json file looks like this:
image
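For reference, this is roughly how I read the distortion parameters out of the exported file (a minimal sketch; the file name is a placeholder, and the key names assume the brown camera model that ODM/OpenSfM exports):

```python
# A minimal sketch for inspecting the exported calibration
# (file name is a placeholder; keys assume ODM/OpenSfM's "brown" model).
import json

with open("cameras.json") as f:
    cameras = json.load(f)

for cam_id, params in cameras.items():
    print(cam_id)
    for p in ("focal_x", "focal_y", "c_x", "c_y", "k1", "k2", "p1", "p2", "k3"):
        print(f"  {p}: {params.get(p)}")
```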

Since that all worked out, I did the same (choosing the same area extent for generating the cameras.json file) for the imagery of the following year (where I had captured nadir and the POI flight again), to finally do my change detection analysis. However, the results are way off:

image

Also, the TangentialDistortParam1 (p1) in the cameras.json file is completely off. Since I guess this can happen due to changes in environment, vibration, etc., the result doesn’t represent the landscape.
Using the “old” cameras file on the current time step didn’t work either.

Maybe someone has another suggestion?

