Self Calibrating Camera

I posted a question a little while back about improving accuracy (here). That post was aimed at the LAS data ODM produced, but I applied the advice given in that post to the orthophotos.

The advice was to fly a two-pattern grid at 20 degrees. So a friend and I devised a project to apply this advice and created a video of what we did, using a Mavic 2 Zoom and Pix4D for mission planning. I compared the single-pattern mission to the double-pattern mission and there appeared to be little difference between the images: an offset of around 30 cm, similar distortion (trees), but more holes in the double-pattern mission image.

Don’t get me wrong, I’d be happy with the single-pattern mission for heads-up digitising for GIS, but I’m wondering if I should have done something different with the double-pattern mission?

My parameters were:
docker run -it --rm \
-v "$(pwd)/images:/code/images" \
-v "$(pwd)/odm_georeferencing:/code/odm_georeferencing" \
-v "$(pwd)/odm_meshing:/code/odm_meshing" \
-v "$(pwd)/odm_dem:/code/odm_dem" \
-v "$(pwd)/odm_orthophoto:/code/odm_orthophoto" \
-v "$(pwd)/odm_texturing:/code/odm_texturing" \
-v "$(pwd)/opensfm:/code/opensfm" \
opendronemap/odm \
--mesh-size 100000 \
--orthophoto-resolution 1.5 \
--dtm --dem-resolution 2 --smrf-threshold 0.4 --smrf-window 24 \
--pc-csv --pc-las

Add --camera-lens brown. This should give you enough parameters (many more than the two distortion parameters of the effective default, the perspective camera type) to improve the overall results.
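For context, here is a rough sketch of the Brown–Conrady distortion model that --camera-lens brown selects. This is an illustration of the parameter count, not ODM's or OpenSfM's internal code; the coefficient names k1, k2, k3 (radial) and p1, p2 (tangential) follow the usual convention:

```python
# Illustrative sketch of the Brown-Conrady lens model (not ODM internals).
# The "brown" camera solves for three radial (k1, k2, k3) and two
# tangential (p1, p2) coefficients, versus only k1, k2 radial terms
# for the default perspective model.

def brown_distort(x, y, k1=0.0, k2=0.0, k3=0.0, p1=0.0, p2=0.0):
    """Map an undistorted normalized image point (x, y) to its
    distorted position under the Brown-Conrady model."""
    r2 = x * x + y * y  # squared radius from the distortion centre
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_d, y_d

# With all coefficients zero the mapping is the identity:
print(brown_distort(0.5, 0.25))  # -> (0.5, 0.25)

# A positive k1 pushes points radially outward (pincushion-style):
print(brown_distort(0.5, 0.25, k1=0.1))
```

The extra tangential terms (p1, p2) model a lens that is slightly tilted relative to the sensor, which the simpler perspective model cannot absorb, so the Brown model generally fits self-calibrated consumer drone cameras more closely.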