Camera Angle and Accuracy

Yesterday I flew two missions using the same flight path on a quad with a GoPro Hero 9. During the first flight the camera tilt shifted rearward just a bit, and by the end of the flight the camera was facing more towards the rear of the quad rather than straight down. For the second flight I made adjustments for this, and it ended up pointing straight down. I processed both sets of images and got two different results. Does the camera angle make that much of a difference between the two results?

EDIT: Both flights were flown with 85% overlap and sidelap.

Flight 1:

Flight 2:

I know I need to sort out my gimbal; it's still a work in progress.



Well, for mapping, and from my experience, the commercial industry always requested 90-degree nadir only; about two years ago, 65 degrees started to become a request. It seems 90 degrees is the shortest distance to a ground control point, which may explain the history, but the 65-degree angle definitely provides nicer 3D viewing of the models.
I feel like this isn't the answer you were looking for, but it might be the answer others are looking for when they search for the same topic. Plus, it's a bump to your query.

You're making orthos with a quad and GoPro for fun? Or is this hardware setup something you are trying to use for business?


Mostly for fun, not for business. I am working on my degree for archaeology, so this might be a useful skill set to have.

Oblique images produce a variable scale across the image, which is never recommended. In analog photogrammetry it was even less workable, given the limits of the restitution machine.

In digital photogrammetry, the starting point for the numerical solution algorithm simplifies sines and cosines to linearize the rotation matrix; it therefore assumes that the angle of inclination between the camera system and the solution system is very small. The first few computational steps also assume a fixed scale across the image. If the additional slack constraints allow it, the algorithm converges to a better solution.
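A minimal NumPy sketch of that small-angle idea (an illustration, not ODM's actual solver): for small tilts, the exact rotation matrix is well approximated by replacing sin(a) with a and cos(a) with 1, and the approximation error grows quickly as the tilt increases.

```python
import numpy as np

def rotation_x(angle_rad):
    """Exact rotation matrix about the x axis (camera tilt)."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[1, 0,  0],
                     [0, c, -s],
                     [0, s,  c]])

def linearized_rotation_x(angle_rad):
    """Small-angle linearization: sin(a) ~ a, cos(a) ~ 1."""
    return np.array([[1, 0,          0],
                     [0, 1, -angle_rad],
                     [0, angle_rad,  1]])

for deg in (2, 10, 25):
    a = np.radians(deg)
    err = np.abs(rotation_x(a) - linearized_rotation_x(a)).max()
    print(f"tilt {deg:>2} deg -> max linearization error {err:.4f}")
```

At a couple of degrees of tilt the error is negligible; at the kind of rearward tilt described in the first flight it is no longer small, which is why the starting assumption matters.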

If there are no control points that constrain the orientation of the model, it will orient itself based on the positions of the images. But the restriction on the orientation of the images is more relaxed because the positioning device is considered to have significant random uncertainty.

Therefore, the model produced by digital photogrammetry with oblique cameras, without initial rotation parameters and without control points, can be oriented and scaled in an unpredictable way.

Finally, OpenDroneMap projects the model onto a horizontal plane to produce the orthophoto. My intuition about your processing is that the model was inaccurately oriented and scaled, and that carried through into the projection.


Nadir images also have a variable scale. The edges of the camera's field of view have a larger GSD than the area directly below the camera, and the wider the FOV, the greater the scale variation across the image. With my M2P, the GSD in the corners is ~1.5X what it is directly below the camera.
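That ~1.5X figure is consistent with simple flat-ground geometry. A small sketch (the ~77-degree diagonal FOV for an M2P-class camera is my assumption, not a figure from this thread):

```python
import numpy as np

def gsd_ratio(off_axis_deg):
    """Radial GSD at an off-axis ray, relative to the nadir GSD.

    Flat ground, camera pointing straight down: the slant range grows
    by 1/cos(theta), and the ray also meets the ground obliquely,
    adding another 1/cos(theta), so the radial GSD scales as
    1/cos^2(theta).
    """
    theta = np.radians(off_axis_deg)
    return 1.0 / np.cos(theta) ** 2

# Assumed ~77 degree diagonal FOV; the corner ray is at half that angle:
half_fov = 77 / 2
print(f"corner GSD ~ {gsd_ratio(half_fov):.2f}x the nadir GSD")
```

That comes out around 1.6x for the corner ray, in the same ballpark as the ~1.5X observed.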


That is interesting. So that P4P with the wider FOV would yield more variation than an aircraft with a smaller sensor… makes me think :slight_smile:

Yes, a telephoto lens looking straight down from an aircraft would have the minimum GSD variation across the image.
When using a wide-angle lens, the GSD varies significantly and can approach infinity if there is any sky in view! :astonished:
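The same flat-ground model (a sketch, not a rigorous sensor model) shows that blow-up. Taking theta as the total angle of a ray from vertical (camera tilt plus the in-frame off-axis angle), the 1/cos^2(theta) factor diverges as the ray nears the horizon:

```python
import numpy as np

# Flat-ground model: radial GSD scales as 1/cos^2(theta), where theta
# is the total angle of the ray from vertical. As theta approaches
# 90 degrees the ray grazes the horizon and the GSD grows without bound.
for theta_deg in (0, 45, 70, 85, 89):
    gsd = 1.0 / np.cos(np.radians(theta_deg)) ** 2
    print(f"{theta_deg:>2} deg off vertical -> GSD {gsd:8.1f}x nadir")
```

So a rearward camera drift like the one in flight 1 pushes part of every frame toward much coarser, more distorted ground sampling.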

It’s true, but the less the scale varies the better.

As a starting point, Pix4D has some good documentation on the settings they recommend for capturing data across a wide range of applications.


Thank you all for the replies. Lots to think about.


The short answer is: for well-collected data with a suitably fast shutter, gimbal angle doesn't matter as much as it used to, and nominal deviations from nadir are beneficial. Our orthophotos are true orthophotos, derived from a detailed surface model and projected from the best available pixels (the most vertical of the available pixels), in contrast with most historical aerial imaging processes that predate drone imagery.
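To illustrate the "most vertical of the available pixels" idea, here is a toy sketch with made-up camera positions (the tuple layout and names are hypothetical, not ODM's internal representation): for each ground cell, among the cameras that see it, pick the one whose view ray is closest to vertical.

```python
import math

def pick_best_image(cell_xy, cameras):
    """Toy 'true ortho' pixel selection: for one ground cell, choose
    the camera whose view ray to that cell is closest to vertical.

    cameras: list of (name, x, y, altitude) tuples -- hypothetical
    data for illustration only.
    """
    cx, cy = cell_xy

    def off_nadir_deg(cam):
        _, x, y, alt = cam
        horiz = math.hypot(x - cx, y - cy)
        return math.degrees(math.atan2(horiz, alt))

    return min(cameras, key=off_nadir_deg)

cams = [("IMG_0001", 0, 0, 60),
        ("IMG_0002", 30, 0, 60),
        ("IMG_0003", 60, 0, 60)]
print(pick_best_image((25, 0), cams)[0])  # nearest-to-vertical camera wins
```

With high overlap (like the 85% used here), most ground cells have a near-vertical candidate available, which is why moderate gimbal deviations hurt less than they used to.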

In cases where there are strong rolling-shutter effects, automated onboard optical correction (which introduces impossible-to-remove artifacts, particularly in non-nadir shots), or other challenges, off-nadir images may exacerbate those issues. But, as others have indicated above, off-nadir images help reduce bowling from lens-calibration issues in datasets that are otherwise robust.

So, questions about these datasets:

  • Did you apply any rolling shutter correction when processing in OpenDroneMap? I’d also be interested to see the residuals on the two camera estimates.

  • Were these converted raw images with no distortion correction or distortion corrected images? For GoPro, using the least processed images is critical.


Both datasets were resized to 2048px and processed with the default settings in WebODM. I did not apply any corrections to the images themselves.

