Does OpenDroneMap take into account pitch, roll, and yaw from EXIF when processing orthomosaics? I found nothing in the tutorial. Forum messages from 2015 indicated that ODM did not support pitch, roll, and yaw at that time.
It’s still not supported. It’s a rare and extremely expensive IMU that gives values accurate enough for us to use. You would need something on the order of the IMUs used for lidar, and even then it wouldn’t gain much, if any, accuracy over photogrammetric matching and structure from motion.
Yep, no support as of yet. Although the IMU values are inaccurate, other software uses them to set a priori values for the bundle adjustment problem (as a “best-guess” initial estimate), which helps the solver converge more quickly.
Yes, I suppose that much as even poorly known GPS positions are useful for initializing the model, any additional pose-graph information could help.
Is this still the case?
That’s fantastic!
Don’t pitch, yaw, and roll (omega, phi, kappa) get calculated in the early steps once the keypoints are identified? I thought that was the whole point of bundle adjustment: to accurately nail down camera orientation. Some aerial cameras have very accurate inertial GPS units that identify these variables precisely.
Does the raw EXIF data from drone cameras include rough estimates of pitch, yaw, and roll that get tweaked during the process?
All photogrammetry is a balancing act of intrinsics and extrinsics: how to come up with a model that best explains the mix of information we have with respect to positions, orientation, lens calibration, etc… How we handle the inputs depends on our confidence in them. For example, if we have GPS positional estimates, we can use those as a starting point for constructing our model, and refine those positions. If we have high confidence in those positions, we can constrain our models to limit the amount of position refinement to our expected GPS error, and then change our other model parameters such as orientation and lens calibration a larger amount.
I should add to this: the description of GPS applies equally to other parts of the pose, including camera orientation. If we have some initial estimate of that, it is useful. I don’t think OpenSfM yet has a way to specify the quality of these inputs, so the weighting factor is probably just fixed, but if you have the data, it is potentially beneficial to the quality of the final model.
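The weighting idea in the two posts above can be sketched in a few lines. This is a hypothetical illustration of how a GPS prior might enter a bundle adjustment cost, not OpenSfM’s actual API: the residual against the measured position is scaled by the expected GPS error, so a tight (low-sigma) prior makes deviations expensive and constrains refinement, while a loose prior lets the solver move the camera freely.

```python
# Illustrative sketch only: names and structure are assumptions,
# not OpenSfM's real code. It shows why confidence (sigma) acts as
# a weighting factor on a positional prior in a least-squares problem.

def gps_prior_residuals(estimated_xyz, measured_xyz, sigma_m):
    """Residuals of an estimated camera position against its GPS prior,
    scaled by the expected GPS error in metres."""
    return [(e - m) / sigma_m for e, m in zip(estimated_xyz, measured_xyz)]

def cost(residuals):
    """Sum of squared residuals, the quantity a least-squares solver minimizes."""
    return sum(r * r for r in residuals)

# The same 1 m deviation from the measured position is cheap under a
# loose 5 m prior but very expensive under a tight 0.1 m prior.
loose = cost(gps_prior_residuals([1.0, 0.0, 0.0], [0.0, 0.0, 0.0], 5.0))
tight = cost(gps_prior_residuals([1.0, 0.0, 0.0], [0.0, 0.0, 0.0], 0.1))
```

With a fixed sigma (as the post suggests is currently the case), every prior gets the same pull regardless of the actual sensor quality; exposing sigma per input is what would let high-confidence orientation or position data constrain the model more strongly.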
I can no longer access the hyperlink shared above. Quick question, though: which of the two orientation conventions does WebODM use, omega/phi/kappa or roll/pitch/yaw? I know they are slightly different concepts.
Hehe, documentation restructuring.
Image Geolocation Files — OpenDroneMap 2.6.0 documentation
Looks like we use omega/phi/kappa
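To see why the distinction matters, here is a small sketch contrasting the two conventions. Both are Euler-angle parameterizations of a rotation, but they compose elementary rotations about different axes in a different order, so the same three numbers generally describe different orientations. The rotation orders below follow common photogrammetric and aeronautical usage; exact frame definitions vary between tools, so treat this as illustrative.

```python
import math

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def opk_matrix(omega, phi, kappa):
    # Common photogrammetric convention: R = Rx(omega) @ Ry(phi) @ Rz(kappa)
    return matmul(rot_x(omega), matmul(rot_y(phi), rot_z(kappa)))

def ypr_matrix(yaw, pitch, roll):
    # Common aeronautical convention: R = Rz(yaw) @ Ry(pitch) @ Rx(roll)
    return matmul(rot_z(yaw), matmul(rot_y(pitch), rot_x(roll)))

# Feeding the same three angles into each convention yields different
# rotation matrices, which is why a geolocation file format has to state
# which convention it expects.
A = opk_matrix(0.1, 0.2, 0.3)
B = ypr_matrix(0.1, 0.2, 0.3)
```

So if a flight log reports yaw/pitch/roll, the angles need to be converted before being supplied as omega/phi/kappa; passing them through unchanged silently produces wrong camera orientations.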