General Photogrammetry Question - Transition photos

Hello all.

I’m having a debate with someone. Say I’m orbiting a 50 ft tower, working from the top down. Each orbit takes 2 minutes, with photos captured at 2-second intervals. Once I complete one orbit, I drop down 10 ft and do another, until I reach the bottom.

  1. I usually pause the camera when I finish one orbit, drop down 10 ft, then start it again for the next orbit. The person I’m debating says that if I don’t keep capturing photos during the transition down to the next orbit, processing won’t work.

  2. Let’s say that as I’m coming down the tower I forget to do the orbit at 30 ft, and I only realize it once I reach the bottom. After I finish the ground-level orbit, I go back up to 30 ft and do it. The person I’m debating says processing will fail because the photos are out of order, and the big jump between the last ground-level photo and the first 30 ft photo will mess it up.

I say that in both of these generic examples, the processing will be fine. While having the extra photos showing the transition between orbit levels won’t hurt, not having them won’t cause a failure either. Same with the out-of-order photos: the capture order doesn’t really matter, as long as you have enough photos of the object with enough overlap between all the shots. It’s not like photogrammetry software looks at the photos linearly and throws its hands up if they aren’t in order, correct? So who is right?




Interesting queries.

In my opinion:

  1. They may be correct depending upon the gimbal angle and how far you are from the object being imaged. Maintaining proper overlap/sidelap is critical to a consistent reconstruction and it sounds like you’re doing fully manual capture, so this is even more crucial.

  2. They may be correct depending upon the matching strategy. For instance, our default matching strategy is an 8-image window around the location of the image being matched. If the distance between “levels” is large enough, it may not match those images to another level. Again, we return to requiring sufficient sidelap/overlap.
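To make the window idea concrete, here is a minimal sketch (not the actual WebODM/ODM implementation; function name, window size semantics, and the sample coordinates are mine) of location-based candidate selection: each image is paired with its nearest neighbours by camera position, regardless of file name or capture order.

```python
from math import dist

def candidate_pairs(images, window=8):
    """For each image, pick its `window` nearest neighbours by camera
    position, ignoring file names and capture order entirely."""
    pairs = set()
    for name, pos in images.items():
        others = sorted(
            (n for n in images if n != name),
            key=lambda n: dist(pos, images[n]),
        )
        for n in others[:window]:
            pairs.add(frozenset((name, n)))
    return pairs

# Hypothetical camera positions (x, y, z) in metres for two orbit levels.
# The lower orbit was flown *last*, so its file numbers don't follow on.
images = {
    "DJI_0001": (10.0, 0.0, 15.0),
    "DJI_0002": (0.0, 10.0, 15.0),
    "DJI_0003": (-10.0, 0.0, 15.0),
    "DJI_0004": (0.0, -10.0, 15.0),
    "DJI_0009": (10.0, 0.0, 12.0),
    "DJI_0010": (0.0, 10.0, 12.0),
}
pairs = candidate_pairs(images, window=3)
# DJI_0001 and DJI_0009 are only 3 m apart, so they pair up even though
# several file names separate them.
print(frozenset(("DJI_0001", "DJI_0009")) in pairs)  # True
```

The point of the sketch: if the vertical gap between levels grew beyond the distance spanned by the window, images on different levels would stop landing in each other's neighbour sets, which is where insufficient sidelap bites.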

I guess to clarify, in my examples…

  1. During each orbit, and in moving down to the next one, there’s probably 70% overlap/sidelap between one photo and the next within an orbit, and between one orbit level and the next above or below it.

  2. In this scenario, I guess their argument is that the software looks at the files in a linear fashion, always expecting that DJI001 will contain an image, that DJI002 will logically contain the next image (with proper overlap of DJI001), that DJI003 will contain the next image overlapping DJI002, and so on. My (admittedly very thin) understanding is that photogrammetry software generally takes the location metadata from the image files to build a “map” of where the camera was in the world when each photo was taken, then uses that virtual map to decide which photos are close to each other and attempts to stitch those together. In essence, I’m arguing that the software generally uses the location metadata to determine which images to match together, not something such as file names or timestamps.
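For what it's worth, that "map" typically comes from the GPS tags in EXIF, which store latitude/longitude as degrees, minutes, and seconds plus a hemisphere reference. A minimal sketch of the conversion to decimal degrees (function name and sample values are mine, not from any particular package):

```python
def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF-style GPS degrees/minutes/seconds plus a hemisphere
    reference ('N'/'S'/'E'/'W') into signed decimal degrees."""
    value = degrees + minutes / 60 + seconds / 3600
    return -value if ref in ("S", "W") else value

# e.g. 40° 26' 46.8" N  ->  about 40.4463
print(round(dms_to_decimal(40, 26, 46.8, "N"), 4))  # 40.4463
```

Once every photo has a decimal coordinate like this, proximity between camera positions is computable directly, and nothing about the file name or timestamp enters the picture.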

Or do some software packages use location data, and others use timestamps?

  1. With this assumed to be true in capturing, then not likely an issue.
  2. Yes, our default matching is based upon location, not datestamp or file name.

Thanks Saijin, you just won me a dinner… :smile:


Sweet! You owe us a WebODM processed pointcloud of the dinner :rofl:

