Reconstructing models from multiple levels (aerial and ground), initial findings

Ok my friends. I present to you my findings on combining ground-level photos with aerial imagery in a cohesive and fantastic model. Now, there are some limitations to this model: one set of photos was taken on a sunny day at sunset and the other on a partially cloudy day a couple of hours before sunset. In addition to lighting differences, we can expect challenges from differences in GPS between days (we cannot rely on the lower relative GPS error we would get within a single short flight). But the challenges are an important part of understanding how to do this even under less-than-ideal conditions.

I have been trying to think through the problem space of combining aerial and ground photos in an effective and scalable way for a while. At first I thought it would be easy (circa February 2021), and now I know it is not as easy as it could be.

But let’s start with some results and the successful endpoint (point cloud):

And mesh:

So, let’s talk about the failure modes when this doesn’t work. This was the first result I got when I combined ground images and low-flight images with higher orbits:

The ortho isn’t quite… I mean, I suppose it is orthorectified, but the position from which it’s been orthorectified is way off…

And this has everything to do with the fact that OpenSfM is now doing a more meaningful job of integrating all possible data into the model (yay!) but not doing a good job of initializing the model (awwww!).

To test this theory, I tried setting the GPS accuracy for the imagery to encourage the SfM pipeline to appropriately prioritize the good data when setting up the model. This didn’t work well:

See how we can see the side of the buildings? It is better, but not good enough. What we need to do instead is initialize the models using the best available data so that the orientation is better fixed from the start.
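Something along these lines, e.g. with the piexif library, is one way to write a per-image accuracy hint (a sketch of the idea only; the folder name and DOP value are made up for illustration):

```python
# Sketch: mark the ground shots as low-accuracy by writing a large
# GPSDOP value into their EXIF (piexif library; paths/values assumed).
import glob

import piexif

for path in glob.glob("ground_images/*.JPG"):
    exif_dict = piexif.load(path)
    # GPSDOP is stored as a rational; (20, 1) ~ roughly 20 m of uncertainty.
    exif_dict["GPS"][piexif.GPSIFD.GPSDOP] = (20, 1)
    piexif.insert(piexif.dump(exif_dict), path)
```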

Barring changes to OpenSfM (which I will be proposing; I suspect it’s at worst a 10-line fix), I simply stripped all the GPS tags out of the less accurate data and reprocessed. This ensures that no orientation data is taken from the images with poor GPS. It means we have to use BOW matching, which usually isn’t as fast as the current incremental approach.
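Something like this would do the stripping (a sketch using the same piexif approach as above; the folder name is again assumed):

```python
# Sketch: remove every GPS tag from the less-accurate images so no
# position prior can be taken from them.
import glob

import piexif

for path in glob.glob("ground_images/*.JPG"):
    exif_dict = piexif.load(path)
    exif_dict["GPS"] = {}  # an empty GPS IFD means no GPS tags get written back
    piexif.insert(piexif.dump(exif_dict), path)
```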


Reading through the OpenSfM docs here:
https://opensfm.org/docs/reconstruction_module.html?highlight=initialization#finding-good-initial-pairs

I suspect we need to add a weighting factor for GPSDOP/GPSXYAccuracy/GPSZAccuracy right around here (assuming that is feasible):
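Conceptually, something like this (a purely hypothetical sketch, not OpenSfM’s actual code; the function names and the scoring hook are assumptions):

```python
# Hypothetical: scale the initial-pair score by each image's reported
# GPS accuracy so that well-localized pairs seed the reconstruction.
def gps_weight(exif, default_dop=15.0):
    """Map a reported GPS accuracy (meters) to a multiplicative weight."""
    dop = exif.get("gps", {}).get("dop", default_dop)
    # 1.0 at 1 m accuracy, shrinking as the fix gets worse.
    return 1.0 / max(dop, 1.0)

def weighted_pair_score(exif1, exif2, base_score):
    """Combine the existing geometric pair score with both GPS weights."""
    return base_score * gps_weight(exif1) * gps_weight(exif2)
```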

Opened an issue on GitHub:


If the goal is to get a better orthophoto, have you tried using --use-3dmesh?


The goal is the 3D model itself correctly oriented. It’s just easier to show the model initialization issues and how they propagate through by using the orthophoto.


Ah, I see; how about patching/forcing align_method: orientation_prior (which by default is auto) in osfm.py (ODM/osfm.py at master · OpenDroneMap/ODM · GitHub)?
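For example, without patching osfm.py at all, appending the key to the project’s generated OpenSfM config should work (a sketch; the project path is assumed, and appending relies on PyYAML keeping the last occurrence of a duplicate key):

```python
# Sketch: override the alignment method in an existing ODM project's
# OpenSfM config (project layout assumed).
config_path = "project/opensfm/config.yaml"

with open(config_path, "a") as f:
    f.write("align_method: orientation_prior\n")  # instead of the default "auto"
```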


I’ll give it a wing.


Forcing orientation prior resulted in the same / similar outcome:


I’ll try naïve next, just for giggles.


Point cloud and mesh look great. Are you using the same camera?

Is the model tilted or deformed? Is it possible to control model orientation and position by using GCPs?


Yes, point cloud and mesh are frankly fantastic. No distortion, just rotation.

One camera for all, including the lower shots, for which I carried the drone around the yard with the rotors removed while a child operated the trigger.

GCPs are an option, but seem a bit silly when most or all of the GPS info is good enough and the orientation priors are just estimated wrong.


Naïve is well named:

The models themselves are quite fantastic. One additional anecdote: matching results in one model if I force BOW, but generates three models (which get blissfully merged) otherwise.


Aside from orientation, the model is soooo good:

(BTW, the cars moved several times, so they are a mess)


Any more thoughts / alternatives for this?


I’d try removing the GPS coordinates from the ground shots. It’s strange that OpenSfM can’t orient the model properly, though I haven’t seen the dataset.


I can share the dataset later today.


Hello folks!

I was also going to ask if ODM could work for on-the-ground photogrammetry. I mean walking around with my phone taking pictures.

What would be the best processing configuration?

Thanks


It sure can. There are a number of posts about doing so. Here is one of mine: My simple WebODM project - print a rock!. I think I processed with mostly default settings, probably changing the initial option to “don’t resize photos”.

Also, look at more detail on settings in Kerrowman’s thread here: Creating 3D model from phone photos - best settings.


I did another ground one a few days ago using my iPhone 12. I also find it helps to use a remote for the shutter in one hand with the phone in the other.

Here is the end result: Kerrow Lintel Stone - Download Free 3D model by Kerrowman [2bde0e3] - Sketchfab

It took 261 images, and I set my RAM usage to 15 GB out of the 16 available, with these settings:

Pixel: 2,000; Feature Quality: Ultra; PC Quality: High; Mesh Size: 300,000; Min Features: 50,000; all others on default settings.

The pixel setting is not the image resizing option; it is set underneath the edit button.


Also: PC Geometric: True, Use 3D Mesh: True, and Auto Boundary: True.
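For anyone scripting this, the same options map onto ODM option names; here is a sketch using PyODM against a NodeODM instance (the host/port, the image folder, and the mapping of “Pixel: 2,000” to resize-to are all assumptions):

```python
# Sketch: submit the settings above to a NodeODM instance via PyODM
# (assumed to be running on localhost:3000; image folder assumed).
from glob import glob

from pyodm import Node

node = Node("localhost", 3000)
task = node.create_task(
    glob("lintel_photos/*.jpg"),  # the 261 photos
    {
        "resize-to": 2000,          # assumed to correspond to "Pixel: 2,000"
        "feature-quality": "ultra",
        "pc-quality": "high",
        "mesh-size": 300000,
        "min-num-features": 50000,
        "pc-geometric": True,
        "use-3dmesh": True,
        "auto-boundary": True,
    },
)
task.wait_for_completion()
```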


Julian, are you finding that some models are tilted when viewed from some angles, such as your lintel, which is tilted when viewed end-on? It’s as if WebODM isn’t finding the ground plane correctly, which I guess is a bit difficult when there is minimal flat area around the object of interest.
I’ve had that with a number of objects, both with drone and phone photos, but I can view them untilted in Meshlab.
