Process data from terrestrial close range photogrammetry workflows

Hi All,

Well done to all who have contributed to making OpenDroneMap what it is today.

I’ve exhausted all avenues of information before posting; maybe I’m not using the correct keywords. I have a question regarding the processing of hand-held images of the front of a building.

My Experience that led me to this question
I’ve been trying to process images captured of a building from the street front, and have learnt that, in crunching the data, the resulting point of view seems to assume the images were taken from a drone looking down, as if the Z axis is set to the depth of the image and Y is the top of the image?

My Question
Is there a way to configure the processing of images to accommodate processing of images from typical terrestrial close range photogrammetry workflows?

Sample Data to reproduce this case
https://drive.google.com/drive/folders/1VxxCS0XPtQHJC-oa3wDmeqd2PI0WI00-?usp=sharing

Maybe this is the point of Libre360/ODM360. Is it in a state for people to start testing?

Conclusion
My ambition at the moment began with OpenSfM, and WebODM seemed like a natural progression to fast-track the use of this data-processing pipeline. I would be grateful for any and all assistance or discussion on the topic.


Welcome!

Sorry for the trouble.

In many cases it should work fine without a flipped axis; however, in others it can reconstruct flipped. I don’t see a sensor profile for your iPhone11 upstream in OpenSfM, so that might make it more likely to flip the axis. The same was true for my TeraCube TeraCube_One phone before I set up a profile.

I’ll try to tackle that in the next few days.

In the meantime, you can try modifying this flag, found on Line 228 of osfm.py on disk, and seeing if it helps:

Change:
config.append("align_method: orientation_prior")
To:
config.append("align_method: orientation_prior always")

Location on disk (WebODM for Windows native):

DISKLETTER:\WebODM\resources\app\apps\ODM\opendm\osfm.py
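For reference, the change in context might look like the sketch below (this is only a sketch; osfm.py builds a list of OpenSfM config strings that is later written out as config.yaml, and the exact line number can shift between ODM versions):

```python
# Sketch of the relevant part of opendm/osfm.py (surrounding context may
# differ between ODM versions). Each string appended here becomes one
# "key: value" line in the config.yaml handed to OpenSfM.
config = []

# Before:
# config.append("align_method: orientation_prior")

# After (the change suggested above):
config.append("align_method: orientation_prior always")
```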

Hi,

Thanks for your prompt support on this topic :slight_smile:

I’ll give this a go and report back soon.

In terms of the sensor profile, is that about adding the iPhone11 to data/sensor_data_detailed.json and associated files?
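For what it's worth, I believe upstream OpenSfM's plain data/sensor_data.json simply maps a "Make Model" string to the sensor width in millimetres, so conceptually a new entry would look like the sketch below; the key must match the EXIF make/model exactly, and the width value here is a placeholder, not a verified iPhone 11 figure. I'm less sure of the schema of sensor_data_detailed.json.

```python
# Conceptual sketch only: a sensor profile entry of the kind found in
# OpenSfM's data/sensor_data.json, mapping "Make Model" to sensor width
# in mm. The 5.6 below is a PLACEHOLDER, not the real iPhone 11 value.
import json

new_entry = {"Apple iPhone 11": 5.6}  # placeholder width in mm
print(json.dumps(new_entry, indent=2))
```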


I don’t think this is the case; I suspect something else is the challenge. Within the ODM pipeline, OpenSfM takes orientation priors and gives a best first estimate for the configuration. Most of the time, you’ll see a message like:
2022-02-08 00:30:53,470 INFO: Shots and/or GCPs are well-conditioned. Using naive 3D-3D alignment.
which indicates it isn’t assuming nadir or near-nadir photos.

My hunch is that you are instead running into GPS errors. Correct orientation of street-level photos requires either really good GPS or enough photos to correct for bad GPS.
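To illustrate why the shooting pattern matters: when the camera positions are nearly collinear, which is exactly what walking along a street front produces, alignment from positions alone is degenerate, and that is when OpenSfM falls back to the "aligned on a single-line" vertical prior instead of the naive 3D-3D alignment. A toy sketch of that condition (not OpenSfM's actual code; the threshold is made up):

```python
# Toy illustration (NOT OpenSfM's implementation): if almost all of the
# positional variance lies along one direction, the shots form a near
# line and position-only alignment is degenerate.
import numpy as np

def positions_are_single_line(positions, threshold=0.99):
    """True if >= `threshold` of the positional variance lies along
    a single principal direction."""
    pts = np.asarray(positions, dtype=float)
    centered = pts - pts.mean(axis=0)
    # Singular values measure spread along the principal directions.
    s = np.linalg.svd(centered, compute_uv=False)
    return bool((s[0] ** 2) / (s ** 2).sum() >= threshold)

# Walking a straight street front: near-collinear shot positions.
line = [(x, 0.001 * x, 0.0) for x in range(10)]
# A grid-like drone flight: well-conditioned positions.
grid = [(x, y, 0.0) for x in range(4) for y in range(4)]

print(positions_are_single_line(line))  # True
print(positions_are_single_line(grid))  # False
```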

Can you share some screenshots of what you’re seeing, or better yet, a dataset?


Thanks for following up.


Here is the view of the resulting model from the “top” view.

Here are some sample images.
Sample Data to reproduce this case
https://drive.google.com/drive/folders/1VxxCS0XPtQHJC-oa3wDmeqd2PI0WI00-?usp=sharing

point cloud from the above images

I’ve also tried with another sensor and used a geo.txt file, with the same result.
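In case it is useful for anyone reproducing this: from memory of the ODM documentation, a geo.txt starts with a projection line, then one line per image with its coordinates. The coordinates below are placeholders, and optional columns (orientation, accuracy) can follow; check the current ODM docs for the exact column order:

```
EPSG:4326
IMG_3188.JPG 152.000000 -27.000000 10.0
IMG_3189.JPG 152.000010 -27.000000 10.0
```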


Here is a sample from the processing log

2022-02-08 04:12:26,939 INFO: -------------------------------------------------------
2022-02-08 04:12:26,947 INFO: IMG_3215.JPG resection inliers: 289 / 477
2022-02-08 04:12:26,949 INFO: Adding IMG_3215.JPG to the reconstruction
2022-02-08 04:12:27,006 WARNING: Shots and/or GCPs are aligned on a single-line. Using vertical prior
2022-02-08 04:12:28,945 DEBUG: Ceres Solver Report: Iterations: 22, Initial cost: 3.687154e+03, Final cost: 1.467742e+03, Termination: CONVERGENCE
2022-02-08 04:12:29,100 INFO: Removed outliers: 226
2022-02-08 04:12:29,109 INFO: -------------------------------------------------------
2022-02-08 04:12:29,116 INFO: IMG_3216.JPG resection inliers: 379 / 565
2022-02-08 04:12:29,119 INFO: Adding IMG_3216.JPG to the reconstruction
2022-02-08 04:12:29,160 INFO: -------------------------------------------------------
2022-02-08 04:12:29,163 INFO: IMG_3194.JPG resection inliers: 154 / 165
2022-02-08 04:12:29,165 INFO: Adding IMG_3194.JPG to the reconstruction
2022-02-08 04:12:29,203 INFO: -------------------------------------------------------
2022-02-08 04:12:29,206 INFO: IMG_3193.JPG resection inliers: 73 / 73
2022-02-08 04:12:29,207 INFO: Adding IMG_3193.JPG to the reconstruction
2022-02-08 04:12:29,233 INFO: -------------------------------------------------------
2022-02-08 04:12:29,239 INFO: IMG_3192.JPG resection inliers: 116 / 120
2022-02-08 04:12:29,241 INFO: Adding IMG_3192.JPG to the reconstruction
2022-02-08 04:12:29,297 INFO: -------------------------------------------------------
2022-02-08 04:12:29,304 INFO: IMG_3191.JPG resection inliers: 193 / 200
2022-02-08 04:12:29,306 INFO: Adding IMG_3191.JPG to the reconstruction
2022-02-08 04:12:29,378 INFO: Shots and/or GCPs are well-conditioned. Using naive 3D-3D alignment.
2022-02-08 04:12:31,185 DEBUG: Ceres Solver Report: Iterations: 18, Initial cost: 3.505069e+03, Final cost: 1.583870e+03, Termination: CONVERGENCE
2022-02-08 04:12:31,370 INFO: Removed outliers: 180
2022-02-08 04:12:31,379 INFO: -------------------------------------------------------
2022-02-08 04:12:31,388 INFO: IMG_3190.JPG resection inliers: 691 / 704
2022-02-08 04:12:31,393 INFO: Adding IMG_3190.JPG to the reconstruction
2022-02-08 04:12:31,474 INFO: -------------------------------------------------------
2022-02-08 04:12:31,481 INFO: IMG_3189.JPG resection inliers: 459 / 460
2022-02-08 04:12:31,484 INFO: Adding IMG_3189.JPG to the reconstruction
2022-02-08 04:12:31,515 INFO: Re-triangulating
2022-02-08 04:12:31,516 WARNING: Shots and/or GCPs are aligned on a single-line. Using vertical prior
2022-02-08 04:12:34,543 DEBUG: Ceres Solver Report: Iterations: 29, Initial cost: 2.059830e+03, Final cost: 1.695187e+03, Termination: CONVERGENCE
2022-02-08 04:12:37,949 DEBUG: Ceres Solver Report: Iterations: 11, Initial cost: 4.100432e+03, Final cost: 2.367485e+03, Termination: CONVERGENCE
2022-02-08 04:12:38,156 INFO: Removed outliers: 803
2022-02-08 04:12:38,166 INFO: -------------------------------------------------------
2022-02-08 04:12:38,169 INFO: IMG_3188.JPG resection inliers: 141 / 143
2022-02-08 04:12:38,171 INFO: Adding IMG_3188.JPG to the reconstruction

I think you need to take more photos. I see homogeneous fence motifs, which I think will confuse WebODM.

My tips:
You can combine these 3 approaches:
a. Multiple photos of the fence from far away, with (approx.) 80% overlap between photos.
b. Multiple photos of the fence closer than before, especially for a detailed/complex model and better mesh texture, with (approx.) 70% overlap.
c. Photos from points of view that you can’t see in steps a & b.
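A quick way to turn those overlap percentages into a spacing between shots; the field of view and camera-to-fence distance below are made-up illustrative numbers, not measured values:

```python
# Back-of-envelope spacing between shots for a target forward overlap.
# The FOV and distance values are illustrative only.
import math

def shot_spacing(distance_m, hfov_deg, overlap):
    """How far to move between shots so consecutive frames share
    `overlap` (0-1) of their horizontal footprint."""
    footprint = 2.0 * distance_m * math.tan(math.radians(hfov_deg) / 2.0)
    return footprint * (1.0 - overlap)

# e.g. 5 m from the fence, ~65 degree horizontal FOV, 80% overlap (tip a):
print(round(shot_spacing(5.0, 65.0, 0.80), 2))  # about 1.27 m between shots
```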


This image of the fence in the web view is an example of the issue I am trying to understand.

In theory, when the top-down perspective is viewed, it should show the ground?

In this image I also note that the top-down perspective is 90 degrees from where it should be, even though the ground is relatively level (not flat, but also not a mountain).


Another aspect I’m thinking about is the system’s ability to know where the horizon is, since it is not assuming nadir.

The dataset would also need an accurate IMU, wouldn’t it?


This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.