360 Pics

I noticed that WebODM claims to be able to process 360° images, so I want to test that.
I sent up the Phantom 4P and took 6 spherical shots in 6 locations at 2 different altitudes.
My question is: does WebODM want the individual pictures so it can stitch them all together, or do I stitch them with another program (ICE or DJI's Pixel) and load that result into WebODM? Also, because they are spherical shots, there's a LOT of sky! What should I do with that?

Ahh, these are the questions of the day!

Ok: two ways to handle the question of individual photos vs. stitched and loaded:

  • If you load the photos directly, then you get to take full advantage of the automated calibration in OpenSfM and you will get more accurate results. The disadvantage is that you may have to bump up your matcher-neighbors parameter to ensure that enough matching neighbors are found. I have, under some conditions, had to bump matcher-neighbors up to 800, which was never an intended use case for OpenSfM and can be pretty computationally expensive. It means that for a 2-5k dataset of these, I get 50+ hours of OpenSfM and then ~4 hours for the rest of the process.
  • Alternatively, you can stitch these into 360 panoramas. You have two options here: 1) stitch into a rectangular image and know that you’ll lose some information at the poles of the stitch; 2) stitch the panoramas right to cubic faces. This is the less lossy of the two and probably the best compromise. It might introduce some small biases to the calibration, but I think the automated calibration in OpenSfM will compensate.
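To make the cubic-faces idea concrete, here is a minimal NumPy sketch that resamples one (front) cube face out of a rectangular 360 panorama. This is my own illustration with nearest-neighbour sampling, not what any particular stitching tool actually does internally:

```python
import numpy as np

def equirect_to_cube_face(pano, face_size):
    """Sample the front (+Z) cube face from an equirectangular panorama.

    pano: H x W x 3 array covering 360x180 degrees.
    Returns a face_size x face_size x 3 array. Nearest-neighbour
    sampling keeps the sketch short; real tools interpolate.
    """
    h, w = pano.shape[:2]
    # Pixel grid in [-1, 1] on the face plane at z = 1.
    u = np.linspace(-1, 1, face_size)
    xx, yy = np.meshgrid(u, u)
    zz = np.ones_like(xx)
    # Direction vectors -> spherical angles.
    lon = np.arctan2(xx, zz)                        # [-pi/4, pi/4] on this face
    lat = np.arctan2(yy, np.sqrt(xx**2 + zz**2))    # likewise
    # Spherical angles -> source pixel coordinates in the panorama.
    px = ((lon / (2 * np.pi) + 0.5) * (w - 1)).astype(int)
    py = ((lat / np.pi + 0.5) * (h - 1)).astype(int)
    return pano[py, px]
```

The other five faces are the same math with the axes rotated; the point is just that every face pixel maps back to a well-defined direction, which is why the cube representation avoids the pole distortion of the rectangular stitch.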

edit: oops! Sky masking:

  • Uh… I don’t know. I have plans to potentially write a tool for sky masking, but no timeline yet, so don’t wait on me! It’s definitely a need. A colleague recently pointed out that looking at leaf area index approaches is a sideways way to get into sky masking, so there may be some pre-built tools that get you much of the way.
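For what it’s worth, a very crude starting point for sky masking is a simple colour heuristic. This Python sketch is my own assumption, not anything built into WebODM or OpenSfM, and it will misfire on water, haze, and anything blue, but it shows the shape of the problem:

```python
import numpy as np

def crude_sky_mask(rgb):
    """Return a boolean mask that is True where a pixel looks like sky.

    Heuristic only: bright pixels where the blue channel dominates
    both red and green. Real scenes will need better methods.
    """
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    brightness = (r + g + b) / 3.0
    return (b > r) & (b > g) & (brightness > 120)
```

The resulting mask could then be used to black out (or alpha out) sky pixels before feeding the images to the pipeline.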

I tried uploading all of the individual pictures with a horrible result. Basically a tiny area was put together with little to no detail.
I stitched the panos together using ICE and uploaded them. I selected Spherical camera. After 12 hours it was still “resizing images”…there were 7 of them.
I will try a couple other settings.

Masking: UGGG… I posted some of my masking attempts/results in another thread. The more I masked, the worse the results were.
Apparently I’m still learning THAT too.

For the processing, did you increase your matcher neighbors? If you didn’t, your results would be terrible.

Set the following:
--camera-lens spherical
--matcher-distance 0
--matcher-neighbors 0

For small datasets only, though: big datasets will require fine-tuning. Setting those matcher parameters to 0 ensures that all images are matched against all other images.
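To see why the zero settings only scale to small datasets: when every image is matched against every other, the number of candidate pairs grows quadratically. A quick back-of-the-envelope in Python (my own illustration, not ODM code):

```python
def candidate_pairs(n_images):
    """Image pairs attempted when every image is matched to every other."""
    return n_images * (n_images - 1) // 2

print(candidate_pairs(7))     # the 7 stitched panos: 21 pairs, trivial
print(candidate_pairs(2000))  # a 2k dataset: 1,999,000 pairs
```

Seven panoramas are nothing, but at the 2-5k image scale mentioned earlier, exhaustive matching is exactly where the 50+ hour OpenSfM runs come from.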

Yeah, still horrible results. I’ll bet I don’t have enough overlap; to be honest, I didn’t even consider overlap when shooting.
I’m going to add the 360s to my existing nadir images and see what happens !!
Any suggestions about resizing images, since there is a huge difference between the nadir and 360 pics?

I think I’m making progress!
I added the individual 360 shots (not the panorama) to the nadir images and I was quite impressed. This is a canyon and the drone was positioned below the lip of the canyon for the 360 shots…at 2 different altitudes. After it was all stitched together you can actually scroll into the canyon and look up under the ridges for details that nadir images wouldn’t get.
Also, the 360 images were taken with a different drone/different camera/different settings, so I think it did a pretty fair job.
Unfortunately the subject matter is crap because there were a LOT of trees to mess things up.

Disclaimer: This was a SAR mission looking for a missing hiker from last year, and it wasn’t my initial intention to do a 3D model…but since I was there I figured “what the heck”. I will try this again with a cleaner subject matter.

Here is the output file: