Best ODM 2.3 settings for processing 360 spherical images? (GoPro Max)

I’m experimenting with a GoPro Max 360 camera to do trail reconstructions. Not surprisingly, it isn’t working very well. I’m currently using the TimeLapse photo mode on the GoPro Max, since that’s the only easy way to get georeferenced images (it isn’t simple to get GPS-tagged images out of GoPro Max video yet).

Is there a list somewhere of the recommended ODM settings for processing datasets like this?

Thanks

Tim

There’s no such list of recommended settings, but I can share my experience: the GoPro Max is pretty low resolution and, as you note, painful to use in a way that yields GPS data.

First thing: set your --camera-lens to spherical. None of this works without doing that, as OpenSfM’s auto setting doesn’t seem to detect spherical cameras.
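For reference, a minimal command-line sketch using the standard Docker invocation (the dataset path and project name are placeholders; the only setting the advice above strictly requires is the lens flag):

    # images live in /path/to/datasets/my-trail-project/images
    docker run -ti --rm -v /path/to/datasets:/datasets opendronemap/odm \
        --project-path /datasets my-trail-project \
        --camera-lens spherical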

With the GoPro Max, I have only had success with relatively dense collections of data. You don’t mention it here, but I know from our conversation that you were bicycling. You might have to make a few passes at that speed to get a dense enough collection, or mount multiple GoPros pointing in different directions on your back.

Here is a somewhat successful collection I did manually (before firmware updates allowed time-lapse capture). It has some geometric issues, as the baseline is too short to get a good average GPS position, but it’s an intriguing start:
https://webodm.cmparks.net/public/task/ea5cc8fc-22b7-4ba8-ab38-6bb3de7abd8e/map/

In early 2021 I will be testing our odm360 camera ball, which should allow for more control and better quality. Its specs compare favorably with $15-30k units, but it will likely cost ~$1,000 plus build time, which is promising.

Hi Tim,

In addition to the device above, increase your matching distance and your number of matcher neighbors; we’re finding that the defaults are often too low for 360 cameras. I’ll have more recommendations soon: we just did some work reconstructing the interior of an orangutan exhibit, and I’m drawing some insights from that work.
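On the command line those settings should correspond to the --matcher-distance and --matcher-neighbors flags; a rough sketch, with values that are only illustrative starting points (not tuned recommendations) and a placeholder project path:

    # raise the pre-matching distance (meters) and neighbor count above the defaults
    docker run -ti --rm -v /path/to/datasets:/datasets opendronemap/odm \
        --project-path /datasets my-360-project \
        --camera-lens spherical \
        --matcher-distance 20 \
        --matcher-neighbors 16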

I’ve been following and reading your posts on libre360 for a while now. Fantastic initiative for new ODM use cases. I hope this is the appropriate place to reply:

I have been testing the code a bit and have the following questions:

  1. Is the sky captured in the 360 images still an issue when processing these datasets? I have had mixed results so far, but wanted to hear your latest recommendations.

  2. You mention in your post that it takes days to process 500 m-1 km of coverage. How might we push it to capture 10 or 100 km? Could we decrease the image capture interval, use ClusterODM, or something else?

  3. I see lots of GoPro Max attempts here. Could I use any 360/spherical camera (Insta360, or even the Insta360 Pro 2 with GPS, GoPro Max, etc.) to produce ground-based point clouds (sky included) in ODM, or do you recommend rebuilding the camera ball because the code is optimised for the RasPi rig? I guess the most difficult piece is extracting an accurate GPS position from the Insta360 or GoPro cameras versus the RasPi/GNSS rig, is that correct?

  4. Is the image quality of these GoPro and Insta360 cameras good enough? It looks like less than 20 MP stitched. Does it produce a decent point cloud?
