Code snippet for masking out camera mount to improve 360 model outputs

Hi all,

I’ve been playing with ODM for creating 3D models from GoPro Max still images. It works quite well if you have enough overlap between images (particularly now that ODM has the new “sky-removal” option), although you have to walk slowly since the minimum capture interval on the GoPro Max is 2 seconds. You can accomplish the same thing with video, but that requires extra processing steps using ODMax to pull the frames and add GPS to them. So if you prefer walking slowly over adding post-processing steps, the 2-second timelapse works quite well, and the images are generally higher resolution and better quality (said with a bit of hand waving).
However, one issue that arises is that the bottom of the image will always have your camera mount (or bike helmet) in it as you can see here:
[image: GSAA2900-small]

My first effort to fix this was an analog hack where I cut a piece of cardboard in the shape of my hat and the camera mount and mounted it under the camera. This worked reasonably well but still leaves artifacts, as you can see here:

The better way to do it is with masking. Since the camera mount is fixed, you should really only need one mask image, but ODM requires a mask for each image. To get around this, I wrote a code snippet that takes a single mask image and copies and renames it correctly for each image in your dataset.
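In rough outline, the snippet does something like this (paths and filenames here are just placeholders, and the `_mask` suffix naming is what I believe ODM expects for per-image masks; the actual snippet is linked below):

```python
import shutil
from pathlib import Path

# Placeholder paths -- point these at your own dataset.
images_dir = Path("project/images")   # folder of 360 stills going into ODM
mask_file = Path("mount_mask.png")    # single mask covering the camera mount

# ODM picks up a mask named "<image_name>_mask.<ext>" alongside each image,
# so copy the one mask once per photo with the matching name.
for img in sorted(images_dir.glob("*")):
    if img.suffix.lower() not in {".jpg", ".jpeg"} or img.stem.endswith("_mask"):
        continue
    dest = images_dir / f"{img.stem}_mask{mask_file.suffix}"
    if not dest.exists():
        shutil.copy(mask_file, dest)
        print(f"wrote {dest.name}")
```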

Below is an example of the same model before and after masking (click on the text to view the models on Sketchfab).

"Unmasked model:
image

Same model with masking:
image

So this works surprisingly well!
But note that you may still get some artifacts if there is non-standard movement under the camera, or if you do something silly like putting your hand on the camera and then not deleting that image from your dataset… :smirk:

Here is the code snippet (link)


Does it shoot raw?

I use two GoPro 10s with raw; it works okay.

Fantastic. Any license on the file && any objection to adding it here, maybe?: ODM/contrib at master · OpenDroneMap/ODM · GitHub


Good idea. I’ve updated the code snippet to include the MIT license (let me know if a different license is preferable).


No, I don’t think so; only JPG.
If you go with the stills they are 15.6 MP. Exporting from the 5.1K video, the images are ~12.4 MP.

I think when I first got the GoPro Max, it had the highest image quality of any camera that could do timelapse, but these days you can get a lot higher resolution and a much higher timelapse framerate with some of the Insta360 cameras.
For example, the Insta360 X3 can shoot 8K (33 MP) at 0.2-second intervals, or 72 MP RAW images at 3-second intervals (which is a bit insane).


Should be just fine as far as AGPL compatibility goes.

Thanks!


:flushed::flushed::flushed:


That was exactly my response.

Actually, the tech-obsessive part of my brain went “OMG, I want that” and then sanity prevailed as I contemplated the workflow and data management nightmare that would be ushered in by timelapses of thousands of 72 MP images.


Ha! Well, as someone who has built a couple of (counts on fingers) ahem… higher-pixel-count 360 cameras, I can tell you it has the potential to drown you in images and data management very quickly indeed.


I dropped it into a pull request: Static masking by smathermather · Pull Request #1552 · OpenDroneMap/ODM · GitHub

I’ve wanted to create this for a couple years, so I’m really happy to see it in the contrib directory.


Also, I should say, if you want your name on that pull request / commit, I’m happy to close mine and accept another from you.


I’ve also been tinkering with setting this up; I’m planning to write a blog post when I figure out a full workflow with my GoPro Fusion 360.

Which lens type did you use, and do the images come out of the GoPro Max as equirectangular? With the Fusion I have to run GoPro Fusion Studio 1.2 (for Mac M1 compatibility; newer versions just crash) to generate the equirectangular images. I haven’t yet tried just using the original two 180-degree images (only the front camera embeds GPS data; I’d have to copy that over).
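If I end up going that route, I imagine something like this with piexif would do the copying (untested, and the filenames are just placeholders):

```python
import piexif

# Hypothetical front/back pair from one Fusion capture.
front = "GPFR0001.JPG"  # front camera image, has GPS EXIF
back = "GPBK0001.JPG"   # back camera image, no GPS EXIF

# Copy the GPS IFD from the front image's EXIF into the back image.
front_exif = piexif.load(front)
back_exif = piexif.load(back)
back_exif["GPS"] = front_exif["GPS"]
piexif.insert(piexif.dump(back_exif), back)
```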

I had good results without masking but with helmet artifacts!

Though it seems to struggle with the mask (which I also wrote a little script to duplicate). Does the mask have to be the same format as the images (JPG/PNG)?

Thanks for the info!


I was using GoPro Max images, not video, and I didn’t crop them, so I set the lens type in ODM to spherical.

I haven’t done any testing as to whether you get better outputs by re-projecting the images into non-spherical, overlapping images and then running them that way.
Have you tried this method?
You can use this code to re-project them if you are interested.
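For illustration, here is a rough sketch of that kind of re-projection using py360convert (my choice for the sketch, not necessarily what the linked code uses); the field of view, yaw step, and output size are arbitrary:

```python
import numpy as np
import py360convert
from PIL import Image

# Slice an equirectangular frame into overlapping perspective views by
# stepping the yaw of a virtual 90-degree camera around the horizon.
equi = np.array(Image.open("GS010001.JPG"))  # placeholder GoPro Max still

for i, yaw in enumerate(range(-180, 180, 45)):  # 45-degree steps => ~50% overlap
    persp = py360convert.e2p(
        equi,
        fov_deg=90,           # field of view of the virtual camera
        u_deg=yaw,            # yaw (horizontal look direction)
        v_deg=0,              # pitch (keep the horizon level)
        out_hw=(1440, 1440),  # output image size in pixels
    )
    Image.fromarray(persp.astype(np.uint8)).save(f"persp_{i:02d}.jpg")
```

Note that the perspective crops lose the original EXIF, so you would still need to copy the GPS over to each output before feeding them to ODM.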

Also, if you aren’t familiar with the work being done at Trek View, have a read through their blog posts, as they have a ton of useful information, some of it with links to code.

The Medium posts by Pixel8 were also all very interesting (until they got bought by Snap and went quiet).

I have (in the project you funded, Tim!), and it is better to use the spherical images. The reason is simple: the GPS on the GoPro is not fantastic, and processing them as spherical ensures that the six cube-face intermediate images that OpenSfM generates are co-registered to the same position and processed as having the same coordinate. This reduces overall noise in the output, likely both from inter-frame differences and from model robustness (more matches and control per camera position: 6x more).

I haven’t tested using the raw frames. At the time, OpenSfM had insufficient support for such lenses. I believe this has improved and is worth a test, as there is a notable reduction in information when converting from the image pair to the spherical projection.

edit: if memory serves me, ODMax handles raw images and could be extended easily to reconstruct the raw two-camera output.

