Manually fixing meshes and reprocessing

I’m new to ODM as of this weekend and am already getting fantastic results. Thank you to everyone for your hard work on this! I look forward to being able to contribute as I become more familiar with it.

I am happy with the results for most of my reconstruction, but I want to ‘correct’ the model of the main building. My plan is to take the initial output mesh and manually fix it by cutting out the building’s triangles, putting in a hand-modeled replacement with flat walls, etc., and then re-running the pipeline from that point on. Is this feasible, and do you have any suggestions? Are the PLY files in odm_meshing used as input to downstream stages (and will they be picked up if modified), or is that just an output location? I think I saw another post that mentioned command-line args to resume processing from a given stage. I’ll get set up to build and dig into the code soon, but I was hoping for a quick win by just running from a Docker image as I am now.

BTW, I was thinking of building a utility to help facilitate this: a simple point-and-click interface to build flat surfaces and to cut and replace areas of geometry when you want to manually fix or augment the model. I’m not sure whether the texturing phase will create a lot of overshoot errors with modified meshes. From past reconstruction projects I know that mismatched geometry (a slightly inflated or deflated mesh area) can lead to this, and even though hand-modeled geometry will be more correct, it may not be the best fit for multiple slightly misaligned images, so some texturing-algorithm tweaks may be needed as well.

Thanks for any input.

Perfectly feasible. Assuming you processed with the defaults, after you modify the odm_texturing_25d/odm_textured_model.obj model, run the dataset passing --rerun-from odm_georeferencing. That will resume the pipeline from the proper stage to use your model.

If you want to modify the plain mesh instead and re-run texturing after changing the geometry, you can modify odm_meshing/odm_25dmesh.ply and pass --rerun-from mvs_texturing.
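
If you end up scripting the edit, something along these lines should work; a minimal sketch, assuming the trimesh package and the usual Docker invocation with the project under ./datasets (hypothetical layout; adjust paths to your setup):

```python
# Minimal sketch, assuming trimesh and a project at ./datasets/project
# (hypothetical layout). Edit the mesh, save it, then resume the pipeline.
import os
import subprocess

import trimesh

mesh_path = "datasets/project/odm_meshing/odm_25dmesh.ply"
mesh = trimesh.load(mesh_path)
# ... cut out the bad building here and merge in the hand-modeled one ...
mesh.export(mesh_path)

# Re-run ODM from the texturing stage so the edited mesh gets used.
subprocess.run([
    "docker", "run", "-ti", "--rm",
    "-v", os.path.abspath("datasets") + ":/datasets",
    "opendronemap/odm",
    "--project-path", "/datasets", "project",
    "--rerun-from", "mvs_texturing",
], check=True)
```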

It would be amazing to get an interface for this kind of task. Keep us posted 🙂

Yeah, I’ve been meaning to try this myself for a while. I’m super excited to see how it turns out.

Thanks for the replies.
--rerun-from mvs_texturing is what I was after, and a quick test validated that it worked as expected for a simple modification to the mesh. I think this justifies the time, so I’ll move on to the editor.

I’m a huge Unity fan, so I’m going to build some helpers to load an ODM project into Unity, place the inferred camera location for each image output from ODM, and use the cameras as projectors to cast the individual images onto the geometry being built. I think that will help in placing surfaces, since I can get real-time feedback on how they line up and will texture. Now I just need to dust off my math to understand the camera calibrations. It looks like I should be taking the camera params and locations from /opensfm/reconstruction.json and using them to project the images from /opensfm/undistorted if I want to use individual image projections as geometry placement aids.
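
For the projector math on the undistorted images, I believe a plain pinhole projection is enough (the distortion is already removed). Here is a hedged sketch of my current understanding of the OpenSfM conventions; the focal value in reconstruction.json is normalized by the larger image dimension, and the pixel-origin details below are assumptions I still need to verify against the source:

```python
# Hedged sketch: project a world point into an undistorted image with a
# pinhole model. Assumes OpenSfM conventions (camera looks down +Z, focal
# normalized by the larger image dimension); verify the pixel-origin
# details against the OpenSfM source before trusting them.
import numpy as np

def project(point_world, R, t, focal, width, height):
    """Approximate pixel coordinates of a 3D world point."""
    p_cam = R @ point_world + t        # world -> camera frame
    x, y = p_cam[:2] / p_cam[2]        # perspective divide
    size = max(width, height)          # normalization used by OpenSfM
    return np.array([focal * x * size + width / 2.0,
                     focal * y * size + height / 2.0])
```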

Ultimately, I’m guessing the community would probably want something like this as a plugin for Blender?

That would be amazing @don. Blender is definitely my go-to for post-processing and visualization.

Ya, I use Fusion360 and never really liked Blender… but I haven’t used it in years, and I think it went through a major UX overhaul since I last tried it. I’ll give it another go after I’ve done my first pass in Unity.

+1 for Blender.

Godot could be an alternative if Unity-like functionality is needed. https://godotengine.org/

Another interesting possibility could be loading it into QGIS as well, since v3.18 and above should have much better support for 3D data.

That aside, +1 to Blender for me, though Unity does call to me from the distance.

Well, that was a complete pain, but I figured out the format of reconstruction.json, so I can place cameras correctly with orientation and will soon be able to project the original source images onto new geometry to help with alignment. The format of reconstruction.json wasn’t documented quite well enough in OpenSfM, so I should probably write a clarifying doc on it. It was confusing what translation and rotation were versus gps_position, and how to get the camera orientation.
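
For anyone else who hits the same confusion, here is my understanding in sketch form: rotation is an axis-angle world-to-camera rotation, translation is the world-to-camera translation (both from bundle adjustment), and gps_position is just the GPS prior, so the optimized camera center has to be recovered as -RᵀT. A minimal sketch, assuming numpy and scipy are available:

```python
# Minimal sketch: recover camera poses from OpenSfM's reconstruction.json.
# Assumes numpy + scipy. "rotation" is an axis-angle world-to-camera
# rotation; the camera center in world coordinates is -R^T @ t.
import json
import numpy as np
from scipy.spatial.transform import Rotation

with open("opensfm/reconstruction.json") as f:
    reconstruction = json.load(f)[0]  # the file holds a list of reconstructions

for name, shot in reconstruction["shots"].items():
    R = Rotation.from_rotvec(shot["rotation"]).as_matrix()  # world -> camera
    t = np.array(shot["translation"])
    center = -R.T @ t            # optimized camera position (not gps_position)
    forward = R.T @ [0, 0, 1]    # viewing direction; OpenSfM cameras look +Z
    print(name, center, forward)
```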

Yeah! Additions to missing documentation are huge and much appreciated. Thanks for wading through.

The building-shell editing tool worked pretty well. Now I just need to turn the outlines into a mesh, and then I can run it through and see how it works.
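
For the outline-to-mesh step, something like this extrusion approach is what I have in mind; a hypothetical sketch assuming shapely and trimesh (the coordinates and height are made up):

```python
# Hypothetical sketch: turn a traced footprint outline into a flat-walled
# shell by extruding it. Assumes shapely + trimesh; all numbers are made up.
import trimesh
from shapely.geometry import Polygon

outline = Polygon([(0, 0), (10, 0), (10, 8), (0, 8)])          # traced outline
shell = trimesh.creation.extrude_polygon(outline, height=4.5)  # extrude walls
shell.export("building_shell.ply")
```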

There is just pure joy for me over here watching this unfold.

That was a long road, but it worked out pretty well. Here are the before and after shots of the main barn roof (the closer building). The texture looks a little washed out, but it’s actually more accurate to the drone photos. I still need to write the code to extrude the walls up to the roofline, but I was happy with the roofline itself.
[screenshots: snapshot03, snapshot04]

That looks like a significant improvement!

I got the wall creation done, so I have a pretty good model. I did hit one major gotcha that took a while to debug: I’m guessing the texturing phase skips triangles that are big enough that they are not seen in their entirety by a single camera shot (i.e., it only considers cameras that see the whole triangle). Several of my planes were being dropped in texturing until I thought of this. I subdivided the big planar triangles down to a smaller size and it worked.
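
In case it helps anyone else, the subdivision fix can be scripted; a minimal sketch assuming trimesh (max_edge is a guess to tune for your scene scale):

```python
# Minimal sketch, assuming trimesh: split any triangle whose longest edge
# exceeds max_edge so each face can be seen in full by a single camera.
import trimesh
from trimesh.remesh import subdivide_to_size

mesh = trimesh.load("odm_meshing/odm_25dmesh.ply")
v, f = subdivide_to_size(mesh.vertices, mesh.faces, max_edge=2.0)  # scene units
trimesh.Trimesh(vertices=v, faces=f).export("odm_meshing/odm_25dmesh.ply")
```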

I still need to do better removal of the old building and stitch the seam, but I’m at a good resting point. I need to get outside for a bit.

Here are the before and after shots.
[screenshots: snapshot00, snapshot01]

Correct; you want to try to keep triangles evenly sized and distributed (as much as possible).
