Hi, I’m using point clouds to model a racing track, but I keep finding that the point clouds generated in WebODM scatter the points too much to create a smooth surface in a 3D editor.
I always post-process the point cloud in CloudCompare, but I can’t smooth it enough to remove non-existent bumps in the tarmac, especially where there are white features like road lines.
Even after a lot of smoothing and post-processing, when I align meshes to the point cloud the bumps are still there and affect the meshes aligned to them:
I think here it is worth knowing that even lidar surveyors get these effects and have to spend a lot of effort smoothing things out another way - often by fitting a surface to the points rather than meshing the points directly. Have you tried segmenting out the road, fitting a surface to those points, and building the road from that (then re-fitting it to the model you’re using for the virtual race)?
There’s something about the white stripes that upsets the relative geometry of the keypoints found there - in my experience that is not unusual for photogrammetry or lasers. I’m sure pull requests for road detection and smoothing automations would also be welcome.
I have spent a lot of time in CloudCompare doing post-processing, running MLS smoothing multiple times and then a pixelate pass to output a uniform cloud with a defined distance between points.
But running those processes I lose definition and road detail, so I’m looking for a way to improve the initial point cloud in WebODM and reduce the number of smoothing passes I have to run.
Also, if you know of any program or routine used to post-process lidar/laser/photogrammetry point clouds, I’d be thankful - hopefully something open source.
Hi ichsan, thanks for the alternative.
Actually I have found ways inside CloudCompare to smooth the resulting cloud even more, but I’m losing geometry, especially the curve banking; still, the results have been quite good.
I don’t use the resulting 3D model because… to tell the truth, I’m not that good with Blender, and I use the point cloud to model the surrounding buildings and to measure object sizes and placement on the track.
It’s key for me to optimize the output point cloud as much as I can in WebODM before exporting and post-processing it.
Any guidelines on how to optimize the output point cloud are welcome.
I would use classification of the stripes in the orthophoto to separate the stripes from the rest of the point cloud. Leave the non-striped portion untouched, and just smooth the stripes and the areas nearby. Then instead of weird bouncing at the stripes you may get quieter bits of track, but that will be less noticeable and you won’t be over-smoothing the remainder of the track.
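Something like this is what I have in mind for the selective smoothing - an untested sketch, assuming you already have the road points as a numpy array plus a boolean mask saying which points sit on or near a stripe (building that mask from the orthophoto is covered further down); the function name and the 0.15 m radius are just my own placeholders to tune:

```python
import numpy as np
from scipy.spatial import cKDTree

def smooth_stripe_heights(points, is_stripe, radius=0.15):
    """Replace the Z of each stripe-classified point with the median Z of
    all neighbours (stripe and tarmac alike) within `radius` metres in plan
    view, leaving the rest of the track untouched.
    `points` is an (N, 3) array, `is_stripe` a boolean mask of length N."""
    tree = cKDTree(points[:, :2])            # neighbour search in XY only
    smoothed = points.copy()
    for i in np.flatnonzero(is_stripe):
        idx = tree.query_ball_point(points[i, :2], r=radius)
        smoothed[i, 2] = np.median(points[idx, 2])
    return smoothed
```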
In CloudCompare I was thinking about using the quadric 2.5D surface fitter - fit that to the road surface, compute the signed cloud-mesh distance, and subtract that distance from the points? That should (might?) put all the points exactly on a smooth interpolated surface. I do similar things (sometimes) to remove bowling when I don’t care about absolute topography but relative roughness is important.
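To show the idea outside CloudCompare, here is a minimal numpy sketch, assuming the road points are already segmented into an (N, 3) array - it fits z = a + bx + cy + dx² + exy + fy² by least squares and drops every point vertically onto the fitted surface, which stands in for the signed cloud-mesh distance step (all the names are mine, none of this is a CloudCompare API):

```python
import numpy as np

def flatten_to_quadric(points):
    """Fit the 2.5D quadric z = a + b*x + c*y + d*x^2 + e*x*y + f*y^2 to the
    road points by least squares, then move every point vertically onto it.
    Returns the flattened copy and the residuals (roughly the signed
    cloud-to-surface distance)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    A = np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    z_fit = A @ coeffs
    flattened = points.copy()
    flattened[:, 2] = z_fit                  # sit every point on the surface
    return flattened, z - z_fit
```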
I’m not sure if it’s best to fit the quadric to all the road points at once or to segments of it - there’s probably a lot of work in finding the best way to slice up the point cloud and stitch it back together afterwards. CloudCompare makes all of that relatively easy, thankfully. I’m not sure how to regenerate a textured mesh after that - maybe drop the points back into ODM and run just the meshing and texturing steps?
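If fitting per segment works better, the same idea tiled up might look something like this (it reuses flatten_to_quadric from the sketch above; the 10 m tile size is a pure guess, and the seams between tiles would still need blending):

```python
import numpy as np

def flatten_per_tile(points, tile=10.0):
    """Run flatten_to_quadric() tile by tile on square plan-view tiles of
    side `tile` metres; tiles with too few points are left unchanged."""
    smoothed = points.copy()
    ix = np.floor(points[:, 0] / tile).astype(int)
    iy = np.floor(points[:, 1] / tile).astype(int)
    for key in set(zip(ix, iy)):
        sel = (ix == key[0]) & (iy == key[1])
        if sel.sum() >= 6:                   # need at least 6 points for 6 coefficients
            smoothed[sel] = flatten_to_quadric(points[sel])[0]
    return smoothed
```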
In other areas a lot of expensive software is used for this (AutoCAD, Terrasolid etc.).
My main point was that I think smooth meshes just don’t happen straight off of photogrammetric processing (or lidar) for lots of reasons. I think I could have said it in a kinder and more constructive way.
I hope the Cloudcompare suggestions are helpful!
While I think of it, does MeshLab have useful tools for mesh smoothing? I think in there you could also spend quite a lot of time manually shifting vertices around (or in Blender, for that matter).
Stripes are (most likely) your brightest feature, so you should be able to use a threshold on your orthophoto to separate the stripes from everything else. The raster calculator in QGIS is one way to do this (among many).
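For example, in Python rather than QGIS - an untested sketch with invented file names, and the threshold of 200 is just a starting point to tune against the histogram of your own orthophoto:

```python
import numpy as np
import rasterio

# Band 1 (red) is assumed to have enough contrast on its own, since white
# paint is bright in every band.
with rasterio.open("odm_orthophoto.tif") as src:
    band = src.read(1)
    profile = src.profile

threshold = 200                               # tune on your data
mask = np.where(band > threshold, 255, 0).astype("uint8")

profile.update(count=1, dtype="uint8", nodata=0)
with rasterio.open("stripe_mask.tif", "w", **profile) as dst:
    dst.write(mask, 1)                        # two-colour stripe mask
```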
I would buffer this result by a few cm to get an area just beyond the stripes. You can then use pdal.io in combination with your two-colour raster to add a class to the point cloud that differentiates stripe points from non-stripe points.
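Roughly like this, as an untested sketch assuming a reasonably recent PDAL with its Python bindings - scipy does the buffering on the mask and PDAL samples it onto the cloud; the file names, the 3-cell buffer, the use of Intensity as a scratch dimension, and class code 20 are all arbitrary choices of mine:

```python
import json
import rasterio
from scipy.ndimage import binary_dilation
import pdal  # PDAL Python bindings

# Buffer the stripe mask by a few cells (a few cm at typical orthophoto GSD).
with rasterio.open("stripe_mask.tif") as src:
    mask = src.read(1) > 0
    profile = src.profile
buffered = binary_dilation(mask, iterations=3)
profile.update(count=1, dtype="uint8", nodata=0)
with rasterio.open("stripe_mask_buffered.tif", "w", **profile) as dst:
    dst.write((buffered * 255).astype("uint8"), 1)

# Sample the mask onto the cloud and turn it into a classification code.
pipeline = {
    "pipeline": [
        "track_cloud.laz",
        {
            # write the raster value (0 or 255) into Intensity as a scratch dimension
            "type": "filters.colorization",
            "raster": "stripe_mask_buffered.tif",
            "dimensions": "Intensity:1:1.0",
        },
        {
            # promote the sampled value to an arbitrary "stripe" class
            "type": "filters.assign",
            "value": "Classification = 20 WHERE Intensity > 0",
        },
        "track_cloud_classified.laz",
    ]
}
pdal.Pipeline(json.dumps(pipeline)).execute()
```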
The rest is then figuring out how to smooth and blend between the two, or just filling the gaps you’ve made where the stripes were with smooth points. Anything I say beyond this point is supposition and would require playing with a range of options, but between the tools in CloudCompare and PDAL you should have everything you need to manipulate it from there.