I’ve had a difficult time capturing building facades using nadir imagery, but when I include images from an orbital mission I get a lot of area that is quite far from the building of interest. The point cloud ends up covering a lot of ground away from the building, and as a result the mesh that gets created is of poor quality. Ideally I’d like to edit the point cloud before running the mesh step. I can do this in CloudCompare, but I’m not sure of the steps involved in running the mesh creation in WebODM. Can someone outline the steps necessary (using Lightning) to do this? Can I run only the mesh? Would I have to edit the point cloud and then load it back to a certain location for it to be used in the mesh processing?
Any info would be greatly appreciated. I process using Lightning since my computer can’t handle larger projects.
Is there any plan to allow editing of point clouds in WebODM or Lightning?
Check out auto-boundary and possibly --boundary since they’ll be present in Lightning.
(Docs are underway for these new features, but it looks like the GitHub CI/CD for docs is broken for some reason, so they are not public quite yet.)
| Parameter Type: Boolean
| Parameter Domain:
|   True: ``--auto-boundary``
|   False: ``null``
| Parameter Default:
|   False: ``null``
CPU ●○○ | Low
GPU ○○○ | None
HDD ○○○ | None
RAM ●○○ | Low
Time ●○○ | Low
What Is Auto-Boundary?
``--auto-boundary`` is a process that seeks to limit the boundaries of the reconstruction based upon a K-Means filtered Convex Hull buffered by 20x the mean GSD of the dataset.
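To make the idea concrete, here is a minimal Python sketch of the geometry described above: take a set of ground points, build a convex hull around them, and pad it by 20x the mean GSD. This is only an illustration of the concept, not ODM's actual implementation (which also applies K-Means filtering to reject outliers before hulling, and operates on the reconstruction itself).

```python
def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices counter-clockwise."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def boundary_buffer_m(mean_gsd_cm_per_px):
    """Distance used to pad the hull outward: 20x the mean GSD, in metres."""
    return 20 * mean_gsd_cm_per_px / 100.0

# Example: five ground points (metres) and a 2.5 cm/px mean GSD.
points = [(0, 0), (100, 0), (100, 80), (0, 80), (50, 40)]
hull = convex_hull(points)
print(hull)                    # interior point (50, 40) is dropped
print(boundary_buffer_m(2.5))  # 0.5 m of padding
```

The key takeaway is how small the buffer is: at a typical 2.5 cm/px GSD, the hull only grows by half a metre, which is why the clipping feels conservative in practice.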
When Is Auto-Boundary Helpful?
``--auto-boundary`` is appropriate for any dataset where you might want to limit the reconstruction area due to sky or far-away background that you would not normally consider part of the desired reconstruction.
``--auto-boundary`` does not have a meaningful impact on nadir (or near-nadir) imagery without sky/background, making it superfluous but safe to include.
In other words, if you would consider masking the image, ``--auto-boundary`` is likely a good choice.
Why would one use auto-boundary?
Auto-Boundary is most helpful in preventing the reconstruction area from growing needlessly large when things like sky, clouds, or far-away features like treelines get included in the reconstruction.
By preventing the boundaries of the reconstruction from growing needlessly large, Out-Of-Memory errors become far less likely, and one will likely see a decrease in processing time due to the smaller area being reconstructed.
``--boundary``: GeoJSON polygon limiting the area of the reconstruction. Can be specified either as a path to a GeoJSON file or as a JSON string representing the contents of a GeoJSON file. Default: ``
Wait, I see it. Thanks, I’ll give auto-boundary a try first. Will it limit the overall output of the project? In other words, will it only output the area within the auto-boundary, or will it use only the area inside the boundary for reconstruction but output everything?
It will limit everything, but it has been super super conservative in my testing, sometimes hard to visually notice.
It can have an outsize impact on processing time/ability to process as a few pixels way off in whateverland can grow the reconstruction extent massively, making a big impact on how much data needs to be generated.
If you’ve already got an orthophoto you’ve processed of the site, I like loading that into QGIS and making my ``--boundary`` GeoJSON with that as a basemap, then re-processing the entire dataset to just that boundary so everything is super clean/crisp.
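If you'd rather skip QGIS, here's a minimal sketch of writing a ``--boundary`` GeoJSON by hand with only the standard library. The (lon, lat) corner coordinates below are made up for illustration; replace them with corners read off your own orthophoto.

```python
import json

# Made-up corner coordinates (lon, lat, WGS84); substitute your own.
corners = [
    (-91.9946, 38.9253),
    (-91.9938, 38.9253),
    (-91.9938, 38.9259),
    (-91.9946, 38.9259),
]
# GeoJSON polygon rings must repeat the first vertex at the end.
ring = [list(c) for c in corners] + [list(corners[0])]

boundary = {
    "type": "FeatureCollection",
    "features": [{
        "type": "Feature",
        "properties": {},
        "geometry": {"type": "Polygon", "coordinates": [ring]},
    }],
}

with open("boundary.geojson", "w") as f:
    json.dump(boundary, f)
```

You can then pass the file path via ``--boundary boundary.geojson``, or, since ``--boundary`` also accepts the GeoJSON contents as a JSON string, paste the file's contents directly into the option.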
I see what you mean about how conservative ``--auto-boundary`` is. It helped only a little. My issue is that the mesh isn’t even generated completely, probably because of the number of trees in the area around the building. Here’s a pic.