I’ve been watching the RAM usage and I think you’re right. Peak usage by the pdal pipeline is about 10 GB with the one-concurrent-process flag. That’s a crazy amount of RAM to multithread that operation: 240 GB, plus 10–20 GB of wiggle room, to get 24 threads running it.
I found this thread when I searched for PDAL memory issues; it looks like installing a large, fast hard drive and enabling about 250 GB of swap space (or a comparably large amount) is all that can be done without a prodigious RAM commitment. http://osgeo-org.1560.x6.nabble.com/pdal-How-to-handle-las-files-that-don-t-fit-in-memory-td5241680.html
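For anyone else landing here, a minimal sketch of the swap-file approach described above. The path and size are assumptions — point it at whatever fast disk you have and size it to your workload:

```shell
#!/bin/sh
# Hypothetical sketch: add a 250 GB swap file on a fast drive so a
# single pdal pipeline can spill past physical RAM.
# /mnt/fastdisk is an assumed mount point -- adjust for your system.
set -e

sudo fallocate -l 250G /mnt/fastdisk/swapfile   # reserve the space
sudo chmod 600 /mnt/fastdisk/swapfile           # swap files must not be world-readable
sudo mkswap /mnt/fastdisk/swapfile              # format as swap
sudo swapon /mnt/fastdisk/swapfile              # enable immediately

# Optionally make it persistent across reboots:
echo '/mnt/fastdisk/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```

Expect heavy slowdown once pdal actually starts paging — swap on even a fast SSD is far slower than RAM — but it beats the process getting killed.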
Yes, generally the folks working on PDAL don’t consider it an issue. And they’re right from the point of view they take: why make the software complicated when you can tile up the data and stitch it back together if you hit the memory ceiling?
For ODM, we aren’t tiling the data to keep memory usage reasonable, but feature requests and/or code contributions are welcome. The DEM code is still young.
Also, with the upcoming integration of split-merge, this may rapidly become a non-issue, as the data will already be tiled into submodels.
Out of curiosity, since PDAL is the consistent bottleneck when DEMs are generated: do you have a sense of the peak memory the rest of the process used (before DEM generation)?
The images came out to about 52 GB of hard drive space. I have a custom script for that which I can share with you if you want. It starts WebODM, mounts a drive, and then runs the Docker image on the drive.
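In the spirit of the script described above, a minimal sketch of that kind of launcher. The device name, mount point, and WebODM checkout path are all assumptions, not the actual script:

```shell
#!/bin/sh
# Hypothetical sketch: mount a dedicated data drive, then (re)start
# WebODM with its task storage pointed at that drive.
set -e

DEVICE=/dev/sdb1            # assumed data drive
MOUNT_POINT=/mnt/odm-data   # assumed mount point
WEBODM_DIR="$HOME/WebODM"   # assumed clone of the WebODM repository

# Mount only if it isn't already mounted
mountpoint -q "$MOUNT_POINT" || sudo mount "$DEVICE" "$MOUNT_POINT"

cd "$WEBODM_DIR"
# --media-dir tells webodm.sh to keep task data on the mounted drive
./webodm.sh restart --media-dir "$MOUNT_POINT"
```

Keeping task data off the system drive matters here because a large dataset can easily eat 50+ GB during processing.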
As far as RAM goes, I’ll rerun the dataset with the last options you suggested overnight and make a timelapse of htop (the task and system resources viewer). I’ll mark the peak RAM usage if you want. It might take a day, it might take 3–4, depending on how much parallelism we lose with concurrency = 1. I won’t start it for probably 3 hours if you have some settings you’d like to test, but my node is your node if you think it will help development in any way at all. I love this program and I’ll do anything I can to help out. You’ve already been a great help.
The rerun with your suggestions worked, but it took 3 hours from the rerun at the DEM stage to completion. What do you think of this dataset? That is, what settings would you change to get better shape reproduction? It was definitely better than the first ones I posted.
I am curious how much RAM it uses when fully parallelized, skipping the DSM/DTM step. I’m wondering if, until we do something nicer, we should govern the process’s parallelism just for the DEM step, and I’m trying to get a sense of what that ratio should be.
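The idea of governing parallelism just for the DEM step can be sketched at the command line: if the point cloud is pre-split into tiles, `xargs -P` caps how many pdal pipelines run at once, independently of how parallel the rest of the pipeline is. The tile layout and pipeline file here are assumptions, not ODM’s actual internals:

```shell
#!/bin/sh
# Hypothetical sketch: throttle only the memory-hungry DEM stage.
# With ~10 GB peak per pdal pipeline, -P 2 keeps this stage under
# ~20 GB even on a box that runs 24 threads elsewhere.
# tiles/*.las and dem.json are assumed inputs.
ls tiles/*.las | xargs -P 2 -I{} \
    pdal pipeline dem.json --readers.las.filename={}
```

`--readers.las.filename=...` uses PDAL’s command-line stage-option override, so one pipeline JSON serves every tile; the merge of the per-tile DEMs would still be a separate step.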
OK, fully parallelized yields this result. It never exceeded 30 GB, but it got very close from timestamp 9:10 onwards. Here’s the video showing hard drive space, resource utilization, and the WebODM output running over a timelapse.
Forgot to tag you, @xmutantson – try this pull request, if you’re comfortable installing and running from the command line. If not, I think these improvements will be available in WebODM in a few weeks and are well worth testing. The improvements aren’t incremental; they’re substantial.
I’ll just wait the few weeks. I don’t have the time to install it manually, but I’ve already come a long way from the quality of the first post, and it sounds like this will make it even better. Do you have any tips for making the walls look better, or is that addressed in the update?
My hunch is that it will be addressed in the update, but it’s possible to have a great-looking point cloud and still be disappointed with the mesh. Time will tell!