I can't seem to process a dataset of 635 16.9-megapixel images on a machine with over 100 GB of RAM because of a std::bad_alloc error.
Last log output is:
[INFO] Creating odm_meshing/tmp/mesh_dsm_r2.82842712475e-06 [idw] from 1 files
{
    "pipeline": [
        {
            "type": "readers.ply",
            "filename": "/code/smvs/smvs_dense_point_cloud.ply"
        },
        {
            "data_type": "float",
            "type": "writers.gdal",
            "filename": "/code/odm_meshing/tmp/mesh_dsm_r2.82842712475e-06.idw.tif",
            "radius": "2.82842712475e-06",
            "output_type": "idw",
            "resolution": 2e-06
        }
    ]
}
Pipeline file: /tmp/tmpJDSKFY.json
[DEBUG] running pdal pipeline -i /tmp/tmpJDSKFY.json
PDAL: std::bad_alloc
Traceback (most recent call last):
  File "/code/run.py", line 47, in <module>
    plasm.execute(niter=1)
  File "/code/scripts/odm_meshing.py", line 98, in process
    max_workers=args.max_concurrency)
  File "/code/opendm/mesh.py", line 35, in create_25dmesh
    max_workers=max_workers
  File "/code/opendm/dem/commands.py", line 38, in create_dems
    fouts = list(e.map(create_dem_for_radius, radius))
  File "/usr/local/lib/python2.7/dist-packages/loky/process_executor.py", line 788, in _chain_from_iterable_of_lists
    for element in iterable:
  File "/usr/local/lib/python2.7/dist-packages/loky/_base.py", line 589, in result_iterator
    yield future.result()
  File "/usr/local/lib/python2.7/dist-packages/loky/_base.py", line 433, in result
    return self.__get_result()
  File "/usr/local/lib/python2.7/dist-packages/loky/_base.py", line 381, in __get_result
    raise self._exception
Exception: Child returned 1
This is based on commit 002a6f0; my smvs/smvs_dense_point_cloud.ply is 2.0 GB and contains 59,376,826 vertices.
I was able to process this dataset before the SMVS/PDAL steps were introduced.
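For what it's worth, a rough way to sanity-check the writers.gdal stage is to estimate how many raster cells the requested resolution implies for the point cloud's extent. This is just my own sketch, not ODM code; the bounds below are placeholders (the real extent could be read with pdal info --summary on the PLY):

    # Rough sanity check (my own sketch, not ODM code): estimate how many cells
    # writers.gdal would have to allocate for a given extent and resolution.
    # The bounds here are placeholders; the real ones can be read with
    # `pdal info --summary` on smvs/smvs_dense_point_cloud.ply.
    def estimated_cells(min_x, max_x, min_y, max_y, resolution):
        cols = (max_x - min_x) / resolution
        rows = (max_y - min_y) / resolution
        return cols * rows

    # Even a tiny 1 x 1 unit extent at resolution 2e-06 implies ~2.5e11 cells,
    # i.e. on the order of a terabyte of float32 data.
    print("%.3e cells" % estimated_cells(0.0, 1.0, 0.0, 1.0, 2e-06))

If that is what is happening, the issue would be the extremely small resolution/radius values in the generated pipeline rather than the machine genuinely lacking memory, but I don't know where those values come from.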