Problems processing with --dsm flag


#1

I'm having problems processing with the --dsm flag using the latest master (72f8ceef02598e9c11b6469fab9faf194bbb8f07). It errors out as follows:

[DEBUG]   running /home/useruser/ODM/build/bin/odm_georef -bundleFile /home/useruser/data/dsm_test/opensfm/bundle_r000.out -inputTransformFile /home/useruser/data/dsm_test/opensfm/geocoords_transformation.txt -inputCoordFile /home/useruser/data/dsm_test/odm_georeferencing/coords.txt -inputFile /home/useruser/data/dsm_test/odm_texturing_25d/odm_textured_model.obj -outputFile /home/useruser/data/dsm_test/odm_texturing_25d/odm_textured_model_geo.obj   -logFile /home/useruser/data/dsm_test/odm_25dgeoreferencing/odm_georeferencing_log.txt -outputTransformFile /home/useruser/data/dsm_test/odm_25dgeoreferencing/odm_georeferencing_transform.txt -georefFileOutputPath /home/useruser/data/dsm_test/odm_25dgeoreferencing/odm_georeferencing_model_geo.txt
[INFO]    Running ODM Georeferencing Cell - Finished
[INFO]    Running ODM DEM Cell
[INFO]    Classify: False
[INFO]    Create DSM: True
[INFO]    Create DTM: False
[INFO]    DEM input file /home/useruser/data/dsm_test/odm_georeferencing/odm_georeferenced_model.laz found: True
[WARNING] Maximum resolution set to GSD - 10.0% (6.09 cm / pixel, requested resolution was 5.0 cm / pixel)
[INFO]    Creating ../data/dsm_test/odm_dem/dsm_r0.03043548895686489 [idw] from 1 files
[DEBUG]   running pdal pipeline -i /tmp/tmplJHDai.json > /dev/null 2>&1
[INFO]    Creating ../data/dsm_test/odm_dem/dsm_r0.06087097791372978 [idw] from 1 files
[DEBUG]   running pdal pipeline -i /tmp/tmpjeJWNh.json > /dev/null 2>&1
[INFO]    Creating ../data/dsm_test/odm_dem/dsm_r0.12174195582745956 [idw] from 1 files
[DEBUG]   running pdal pipeline -i /tmp/tmpPSN1na.json > /dev/null 2>&1
[INFO]    Completed ../data/dsm_test/odm_dem/dsm_r0.12174195582745956 [idw] in 0:09:37.354135
Traceback (most recent call last):
  File "run.py", line 47, in <module>
    plasm.execute(niter=1)
  File "/home/useruser/ODM/scripts/odm_dem.py", line 112, in process
    max_workers=args.max_concurrency
  File "/home/useruser/ODM/opendm/dem/commands.py", line 38, in create_dems
    fouts = list(e.map(create_dem_for_radius, radius))
  File "/usr/local/lib/python2.7/dist-packages/loky/process_executor.py", line 794, in _chain_from_iterable_of_lists
    for element in iterable:
  File "/usr/local/lib/python2.7/dist-packages/loky/_base.py", line 589, in result_iterator
    yield future.result()
  File "/usr/local/lib/python2.7/dist-packages/loky/_base.py", line 433, in result
    return self.__get_result()
  File "/usr/local/lib/python2.7/dist-packages/loky/_base.py", line 381, in __get_result
    raise self._exception
Exception: Child returned 1

This was caused directly by
"""
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/loky/process_executor.py", line 418, in _process_worker
    r = call_item()
  File "/usr/local/lib/python2.7/dist-packages/loky/process_executor.py", line 272, in __call__
    return self.fn(*self.args, **self.kwargs)
  File "/usr/local/lib/python2.7/dist-packages/loky/process_executor.py", line 337, in _process_chunk
    return [fn(*args) for args in chunk]
  File "/home/useruser/ODM/opendm/dem/commands.py", line 92, in create_dem
    pdal.run_pipeline(json, verbose=verbose)
  File "/home/useruser/ODM/opendm/dem/pdal.py", line 232, in run_pipeline
    out = system.run(' '.join(cmd) + ' > /dev/null 2>&1')
  File "/home/useruser/ODM/opendm/system.py", line 34, in run
    raise Exception("Child returned {}".format(retcode))
Exception: Child returned 1
"""

This happens on a few different datasets. I can share those datasets privately.


#2

Mm, if you reduce --dem-gapfill-steps or --max-concurrency, does it make a difference?
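
For reference, rerunning with both reduced would look something like this (the project path and name are placeholders; the flags are ODM's existing options):

python run.py --project-path /path/to/projects dsm_test --dsm --max-concurrency 1 --dem-gapfill-steps 2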


#3

Lowering --max-concurrency to 1 allows it to process all the way through.


#4

I wonder if it's a low-memory problem. The PDAL process is pretty memory intensive. But I can't rule out other causes.
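
To illustrate why --max-concurrency matters here, this is a minimal sketch of the fan-out pattern visible in the traceback above (function and variable names are illustrative, not ODM's exact code). Each radius gets its own pdal pipeline, and with N workers up to N pipelines, each loading the whole point cloud, can run at once, so peak memory scales roughly with the worker count:

# Minimal sketch, not ODM's actual code: one DEM pipeline per radius,
# launched in parallel by a process pool sized like --max-concurrency.
from concurrent.futures import ProcessPoolExecutor
import subprocess

def create_dem_for_radius(radius):
    # Each worker shells out to a full pdal pipeline for its radius,
    # so every concurrent worker holds its own copy of the point cloud.
    subprocess.check_call(["pdal", "pipeline", "-i", "/tmp/dem_r%s.json" % radius])
    return "dsm_r%s.tif" % radius

radii = [0.03, 0.06, 0.12]
with ProcessPoolExecutor(max_workers=3) as executor:  # --max-concurrency
    fouts = list(executor.map(create_dem_for_radius, radii))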


#5

This was on a small dataset that I’ve run through before, so I’m not totally convinced it’s memory. FYI, it’s the same dataset that you tested the new meshing approach on.


#6

But we have a new problem: it's not processing the entire dataset. See the ortho and DSM below; a large swath is missing from the DSM.

[attached screenshots: buildings ortho and DSM]


#7

Disregard that; the other stuff is just lower elevation. :slight_smile:


#8

Ok, now I have hit a memory ceiling, so perhaps your intuition about multiple cores is correct:

[DEBUG]   running pdal pipeline -i /tmp/tmpVQUYDM.json > /dev/null 2>&1
[INFO]    Completed ../../../mnt/volumevolume/msimbasi/focus/all/submodels/submodel_0000/odm_dem/dsm_r0.1 [idw] in 0:07:41.445126
[INFO]    Starting gap-filling with nearest interpolation...

[CImg] *** CImgInstanceException *** [instance(0,0,0,0,(nil),non-shared)] CImg<double>::CImg(): Failed to allocate memory (5.7 Gio) for image (31597,24333,1,1).
Traceback (most recent call last):
  File "run.py", line 47, in <module>
    plasm.execute(niter=1)
  File "/home/gisuser/ODM/scripts/odm_dem.py", line 112, in process
    max_workers=args.max_concurrency
  File "/home/gisuser/ODM/opendm/dem/commands.py", line 52, in create_dems
    gap_fill(fouts[product], fout)
  File "/home/gisuser/ODM/opendm/dem/commands.py", line 122, in gap_fill
    arr[locs] = imgs[i][0].read()[locs]
  File "/usr/local/lib/python2.7/dist-packages/gippy/gippy.py", line 3713, in read
    return _gippy.GeoRaster_read(self, *args, **kwargs)
RuntimeError: [instance(0,0,0,0,(nil),non-shared)] CImg<double>::CImg(): Failed to allocate memory (5.7 Gio) for image (31597,24333,1,1).
[INFO]    DTM is turned on, automatically turning on point cloud classification
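
The failed allocation lines up with the raster dimensions in the error: the nearest-interpolation gap fill reads the whole band into a double-precision array, so for a 31597 x 24333 raster it needs about 5.7 GiB in one piece, regardless of --max-concurrency:

# Rough check of the allocation reported by CImg (8 bytes per pixel for double):
width, height = 31597, 24333
print(width * height * 8 / 2.0**30)   # ~5.73 GiB, matching the "5.7 Gio" in the error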

#9

I should specify that this memory ceiling is on a larger, 180-image dataset.


#10

I think we should automate this choice a bit, much as we do for orthophotos:
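
A rough sketch of what memory-aware selection might look like (psutil and the per-worker estimate are assumptions for illustration, not ODM's actual orthophoto logic):

# Sketch only: cap the DEM worker count by available memory, given a
# per-worker estimate, instead of always honoring --max-concurrency.
import psutil

def dem_workers(requested, est_bytes_per_worker):
    available = psutil.virtual_memory().available
    fit_in_memory = max(1, int(available // est_bytes_per_worker))
    return min(requested, fit_in_memory)

# e.g. --max-concurrency 4 with ~6 GiB estimated per worker
print(dem_workers(4, 6 * 2**30))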