Segmentation fault while running DensifyPointCloud

I’m running ODM on a collection of about 500 images. When the ‘DensifyPointCloud’ step starts, I get a “Segmentation fault”. This is my third time running ODM on images from this site; the other two runs completed without issue. The only difference I can think of is that some of the terrain has changed (it’s a series of images from a construction site). Can anyone tell from the traceback what the potential cause is?

The following is the output from the program:

00:55:08 [App     ] Densifying point-cloud completed: 447441 points (12m30s696ms)
[INFO]    Computing sub-scenes
[INFO]    running "/code/SuperBuild/install/bin/OpenMVS/DensifyPointCloud" "/datasets/2024-08-03/opensfm/undistorted/openmvs/scene.mvs" --sub-scene-area 660000 --max-threads 12 -w "/datasets/2024-08-03/opensfm/undistorted/openmvs/depthmaps" -v 0 --cuda-device -2
00:55:08 [App     ] OpenMVS x32 v2.2.0
00:55:08 [App     ] Build date: Jul 23 2024, 04:52:40
00:55:08 [App     ] CPU:  (12 cores)
00:55:08 [App     ] RAM: 17.54GB Physical Memory 2.00GB Virtual Memory
00:55:08 [App     ] OS: Linux 6.10.0-linuxkit (aarch64)
00:55:08 [App     ] Disk: 
00:55:08 [App     ] warning: no SSE compatible CPU or OS detected
00:55:08 [App     ] Command line: DensifyPointCloud /datasets/2024-08-03/opensfm/undistorted/openmvs/scene.mvs --sub-scene-area 660000 --max-threads 12 -w /datasets/2024-08-03/opensfm/undistorted/openmvs/depthmaps -v 0 --cuda-device -2
00:55:08 [App     ] The camera directions mean is unbalanced; the scene will be considered unbounded (no ROI)
Segmentation fault

===== Dumping Info for Geeks (developers need this to fix bugs) =====
Child returned 139
Traceback (most recent call last):
  File "/code/stages/odm_app.py", line 82, in execute
    self.first_stage.run()
  File "/code/opendm/types.py", line 470, in run
    self.next_stage.run(outputs)
  File "/code/opendm/types.py", line 470, in run
    self.next_stage.run(outputs)
  File "/code/opendm/types.py", line 470, in run
    self.next_stage.run(outputs)
  [Previous line repeated 1 more time]
  File "/code/opendm/types.py", line 449, in run
    self.process(self.args, outputs)
  File "/code/stages/openmvs.py", line 138, in process
    system.run('"%s" "%s" %s' % (context.omvs_densify_path,
  File "/code/opendm/system.py", line 112, in run
    raise SubprocessException("Child returned {}".format(retcode), retcode)
opendm.system.SubprocessException: Child returned 139

===== Done, human-readable information to follow... =====

[ERROR]   Uh oh! Processing stopped because of strange values in the reconstruction. This is often a sign that the input data has some issues or the software cannot deal with it. Have you followed best practices for data acquisition? See https://docs.opendronemap.org/flying/

Are you able to re-run on the latest ODM and increase RAM and/or swap/pagefile?

Unfortunately, there was no change in results. Please let me know what additional information I can provide, or what additional steps I can try.

I downloaded the latest version of ODM: docker pull opendronemap/odm
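For anyone following along, the full sequence looks roughly like this (the dataset path and project name below are illustrative placeholders, not necessarily my exact setup):

```shell
# Pull the latest ODM image
docker pull opendronemap/odm

# Re-run processing; /my/datasets is a host directory containing a
# project folder with an images subdirectory (e.g. 2024-08-03/images).
# Adjust the mount path and project name to your own layout.
docker run -ti --rm \
  -v /my/datasets:/datasets \
  opendronemap/odm \
  --project-path /datasets 2024-08-03
```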

I also increased RAM to 22.5GB and virtual memory to 2.5GB.
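To confirm the new limits are actually visible inside the container (Docker Desktop caps containers at the VM’s configured resources), something like the following can help; reading `/proc/meminfo` avoids depending on any extra tools being present in the image:

```shell
# Print the memory visible inside an ODM container; MemTotal should
# reflect the RAM configured in Docker Desktop's resource settings
docker run --rm --entrypoint cat opendronemap/odm /proc/meminfo | head -n 3
```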

After clearing all the generated files (i.e. starting with just the images directory), I received the following error message:

Filtered depth-maps 517 (100%, 58s789ms)        
03:50:45 [App     ] Densifying point-cloud completed: 447354 points (35m24s13ms)
[INFO]    Computing sub-scenes
[INFO]    running "/code/SuperBuild/install/bin/OpenMVS/DensifyPointCloud" "/datasets/2024-08-03/opensfm/undistorted/openmvs/scene.mvs" --sub-scene-area 660000 --max-threads 12 -w "/datasets/2024-08-03/opensfm/undistorted/openmvs/depthmaps" -v 0 --cuda-device -2
03:50:45 [App     ] OpenMVS x32 v2.2.0
03:50:45 [App     ] Build date: Aug  9 2024, 13:22:14
03:50:45 [App     ] CPU:  (12 cores)
03:50:45 [App     ] RAM: 21.96GB Physical Memory 2.50GB Virtual Memory
03:50:45 [App     ] OS: Linux 6.10.0-linuxkit (aarch64)
03:50:45 [App     ] Disk: 
03:50:45 [App     ] warning: no SSE compatible CPU or OS detected
03:50:45 [App     ] Command line: DensifyPointCloud /datasets/2024-08-03/opensfm/undistorted/openmvs/scene.mvs --sub-scene-area 660000 --max-threads 12 -w /datasets/2024-08-03/opensfm/undistorted/openmvs/depthmaps -v 0 --cuda-device -2
03:50:46 [App     ] The camera directions mean is unbalanced; the scene will be considered unbounded (no ROI)
Segmentation fault

===== Dumping Info for Geeks (developers need this to fix bugs) =====
Child returned 139
Traceback (most recent call last):
  File "/code/stages/odm_app.py", line 82, in execute
    self.first_stage.run()
  File "/code/opendm/types.py", line 470, in run
    self.next_stage.run(outputs)
  File "/code/opendm/types.py", line 470, in run
    self.next_stage.run(outputs)
  File "/code/opendm/types.py", line 470, in run
    self.next_stage.run(outputs)
  [Previous line repeated 1 more time]
  File "/code/opendm/types.py", line 449, in run
    self.process(self.args, outputs)
  File "/code/stages/openmvs.py", line 138, in process
    system.run('"%s" "%s" %s' % (context.omvs_densify_path,
  File "/code/opendm/system.py", line 112, in run
    raise SubprocessException("Child returned {}".format(retcode), retcode)
opendm.system.SubprocessException: Child returned 139

===== Done, human-readable information to follow... =====

[ERROR]   Uh oh! Processing stopped because of strange values in the reconstruction. This is often a sign that the input data has some issues or the software cannot deal with it. Have you followed best practices for data acquisition? See https://docs.opendronemap.org/flying/

If you’ve got enough space to get that RAM + swap total up to 32GB (or more!), it will be much less likely to run into out-of-memory issues with 500 images.

I can add more memory and get up to 32GB of RAM. What would be a good number for swap?

Also, I’ve run this with ~560 images twice without issue. Any idea why fewer images would run into a memory issue?

Swap can be as much as 1-2x physical memory, as long as it’s on fast drive(s). That extra swap is used and released very quickly, so it’s not like other typical memory/swap profiles.

Failures happen inconsistently at the margins. 32GB of RAM and swap combined is outside those margins and should work consistently with most normal settings most of the time. Below that threshold, it will work sometimes (maybe even most of the time) but fail unexpectedly.
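The 1-2x guideline works out to a quick calculation (the 32GB figure below is just an example):

```shell
# Rough swap sizing per the 1-2x-of-RAM guideline
ram_gb=32
min_swap=$ram_gb          # 1x physical RAM
max_swap=$((ram_gb * 2))  # 2x physical RAM
echo "With ${ram_gb}GB RAM, size swap between ${min_swap}GB and ${max_swap}GB"
```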

Thanks for that explanation.

Good news! This latest attempt succeeded!! For the record (and for anyone else attempting something similar), my Docker settings are as follows:

CPUs: 12
RAM: 34GB
Swap: 3.5GB
Virtual Disk: 128GB

Excellent!