Process exited with code 137 + High Definition Image Processing preset

Hi,

I have a machine with 32 GB RAM and enough storage.

I tried to run a dataset with two different option presets:

  1. with the Fast Orthophoto preset
  2. with the High Definition preset

I was testing what the difference between the outputs would be.


The High Definition processing exited with code 137, while I got no error with the other one. The error message says:

It looks like your processing node ran out of memory. If you are using docker, make sure that your docker environment has enough RAM allocated. Alternatively, make sure you have enough physical RAM, reduce the number of images, make your images smaller, or reduce the max-concurrency parameter from the task’s options. You can also try to use a cloud processing node.
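If I understand correctly, exit code 137 is 128 + 9 (SIGKILL), which is what the kernel OOM killer sends, so the message fits. Something like this should confirm it from the Docker side (a sketch; webodm_node-odm_1 is the processing-node container name taken from the docker stats below):

# Ask Docker whether the container was OOM-killed and with what exit code
docker inspect -f 'OOMKilled={{.State.OOMKilled}} ExitCode={{.State.ExitCode}}' webodm_node-odm_1

# Kernel-side confirmation (the OOM killer logs the process it killed)
dmesg | grep -iE 'out of memory|killed process'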

This is an excerpt of the console.txt file I downloaded from the High Definition run:

/usr/local/lib/python2.7/dist-packages/OpenSSL/crypto.py:12: CryptographyDeprecationWarning: Python 2 is no longer supported by the Python core team$
from cryptography import x509
[INFO]    ==============
[INFO]    build_overviews: False
[INFO]    camera_lens: auto
[INFO]    cameras: {}
[INFO]    crop: 3
[INFO]    debug: False
[INFO]    dem_decimation: 1
[INFO]    dem_euclidean_map: False
[INFO]    dem_gapfill_steps: 3
[INFO]    dem_resolution: 2.0
[INFO]    depthmap_resolution: 1000.0
[INFO]    dsm: True
[INFO]    dtm: False
[INFO]    end_with: odm_report
[INFO]    fast_orthophoto: False
[INFO]    feature_type: sift
[INFO]    force_gps: False
[INFO]    gcp: None
[INFO]    gps_accuracy: 15
[INFO]    ignore_gsd: True
[INFO]    matcher_distance: 0
[INFO]    matcher_neighbors: 8
[INFO]    max_concurrency: 8
[INFO]    merge: all
[INFO]    mesh_octree_depth: 10
[INFO]    mesh_point_weight: 4
[INFO]    mesh_samples: 1.0
[INFO]    mesh_size: 200000
[INFO]    min_num_features: 8000
[INFO]    mve_confidence: 0.6
[INFO]    name: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
[INFO]    opensfm_depthmap_method: PATCH_MATCH
[INFO]    opensfm_depthmap_min_consistent_views: 3
[INFO]    opensfm_depthmap_min_patch_sd: 1
[INFO]    optimize_disk_space: False
[INFO]    orthophoto_compression: DEFLATE
[INFO]    orthophoto_cutline: False
[INFO]    orthophoto_no_tiled: False
[INFO]    orthophoto_png: False
[INFO]    orthophoto_resolution: 2.0
[INFO]    pc_classify: False
[INFO]    pc_csv: False
[INFO]    pc_ept: False
[INFO]    pc_filter: 2.5
[INFO]    pc_las: False
[INFO]    pc_rectify: False
[INFO]    pc_sample: 0
[INFO]    project_path: /var/www/data
[INFO]    radiometric_calibration: none
[INFO]    rerun: None
[INFO]    rerun_all: False
[INFO]    rerun_from: None
[INFO]    resize_to: 2048
[INFO]    skip_3dmodel: False
[INFO]    sm_cluster: None
[INFO]    smrf_scalar: 1.25
[INFO]    smrf_slope: 0.15
[INFO]    smrf_threshold: 0.5
[INFO]    smrf_window: 18.0
[INFO]    split: 999999
[INFO]    split_multitracks: False
[INFO]    split_overlap: 150
[INFO]    texturing_data_term: gmi
[INFO]    texturing_keep_unseen_faces: False
[INFO]    texturing_nadir_weight: 16
[INFO]    texturing_outlier_removal_type: gauss_clamping
[INFO]    texturing_skip_global_seam_leveling: False
[INFO]    texturing_skip_hole_filling: False
[INFO]    texturing_skip_local_seam_leveling: False
[INFO]    texturing_skip_visibility_test: False
[INFO]    texturing_tone_mapping: none
[INFO]    time: False
[INFO]    use_3dmesh: False
[INFO]    use_exif: False
[INFO]    use_fixed_camera_params: False
[INFO]    use_hybrid_bundle_adjustment: False
[INFO]    use_opensfm_dense: False
[INFO]    verbose: False
[INFO]    ==============
[INFO]    Running dataset stage
[INFO]    Loading dataset from: /var/www/data/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/images
[INFO]    Loading 452 images
[INFO]    Wrote images database: /var/www/data/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/images.json
[INFO]    Found 452 usable images
[INFO]    Parsing SRS header: WGS84 UTM 44N
[INFO]    Finished dataset stage
[INFO]    Running split stage
[INFO]    Normal dataset, will process all at once.
[INFO]    Finished split stage
[INFO]    Running merge stage
[INFO]    Normal dataset, nothing to merge.
[INFO]    Finished merge stage
[INFO]    Running opensfm stage
[INFO]    Altitude data detected, enabling it for GPS alignment
[INFO]    ['use_exif_size: no', 'flann_algorithm: KDTREE', 'feature_process_size: 2048', 'feature_min_frames: 8000', 'processes: 8', 'matching_gps_n$
[INFO]    running /usr/bin/env python2 /code/SuperBuild/src/opensfm/bin/opensfm extract_metadata "/var/www/data/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx$

This is the docker stats output:

CONTAINER ID        NAME                CPU %               MEM USAGE / LIMIT     MEM %               NET I/O             BLOCK I/O           PIDS
92d1bf032zxz        webapp              0.01%               242MiB / 31.34GiB     0.75%               20.2GB / 9.49GB     5.04GB / 53.2kB     22
1a0e61cabzxz        worker              0.10%               165.1MiB / 31.34GiB   0.51%               41.4GB / 31.2GB     579MB / 20.3MB      12
80b08ceb1zxz        db                  0.00%               14.31MiB / 31.34GiB   0.04%               17.6GB / 41.7GB     593MB / 3.91GB      6
f2bd5995bzxz        webodm_node-odm_1   0.00%               179.7MiB / 31.34GiB   0.56%               13.5GB / 6.19GB     169GB / 14GB        59
e7d43f3e4zxz        broker              0.20%               46.66MiB / 31.34GiB   0.15%               406MB / 234MB       120MB / 4.16GB      5

Please help me figure out what I need to change in the options, or whether I need to add more RAM.

Let me know if more information is needed regarding the above.

Thanks in advance.

452 images is near the limit of what 32GB can do without adding swap (virtual memory). There is just one small portion of the tool chain that tends to use a lot of RAM, so it is usually sensible to prevent crashes by adding at least as much swap as you have RAM, and up to 2x that amount.

So for a 32GB RAM machine, add 32-64GB of swap, per the directions here:
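In case the link does not load, the usual Linux recipe is roughly this (a sketch assuming a 32GB swap file at /swapfile, run as root; adjust the size to taste):

# Create and enable a 32GB swap file
fallocate -l 32G /swapfile          # dd works too if fallocate is unavailable
chmod 600 /swapfile                 # swap must not be readable by other users
mkswap /swapfile                    # format the file as swap
swapon /swapfile                    # enable it immediately
echo '/swapfile none swap sw 0 0' >> /etc/fstab   # enable on every boot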


Hi,

Thanks for the answer. I cannot implement it, since I have a Standard SSD on my Linux VM. Is there any other way, or do I have to increase my RAM?
I have a few questions:

  1. How can I find out at which step the previous processing failed? (From the logs, or somewhere else? See the grep sketch after this list.)
  2. Will this happen only with the High Definition preset? It did not crash with the fast-orthophoto preset.
  3. How much RAM do I need to process around 1000 images, assuming memory usage is kept as low as possible? (Is information about the images needed to answer this?)
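
For question 1, I am wondering whether the last stage marker in console.txt is enough: each stage seems to log a line like "Running dataset stage", so something like this should show where it stopped (a sketch):

# Show the last stage transitions logged before the crash
grep -E 'Running .* stage|Finished .* stage' console.txt | tail -n 4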

I also watched the docker container RAM usage stats for some time. It does not seem to go above 20GB. Or does it reach its PEAK RAM usage at a certain point, and fail with code 137 when it cannot get the RAM it needs?
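To catch the peak, I could log the container memory in a loop while the task runs, something like this (a sketch; 10-second sampling may still miss short spikes):

# Append a memory sample for every container every 10 seconds
while true; do
  docker stats --no-stream --format '{{.Name}} {{.MemUsage}}' >> mem.log
  sleep 10
done
# On cgroup v1 hosts, /sys/fs/cgroup/memory/docker/<container-id>/memory.max_usage_in_bytes
# records the high-water mark directly.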

@smathermather-cm any response to my questions?

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.