Small dataset: Process exited with code 1

This is my first time using ODM, and I am unable to put together an orthomosaic from a very small dataset.

I select my 23 images, choose ‘Auto’ for the processing node, set Options to ‘Fast Orthophoto’, and set Resize Images to ‘Yes’ at 2048px. Then I click Review and start processing.

The images upload and resize fine, but shortly after it starts running, it reports ‘Process exited with code 1’. Here is the debug output:


[INFO] Fast orthophoto is turned on, automatically setting --skip-3dmodel
[INFO] Initializing ODM - Tue Dec 29 21:07:26 2020
[INFO] ==============
[INFO] build_overviews: False
[INFO] camera_lens: auto
[INFO] cameras: {}
[INFO] crop: 3
[INFO] debug: False
[INFO] dem_decimation: 1
[INFO] dem_euclidean_map: False
[INFO] dem_gapfill_steps: 3
[INFO] dem_resolution: 5
[INFO] depthmap_resolution: 640
[INFO] dsm: False
[INFO] dtm: False
[INFO] end_with: odm_report
[INFO] fast_orthophoto: True
[INFO] feature_quality: high
[INFO] feature_type: sift
[INFO] force_gps: False
[INFO] gcp: None
[INFO] geo: None
[INFO] gps_accuracy: 10
[INFO] ignore_gsd: False
[INFO] matcher_distance: 0
[INFO] matcher_neighbors: 8
[INFO] matcher_type: flann
[INFO] max_concurrency: 8
[INFO] merge: all
[INFO] mesh_octree_depth: 11
[INFO] mesh_size: 200000
[INFO] min_num_features: 8000
[INFO] name: 5b99a6c5-b810-490e-bda6-fd804bbdd8d1
[INFO] opensfm_depthmap_method: PATCH_MATCH
[INFO] opensfm_depthmap_min_consistent_views: 3
[INFO] opensfm_depthmap_min_patch_sd: 1
[INFO] optimize_disk_space: False
[INFO] orthophoto_compression: DEFLATE
[INFO] orthophoto_cutline: False
[INFO] orthophoto_no_tiled: False
[INFO] orthophoto_png: False
[INFO] orthophoto_resolution: 5
[INFO] pc_classify: False
[INFO] pc_csv: False
[INFO] pc_ept: False
[INFO] pc_filter: 2.5
[INFO] pc_las: False
[INFO] pc_quality: medium
[INFO] pc_rectify: False
[INFO] pc_sample: 0
[INFO] primary_band: auto
[INFO] project_path: /var/www/data
[INFO] radiometric_calibration: none
[INFO] rerun: None
[INFO] rerun_all: False
[INFO] rerun_from: ['odm_orthophoto', 'odm_report']
[INFO] resize_to: 2048
[INFO] skip_3dmodel: True
[INFO] skip_band_alignment: False
[INFO] sm_cluster: None
[INFO] smrf_scalar: 1.25
[INFO] smrf_slope: 0.15
[INFO] smrf_threshold: 0.5
[INFO] smrf_window: 18.0
[INFO] split: 999999
[INFO] split_overlap: 150
[INFO] texturing_data_term: gmi
[INFO] texturing_outlier_removal_type: gauss_clamping
[INFO] texturing_skip_global_seam_leveling: False
[INFO] texturing_skip_local_seam_leveling: False
[INFO] texturing_tone_mapping: none
[INFO] tiles: False
[INFO] time: False
[INFO] use_3dmesh: False
[INFO] use_exif: False
[INFO] use_fixed_camera_params: False
[INFO] use_hybrid_bundle_adjustment: False
[INFO] use_opensfm_dense: False
[INFO] verbose: False
[INFO] ==============
[INFO] Running dataset stage
[INFO] Loading dataset from: /var/www/data/5b99a6c5-b810-490e-bda6-fd804bbdd8d1/images
[INFO] Loading images database: /var/www/data/5b99a6c5-b810-490e-bda6-fd804bbdd8d1/images.json
[INFO] Found 23 usable images
[INFO] Coordinates file already exist: /var/www/data/5b99a6c5-b810-490e-bda6-fd804bbdd8d1/odm_georeferencing/coords.txt
[INFO] Parsing SRS header: WGS84 UTM 12N
[INFO] Finished dataset stage
[INFO] Running split stage
[INFO] Normal dataset, will process all at once.
[INFO] Finished split stage
[INFO] Running merge stage
[INFO] Normal dataset, nothing to merge.
[INFO] Finished merge stage
[INFO] Running opensfm stage
[WARNING] /var/www/data/5b99a6c5-b810-490e-bda6-fd804bbdd8d1/opensfm/image_list.txt already exists, not rerunning OpenSfM setup
[INFO] running /code/SuperBuild/src/opensfm/bin/opensfm detect_features "/var/www/data/5b99a6c5-b810-490e-bda6-fd804bbdd8d1/opensfm"
2020-12-29 21:07:26,709 INFO: Extracting ROOT_SIFT features for image DJI_0012.JPG
2020-12-29 21:07:26,710 INFO: Extracting ROOT_SIFT features for image DJI_0011.JPG
2020-12-29 21:07:26,710 INFO: Extracting ROOT_SIFT features for image DJI_0029.JPG
2020-12-29 21:07:26,710 INFO: Extracting ROOT_SIFT features for image DJI_0021.JPG
2020-12-29 21:07:26,710 INFO: Extracting ROOT_SIFT features for image DJI_0018.JPG
2020-12-29 21:07:26,710 INFO: Extracting ROOT_SIFT features for image DJI_0008.JPG
2020-12-29 21:07:26,710 INFO: Extracting ROOT_SIFT features for image DJI_0026.JPG
2020-12-29 21:07:27,059 DEBUG: Computing sift with threshold 0.1
2020-12-29 21:07:27,060 DEBUG: Computing sift with threshold 0.1
2020-12-29 21:07:27,076 DEBUG: Computing sift with threshold 0.1
2020-12-29 21:07:27,078 DEBUG: Computing sift with threshold 0.1
2020-12-29 21:07:27,079 DEBUG: Computing sift with threshold 0.1
2020-12-29 21:07:27,088 DEBUG: Computing sift with threshold 0.1
2020-12-29 21:07:27,098 DEBUG: Computing sift with threshold 0.1
2020-12-29 21:07:27,100 DEBUG: Computing sift with threshold 0.1
/code/SuperBuild/src/opensfm/bin/opensfm: line 12: 152 Killed "$PYTHON" "$DIR"/opensfm_main.py "$@"
Traceback (most recent call last):
  File "/code/run.py", line 69, in <module>
    app.execute()
  File "/code/stages/odm_app.py", line 83, in execute
    self.first_stage.run()
  File "/code/opendm/types.py", line 361, in run
    self.next_stage.run(outputs)
  File "/code/opendm/types.py", line 361, in run
    self.next_stage.run(outputs)
  File "/code/opendm/types.py", line 361, in run
    self.next_stage.run(outputs)
  File "/code/opendm/types.py", line 342, in run
    self.process(self.args, outputs)
  File "/code/stages/run_opensfm.py", line 32, in process
    octx.feature_matching(self.rerun())
  File "/code/opendm/osfm.py", line 276, in feature_matching
    self.run('detect_features')
  File "/code/opendm/osfm.py", line 25, in run
    system.run('%s/bin/opensfm %s "%s"' %
  File "/code/opendm/system.py", line 79, in run
    raise Exception("Child returned {}".format(retcode))
Exception: Child returned 137


I have a laptop that meets the hardware requirements listed here, and I’m running Linux.

According to other issues on this forum, I believe this could be an out-of-memory situation. But on Linux, as far as I can tell, there is no way to add more RAM to a particular container; it just uses what it needs. I’m somewhat experienced with Docker and I’ve never heard of a way to increase a container’s memory, but I would love to be told otherwise, if that’s the case.
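The ‘Killed’ line and ‘Child returned 137’ (128 + signal 9, i.e. SIGKILL) look consistent with the kernel’s OOM killer stepping in. I assume something like this would confirm it in the kernel log:

sudo dmesg -T | grep -i -E "killed process|out of memory"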

My hardware specs: a modern 8-core CPU, 8 GB RAM plus swap (4-8 GB, I can’t remember exactly), and a 500 GB SSD.

Thanks for the help!

Lower the quality settings to see if the run completes, e.g. change feature_quality to medium or increase orthophoto_resolution.
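For reference, the same options exist as command-line flags if you ever run ODM directly through Docker instead of the web UI. A rough sketch (the dataset path and project name below are placeholders):

docker run -ti --rm -v /home/user/datasets:/datasets opendronemap/odm \
    --project-path /datasets my_project \
    --fast-orthophoto --resize-to 2048 \
    --feature-quality medium --orthophoto-resolution 10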

As for the Docker settings, it sounds like you are using the WSL 2 engine, which automatically provides better performance and does not give you the option to change the resource allocation.
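On native Linux there is no Docker Desktop resources panel: by default a container can use all of the host’s RAM, and any limit is opt-in per container. Roughly (the image and container names below are placeholders):

# cap a new container at 6 GB of RAM and 10 GB of RAM+swap
docker run --memory=6g --memory-swap=10g some-image
# adjust an already-running container
docker update --memory=6g --memory-swap=10g some-container
# watch per-container memory use
docker stats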

I’m not using WSL; just straight Ubuntu Desktop.
Are the settings you mentioned located in the web UI, where I choose the kind of job, the processing node, and those other settings? Or are they somewhere else?

Thanks for the reply!!

I found those settings and was able to get the job to finish by lowering the quality a bit. Thanks for the tip! I will have to keep tweaking this until I find a happy medium where it gives good results but doesn’t overwhelm my hardware.

Thanks again!

I was able to run your photoset in about 4½ minutes on WebODM Lightning (I first tried the ‘Default’ preset, but performance was comparable with the ‘Fast Orthophoto’ preset). I expect you will find 8 GB of RAM too restrictive if you are planning to run larger jobs.

Especially with that little swap. When I use ODM on my 8 GB EliteBook 2740p, I give it 32 GB of swap, and it’ll often get up there and beyond at max quality.
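For anyone who wants to grow swap on Ubuntu without repartitioning, a swap file is the usual shortcut; something along these lines (the 16G size is just an example):

sudo fallocate -l 16G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# keep it across reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab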

I am very much a newbie at this myself, and I don’t understand most of the terminology either, which makes it difficult. I had this issue on my Mac to start with as well. I went into Docker, raised the limits on the Resources tab, and it has worked fine on smaller projects since. I am still working out how many images I can process with my machine.
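To double-check what Docker Desktop actually received after changing the Resources tab, this should report the VM’s memory (on native Linux it simply reports host RAM):

docker info | grep -i "total memory"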
