This is the first of what I hope will be many runs in which I attempt to determine the RAM consumption of the different stages under different conditions.
I have Docker Desktop running with 8 CPUs and 56G of RAM on my Mac M1 Max laptop. On it I have configured a one-node Kubernetes cluster with Prometheus and Grafana installed. I ran a single job and monitored its RAM utilization, annotating the end of each stage that looked relevant. The job had 296 images (average size about 8.5MB) and used the following command line:
`--rerun-all --pc-quality high --min-num-features 15000 --feature-quality high --pc-filter 0 --auto-boundary`
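For anyone wanting to pull the same memory curves out of Prometheus directly rather than eyeballing Grafana, here is a minimal sketch. It assumes things not stated in the post: Prometheus port-forwarded to `localhost:9090`, and the cAdvisor metric `container_memory_working_set_bytes` being scraped, as it is in a typical kube-prometheus-stack install.

```python
# Sketch: pull a container's memory series from Prometheus over HTTP.
# Assumptions (not from the post): Prometheus reachable at PROM_URL, and the
# cAdvisor metric container_memory_working_set_bytes available, as in a
# typical kube-prometheus-stack install.
import json
import urllib.parse
import urllib.request

PROM_URL = "http://localhost:9090"  # assumed port-forwarded Prometheus


def build_range_url(query: str, start: str, end: str, step: str = "15s") -> str:
    """Build a Prometheus /api/v1/query_range URL for the given PromQL query."""
    params = urllib.parse.urlencode(
        {"query": query, "start": start, "end": end, "step": step}
    )
    return f"{PROM_URL}/api/v1/query_range?{params}"


def fetch_memory_series(pod_regex: str, start: str, end: str):
    """Fetch working-set memory (bytes) for pods whose names match pod_regex."""
    promql = f'container_memory_working_set_bytes{{pod=~"{pod_regex}"}}'
    url = build_range_url(promql, start, end)
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["data"]["result"]
```

The `step` of 15s matches the default Prometheus scrape interval; a coarser step flattens short-lived peaks like the ones discussed below.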
The annotations from left to right:
- 16:57:26 end of opensfm detect_features, beginning of opensfm match_features
- 17:01:57 end of opensfm match_features, beginning of opensfm reconstruct
- 17:15:28 end of opensfm reconstruct, beginning of opensfm undistortion
- 17:20:26 end of opensfm, beginning of openmvs densifypointcloud
- 17:36:18 end of estimated depth maps, beginning of fused depth maps
- 17:39:00 end of fused depth maps, beginning of meshing
- 17:48:29 end of meshing, beginning of texturing
- 17:52:43 end of texturing, beginning of georeferencing
The two stages that used the most memory were opensfm detect_features, which peaked at about 9.3G, and fused depth maps, which climbed to 15G and stayed there for about two minutes.
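Reading peaks off a Grafana panel is imprecise; given a range-query result, the peak can be computed directly. A tiny sketch, where `series` mimics the `values` array a Prometheus range query returns ([timestamp, value-as-string] pairs; the numbers here are made up for illustration):

```python
# Sketch: find the peak of a Prometheus range-query series, e.g. to pin down
# the ~9.3G detect_features peak more precisely than a Grafana panel shows.
def peak_bytes(values):
    """Max sample in a Prometheus 'values' array of [ts, "bytes"] pairs."""
    return max(float(v) for _, v in values)


# Illustrative samples only, not real data from this run.
series = [[0, "8.1e9"], [15, "9.3e9"], [30, "7.7e9"]]
print(peak_bytes(series) / 1e9)  # peak in GB
```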
Next on my short list: add DTM and DSM generation, see what happens when either or both quality settings are raised to ultra, and process half the images to see how the curves change.