Hi,
I am having memory problems. I have 128GB of system memory with over 200GB of Windows virtual memory. I am running WebODM in Docker with WSL 2. My .wslconfig file was originally configured with memory=112GB and swap=150GB. This gave me the classic memory error:
[ERROR] Whoops! You ran out of memory! Add more RAM to your computer, if you’re using docker configure it to use more memory, for WSL2 make use of .wslconfig (Advanced settings configuration in WSL | Microsoft Docs), resize your images, lower the quality settings or process the images using a cloud provider (e.g. https://webodm.net).
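For reference, my .wslconfig at that point looked roughly like this (a sketch from memory; the exact syntax is in the Microsoft docs linked above):

[wsl2]
memory=112GB
swap=150GB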
Adding an extra 50GB (200GB total) to the swap did not fix the problem. I also noticed that my swap memory was not being used much, so I figure this part of the code needs real system memory to work… Is this possible?
Anyway, I then set memory to 150GB in the configuration file. This is more than the 128GB of real system RAM (and obviously a bad idea). I was hoping Docker would then use Windows virtual memory rather than the Docker swap memory. This gave me a “Processing node went offline” error.
What is the way out of this bind? Is my only solution to reduce the number of images? Using feature-quality: high (rather than ultra) is not an option for my 1950s orthophotos.
When imaging with a drone, Ground Sample Distance (GSD) is essentially the size of a pixel on the ground. With a 400 km² area, the file needs to be quite large to maintain fine detail. If you know the width on the ground of the original image scan, and how many pixels wide the scan is, you can determine GSD pretty well.
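For example (purely illustrative numbers): if a scanned frame covers 5,000 m on the ground and the scan is 10,000 pixels wide, the GSD is about 5,000 m / 10,000 px = 0.5 m per pixel.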
Thanks for that extra info. I will check tomorrow. For now, I can say that the images are 10680x10680 pixels, scanned at 1200 dpi, and the scale is 1:20k. I will check the pixel posting in the morning. Traditionally the scene would cover about 8.1 square miles.
I abandoned using ODM native as I was getting inconsistent results using the same parameters. The reconstructed area was very small compared to my first partially successful runs using Docker with Hyper-V. I had switched to ODM hoping it would be faster and easier to track my different commands using the command prompt.
I can’t understand why some things would work in the Hyper-V machine but not locally. I figured it came down to the versions of ODM being slightly different.
After seeing the inconsistent runs using ODM on Windows, I then reinstalled WebODM using Docker with WSL2.
Are you hoping the parameters you gave me will help, or that using ODM directly on Windows will help? I am using --fast-orthophoto, so I think the dem-resolution option is not useful, and I think orthophoto-resolution is 5 by default. I will check.
Hi,
GSD was estimated at 79.3cm in that report. Pixel size is 0.80m in QGIS, so both are very close. For the “successful” run (the run with the biggest reconstruction), these were the options.
For the run that created the above memory error, the options were the following: The biggest difference between the two runs is the use of some GCPs, min-num-features at 40k (rather than 18k), and pc-tile, I think…
Looks like I mixed up the scale and the altitude. The altitude was 20,000 feet and the scale 1:40,000. The area should be 32.3 square miles. QGIS gives me a GSD of 80cm while ODM estimates it at 79.3cm. Sorry for wasting your time, and thanks for the help.
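As a quick sanity check on those numbers: at 1200 dpi, one scanned pixel is 1/1200 of an inch on the print, which at 1:40,000 corresponds to 40,000/1,200 ≈ 33.3 inches, or roughly 0.85 m on the ground, so the ~0.80 m GSD estimates look about right.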
--pc-tile should, in theory, help with memory pressure.
I’m happy to see you adjusted --crop to 0; it can help get the largest reconstruction, though it may not be as clean around the edges.
However, you’ve added our favorite dangerous flag, --ignore-gsd, to the mix alongside a few other changes. Can you please leave this one out and try to change as few flags as possible when A/B testing?
Another incredibly heavy flag is --mesh-octree-depth, which I believe roughly doubles the memory required with each level.
crop: 0, debug: true, fast-orthophoto: true, feature-quality: ultra, force-gps: true, gps-accuracy: 250, matcher-type: bruteforce, min-num-features: 18000, pc-tile: true, resize-to: -1
I also reduced the threads from 9 to 6 in WSL2.
I just noticed that resize-to was automatically set to 2048. Maybe removing this will help? -1 apparently deactivates this old option.
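For my own notes, if I end up running ODM from the command line again, I believe the same settings would translate to roughly this (a sketch; the docker invocation, volume mount and project path are placeholders, and the line continuations assume a Linux-style shell inside WSL):

docker run -ti --rm -v /my/datasets:/datasets opendronemap/odm \
  --project-path /datasets project \
  --fast-orthophoto --feature-quality ultra --force-gps --gps-accuracy 250 \
  --matcher-type bruteforce --min-num-features 18000 --pc-tile \
  --crop 0 --resize-to -1 --debug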
Hi,
The last run finished but the reconstructed area is very small (much smaller than in my first runs). I will re-run the exact same script to see what is going on. Then I will boost min-num-features to 20,000 or 25,000… How much of this is down to pure chance, based on the number of features found?
Nicolas
Hi,
I have placed screenshots of two identical runs in my Google Drive. As you can see, the results are very different. This is the exact same project; all I did for run 2 was click “restart” from Load Dataset. If you have any idea what is going on, I am ready to listen!