WebODM Lightning Failure due to lack of RAM?


I’m new here, but hopefully you will see more of me as I am determined to get to grips with WebODM. Apologies if my post is in the wrong forum section, or not formatted correctly. Feedback welcome. :slight_smile:

I have been trying to get to grips with WebODM via the very reasonably priced Lightning service. A task has just failed and the console indicates that the node ran out of RAM. Is this possible? Am I configuring something wrong? The message provided was as follows:

It looks like your processing node ran out of memory. If you are using docker, make sure that your docker environment has enough RAM allocated. Alternatively, make sure you have enough physical RAM, reduce the number of images, make your images smaller, or reduce the max-concurrency parameter from the task’s options. You can also try to use a cloud processing node.

Last few lines of console output:

[INFO] DEM input file /var/www/data/10464cc7-6e55-4987-b4a7-fd4dd96a6b3c/odm_georeferencing/odm_georeferenced_model.laz found: True
[WARNING] DEM will not be generated
[INFO] Finished odm_dem stage
[INFO] Running odm_orthophoto stage
[WARNING] Maximum resolution set to GSD - 10.0% (12.8 cm / pixel, requested resolution was 2.0 cm / pixel)
[INFO] running /code/build/bin/odm_orthophoto -inputFiles /var/www/data/10464cc7-6e55-4987-b4a7-fd4dd96a6b3c/odm_texturing_25d/odm_textured_model_geo.obj -logFile /var/www/data/10464cc7-6e55-4987-b4a7-fd4dd96a6b3c/odm_orthophoto/odm_orthophoto_log.txt -outputFile /var/www/data/10464cc7-6e55-4987-b4a7-fd4dd96a6b3c/odm_orthophoto/odm_orthophoto_render.tif -resolution 7.80958091753981 -outputCornerFile /var/www/data/10464cc7-6e55-4987-b4a7-fd4dd96a6b3c/odm_orthophoto/odm_orthophoto_corners.txt
Error in OdmOrthoPhoto:
OpenCV(4.5.0) /code/SuperBuild/src/opencv/modules/core/src/alloc.cpp:73: error: (-4:Insufficient memory) Failed to allocate 36207239524 bytes in function 'OutOfMemoryError'
Traceback (most recent call last):
File "/code/run.py", line 68, in
File "/code/stages/odm_app.py", line 81, in execute
File "/code/opendm/types.py", line 338, in run
File "/code/opendm/types.py", line 338, in run
File "/code/opendm/types.py", line 338, in run
[Previous line repeated 6 more times]
File "/code/opendm/types.py", line 319, in run
self.process(self.args, outputs)
File "/code/stages/odm_orthophoto.py", line 73, in process
system.run('{bin}/odm_orthophoto -inputFiles {models} '
File "/code/opendm/system.py", line 79, in run
raise Exception("Child returned {}".format(retcode))
Exception: Child returned 1
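For what it’s worth, the size of that failed allocation lines up with one enormous raster buffer. A rough back-of-the-envelope sketch (assuming the renderer wants a 4-band, 4-bytes-per-sample canvas, which is my guess rather than something I’ve confirmed in the odm_orthophoto source):

```python
def raster_bytes(width_px, height_px, bands=4, bytes_per_sample=4):
    """Memory footprint of one uncompressed in-memory raster."""
    return width_px * height_px * bands * bytes_per_sample

failed_alloc = 36_207_239_524          # bytes, from the OpenCV error above
pixels = failed_alloc / (4 * 4)        # assuming 4 bands x 4-byte samples
print(f"{failed_alloc / 1e9:.1f} GB ~= {pixels / 1e9:.2f} gigapixels")
# -> 36.2 GB ~= 2.26 gigapixels
```

A multi-gigapixel canvas in one allocation would sink most machines, which is why lowering the orthophoto output resolution (or shrinking the covered area) reduces memory roughly quadratically.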

Any help would be appreciated.

Regards - Woody

What’s the nature of the dataset? Target resolution, number of bands (or RGB), number of images, and please share your settings. It’s possible the Lightning network allocated a machine that wasn’t big enough for your processing settings. I’ve never had that happen, but it is possible.

I’ve also OOM’d on Lightning, but I was being a jerk with the settings, haha.

Welcome! Once we get a better idea of your settings and data, we can get you sorted.

Wow, thanks folks for the rapid replies… this really is a friendly community…

Settings were as follows:

Options: camera-lens: fisheye, skip-3dmodel: true, fast-orthophoto: true

Task was 97 images taken with a Parrot Bebop 2 using Pix4D Capture at 60m.
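As an aside on the resolution warning in the log above: ground sample distance scales linearly with altitude, so the 60m figure matters. A quick sketch of the standard nadir GSD formula; the sensor and focal values below are placeholders for illustration, not real Bebop 2 specs:

```python
def gsd_cm_per_px(sensor_width_mm, focal_mm, altitude_m, image_width_px):
    """Ground sample distance (cm/pixel) for a nadir photo."""
    return (sensor_width_mm * altitude_m * 100) / (focal_mm * image_width_px)

# Placeholder camera values, for illustration only:
print(round(gsd_cm_per_px(6.2, 4.0, 60, 4096), 2))   # -> 2.27 (cm/px at 60 m)
print(round(gsd_cm_per_px(6.2, 4.0, 120, 4096), 2))  # -> 4.54 (cm/px at 120 m)
```

So requesting 2.0 cm/pixel output from imagery whose native GSD is coarser than that is physically impossible, which is why the pipeline caps the resolution.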

I will try to upload the data set somewhere shortly.

Regards - Woody

[edit: Corrected mission altitude.]


That all sounds totally fine.

Glad you find the community welcoming! That’s the goal!

I think the data will be the missing link for solving things.

Did the images from the Bebop 2 have a lot of sky/horizon in each frame?

Data set can be found here: DroneDB Link


Oooh, fun! Dogfooding DroneDB! Pretty exciting to see it being used as intended :slight_smile: Thanks for joining in on that, as well!

I’m processing it locally with all defaults on 28GB RAM, and so far so good.

Out of curiosity, could you rename the photos to remove the + symbol and re-up them to DroneDB? It isn’t thumbnailing them, which I find curious.

No problem ref DroneDB. I thought I would jump on that bandwagon early. I am not a software engineer (I am an aircraft engineer by trade and a hardware hacker/maker for hobbies), so all I can offer regarding DroneDB is to use it and give feedback. I have downloaded the Windows desktop version (and bought a license) and started to play with it. I shall start another thread about DroneDB with some thoughts.

I will try to remove the “+” from the file names and re-upload. I also attempted to use a GCP file (made using GCP Editor Pro, license also bought), but when I tried to use it WebODM failed with an error that an image referenced in the GCP file was not found. I just omitted the GCP file and tried again (that dataset is still running). I’m now wondering if the “+” also caused this issue?
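In case it helps anyone hitting the same error, here is a quick way to cross-check a GCP file against an image folder before submitting. It assumes the usual ODM gcp_list.txt layout (a projection header line, then `geo_x geo_y geo_z px py image_name` per row), so treat it as a sketch rather than a validated tool:

```python
from pathlib import Path

def missing_gcp_images(gcp_file, image_dir):
    """Image names referenced in a gcp_list.txt that are not on disk."""
    have = {p.name for p in Path(image_dir).iterdir()}
    missing = []
    for line in Path(gcp_file).read_text().splitlines()[1:]:  # skip projection header
        parts = line.split()
        if len(parts) >= 6 and parts[5] not in have:
            missing.append(parts[5])
    return missing
```

Any name it prints is one WebODM would fail to match, whether from a stray “+”, different case, or a renamed file.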

Thanks again for all of your help.


Sweet! Powerful combo between Lightning, DroneDB, and GCP Editor Pro! I hope to get a chance to really try it out later this year after I rebuild my Solo.

I don’t know the code well enough to know for sure, but from years of working with GIS software… Stick with bare minimum ASCII characters (a-z,A-Z,0-9,_-) with no spaces and you’ll prevent 99.9% of all headaches :rofl:
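Seconding my own advice, here is a minimal sketch of the kind of cleanup rename I mean. The folder path would be whatever holds your images; it only prints what it would do unless you flip `dry_run`:

```python
import re
from pathlib import Path

SAFE = re.compile(r"[^A-Za-z0-9._-]")  # anything outside the bare-minimum set

def sanitize_names(folder, dry_run=True):
    """Replace risky characters (like '+') in file names with '_'."""
    for p in sorted(Path(folder).iterdir()):
        clean = SAFE.sub("_", p.name)
        if clean != p.name:
            print(f"{p.name} -> {clean}")
            if not dry_run:
                p.rename(p.with_name(clean))
```

Run it once with the default dry run, sanity-check the printed mappings, then call `sanitize_names(folder, dry_run=False)`.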

Just got this from running your data locally:

[INFO]    running /code/SuperBuild/install/bin/texrecon /code/opensfm/undistorted/reconstruction.nvm /code/odm_meshing/odm_mesh.ply /code/odm_texturing/odm_textured_model_geo -d gmi -o gauss_clamping -t none --no_intermediate_results
/code/SuperBuild/install/bin/texrecon (built on Mar  3 2021, 16:58:17)
Load and prepare mesh:
PLY Loader: comment VTK generated PLY File
Reading PLY: 152913 verts... 305756 faces... done.
Generating texture views:
NVM: Loading file...
NVM: Number of views: 97
NVM: Number of features: 30804
        Loading 100%... done. (Took 4.765s)
Building adjacency graph:
        Adding edges 100%... done. (Took 0.401s)
        458581 total edges.
View selection:
        Building BVH from 305756 faces... done. (Took: 403 ms)
        Calculating face qualities 100%... done. (Took 50.54s)
terminate called after throwing an instance of 'std::out_of_range'
  what():  vector::_M_range_check: __n (which is 2190114) >= this->size() (which is 305756)
Traceback (most recent call last):
  File "/code/run.py", line 68, in <module>
  File "/code/stages/odm_app.py", line 81, in execute
  File "/code/opendm/types.py", line 338, in run
  File "/code/opendm/types.py", line 338, in run
  File "/code/opendm/types.py", line 338, in run
  [Previous line repeated 4 more times]
  File "/code/opendm/types.py", line 319, in run
    self.process(self.args, outputs)
  File "/code/stages/mvstex.py", line 104, in process
    system.run('{bin} {nvm_file} {model} {out_dir} '
  File "/code/opendm/system.py", line 79, in run
    raise Exception("Child returned {}".format(retcode))
Exception: Child returned 134

Re-ran it, and processing has continued.

Reconstruction… Not so good:

Looks like lens model issues: what kind of camera/lens does the Bebop have?

It is a fisheye; I don’t know much more about the lens type beyond that, sorry. I have done zero photography beyond pressing click :relaxed:

Ultra-mega fisheye.

I’ve re-run it, forcing fisheye this time, to see if it also fails to reconstruct. I just wanted a baseline where it ran to completion.

It seems unable to finish the depth-map stage. It gets about 2-3 images in and then just hangs on one thread without progressing for minutes on end. Curious.

[INFO] running /code/SuperBuild/install/bin/OpenMVS/DensifyPointCloud "/code/opensfm/undistorted/openmvs/scene.mvs" --resolution-level 3 --min-resolution 512 --max-resolution 4023 --max-threads 8 --number-views-fuse 2 -w "/code/opensfm/undistorted/openmvs/depthmaps" -v 0
12:05:12 [App ] Build date: Mar 3 2021, 17:07:33
12:05:12 [App ] CPU: Intel(R) Core(TM) i7-6700K CPU @ 4.00GHz (8 cores)
12:05:12 [App ] RAM: 23.49GB Physical Memory 232.00GB Virtual Memory
12:05:12 [App ] OS: Linux 5.4.91-microsoft-standard-WSL2 (x86_64)
12:05:12 [App ] SSE & AVX compatible CPU & OS detected
12:05:12 [App ] Command line: /code/opensfm/undistorted/openmvs/scene.mvs --resolution-level 3 --min-resolution 512 --max-resolution 4023 --max-threads 8 --number-views-fuse 2 -w /code/opensfm/undistorted/openmvs/depthmaps -v 0
12:05:18 [App ] Preparing images for dense reconstruction completed: 97 images (6s401ms)
12:05:18 [App ] Selecting images for dense reconstruction completed: 97 images (66ms)
Estimated depth-maps 3 (3.09%, 28s, ETA 14m)…


20 mins post-run and it still has not moved.

I have removed the “+0000” from the filenames (no idea why the drone appended it) and created a new DroneDB upload. 60m mission.

I have 2 further datasets of the same site (85m taken on the same day & 50m taken on another occasion). I sense some of you guys are intrigued by this headache.

I’m starting to think that the best solution is to just buy a DJI Phantom 4 :upside_down_face:

Don’t give up on the Bebop2 quite yet! There are advantages to not having a fragile mechanical gimbal to worry about :stuck_out_tongue:

Yeah, quite intrigued. And as I suspected, thumbnailing works brilliantly without the + character.

Did you test behavior with GCP Editor Pro and removing those + characters from the image names?

Also, care to upload/share the other datasets as well?

I will have a go at that now. I made a crude GCP file using known locations and features with high contrast (I think). Stand by for a GCP file based on this data set. Question: should I upload the GCP file to DroneDB as well?

Yeah sure. I will do it shortly.

(I’m technically “at work” atm and trying to do this on another screen :shushing_face:)

If you don’t mind, I think it’s helpful.

Same. I don’t even have a second screen, though :rofl:

Data set above (Bebop 2 with Pix4D capture) with GCP added 60m → Here (97 images)

Same site, same drone, same day, different mission flown at 85m (with GCP) → Here (56 images)

Same site, same drone, different day (different conditions) flown at 50m → Here (362 images!)

I have cleaned up my other early attempts at using the DroneDB test Hub as well; to save data, datasets other than those above have been deleted. I have also figured out that you can rename datasets within the web portal! Very handy.

[Edit - link to 85m dataset included.]

Apologies, the post above was typed before I created the GCPs, and I posted before adding hyperlinks. I will edit shortly once I have got the GCPs done.

I am having an issue though. The images open rotated 90° in both DroneDB and GCP Editor Pro. If I open them natively (Windows 10) the horizon is correct, but in both software packages above the horizon is 90° out, with the sky all on the right. Any ideas?
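That behavior is the classic EXIF Orientation tag symptom: the pixels are stored rotated and viewers are supposed to rotate on display, but not every tool honors the tag. A stdlib-only sketch to peek at what the tag says (my own quick-and-dirty parser; it only handles the straightforward IFD0 case):

```python
import struct

def exif_orientation(jpeg_bytes):
    """Return the EXIF Orientation value (1-8) from JPEG bytes, or None."""
    if jpeg_bytes[:2] != b"\xff\xd8":          # not a JPEG
        return None
    i = 2
    while i + 4 <= len(jpeg_bytes):
        marker = jpeg_bytes[i + 1]
        size = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])[0]
        if marker == 0xE1 and jpeg_bytes[i + 4:i + 10] == b"Exif\x00\x00":
            tiff = jpeg_bytes[i + 10:i + 2 + size]
            e = "<" if tiff[:2] == b"II" else ">"        # TIFF byte order
            ifd0 = struct.unpack(e + "I", tiff[4:8])[0]  # offset to first IFD
            count = struct.unpack(e + "H", tiff[ifd0:ifd0 + 2])[0]
            for n in range(count):                       # 12-byte IFD entries
                entry = tiff[ifd0 + 2 + 12 * n:ifd0 + 14 + 12 * n]
                if struct.unpack(e + "H", entry[:2])[0] == 0x0112:  # Orientation
                    return struct.unpack(e + "H", entry[8:10])[0]
            return None
        i += 2 + size  # skip to the next segment
    return None
```

If it reports 6 or 8 the pixels really are stored rotated 90°, and a tool that bakes the rotation into the pixels (exiftool, or Pillow’s `ImageOps.exif_transpose`) would make every viewer agree.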

It was brought to my attention that the lens type the Bebop employs isn’t really supported upstream from us in OpenSfM, as it isn’t anything common (rectilinear, brown, spherical, fisheye) but somewhere between spherical and fisheye (hemispherical?), and as such the existing lens models can’t really reconstruct the data from it properly.

I’ve filed an inquiry with them to see what they have to say.

For the time being, I know Pix4D can reconstruct that data (since the Bebop is Parrot Group’s own platform) if you must have something.

Thank you for the info. I think I might be flogging a dead horse with this Bebop… Recommendations for a replacement drone?

DJI Phantom 4 Pro or Parrot Anafi?