72 Image project fails with out of memory error in WebODM Lightning

I tried processing a 72-image dataset of a house captured with Pix4D Capture's Circular mission type. The images are all off-nadir, captured at 120 feet altitude. After running for 2 hours and 50 minutes in WebODM Lightning, I got the out-of-memory error posted below.

Is there an issue with processing all off-nadir images and this mission type in WebODM?

Thanks

Input file size is 50806, 42988
0…10…20…30…40…50…60…70…80…90…100 - done.
[INFO] running gdalbuildvrt -resolution highest -r bilinear "/var/www/data/17ee4a54-ba98-48c4-b44e-d6dced67fd71/odm_dem/merged.vrt" "/var/www/data/17ee4a54-ba98-48c4-b44e-d6dced67fd71/odm_dem/tiles.small_filled.tif" "/var/www/data/17ee4a54-ba98-48c4-b44e-d6dced67fd71/odm_dem/tiles.tmp.tif"
0…10…20…30…40…50…60…70…80…90…100 - done.
[INFO] running gdal_translate -co NUM_THREADS=4 -co TILED=YES -co BIGTIFF=IF_SAFER -co COMPRESS=DEFLATE --config GDAL_CACHEMAX 44.65% "/var/www/data/17ee4a54-ba98-48c4-b44e-d6dced67fd71/odm_dem/merged.vrt" "/var/www/data/17ee4a54-ba98-48c4-b44e-d6dced67fd71/odm_dem/tiles.tif"
Input file size is 50806, 42988
0…10…20…30…40…50…60…70…80…90…100 - done.
[INFO] Starting smoothing…
[INFO] Smoothing iteration 1
[INFO] Completed smoothing to create /var/www/data/17ee4a54-ba98-48c4-b44e-d6dced67fd71/odm_dem/dsm.tif in 0:19:29.543718
[INFO] Completed dsm.tif in 0:36:03.759874
[INFO] Cropping /var/www/data/17ee4a54-ba98-48c4-b44e-d6dced67fd71/odm_dem/dsm.tif
[INFO] running gdalwarp -cutline /var/www/data/17ee4a54-ba98-48c4-b44e-d6dced67fd71/odm_georeferencing/odm_georeferenced_model.bounds.gpkg -crop_to_cutline -co TILED=YES -co COMPRESS=DEFLATE -co BLOCKXSIZE=512 -co BLOCKYSIZE=512 -co BIGTIFF=IF_SAFER -co NUM_THREADS=4 /var/www/data/17ee4a54-ba98-48c4-b44e-d6dced67fd71/odm_dem/dsm.original.tif /var/www/data/17ee4a54-ba98-48c4-b44e-d6dced67fd71/odm_dem/dsm.tif --config GDAL_CACHEMAX 45.65%
Creating output file that is 48073P x 38068L.
Processing /var/www/data/17ee4a54-ba98-48c4-b44e-d6dced67fd71/odm_dem/dsm.original.tif [1/1] : 0Using internal nodata values (e.g. -9999) for image /var/www/data/17ee4a54-ba98-48c4-b44e-d6dced67fd71/odm_dem/dsm.original.tif.
Copying nodata values from source /var/www/data/17ee4a54-ba98-48c4-b44e-d6dced67fd71/odm_dem/dsm.original.tif to destination /var/www/data/17ee4a54-ba98-48c4-b44e-d6dced67fd71/odm_dem/dsm.tif.
…10…20…30…40…50…60…70…80…90…100 - done.
[INFO] Optimizing /var/www/data/17ee4a54-ba98-48c4-b44e-d6dced67fd71/odm_dem/dsm.tif as Cloud Optimized GeoTIFF
[INFO] running gdal_translate -of COG -co NUM_THREADS=4 -co BLOCKSIZE=256 -co COMPRESS=DEFLATE -co PREDICTOR=2 -co BIGTIFF=IF_SAFER -co RESAMPLING=NEAREST --config GDAL_CACHEMAX 45.05% --config GDAL_NUM_THREADS 4 "/var/www/data/17ee4a54-ba98-48c4-b44e-d6dced67fd71/odm_dem/dsm.tif" "/var/www/data/17ee4a54-ba98-48c4-b44e-d6dced67fd71/odm_dem/dsm_cogeo.tif"
Input file size is 48073, 38068
0…10…20…30…40…50…60…70…80…90…100 - done.
[INFO] Finished odm_dem stage
[INFO] Running odm_orthophoto stage
[INFO] running "/code/SuperBuild/install/bin/odm_orthophoto" -inputFiles /var/www/data/17ee4a54-ba98-48c4-b44e-d6dced67fd71/odm_texturing_25d/odm_textured_model_geo.obj -logFile "/var/www/data/17ee4a54-ba98-48c4-b44e-d6dced67fd71/odm_orthophoto/odm_orthophoto_log.txt" -outputFile "/var/www/data/17ee4a54-ba98-48c4-b44e-d6dced67fd71/odm_orthophoto/odm_orthophoto_render.tif" -resolution 50.0 -outputCornerFile "/var/www/data/17ee4a54-ba98-48c4-b44e-d6dced67fd71/odm_orthophoto/odm_orthophoto_corners.txt"
Error in OdmOrthoPhoto:
OpenCV(4.5.0) /code/SuperBuild/src/opencv/modules/core/src/alloc.cpp:73: error: (-4:Insufficient memory) Failed to allocate 17088793040 bytes in function 'OutOfMemoryError'

===== Dumping Info for Geeks (developers need this to fix bugs) =====
Child returned 1
Traceback (most recent call last):
File "/code/stages/odm_app.py", line 94, in execute
self.first_stage.run()
File "/code/opendm/types.py", line 340, in run
self.next_stage.run(outputs)
File "/code/opendm/types.py", line 340, in run
self.next_stage.run(outputs)
File "/code/opendm/types.py", line 340, in run
self.next_stage.run(outputs)
[Previous line repeated 7 more times]
File "/code/opendm/types.py", line 321, in run
self.process(self.args, outputs)
File "/code/stages/odm_orthophoto.py", line 67, in process
system.run('"{odm_ortho_bin}" -inputFiles {models} '
File "/code/opendm/system.py", line 106, in run
raise SubprocessException("Child returned {}".format(retcode), retcode)
opendm.system.SubprocessException: Child returned 1

===== Done, human-readable information to follow… =====

[ERROR] Uh oh! Processing stopped because of strange values in the reconstruction. This is often a sign that the input data has some issues or the software cannot deal with it. Have you followed best practices for data acquisition? See https://docs.opendronemap.org/flying.html
100 - done.

What processing parameters are you using?

Can you share the dataset so I can try reproducing?


I realize now that this mission type (circular) probably has a hard time creating an orthophoto, but all I really want out of it is a 3D model. Is this possible?


Absolutely possible.

I'm guessing that some rogue points were added to the reconstruction that are just noise, but placed so far apart that they are included in the bounds of the orthophoto, thus triggering a request for an abnormally large orthophoto.
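A rough back-of-envelope check is consistent with that: the failed allocation (~17 GB) is the same order of magnitude as a full raster at the dimensions the log reports. A quick sketch (the 8 bytes/pixel figure is my assumption, e.g. four bands at 16 bits each; the actual memory layout may differ):

```python
# Rough size estimate for the raster the orthophoto stage tried to build.
# Dimensions come from the log ("Input file size is 50806, 42988");
# bytes-per-pixel is an assumption (4 bands x 2 bytes), not confirmed.
width, height = 50806, 42988
bytes_per_pixel = 8  # assumed: 4 bands at 16 bits each

total_bytes = width * height * bytes_per_pixel
print(f"{total_bytes / 1e9:.1f} GB")  # same order as the 17 GB allocation in the log
```

So even a modest cluster of far-flung noise points can balloon the requested orthophoto into tens of gigabytes.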

Try lowering pc-filter, and if you don't care about the orthophoto, set its resolution to some large value.


I'm using Lightning, and on the second try I chose 3D Model under options. Does that forego creation of the orthophoto? I will try adjusting the resolution to something high, as I don't care about the orthophoto; I want the point cloud and mesh.


Upped the resolution of the ortho to 10 but wasn’t able to alter the pc-filter value of 2.5 in Lightning.

The process took 18 minutes but finished with disappointing results. The 72 images were a circular mission with plenty of overlap, but the building is all wavy. Is this an issue with my capture, or maybe a parameter I can tweak?

[image]

That's strange. That parameter shouldn't be restricted/limited.

Could be an issue with the capture. What does the point cloud look like?

One issue I see is that these photos were taken from 120 feet and are pretty far off-nadir. Not enough to show the horizon, but enough to show land area away from the house, and it's hilly as well, so ridges around the house are captured in the distance. This probably explains the failed orthophoto, but it could also be a problem for the 3D model. Does pc-filter remove these areas? What is a good value? I assume 2.5 is the default; should it be lower?

When I attempt to change the pc-filter value it's just not enabled. I could try to create a completely new task and see if it's available, but when I edit the old failed task it's not changeable.

Nevermind on the problem with pc-filter. I was able to change it when I re-ran.


You can increase --feature-quality and --pc-quality, set --pc-geometric to true, and maybe try raising --mesh-size and --mesh-octree-depth as far as Lightning will allow.


Made all sorts of changes to the settings based on your recommendations, but this is the best I could get. Is it the capture? Saijin, did you get the link I sent you, and were you able to access the images?

The angle between images is 5 degrees. Maybe it needs to be 2-3 degrees and I don’t have enough images?

[image]


I have not processed them yet! Looks better.

Options: cog: true, crop: 0, debug: true, dem-gapfill-steps: 4, dem-resolution: 1, dsm: true, matcher-neighbors: 16, mesh-size: 300000, min-num-features: 16000, orthophoto-resolution: 1, pc-ept: true, pc-geometric: true, pc-quality: high, use-3dmesh: true, verbose: true

[image]
Point cloud is CLEAN.

I think the issue with the model (at least here) is that a TON of background is being reconstructed, so the vertices used for the mesh are being wasted on geometry you don't care about.

(No seriously, a ton):
[image]

I think you'd likely have to trim/subset the point cloud to just your Area of Interest and then regenerate the mesh from that.
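The trimming step is usually done interactively in CloudCompare or with a tool like PDAL, but conceptually it is just a bounding-box crop. A minimal numpy sketch of the idea (coordinates here are made up for illustration; a real workflow would load the exported .las/.laz point cloud):

```python
import numpy as np

def crop_to_aoi(points, xmin, xmax, ymin, ymax):
    """Keep only points whose X/Y coordinates fall inside the Area of Interest."""
    keep = (
        (points[:, 0] >= xmin) & (points[:, 0] <= xmax)
        & (points[:, 1] >= ymin) & (points[:, 1] <= ymax)
    )
    return points[keep]

# Toy cloud: house-sized cluster near the origin plus distant background
rng = np.random.default_rng(1)
house = rng.uniform(-10, 10, size=(500, 3))
background = rng.uniform(200, 400, size=(500, 3))
cloud = np.vstack([house, background])

aoi = crop_to_aoi(cloud, -15, 15, -15, 15)
print(len(cloud), "->", len(aoi))  # only the house cluster remains
```

With the background gone, the full mesh vertex budget goes to the structure you actually care about.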

(Pre-Processing settings)
[image]


Thanks for looking into this. I'm not sure I understand the XnConvert portion of the response. Is that part of WebODM processing?

Should I re-run using your settings but choose not to create the mesh, edit the point cloud in CloudCompare, and then run the mesh portion of the processing? Or is there a way to limit the photos, removing the ones that contribute to the extraneous area away from the house?

No, it's an external tool I sometimes run to help with matching and final image quality.

Yeah, you could do that workflow. Others on the forum have, though I have not as of yet.

You could make image masks for each of the images to limit what is used for matching & reconstruction.
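For reference, a mask is just a black-and-white companion image: white where pixels should be used, black where they should be ignored. A minimal numpy sketch that builds a centered rectangular mask (the `_mask` file-naming convention is how I understand ODM picks masks up; check the docs for your version before relying on it):

```python
import numpy as np

def make_center_mask(height, width, keep_fraction=0.6):
    """Build a binary mask: white (255) in a centered rectangle covering
    keep_fraction of each dimension, black (0) elsewhere."""
    mask = np.zeros((height, width), dtype=np.uint8)
    y0 = int(height * (1 - keep_fraction) / 2)
    x0 = int(width * (1 - keep_fraction) / 2)
    mask[y0:height - y0, x0:width - x0] = 255
    return mask

mask = make_center_mask(3000, 4000)
# To use with ODM (assumed convention): save it next to the photo as
# e.g. DJI_0001_mask.png using your image library of choice.
print(mask.shape, mask[1500, 2000], mask[0, 0])
```

In practice you'd draw per-image masks around the house rather than use a fixed rectangle, but the file format is the same.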

