WebODM Lightning - task not complete

Hi Guys,

I have a dataset that hasn’t completed processing on WebODM Lightning.

The data is a farm with some bush attached to it, so it might be having trouble identifying features, etc.

I processed 475 images from the same flight with no bush and it came out very nicely.

I have tried to process it twice and will add both error messages.

Machine is an M2 MacBook Pro with 8 GB of RAM.

Dataset: 1,213 images

I usually only change:
Min features: 30,000
Cam lens: Brown
Mesh size: 250,000
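
(For reference, these correspond to the standard ODM options min-num-features, camera-lens and mesh-size. Below is a minimal sketch of how the same task could be submitted to a NodeODM node with the pyodm client; the host, port, token and file list are placeholders, not my actual setup, and a Lightning node needs its own connection details.)

from pyodm import Node

# Placeholder connection details, not a real node.
node = Node("localhost", 3000, "YOUR_TOKEN")

task = node.create_task(
    ["DJI_0001.JPG", "DJI_0002.JPG"],  # placeholder image list (1,213 files in practice)
    {
        "min-num-features": 30000,  # "Min features"
        "camera-lens": "brown",     # "Cam lens"
        "mesh-size": 250000,        # "Mesh size"
    },
)
task.wait_for_completion()
task.download_assets("./results")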

Any advice or help would be much appreciated!

Cheers!

Error message (1st time):

WARNING- getAngle : One or both vectors are null !
WARNING- getAngle : One or both vectors are null !
WARNING- getAngle : One or both vectors are null !
WARNING- getAngle : One or both vectors are null !
WARNING- getAngle : One or both vectors are null !
WARNING- getAngle : One or both vectors are null !
WARNING- getAngle : One or both vectors are null !
WARNING- getAngle : One or both vectors are null !
Segmentation fault (core dumped)

===== Dumping Info for Geeks (developers need this to fix bugs) =====
Child returned 139
Traceback (most recent call last):
File "/code/stages/odm_app.py", line 81, in execute
self.first_stage.run()
File "/code/opendm/types.py", line 398, in run
self.next_stage.run(outputs)
File "/code/opendm/types.py", line 398, in run
self.next_stage.run(outputs)
File "/code/opendm/types.py", line 398, in run
self.next_stage.run(outputs)
[Previous line repeated 3 more times]
File "/code/opendm/types.py", line 377, in run
self.process(self.args, outputs)
File "/code/stages/odm_meshing.py", line 54, in process
mesh.create_25dmesh(tree.filtered_point_cloud, tree.odm_25dmesh,
File "/code/opendm/mesh.py", line 38, in create_25dmesh
mesh = dem_to_mesh_gridded(os.path.join(tmp_directory, 'mesh_dsm.tif'), outMesh, maxVertexCount, maxConcurrency=max(1, available_cores))
File "/code/opendm/mesh.py", line 111, in dem_to_mesh_gridded
raise e
File "/code/opendm/mesh.py", line 98, in dem_to_mesh_gridded
system.run('"{bin}" -inputFile "{infile}" '
File "/code/opendm/system.py", line 110, in run
raise SubprocessException("Child returned {}".format(retcode), retcode)
opendm.system.SubprocessException: Child returned 139

===== Done, human-readable information to follow… =====

[ERROR] Uh oh! Processing stopped because of strange values in the reconstruction. This is often a sign that the input data has some issues or the software cannot deal with it. Have you followed best practices for data acquisition? See Flying Tips — OpenDroneMap 3.1.7 documentation
100 - done.

Error message (2nd time):

2023-05-02 06:56:58,052 INFO: DJI_20230428092709_0291_V.JPG resection inliers: 23774 / 25680
2023-05-02 06:56:58,405 INFO: Adding DJI_20230428092709_0291_V.JPG to the reconstruction
2023-05-02 06:57:12,056 INFO: -------------------------------------------------------
2023-05-02 06:57:12,398 INFO: DJI_20230428092900_0359_V.JPG resection inliers: 19070 / 22417
2023-05-02 06:57:12,692 INFO: Adding DJI_20230428092900_0359_V.JPG to the reconstruction
2023-05-02 06:57:26,556 INFO: -------------------------------------------------------
2023-05-02 06:57:26,909 INFO: DJI_20230428092708_0290_V.JPG resection inliers: 22375 / 24225
2023-05-02 06:57:27,223 INFO: Adding DJI_20230428092708_0290_V.JPG to the reconstruction
2023-05-02 06:57:40,191 INFO: -------------------------------------------------------
2023-05-02 06:57:40,469 INFO: DJI_20230428092925_0375_V.JPG resection inliers: 17942 / 19954
2023-05-02 06:57:40,743 INFO: Adding DJI_20230428092925_0375_V.JPG to the reconstruction
2023-05-02 06:57:43,734 INFO: Shots and/or GCPs are well-conditioned. Using naive 3D-3D alignment.
/code/SuperBuild/install/bin/opensfm/bin/opensfm: line 12: 771 Killed "$PYTHON" "$DIR"/opensfm_main.py "$@"

===== Dumping Info for Geeks (developers need this to fix bugs) =====
Child returned 137
Traceback (most recent call last):
File "/code/stages/odm_app.py", line 81, in execute
self.first_stage.run()
File "/code/opendm/types.py", line 398, in run
self.next_stage.run(outputs)
File "/code/opendm/types.py", line 398, in run
self.next_stage.run(outputs)
File "/code/opendm/types.py", line 398, in run
self.next_stage.run(outputs)
File "/code/opendm/types.py", line 377, in run
self.process(self.args, outputs)
File "/code/stages/run_opensfm.py", line 38, in process
octx.reconstruct(args.rolling_shutter, reconstruction.is_georeferenced(), self.rerun())
File "/code/opendm/osfm.py", line 55, in reconstruct
self.run('reconstruct')
File "/code/opendm/osfm.py", line 34, in run
system.run('"%s" %s "%s"' %
File "/code/opendm/system.py", line 110, in run
raise SubprocessException("Child returned {}".format(retcode), retcode)
opendm.system.SubprocessException: Child returned 137

===== Done, human-readable information to follow… =====

[ERROR] Whoops! You ran out of memory! Add more RAM to your computer, if you’re using docker configure it to use more memory, for WSL2 make use of .wslconfig (Advanced settings configuration in WSL | Microsoft Learn), resize your images, lower the quality settings or process the images using a cloud provider (e.g. https://webodm.net).

Piero has made some adjustments to ODM that should hopefully help with robustness in situations like this.

I’m not sure if they’ve rolled out fully yet, though.

Thanks Saijin. Is there anything I can try to get this one processed? Smaller batches? Or any options I can change?

I don’t have any suggestions other than to try again in a bit, when hopefully the changes will have propagated to Lightning.

Do you have a deadline for that Task?

OK yeah, we need the map for a client ASAP.
I have the data on Google Drive if that helps.

Give it another run on Lightning, and maybe also try processing it under Docker/Linux if you have access, since that is more like a rolling release that gets patches as they are built and deployed.
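
If memory stays tight on the 8 GB machine, the out-of-memory message's suggestion to resize the images before upload is also worth a try. A rough sketch with Pillow (the folder names and the 4000 px cap are just examples, not a recommendation from the ODM docs):

from pathlib import Path
from PIL import Image

src = Path("images")           # originals (placeholder path)
dst = Path("images_resized")   # downscaled copies (placeholder path)
dst.mkdir(exist_ok=True)

for jpg in src.glob("*.JPG"):
    with Image.open(jpg) as im:
        im.thumbnail((4000, 4000))   # cap the longest edge; aspect ratio is preserved
        exif = im.info.get("exif")   # keep EXIF (GPS tags) so georeferencing still works
        if exif:
            im.save(dst / jpg.name, "JPEG", quality=95, exif=exif)
        else:
            im.save(dst / jpg.name, "JPEG", quality=95)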

OK, I will try it again; we don’t have access to those two options.

Can you provide any information on why this is happening? We have processed up to 20 maps, managed to work through a couple of issues, and have had good outcomes, but we cannot keep running into errors like this.

We like the online aspect of the software and would like to continue using the platform if it can be stable.

Cheers!

For the specific error you posted about, it was an abnormality in how the data reconstructed that caused the issue.

We don’t get many datasets that exhibit this issue, so being able to test and reproduce the issue takes time.

The fixes should have rolled out to Lightning to mitigate this error in datasets that exhibit this behavior.
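
For what it’s worth, the "Child returned 139" and "Child returned 137" codes in your dumps follow the usual 128 + signal convention, so you can decode them quickly (a generic Python sketch, nothing ODM-specific):

import signal

def describe_exit(code):
    # Return codes above 128 usually mean the child process was killed by a signal.
    if code > 128:
        return "killed by " + signal.Signals(code - 128).name
    return "exited with status {}".format(code)

print(describe_exit(139))  # SIGSEGV: the segmentation fault in the meshing stage
print(describe_exit(137))  # SIGKILL: typically the out-of-memory killer

That matches the two human-readable [ERROR] messages you got: a crash in the reconstruction the first time, and running out of memory the second.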

So is it a problem with the data or the software?

Sorry just trying to work through it with my boss.

Is there a way we can get datasets processed in future if we experience such problems?

Is there a way I can jump on a call with someone just to discuss some different options?

Thanks so much.

Processed ok on PC.

Having been a project manager and a program manager, I understand the nature of this question quite well. Unfortunately, the answer isn’t always straightforward in photogrammetry. There are many edge cases. As we find them and can replicate them, we patch them, as happened here in short order.

Edge cases are just that: edge cases. This is different from a regression (a bug that comes back) or a new bug (something that affects a wider range of datasets). But, can I assume you’ve now processed and are just trying to understand future risks?

In short, the OpenDroneMap ecosystem is very good at processing aerial data collected in a recommended manner, and often very good at processing data not collected in the recommended manner. Occasionally, we find and patch edge cases like this. Like all software, we do sometimes find bugs and regressions, and we release patches for those as well.

The approach here was typical: share a replicable dataset with the relevant logs, and the software usually gets patched quickly, whether it’s an edge case, a bug, or a regression.

UAV4Geo provides webodm.net as a service for the convenience of using OpenDroneMap for processing datasets without the overhead of running your own infrastructure. The community forums are a place to have these and other kinds of conversations in the open to allow others to benefit from our collective lessons, and in this case document and demonstrate an edge case.

I hope this helps a bit in framing things.

Stephen covered it nicely, I think, but basically, it is an interaction between a particular aspect of your dataset and our reconstruction pipeline. Given that photogrammetric reconstruction (typically) isn’t deterministic, you likely have had other datasets that just didn’t trip the error.

You can ask the Community for help and guidance, especially since folks here have a wealth of experience with processing tricky data, or even massively large datasets (in excess of 90,000 images).

We do not provide phone or video support, but you can always reach out to [email protected] or via Private Message here if you have sensitive data you can’t share and we need your help to test & reproduce the issue.

Thanks guys, for providing more clarity, and also for the support tips!

I didn’t realise I wasn’t posting to everyone here?

Sorry if this is a dumb question, but is there a thread with ideal mapping conditions, things to avoid, etc., to make processing easier?

A lot of us just jump at the chance of good weather, but that isn’t always the best with shadows, etc.

Thanks,
Chase

You are posting to our public Community in an open thread (which I hope is what you wanted?), so everyone here can see and interact with your thread.

No dumb questions, just questions left unasked!

Stephen has an excellent series on flight planning here:

Shadows kind of are what they are, unfortunately. An overcast day is best, but you don’t always get to choose when you can collect. But if you can take that day, take that overcast day 🙂
