Can't process large dataset

Yes, I am using WebODM over Docker. The steps above will come in handy if I ever run into that situation. My next logical question would be: which setup is more efficient and quicker at processing large datasets, the native WebODM Windows install or WebODM over Docker? My laptop runs Windows 10 Pro with a 12-core processor, 128 GB RAM, and 2 SSDs (C: 1 TB and D: 2 TB).

1 Like

I have to admit I just deleted the processing node and restarted WebODM. That recreated the processing node and I was able to start processing.

1 Like

I believe Stephen has indicated that Docker/native on Linux is best, especially for large datasets. I would think that native on Windows would be slightly faster than Docker on Windows, but I don’t have the data to back that up at present.

(GIF: Sigourney Weaver in Alien)
I mean… It worked, right? :rofl:

5 Likes

Ripley was right!

1 Like

About quite literally everything, to be honest.

Hi guys,
I am struggling to get any result at the moment.
The VM with the recently added 64 GB of swap has been running for the past 2 days and 3 hours… so it is in progress.
The PC cannot currently process anything (initialization-error).
On my laptop, where I was able to process bigger datasets in the past, I am getting the following error:

2022-01-12 12:08:16,985 DEBUG: Undistorting image Area1_Route4_0877.JPG
2022-01-12 12:08:17,185 DEBUG: Undistorting image Area1_Route4_0673.JPG
2022-01-12 12:08:17,397 DEBUG: Undistorting image Area1_Route4_0677.JPG
[INFO] running "C:\WebODM\resources\app\apps\ODM\SuperBuild\install\bin\opensfm\bin\opensfm" export_visualsfm --points "C:\WebODM\resources\app\apps\NodeODM\data\bcd41548-9dcc-410e-ae04-e36010cdf2be\opensfm"
[INFO] Finished opensfm stage
[INFO] Running openmvs stage
[INFO] running "C:\WebODM\resources\app\apps\ODM\SuperBuild\install\bin\opensfm\bin\opensfm" export_openmvs "C:\WebODM\resources\app\apps\NodeODM\data\bcd41548-9dcc-410e-ae04-e36010cdf2be\opensfm"
[INFO] Running dense reconstruction. This might take a while.
[INFO] Estimating depthmaps
[INFO] running "C:\WebODM\resources\app\apps\ODM\SuperBuild\install\bin\OpenMVS\DensifyPointCloud" "C:\WebODM\resources\app\apps\NodeODM\data\bcd41548-9dcc-410e-ae04-e36010cdf2be\opensfm\undistorted\openmvs\scene.mvs" --resolution-level 1 --min-resolution 507 --max-resolution 973 --max-threads 12 --number-views-fuse 2 -w "C:\WebODM\resources\app\apps\NodeODM\data\bcd41548-9dcc-410e-ae04-e36010cdf2be\opensfm\undistorted\openmvs\depthmaps" -v 0 --geometric-iters 0
===== Dumping Info for Geeks (developers need this to fix bugs) =====
Child returned 3221226505
Traceback (most recent call last):
File "C:\WebODM\resources\app\apps\ODM\stages\odm_app.py", line 94, in execute
self.first_stage.run()
File "C:\WebODM\resources\app\apps\ODM\opendm\types.py", line 346, in run
self.next_stage.run(outputs)
File "C:\WebODM\resources\app\apps\ODM\opendm\types.py", line 346, in run
self.next_stage.run(outputs)
File "C:\WebODM\resources\app\apps\ODM\opendm\types.py", line 346, in run
self.next_stage.run(outputs)
[Previous line repeated 1 more time]
File "C:\WebODM\resources\app\apps\ODM\opendm\types.py", line 327, in run
self.process(self.args, outputs)
File "C:\WebODM\resources\app\apps\ODM\stages\openmvs.py", line 100, in process
raise e
File "C:\WebODM\resources\app\apps\ODM\stages\openmvs.py", line 91, in process
run_densify()
File "C:\WebODM\resources\app\apps\ODM\stages\openmvs.py", line 86, in run_densify
system.run('"%s" "%s" %s' % (context.omvs_densify_path,
File "C:\WebODM\resources\app\apps\ODM\opendm\system.py", line 106, in run
raise SubprocessException("Child returned {}".format(retcode), retcode)
opendm.system.SubprocessException: Child returned 3221226505

===== Done, human-readable information to follow… =====

[ERROR] The program exited with a strange error code. Please report it at https://community.opendronemap.org

2 Likes

Hi Declan, I understand you are using Windows + Docker, right? Try rebooting the PC and deleting the swap file; it helped me a couple of times before. It will be recreated anyway.

2 Likes

Hi Daniel, I have been looking for a swap file in the root of C:, but I can’t find it. I did see a post saying I can delete it via the registry, but I am always careful about messing around in there.
How did you delete it? I am running Windows 10 Home…

Hi guys,
On the VM, I have noticed a gap in the log between 11:55 and 18:20.
It is currently at this stage again, and there have been no messages since 23:14.
Should I continue to let this run? Is the 'Termination: NO_CONVERGENCE' message significant?

1 Like

I was referring to the Docker image’s swap file; you can read more about it in the Docker/WSL documentation.
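
For anyone else hunting for it: with Docker Desktop on Windows, the swap file usually belongs to the WSL2 backend, so the knobs live in %UserProfile%\.wslconfig rather than in the registry. A minimal sketch, assuming the WSL2 backend (the sizes and the .vhdx path are just example values, adjust to your machine):

[wsl2]
memory=96GB
swap=64GB
# optional: move the swap .vhdx to a drive with more free space
swapfile=D:\\wsl-swap.vhdx

Run wsl --shutdown afterwards so the settings take effect and a fresh swap file is created.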

2 Likes

Monitor CPU, GPU, RAM, and HDD access for the task. It is likely still processing.

And no, that message is not an error, so don’t worry about it.
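
As a small aside: if the node runs in Docker, one easy way to watch the container’s live CPU and memory from a terminal is:

docker stats

On the Windows side, Task Manager’s Performance tab covers the CPU/RAM/disk part of the same picture.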

2 Likes

Just checking whether this still looks like it is processing…
No log entry in 27 hours.
CPU usage is between 100% and 3100% (32 cores).

3 Likes

This thread is like reading a thriller! And learning a lot along the way!

How did it go? Did it complete? Is it still running?!?

5 Likes

So… it got to 50 hours without any updates, so I cancelled it, cleared out the files, and started a new process with half of the images. It is now running on 4,019 images. Maybe I should have left it for another few days, but the decision is made now, so let’s see how this progresses :slight_smile:

3 Likes

You made the right decision, I suspect. By contrast, I lost 50 hours or so of processing last week when I thought a process for 20k images was stuck and killed it prematurely.

4k images will work more predictably on your system for sure.

3 Likes

I thought so too… but…

I am going to set --feature-quality medium and try again.

3 Likes

Yes, you are at the top end of what might be possible with that hardware.

2 Likes

Just out of curiosity, and to stress-test my system, could you share those 4k images with me? I currently have no jobs running, so I have CPU time to spare. I’ll send you a OneDrive link in a PM if you agree.

1 Like

I would love to share the data… but it is confidential, so I need to find a way to process it myself.
The --feature-quality medium run failed.
The images are in batches of approx. 1,000 images per direction, across 8 directions.
I am going to go through them and try to split each batch into 4 sections.
I wonder if there is software that would place the images on a grid according to their location, similar to what I see when I select 'Show cameras' in WebODM. That would help me split them into 4 areas.
I can stitch the results afterwards… possibly in Blender.
I am only interested in getting a 3D model.
The alternative is to do a split-merge, process all 50k images, and convert the resulting point cloud into a textured mesh externally. I am guessing that processing 50k images using split-merge will require either a very long time or a much more powerful machine, like the one outlined here.

4 Likes

Option 1

Use ODM to help you split the dataset, and then just run the pieces as separate models.

Make sure you have locally mounted data, then run with --split set to 2000 (each submodel will include more images because of the overlap) and --end-with set to split. You might also tweak --split-overlap to something smaller than 150 m; you could probably get away with 50 or 100 meters for the density of data you have collected. A rough sketch of the command follows.
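
This is only a sketch, assuming the opendronemap/odm Docker image and placeholder paths (your images would live in /my/project/images):

docker run -ti --rm -v /my/project:/datasets/code opendronemap/odm \
    --project-path /datasets \
    --split 2000 --split-overlap 100 \
    --end-with split

The -v mount is what gives you locally mounted data; the submodels directory described below then appears under /my/project.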

When this is done running, the directory you mounted will contain a folder called submodels that looks like this (hopefully with fewer submodels):

submodel_0001  submodel_0004  submodel_0007  submodel_0010  submodel_0013  submodel_0016  submodel_0019
opensfm        submodel_0002  submodel_0005  submodel_0008  submodel_0011  submodel_0014  submodel_0017  submodel_0020
submodel_0000  submodel_0003  submodel_0006  submodel_0009  submodel_0012  submodel_0015  submodel_0018

Just copy these out to another location, and you’ll have a bunch of slightly overlapping sets of images that you can process independently.

Option 2

Run a tool which reads the EXIF data (e.g. exiftool -csv -GPSLatitude -GPSLongitude <InputFileOrDir> > /path/to/out.csv), plot your locations in QGIS, and choose which images go where. This might be tricky, as you probably want some overlap between your models, but it’s super manual and easy to control. A scripted version of the same idea is sketched below.
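
If you’d rather skip QGIS, here is a minimal Python sketch of the grid idea, not a tested workflow: it assumes you add -n to the exiftool call above (so coordinates come out as signed decimal degrees) and it simply bins images into four quadrants around the median position. You would still need to add overlap between the quadrants yourself.

# Bin images into a 2x2 grid by GPS position, using the CSV from:
#   exiftool -csv -n -GPSLatitude -GPSLongitude <InputFileOrDir> > out.csv
import csv
import shutil
from pathlib import Path
from statistics import median

rows = []
with open("out.csv", newline="") as f:
    for row in csv.DictReader(f):
        # Skip images without GPS tags
        if row.get("GPSLatitude") and row.get("GPSLongitude"):
            rows.append((row["SourceFile"], float(row["GPSLatitude"]), float(row["GPSLongitude"])))

# Median splits give four roughly equal-sized bins
lat_mid = median(lat for _, lat, _ in rows)
lon_mid = median(lon for _, _, lon in rows)

for src, lat, lon in rows:
    # North/south and east/west of the median point -> ne, nw, se, sw
    quadrant = ("n" if lat >= lat_mid else "s") + ("e" if lon >= lon_mid else "w")
    dest = Path("split") / quadrant
    dest.mkdir(parents=True, exist_ok=True)
    shutil.copy(src, dest / Path(src).name)

Each of split/ne, split/nw, split/se, and split/sw can then be its own WebODM task.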

edit:

Option 3

Maybe. But split-merge actually isn’t more memory-intensive than what you are doing. Point taken off my post for bragging about compute resources, but you should be able to use split-merge here as long as your split size is small enough for your memory requirements. So you could just run split-merge, set your split to 2000, and wait a while. You’ll likely get a great result.

edit 2:
Split-merge does generate the textured models; it just doesn’t share them back via the web UI, nor merge them into one model. But you can extract the individual models from the processing chain, as sketched below.
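
As a rough illustration (an assumption about the layout based on ODM’s usual output structure, so double-check on your own run), each submodel keeps the standard ODM directory tree, and the textured meshes would sit somewhere like:

submodels/
  submodel_0000/
    odm_texturing/odm_textured_model_geo.obj
  submodel_0001/
    odm_texturing/odm_textured_model_geo.obj
  …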

4 Likes