Depth Map Construction Inquiry

What exactly occurs during this phase?

That is, after the opensfm stage completes and the openmvs stage begins.

Roughly it seems to involve:

  1. Estimation
  2. Filtering
  3. Fusing

Are there any optimizations that can be performed at this point? Where does this code live? I routinely run out of memory during the fusing stage.
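For reference, the only knobs I've spotted so far are the resolution flags on the DensifyPointCloud call (visible in the trace below). I'm guessing, without having verified it, that dialing those back trades detail for peak memory. A reduced run might look roughly like the sketch below, modelled on the system.run call that appears in stages/openmvs.py in the traceback; the flag values are guesses, not recommendations.

    # Hypothetical sketch only: the same DensifyPointCloud invocation as in the
    # trace below, with the resolution flags dialed back in the hope of lowering
    # peak memory. The imports mirror the modules named in the traceback.
    from opendm import context, system

    system.run('%s "%s" %s' % (
        context.omvs_densify_path,                      # .../OpenMVS/DensifyPointCloud
        '/code/opensfm/undistorted/openmvs/scene.mvs',  # scene exported by opensfm export_openmvs
        '--resolution-level 2 '                         # halve the images one more time than my current run (1)
        '--min-resolution 1000 --max-resolution 2000 '  # cap depth-map size (my run used 2000/4000)
        '--max-threads 1 '
        '-w "/code/opensfm/undistorted/openmvs/depthmaps" -v 0'))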

Related to this: is it possible to have this stage “checkpoint” the way the other stages seem to?
For example, if it fails during fusing, shouldn’t the estimation and filtering performed above (at great time cost) still be valid? Why aren’t these written to disk?

Trace:

[INFO] Finished opensfm stage
[INFO] Running openmvs stage
[INFO] running /code/SuperBuild/src/opensfm/bin/opensfm export_openmvs "/code/opensfm"
[INFO] Running dense reconstruction. This might take a while.
[INFO] running /code/SuperBuild/install/bin/OpenMVS/DensifyPointCloud "/code/opensfm/undistorted/openmvs/scene.mvs" --resolution-level 1 --min-resolution 2000 --max-resolution 4000 --max-threads 1 -w "/code/opensfm/undistorted/openmvs/depthmaps" -v 0
01:41:57 [App ] Build date: Nov 12 2020, 20:03:59
01:42:00 [App ] CPU: Intel(R) Core(TM) i7-6700K CPU @ 4.00GHz (8 cores)
01:42:00 [App ] RAM: 26.44GB Physical Memory 1.00GB Virtual Memory
01:42:00 [App ] OS: Linux 5.4.72-microsoft-standard-WSL2 (x86_64)
01:42:00 [App ] SSE & AVX compatible CPU & OS detected
01:42:00 [App ] Command line: /code/opensfm/undistorted/openmvs/scene.mvs --resolution-level 1 --min-resolution 2000 --max-resolution 4000 --max-threads 1 -w /code/opensfm/undistorted/openmvs/depthmaps -v 0
01:42:50 [App ] Preparing images for dense reconstruction completed: 171 images (45s78ms)
01:43:23 [App ] Selecting images for dense reconstruction completed: 170 images (33s296ms)
Estimated depth-maps 170 (100%, 8h40m6s282ms)
Filtered depth-maps 170 (100%, 11m36s711ms)
Fused depth-maps 147 (86.47%, 2m57s, ETA 27s)...Killed
Traceback (most recent call last):
File "./run.py", line 69, in <module>
app.execute()
File "/code/stages/odm_app.py", line 86, in execute
self.first_stage.run()
File "/code/opendm/types.py", line 361, in run
self.next_stage.run(outputs)
File "/code/opendm/types.py", line 361, in run
self.next_stage.run(outputs)
File "/code/opendm/types.py", line 361, in run
self.next_stage.run(outputs)
[Previous line repeated 1 more time]
File "/code/opendm/types.py", line 342, in run
self.process(self.args, outputs)
File "/code/stages/openmvs.py", line 62, in process
system.run('%s "%s" %s' % (context.omvs_densify_path,
File "/code/opendm/system.py", line 79, in run
raise Exception("Child returned {}".format(retcode))
Exception: Child returned 137

Good questions, all. That "Child returned 137" is a SIGKILL, which on Linux usually means the kernel's OOM killer stepped in, so: are you having more memory issues since the introduction of OpenMVS?

I’m fairly certain things are written to disk, but one limitation of how we check each of the steps is that the checks are pretty coarse: on a rerun, if OpenMVS hasn’t finished, ODM deletes what’s there and reruns from the start of that stage. It’s a bit heavy-handed.

It probably wouldn’t take too much effort to add some additional checks for intermediate products (you or I could do it, more than likely). Doing this throughout the toolchain could be overkill and might have unintended consequences, but doing so in a long-running step like OpenMVS could make a lot of sense.
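As a rough illustration, an intermediate-product check might look something like the sketch below. To be clear, it's a sketch and not how the stage currently works: the file names it tests for (scene_dense.mvs and per-image .dmap files) are my assumptions about what DensifyPointCloud leaves behind, and the wiring into stages/openmvs.py is hand-waved.

    # Sketch of a finer-grained rerun check for the OpenMVS stage.
    # Assumptions: a finished densification leaves scene_dense.mvs next to
    # scene.mvs, and estimation/filtering leave *.dmap files in the -w directory.
    import os
    import glob

    def openmvs_intermediates(openmvs_dir):
        """Report which intermediate products already exist, so a rerun could
        resume instead of deleting the whole folder and starting over."""
        scene_dense = os.path.join(openmvs_dir, 'scene_dense.mvs')
        depthmaps = glob.glob(os.path.join(openmvs_dir, 'depthmaps', '*.dmap'))
        return {
            'densify_done': os.path.isfile(scene_dense),  # fusing finished
            'depthmap_count': len(depthmaps),             # estimation/filtering output
        }

    # e.g. in the stage's process(): only wipe the folder when there is nothing
    # worth keeping, otherwise re-invoke DensifyPointCloud against what's there.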

It’s all in OpenMVS. https://github.com/cdcseacave/openMVS/

Yes, it seems to run far heavier now; jobs that passed before now fail.

Oh. Are all the checks in OpenMVS too?

Here’s what I’ve got thus far. Support/corrections welcome.

Sorry, no: the checks on whether we need to rerun are in ODM (and we delete the entire OpenMVS folder on rerun, even if there are intermediate results OpenMVS could use to resume work).

I’m not sure that support request is relevant to OpenMVS; it’s up to ODM to make more efficient use of it.

I think I misinterpreted your questions and led you the wrong way with my answer :slight_smile:

Is that fixable?

Closed my issue before I make a name for myself over there, too :rofl:

Should be.

How much RAM profiling have we done on OpenMVS? I just ran a 5k dataset through a machine with 768GB of RAM and a TB of swap, and I think it just had an OOM. I need to dig deeper, but that's what it appears to be.

edit:
Oops, no: it ran out of space, not memory. Le sigh. That process ran for a week.

Minimal, but in my comparison thread you can see the settings I used with MVE that all fail now, even with the minimum features setting dialed back.

I even got OOM'd using OATS on the Sheffield Cross dataset, which I processed easily back with MVE.

Are you profiling the RAM as you go? For datasets that run to completion, I'd be curious where in the pipeline the maximum RAM usage occurs. It sounds like texturing is no longer the memory bottleneck.

Nah, not closely. I have 28GB RAM and 1GB SWAP allocated to my WSL2 instance.
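(That allocation comes from the %UserProfile%\.wslconfig file on the Windows side; if the 1GB of swap turns out to be the limiting factor, bumping it should be a one-line change there, roughly the snippet below, followed by a wsl --shutdown to apply it. The values are just my current allocation plus a bigger swap, not a tested recommendation.)

    [wsl2]
    memory=28GB
    swap=16GB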

Hmm, maybe we can write RAM and swap usage to the console alongside the progress indicators?
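Something along these lines, maybe? psutil is an assumption (I don't know offhand whether it's already a dependency), and log_progress is a made-up hook just to show the idea:

    # Sketch: print RAM/swap usage next to a progress message.
    import psutil

    def log_progress(message):
        ram = psutil.virtual_memory()
        swap = psutil.swap_memory()
        print('%s | RAM %.1f/%.1f GB | swap %.1f/%.1f GB' % (
            message,
            ram.used / 1e9, ram.total / 1e9,
            swap.used / 1e9, swap.total / 1e9))

    # log_progress('Fused depth-maps 147 (86.47%)')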

Yeah, that’d be useful. I haven’t tracked this too carefully either.

Strange, though: I processed Sheffield Cross with 24GB and no swap on Linux (Docker).

I'm not sure how memory pressure propagates between the Windows host and WSL, so perhaps my host was putting enough memory pressure on WSL that it triggered an OOM kill? :man_shrugging:

I’m re-running it just in case to confirm my previous statement.

Confirmed: it processes fine on Linux with Docker, 24GB, no swap.

Kernel version?