Can't find / open output files

I have searched the issues page on GitHub and this community forum but haven’t found this exact issue. My question has two parts:
First, I have not tinkered with the default output path variable / environment. The code ran partially but crashed at different points for datasets of varying sizes. Initially it returned an Exception: Child returned 137 error, but I quickly found out that this was due to limited available memory. I finally tried running docker run -it --rm -v "($path)/datasets/project/images:/code/images" opendronemap/odm against just 16 images, and the run still didn’t complete, aborting with Exception: Child returned 134.
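For reference, this is the variant I plan to retry next, with the quality settings turned down to reduce memory use. I’m assuming the --feature-quality, --pc-quality and --max-concurrency flags from the current docs apply to the image I pulled (please correct me if not), and that the "($path)" in my command above should really have been "$path":

    # A sketch of what I plan to try, not a tested recipe. It assumes a bash shell
    # inside WSL2, that $path holds the project folder (e.g. /mnt/c/Users/user),
    # and that these flags exist in the ODM build I pulled; running the image with
    # --help should confirm the exact option names.
    docker run -it --rm \
      -v "$path/datasets/project/images:/code/images" \
      opendronemap/odm \
      --feature-quality low --pc-quality low --max-concurrency 1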
Here are my system specifications:
Intel Core M-5Y10c with a base clock of 1.00 GHz and turbo boost up to 2.00 GHz; dual core, 4 logical processors, virtualisation enabled.
L1 Cache = 128 KB
L2 Cache = 512 KB
L3 Cache = 4.0 MB
And most importantly, just 4GB of RAM

Secondly, can I open any of the temporary files that were generated? My main goal is to obtain the dense point clouds and analyse them. Every time I run the code, I see a significant increase in the size of the current user’s AppData folder, specifically C:\Users\user\AppData\Local\Docker\wsl\data\ext4.vhdx. I came across a post with a capitalised red warning from Microsoft saying not to touch any file in there from File Explorer, so I’m lost as to how I can access the temporary files and take a visual look at them (provided they are accessible).
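The only workaround I can think of so far is to bind-mount more of the project folders from the Windows side, following the same pattern as the images mount in the README, so that whatever ODM writes there survives the container and shows up in Explorer. The /code/opensfm and /code/odm_filterpoints paths below are my guess based on the stage names that appear in the log:

    # Same pattern as the images mount from the README, extended to the stage
    # folders I actually want to inspect; the container-side folder names are an
    # assumption on my part.
    mkdir -p "$path/datasets/project/opensfm" "$path/datasets/project/odm_filterpoints"
    docker run -it --rm \
      -v "$path/datasets/project/images:/code/images" \
      -v "$path/datasets/project/opensfm:/code/opensfm" \
      -v "$path/datasets/project/odm_filterpoints:/code/odm_filterpoints" \
      opendronemap/odm

If that guess is right, the dense point cloud and the OpenSfM intermediates would land directly under the project folder on the Windows drive, with no need to touch ext4.vhdx at all. For context, here is a rough summary of how far the run got before aborting: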

  • Merges features onto tracks and finds the number of ‘good’ tracks (here 9497)
  • Attempts incremental reconstruction (using each image pair)
  • “Two-view reconstruction inliers: 935 / 935”
  • “Triangulated: 935”
  • Runs Ceres Solver
    • Removes outliers from the ‘inliers’ calculated for each image pair
  • Reconstructed image is available
  • Undistorts images
  • Skips exporting geocoords_transformation.txt (shown as a warning, due to missing geolocation metadata)
  • Finished OpenSfM
  • Estimated Depth-Maps (in 18m)
  • Filtered all Depth-Maps
  • Fused all Depth-Maps
  • Finished OpenMVS
  • Finished odm_filterpoints stage
  • Created DSM for 2.5D mesh
    • DSM resolution = 0.0036512934767399326 (0.09cm/pixel)
  • Point cloud bounds are calculated (trivial)
    • “[minx: -4.466757774, maxx: 4.508954525] [miny: -4.692795753, maxy: 4.963309288]”
  • “DEM resolution is (2645, 2459), max tile size is 4096, will split DEM generation into 1 tiles”
  • Completed smoothing to create mesh_dsm.tif
  • Created mesh from DSM to complete odm_meshing stage
  • Loaded and prepared the mesh
  • Built an adjacency graph (591220 edges)
  • “Building BVH from 394195 faces… done. (Took: 993 ms)
    Calculating face qualities 100%… done. (Took 19.852s)
    Postprocessing face infos 100%… done. (Took 1.368s)
    Maximum quality of a face within an image: 2712.83
    Clamping qualities to 62.9438 within normalization.”
  • “62838 faces have not been seen
    Took: 47.901s
    Generating texture patches:
    Running… done. (Took 16.189s)
    4814 texture patches.
    Running global seam leveling:
    Create matrices for optimization… done.
    Lhs dimensionality: 201772 x 201772
    Calculating adjustments:
    Color channel 0: CG took 93 iterations. Residual is 9.96233e-05
    Color channel 1: CG took 94 iterations. Residual is 9.69049e-05
    Color channel 2: CG took 93 iterations. Residual is 9.97278e-05
    Took 1.403 seconds
    Adjusting texture patches 100%… done. (Took 7.792s)
    Running local seam leveling:”
    Aborted… Exception: Child returned 134

No parameters were fine-tuned. I can’t even access most Docker Desktop settings, such as the memory allocation slider shown in many YouTube videos.
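From what I have read since, that slider only exists for the Hyper-V backend; with the WSL2 backend the limit comes from WSL2 itself (a fraction of total RAM by default, about half on recent builds as far as I can tell) and can be changed through a .wslconfig file in the user profile. For what it’s worth, exit code 137 is 128 + 9 (SIGKILL, typically the kernel’s out-of-memory killer) and 134 is 128 + 6 (SIGABRT). A minimal sketch of what I intend to try, assuming the memory= and swap= keys behave the same on my Windows 10 Home build:

    # C:\Users\<user>\.wslconfig  (create the file if it doesn't exist)
    [wsl2]
    memory=3GB
    swap=8GB

followed by wsl --shutdown from PowerShell or cmd, so the new limits apply the next time Docker Desktop starts.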

I am currently using ODM (not WebODM) on Windows 10 Home, with Microsoft Edge as my browser. The software was installed through Docker Desktop (I tried both the WSL2 installation and a native Ubuntu installation on WSL2 before attempting it through Docker). I am not familiar with the whole procedure and mostly copy-pasted the commands from the README.md file. I ran it against test cases with significant overlap (~80%); the last subset of only 16 images that I mentioned was a strictly linear dataset.

Any help would be useful. Cheers!

Update: I ran it again after changing the file names from alphanumeric-plus-underscores to alphanumeric only, and the code runs a few lines further. It is now killed with the 137 error again, and personally I don’t think I have a way around the resources I have at hand.
.
.
.
Generating texture patches:
Running… done. (Took 23.254s)
7187 texture patches.
Running global seam leveling:
Create matrices for optimization… done.
Lhs dimensionality: 215953 x 215953
Calculating adjustments:
Color channel 0: CG took 103 iterations. Residual is 9.01634e-05
Color channel 2: CG took 102 iterations. Residual is 9.56484e-05
Color channel 1: CG took 107 iterations. Residual is 8.40037e-05
Took 1.82 seconds
Adjusting texture patches 100%… done. (Took 10.65s)
Running local seam leveling:
Killed
Traceback (most recent call last):
  File "/code/run.py", line 69, in <module>
    app.execute()
  File "/code/stages/odm_app.py", line 83, in execute
    self.first_stage.run()
  File "/code/opendm/types.py", line 361, in run
    self.next_stage.run(outputs)
  File "/code/opendm/types.py", line 361, in run
    self.next_stage.run(outputs)
  File "/code/opendm/types.py", line 361, in run
    self.next_stage.run(outputs)
  [Previous line repeated 4 more times]
  File "/code/opendm/types.py", line 342, in run
    self.process(self.args, outputs)
  File "/code/stages/mvstex.py", line 104, in process
    system.run('{bin} {nvm_file} {model} {out_dir} '
  File "/code/opendm/system.py", line 79, in run
    raise Exception("Child returned {}".format(retcode))
Exception: Child returned 137

It would still be helpful if someone could tell me whether the temporary files are accessible and, if so, where I can find them.
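One thing I stumbled on while digging: WSL2 distributions can apparently be browsed through the \\wsl$ network path in File Explorer, which reads the same data that lives inside ext4.vhdx without opening the .vhdx itself. I’m guessing at the exact sub-folders used by Docker Desktop, and since I start the container with --rm its writable layer is deleted on exit, so presumably only bind mounts and named volumes would survive there:

    \\wsl$\docker-desktop-data\version-pack-data\community\docker\volumes\
    \\wsl$\docker-desktop-data\version-pack-data\community\docker\overlay2\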
Cheers!

Hi, given your errors I believe the processing will not have completed the dense point cloud. I am not the best person to identify temporary files, but I suspect it won’t have any usable outputs at this stage. Also, you will have trouble processing more than a handful of images with only 4 GB of RAM.


Yes coreysnipes, that’s understandable. If the code does run to completion, where can I find the output files? Will they be stored within the WSL subsystem, or in the Windows environment? In other words, how do files usually get stored? A few screenshots would help.

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.