Error Writing PLY file

I want to test how this software generates different outputs. I have installed Docker on a Windows 10 computer with 16 GB RAM.

I have downloaded the “aukerman” dataset (https://github.com/OpenDroneMap/odm_data_aukerman) to verify that the installation works and that the input and output paths are correct.
In this case:

docker run -it --rm -v D:\DRONE\images\aukerman\images:/code/images -v D:\DRONE\out\odm_georeferencing\aukerman:/code/odm_georeferencing -v D:\DRONE\out\odm_orthophoto\aukerman:/code/odm_orthophoto opendronemap/odm

After executing, the orthophoto is generated correctly. So far so good.
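As a side note, before committing to a multi-hour run it can help to confirm that the Windows-style volume mounts resolve inside the container. A quick sketch (using `--entrypoint` to bypass the image's default run script; paths are the ones from the command above):

```shell
# List the mounted input images from inside the container as a sanity check
docker run --rm --entrypoint ls -v D:\DRONE\images\aukerman\images:/code/images opendronemap/odm /code/images
```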

The problem appears when I do the same with my own dataset (812 images):
URL = https://www.dropbox.com/sh/xdakbi70wqbpe8v/AACGgVXITj4rV6y21XXVcnmja?dl=0

In this case, following the same pattern as the other example:

docker run -it --rm -v D:\DRONE\images\ir55\images:/code/images -v D:\DRONE\out\odm_georeferencing\ir55:/code/odm_georeferencing -v D:\DRONE\out\odm_orthophoto\ir55:/code/odm_orthophoto opendronemap/odm

The failure seems to happen while writing the PLY file to disk:

Writing PLY file (192294095 verts, with colors, with normals, with confidences, with values, 0 faces)... time="2020-10-19T13:45:59+02:00" level=error msg="error waiting for container: invalid character 'u' looking for beginning of value"

Part of the output is attached below:

2020-10-19 08:12:37,947 INFO: Removed outliers: 63
2020-10-19 08:12:38,624 INFO: {'points_count': 87139, 'cameras_count': 812, 'observations_count': 400692, 'average_track_length': 4.5983084497182665, 'average_track_length_notwo': 6.696314119184419}
2020-10-19 08:12:38,626 INFO: Reconstruction 0: 812 images, 87139 points
2020-10-19 08:12:38,626 INFO: 1 partial reconstructions in total.
[INFO]    Updating /code/opensfm/config.yaml
[INFO]    undistorted_image_max_size: 640
[INFO]    Undistorting /code/opensfm ...
2020-10-19 08:12:53,457 DEBUG: Undistorting image DJI_0815_R.JPG
2020-10-19 08:12:53,571 DEBUG: Undistorting image DJI_0817_R.JPG
2020-10-19 08:12:53,572 DEBUG: Undistorting image DJI_0801_R.JPG
2020-10-19 08:12:53,586 DEBUG: Undistorting image DJI_0813_R.JPG
2020-10-19 08:12:53,616 DEBUG: Undistorting image DJI_0809_R.JPG
...............<NOT SHOWING OUTPUT FOR ALL IMAGES>..............
2020-10-19 08:13:05,391 DEBUG: Undistorting image DJI_0259_R.JPG
2020-10-19 08:13:05,405 DEBUG: Undistorting image DJI_0768_R.JPG
[INFO]    running /usr/bin/env python3 /code/SuperBuild/src/opensfm/bin/opensfm export_visualsfm --points "/code/opensfm"
[INFO]    running /usr/bin/env python3 /code/SuperBuild/src/opensfm/bin/opensfm export_geocoords --transformation --proj '+proj=utm +zone=30 +datum=WGS84 +units=m +no_defs +type=crs' "/code/opensfm"
[INFO]    Finished opensfm stage
[INFO]    Running mve stage
[INFO]    running /code/SuperBuild/src/elibs/mve/apps/makescene/makescene "/code/opensfm/undistorted/reconstruction.nvm" "/code/mve"
MVE Makescene (built on Oct 14 2020, 15:25:44)
Info: Detected VisualSFM bundle format.
NVM: Loading file...
NVM: Number of views: 812
NVM: Number of features: 87139
Creating output directories...
Writing MVE views...
Writing MVE view: view_0000.mve...
Writing MVE view: view_0009.mve...
Writing MVE view: view_0010.mve...
...............<NOT SHOWING OUTPUT FOR ALL MVE views>..............
Writing MVE view: view_0809.mve...
Writing MVE view: view_0811.mve...
Writing MVE view: view_0810.mve...
Writing bundle file...
Writing bundle (812 cameras, 87139 features): /code/mve/synth_0.out...

Done importing NVM file!
[INFO]    Running dense reconstruction. This might take a while.
[INFO]    running /code/SuperBuild/src/elibs/mve/apps/dmrecon/dmrecon -s0 --progress=fancy --local-neighbors=2 "/code/mve"
MVE Depth Map Reconstruction (built on Oct 14 2020, 15:25:39)
Initializing scene with 812 views...
Initialized 812 views (max ID is 811), took 37ms.
Reading Photosynther file (812 cameras, 87139 features)...
Reconstructing all views...
0 of 812 completed (0.00%)
1 of 812 completed (0.12%)
2 of 812 completed (0.25%)
4 of 812 completed (0.49%)
...............<NOT SHOWING ALL OUTPUT HERE>..............
812 of 812 completed (100.00%)
Reconstruction took 12609267ms.
Saving views back to disc...
Saving views to MVE files... done.
[INFO]    running /code/SuperBuild/src/elibs/mve/apps/scene2pset/scene2pset -F0 -mmask "/code/mve" "/code/mve/mve_dense_point_cloud.ply"
MVE Scene to Pointset (built on Oct 14 2020, 15:25:53)
Using depthmap "depth-L0" and color image "undistorted"
Initializing scene with 812 views...
Initialized 812 views (max ID is 811), took 40ms.
Processing view "0010" (with colors)...
Processing view "0008" (with colors)...
Processing view "0000" (with colors)...
Processing view "0001" (with colors)...
Processing view "0009" (with colors)...
...............<NOT SHOWING OUTPUT FOR ALL IMAGES>..............
Mask not found for image "0803", skipping.
Mask not found for image "0804", skipping.
Mask not found for image "0805", skipping.
Mask not found for image "0806", skipping.
Mask not found for image "0807", skipping.
Mask not found for image "0808", skipping.
Mask not found for image "0809", skipping.
Mask not found for image "0810", skipping.
Mask not found for image "0811", skipping.
Filtered a total of 0 points.
Writing final point set (192294095 points)...
Writing PLY file (192294095 verts, with colors, with normals, with confidences, with values, 0 faces)... time="2020-10-19T13:45:59+02:00" level=error msg="error waiting for container: invalid character 'u' looking for beginning of value"

The software stops at this point.
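From searching around, the "invalid character 'u' looking for beginning of value" text appears to come from the Docker client failing to parse the daemon's reply, which is commonly seen when the container dies abruptly (for example, killed by the Linux VM when it runs out of memory). One hypothetical way to check how much RAM the Docker VM on Windows actually exposes to containers (using the stock alpine image):

```shell
# Shows the memory visible inside containers, i.e. the Docker Desktop VM limit
docker run --rm alpine grep MemTotal /proc/meminfo
```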

Can someone help me?
Thanks in advance


Hi @alberto.fernandez!

Sorry, I haven't taken the time to review your set of images, but one thing worries me: your hardware is very likely not optimal for the number of images in your set. Without knowing which CPU you use, 16 GB of RAM is a low amount for processing that dataset (it is fine for a set of 100-200 images).

I hope I'm wrong, and please correct me if my comment is not accurate. I hope it is useful to you.

Regards,


Hi, thank you for your response.
CPU = Intel Core i7-8750H CPU @ 2.20 GHz

Additionally, I would like to say:

  1. The images in my dataset are low resolution (640x512). Without knowing in detail how this library processes the images, I was hoping that, even though the dataset is composed of many images, the low resolution of each image would work in my favor. The “aukerman” dataset, by contrast, is composed of images at 4896x3672.
  2. IMO the processing stops while trying to write the PLY file, not at an intermediate step during processing.
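A rough back-of-envelope check of what that PLY write involves (the per-vertex layout below is an assumption based on the "with colors, with normals, with confidences, with values" message, not read from the MVE source):

```shell
# 192294095 vertices; assumed binary layout per vertex:
#   position 3*float + normal 3*float + RGB 3*byte + confidence float + value float
verts=192294095
bytes_per_vertex=$((3*4 + 3*4 + 3*1 + 4 + 4))   # 35 bytes
total_bytes=$((verts * bytes_per_vertex))
echo "~$((total_bytes / 1024 / 1024 / 1024)) GiB for the file alone"  # prints ~6 GiB
```

On top of the file itself, scene2pset holds the full point set in memory before writing, so with bookkeeping overhead this could plausibly exhaust 16 GB of RAM.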

Thanks


I think you’re both right in a way. It is likely you’re getting an Out Of Memory error when trying to write that massive PLY…

Can you skip that output product and see if you can process through to the end on your specs?

I don’t know how to skip writing the PLY file… :confused:
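EDIT: One possibility I found later (untested here): --fast-orthophoto skips the dense reconstruction that produces that huge PLY, trading detail for memory. Appended to the same docker invocation it would read:

```shell
# Hypothetical workaround: build the orthophoto from the sparse cloud only
docker run -it --rm -v D:\DRONE\images\ir55\images:/code/images -v D:\DRONE\out\odm_georeferencing\ir55:/code/odm_georeferencing -v D:\DRONE\out\odm_orthophoto\ir55:/code/odm_orthophoto opendronemap/odm --fast-orthophoto
```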

Hi,
with a new computer (64 GB RAM), I can generate the orthophoto without any problem.
The problem now is that the matching does not work well for some images (the thermal images are only 640x480)… The resulting image is attached.
Any recommendation about what parameters to adjust?
Many thanks in advance

image


What configuration options are you using?

Please list them all.

[INFO] Initializing ODM - Wed Nov 18 14:47:07 2020
[INFO] ==============
[INFO] build_overviews: False
[INFO] camera_lens: auto
[INFO] cameras: {}
[INFO] crop: 3
[INFO] debug: False
[INFO] dem_decimation: 1
[INFO] dem_euclidean_map: False
[INFO] dem_gapfill_steps: 3
[INFO] dem_resolution: 5
[INFO] depthmap_resolution: 640
[INFO] dsm: False
[INFO] dtm: False
[INFO] end_with: odm_report
[INFO] fast_orthophoto: False
[INFO] feature_quality: high
[INFO] feature_type: sift
[INFO] force_gps: False
[INFO] gcp: None
[INFO] geo: None
[INFO] gps_accuracy: 10
[INFO] ignore_gsd: False
[INFO] matcher_distance: 0
[INFO] matcher_neighbors: 8
[INFO] max_concurrency: 48
[INFO] merge: all
[INFO] mesh_octree_depth: 10
[INFO] mesh_point_weight: 4
[INFO] mesh_samples: 1.0
[INFO] mesh_size: 200000
[INFO] min_num_features: 8000
[INFO] name: code
[INFO] opensfm_depthmap_method: PATCH_MATCH
[INFO] opensfm_depthmap_min_consistent_views: 3
[INFO] opensfm_depthmap_min_patch_sd: 1
[INFO] optimize_disk_space: False
[INFO] orthophoto_compression: DEFLATE
[INFO] orthophoto_cutline: False
[INFO] orthophoto_no_tiled: False
[INFO] orthophoto_png: False
[INFO] orthophoto_resolution: 5
[INFO] pc_classify: False
[INFO] pc_csv: False
[INFO] pc_ept: False
[INFO] pc_filter: 2.5
[INFO] pc_las: False
[INFO] pc_rectify: False
[INFO] pc_sample: 0
[INFO] project_path: /
[INFO] radiometric_calibration: none
[INFO] rerun: None
[INFO] rerun_all: False
[INFO] rerun_from: None
[INFO] resize_to: 2048
[INFO] skip_3dmodel: False
[INFO] sm_cluster: None
[INFO] smrf_scalar: 1.25
[INFO] smrf_slope: 0.15
[INFO] smrf_threshold: 0.5
[INFO] smrf_window: 18.0
[INFO] split: 999999
[INFO] split_multitracks: False
[INFO] split_overlap: 150
[INFO] texturing_data_term: gmi
[INFO] texturing_keep_unseen_faces: False
[INFO] texturing_nadir_weight: 16
[INFO] texturing_outlier_removal_type: gauss_clamping
[INFO] texturing_skip_global_seam_leveling: False
[INFO] texturing_skip_hole_filling: False
[INFO] texturing_skip_local_seam_leveling: False
[INFO] texturing_skip_visibility_test: False
[INFO] texturing_tone_mapping: none
[INFO] tiles: False
[INFO] time: False
[INFO] use_3dmesh: False
[INFO] use_exif: False
[INFO] use_fixed_camera_params: False
[INFO] use_hybrid_bundle_adjustment: False
[INFO] use_opensfm_dense: False
[INFO] verbose: False
[INFO] ==============
[INFO] Running dataset stage
[INFO] Loading dataset from: /code/images
[INFO] Loading 812 images
[INFO] Wrote images database: /code/images.json
[INFO] Found 812 usable images
[INFO] Parsing SRS header: WGS84 UTM 30N
[INFO] Finished dataset stage
[INFO] Running split stage
[INFO] Normal dataset, will process all at once.
[INFO] Finished split stage
[INFO] Running merge stage
[INFO] Normal dataset, nothing to merge.
[INFO] Finished merge stage
[INFO] Running opensfm stage
[INFO] Writing exif overrides
[INFO] Maximum photo dimensions: 640px
[INFO] Altitude data detected, enabling it for GPS alignment
[INFO] ['use_exif_size: no', 'flann_algorithm: KDTREE', 'feature_process_size: 320', 'feature_min_frames: 8000', 'processes: 48', 'matching_gps_neighbors: 8', 'matching_gps_distance: 0', 'depthmap_method: PATCH_MATCH', 'depthmap_resolution: 640', 'depthmap_min_patch_sd: 1', 'depthmap_min_consistent_views: 3', 'optimize_camera_parameters: yes', 'undistorted_image_format: tif', 'bundle_outlier_filtering_type: AUTO', 'align_orientation_prior: vertical', 'triangulation_type: ROBUST', 'bundle_common_position_constraints: no', 'feature_type: SIFT', 'use_altitude_tag: yes', 'align_method: auto', 'local_bundle_radius: 0']
[INFO] running /code/SuperBuild/src/opensfm/bin/opensfm extract_metadata "/code/opensfm"
[INFO] running /code/SuperBuild/src/opensfm/bin/opensfm detect_features "/code/opensfm"
[INFO] running /code/SuperBuild/src/opensfm/bin/opensfm match_features "/code/opensfm"
[INFO] running /code/SuperBuild/src/opensfm/bin/opensfm create_tracks "/code/opensfm"
[INFO] running /code/SuperBuild/src/opensfm/bin/opensfm reconstruct "/code/opensfm"
[INFO] Updating /code/opensfm/config.yaml
[INFO] undistorted_image_max_size: 640
[INFO] Undistorting /code/opensfm ...
[INFO] running /code/SuperBuild/src/opensfm/bin/opensfm export_visualsfm --points "/code/opensfm"
[INFO] running /code/SuperBuild/src/opensfm/bin/opensfm export_geocoords --transformation --proj '+proj=utm +zone=30 +datum=WGS84 +units=m +no_defs +type=crs' "/code/opensfm"
[INFO] Finished opensfm stage
[INFO] Running openmvs stage
[INFO] running /code/SuperBuild/src/opensfm/bin/opensfm export_openmvs "/code/opensfm"
[INFO] Running dense reconstruction. This might take a while.
[INFO] running /code/SuperBuild/install/bin/OpenMVS/DensifyPointCloud "/code/opensfm/undistorted/openmvs/scene.mvs" --resolution-level 0 --min-resolution 640 --max-resolution 640 --max-threads 48 -w "/code/opensfm/undistorted/openmvs/depthmaps" -v 0
15:00:04 [App ] Build date: Nov 9 2020, 22:57:13
15:00:04 [App ] CPU: Intel® Xeon® CPU E5-2650 v4 @ 2.20GHz (48 cores)
15:00:04 [App ] RAM: 62.64GB Physical Memory 31.44GB Virtual Memory
15:00:04 [App ] OS: Linux 3.10.0-957.1.3.el7.x86_64 (x86_64)
15:00:04 [App ] SSE & AVX compatible CPU & OS detected
15:00:04 [App ] Command line: /code/opensfm/undistorted/openmvs/scene.mvs --resolution-level 0 --min-resolution 640 --max-resolution 640 --max-threads 48 -w /code/opensfm/undistorted/openmvs/depthmaps -v 0
15:00:04 [App ] Preparing images for dense reconstruction completed: 812 images (352ms)
15:00:05 [App ] Selecting images for dense reconstruction completed: 812 images (81ms)


I’d try changing the below, but you might run out of resources:
--depthmap-resolution 1280
--feature-quality ultra
--gps-accuracy 5
--mesh-octree-depth 12
--min-num-features 80000
--opensfm-depthmap-method BRUTE_FORCE
--pc-classify
--pc-rectify
--resize-to -1
--use-3dmesh
--use-hybrid-bundle-adjustment
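For reference, appended to the earlier docker invocation these would read roughly as follows (paths as in the earlier command; this just spells out the same flag list, it is not a tested configuration):

```shell
docker run -it --rm -v D:\DRONE\images\ir55\images:/code/images -v D:\DRONE\out\odm_georeferencing\ir55:/code/odm_georeferencing -v D:\DRONE\out\odm_orthophoto\ir55:/code/odm_orthophoto opendronemap/odm --depthmap-resolution 1280 --feature-quality ultra --gps-accuracy 5 --mesh-octree-depth 12 --min-num-features 80000 --opensfm-depthmap-method BRUTE_FORCE --pc-classify --pc-rectify --resize-to -1 --use-3dmesh --use-hybrid-bundle-adjustment
```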


I have a question regarding the use of --opensfm-depthmap-method BRUTE_FORCE.

I know it produces more points, but is there any accuracy improvement from using it?

From what I understand, BRUTE_FORCE should not only generate more matches but also handle difficult data better than FLANN.

I’m not on a clock for my processing, so I like allowing more time to match, if possible.


With the indicated parameters:

--depthmap-resolution=1280 --feature-quality=ultra --gps-accuracy=5 --mesh-octree-depth=12 --min-num-features=80000 --opensfm-depthmap-method=BRUTE_FORCE --pc-classify --pc-rectify --resize-to=-1 --use-3dmesh --use-hybrid-bundle-adjustment

The output is:
image

The output for default parameters (shown above) is:
image


So it improved, but still not amazing…

You weren’t kidding about those thermal images being low resolution, jeez.

Maybe try bumping the min-num-features up to 160000, or half the pixels per image.

Also, try increasing pc-quality to ultra.

You could also increase the mesh-size to maybe double or triple the default to see if that helps any.
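The "half the pixels per image" figure works out like this for the 640x512 thermal frames (simple arithmetic, nothing ODM-specific):

```shell
pixels=$((640 * 512))          # 327680 pixels per frame
echo "half: $((pixels / 2))"   # prints 163840 -- hence the ~160000 suggestion
```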


Thank you,

  • Does it make sense to set both min-num-features=160000 AND --resize-to=320 at the same time?
  • I do not see “pc-quality” in the documentation or in the previous log

I’d not use resize-to. If you’re on the latest version, it has been deprecated in favor of feature-quality.

pc-quality is also a flag on the latest version.


Output with:

--depthmap-resolution=1280 --mesh-size 300000 --feature-quality=ultra --gps-accuracy=5 --mesh-octree-depth=12 --min-num-features=160000 --opensfm-depthmap-method=BRUTE_FORCE --pc-classify --pc-rectify --resize-to=-1 --use-3dmesh --use-hybrid-bundle-adjustment

I think this configuration gives better results (but I’m not sure…). Here is the output.

image

Based on this, what else could I do to improve the results?
Thanks


Getting much better, I think!

Don’t forget, resize-to has been deprecated; feature-quality ultra is sufficient.

You might get some benefit from trying higher levels of pc-quality, as it defaults to Medium, and you can try High/Ultra, if you’ve got enough RAM.


--crop 0 --debug --dem-resolution 1.0 --dsm --feature-quality ultra --gps-accuracy 5 --mesh-octree-depth 12 --mesh-size 400000 --min-num-features 160000 --opensfm-depthmap-method BRUTE_FORCE --orthophoto-resolution 1.0 --pc-classify --pc-quality ultra --pc-rectify --rerun-all --time --use-3dmesh --use-hybrid-bundle-adjustment --verbose

Looking at the odm_filterpoints PLY in CloudCompare:
image

I’m seeing a bit of bowling/overly-aggressive self-calibration in this scene.

Further, look at all the gaps in the cloud; those are where the reconstruction has significant issues.

Ortho:
image

DSM:
image

Cameras:
image

That being said, the 3D Model isn’t awful (excluding that weird cliff in the data):
image

./run.sh --camera-lens brown --crop 0 --debug --dem-resolution 1.0 --dsm --feature-quality ultra --ignore-gsd --mesh-octree-depth 12 --mesh-size 1000000 --min-num-features 32768 --opensfm-depthmap-method BRUTE_FORCE --orthophoto-resolution 1.0 --pc-classify --pc-quality ultra --pc-rectify --rerun-all --time --use-3dmesh --use-hybrid-bundle-adjustment --verbose

So I dialed back the tie-point count to about 10% of the total pixels per image, instead of closer to 50% like before, with the hope that this will filter out some of the weaker tie-points. I also greatly increased the mesh size, which still led to a final vertex count of about 10% of the unsimplified mesh.
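The ~10% tie-point figure comes straight from the image size (arithmetic only):

```shell
pixels=$((640 * 512))   # 327680 pixels per frame
echo $((pixels / 10))   # prints 32768 -- the --min-num-features value used above
```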

DEM still bowls and has that fracture:
image

Ortho looks reasonable, but due to that fracture, has some reconstruction issues:
image

Mesh looks good, IMO:
image

At the end of the day, you’re dealing with 0.3 MP input images taken pretty close to the subject in the far-infrared/thermal band, with lots of highly specular surfaces… I don’t know that we can expect much more from this reconstruction.

That being said, I hope others can take a poke at it as well.

Data products:
https://1drv.ms/u/s!AvMZEGXuAwQzv74bsWBhruN1E9kDzg

Let me know when you get it, so I can take it down.


Thank you very much for the detailed answers :).
I have downloaded the data products. I think we cannot expect many more improvements. Having said that, a teammate created the ortho with commercial software. I am attaching it here just for the sake of completeness (not judging at all), to show that maybe by setting the right parameters we can achieve similar results.
image

EDIT: They used Agisoft Metashape (previously known as Agisoft PhotoScan). PhotoScan’s point cloud processing filter can be configured a) without depth filtering, b) with a moderate filter, or c) with an aggressive filter; in this case, the aggressive filter was used. They also used the “Optimize Cameras” option (which refines the camera alignment results). Since they processed this dataset some months ago and ran several tests, they do not remember the exact parameters, and the output does not contain any kind of log.
