GCP: file does not exist

I’m very new to this lovely ODM community, so forgive me if I’m asking a stupid question.

TLDR

I’m getting these two errors when running ODM through Docker:

[WARNING] GCP file does not exist: E:/project/Uint16/gcp_list.txt
AttributeError: 'NoneType' object has no attribute 'gcp_path'

Context

I am trying to create an orthomosaic from a batch of single-band black and white aerial photographs from the 1960s (16-bit tif, one band). The dataset is not georeferenced, so I created a .txt GCP file; the first few lines are as follows:

+proj=utm +zone=50 +ellps=WGS84 +datum=WGS84 +units=m +no_defs
114.110670774979 22.2258409817862 NA 779.5 1714.4 1963_1963-8757_cropped.tif
114.110670774979 22.2258409817862 NA 1751.82 1734.34 1963_1963-8756_cropped.tif
114.110670774979 22.2258409817862 NA 2653.2 1727.3 1963_1963-8755_cropped.tif
114.110670774979 22.2258409817862 NA 2085 934.8 1963_1963-8795_cropped.tif
114.110670774979 22.2258409817862 NA 1167.2 915.4 1963_1963-8796_cropped.tif
114.110670774979 22.2258409817862 NA 246.6 950.5 1963_1963-8797_cropped.tif
114.12317750048 22.1812293161165 NA 1724.5 1343.6 1963_1963-8762_cropped.tif
114.12317750048 22.1812293161165 NA 2645 1253.2 1963_1963-8761_cropped.tif
114.12317750048 22.1812293161165 NA 688.8 1291.2 1963_1963-8763_cropped.tif
114.12317750048 22.1812293161165 NA 1922.6 2421.8 1963_1963-8744_cropped.tif
114.12317750048 22.1812293161165 NA 1115.3 2419.8 1963_1963-8745_cropped.tif
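For debugging a file like this, a quick format check can be scripted. The following is a minimal sketch (a hypothetical helper, not part of ODM) that just verifies each data line has the six expected fields (geo_x geo_y geo_z im_x im_y image_name) and that the first five parse as numbers:

```python
# Hypothetical helper (not part of ODM): sanity-check a gcp_list.txt.
# Expected data-line layout: geo_x geo_y geo_z im_x im_y image_name
def validate_gcp_lines(lines):
    """Return (line_number, problem) tuples for malformed lines."""
    problems = []
    for num, line in enumerate(lines[1:], start=2):  # line 1 is the PROJ string
        parts = line.split()
        if len(parts) != 6:
            problems.append((num, "expected 6 fields, got %d" % len(parts)))
            continue
        for field in parts[:5]:  # the first five fields must parse as numbers
            try:
                float(field)
            except ValueError:
                problems.append((num, "non-numeric field: %s" % field))
    return problems

sample = [
    "+proj=utm +zone=50 +ellps=WGS84 +datum=WGS84 +units=m +no_defs",
    "114.110670774979 22.2258409817862 NA 779.5 1714.4 1963_1963-8757_cropped.tif",
]
print(validate_gcp_lines(sample))  # flags the NA in the elevation column
```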

My directory has the following structure:
E:/project/Uint16/images << my images
E:/project/Uint16/gcp_list.txt << my GCPs

I saw two related posts on the issue (here and here), but I can’t seem to successfully mount the GCP file after hours of playing around with the structure of the command.

docker run -ti --rm -v E:/project:/project opendronemap/odm --project-path /project Uint16 --fast-orthophoto --gcp E:/project/Uint16/gcp_list.txt --skip-3dmodel --min-num-features 12000 --orthophoto-resolution 100

I’ve tried running this in WebODM, but it gave me an even more nebulous “cannot process dataset” error 10 seconds in.

Any suggestions would be greatly appreciated.

Welcome!

No stupid questions, just the ones left unasked :)

Mmm… I think you should be looking at the geo.txt file, as the GCP file is not really meant for the way you’re looking to use it, and that is likely why processing failed under WebODM.

https://docs.opendronemap.org/geo/

As for why it can’t accept the gcp_list.txt path, that’s curious. Does anything have that file locked for editing (is the program where you created it still open)?

Thanks.

I just had a quick look at supplying geo.txt files; I assume the x and y coordinates correspond to the centres of the images? That might not be easy to obtain, since the landscape has changed quite a bit and it’s difficult to find the corresponding features in the modern day (compared with the GCPs, which are coordinates of obvious unchanged features), but I guess even a few points would at least lock the images in place?

On the gcp_list.txt front, I don’t think it’s opened elsewhere, it just kept giving me the same two errors.

Interestingly, if I run the code twice and ignore the GCP errors, it appears to align most images but produces a near-empty, low-resolution orthomosaic with unrecognisable stray pixels.

Ah, yes, that is a challenge.

Ideally, the GCPs should be present in 3-5 images each, not a single image per GCP. Do you have sufficient image density to have that be possible?

Yes, the images on the same flightline have something like 70% overlap; overlap between different flightlines is more varied, but it’s not difficult to find landmarks covered by multiple images.

Does that mean I could stick with GCPs? If so, that’ll be wonderful. Still leaves the “GCP file does not exist” issue though…

If you can get 3 GCPs or more, each visible in the recommended 3-5 (or more) images, the likelihood of a recurrent failure to process at the beginning should go down to basically zero. That being said, I must still caution that this is not the purpose of the gcp_list.txt file, so please don’t generalize this workflow to other situations. The more correct choice for a lack of geodata is the geo.txt file.

As for the failure to access the file, I’m not sure. If you’re positive nothing has a read/write lock on it, try making a copy in a directory you’re certain you have full access rights to (desktop, documents, downloads, etc.). Beyond that, maybe just retry using the WebODM interface?

Thanks Saijin

Just out of curiosity, what’s the actual difference in the backend between using GCPs and using geo.txt? I might give geo.txt a try at some point.

I tried moving the gcp_list.txt file around to get it mounted, to no avail. I have a suspicion that it might be due to how the project path is specified. I used the format in the current documentation page (…-v E:/project:/project … --project-path /project Uint16). In some older posts there seem to be other ways to mount the project directory, which I don’t fully understand.
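If the problem is that the --gcp path gets resolved inside the container rather than on the host, then the container-side form would presumably look like this (an untested guess on my part, not something from the documentation; line breaks added for readability):

```shell
# Untested guess: -v E:/project:/project mounts the host folder at /project
# inside the container, so --gcp would need the container-side path.
docker run -ti --rm -v E:/project:/project opendronemap/odm \
    --project-path /project Uint16 \
    --gcp /project/Uint16/gcp_list.txt \
    --fast-orthophoto --skip-3dmodel \
    --min-num-features 12000 --orthophoto-resolution 100
```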

I don’t mind running things using the GUI in WebODM, but could someone remind me how to attach the gcp_list.txt file in WebODM? Uploading it along with the images (using the “select images and GCP” button) doesn’t seem to work and ends up with a “cannot process dataset” error, but I don’t see anywhere else I could upload the .txt file. Or is this the GCP vs geo issue we talked about? I did have 5 GCPs, all visible in at least 3 images though, and that’s for a relatively small testing dataset with only 51 images.

geo.txt is meant to define the position of an image without geolocation data.

gcp_list.txt is meant to define a singular pixel’s geographic location across multiple images.

If it is loading the images, you likely passed the project path correctly.

You would just select it alongside the images, and it will be processed properly since it will be auto-detected if named as per specification (gcp_list.txt for GCP file and geo.txt for geoinformation file).

Very likely the GCP vs geo.txt suitability issue we talked about. Hard to know for certain without all the data at hand to work with, but it seems so.

So, it’s less about having a certain number of GCPs visible in a given image, and more about having a given GCP visible across multiple images, and then having a sufficient number of well-formed GCPs distributed across the study site.
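A quick way to audit that from the gcp_list.txt itself would be to count how many images reference each GCP. A rough sketch (hypothetical helper, keying each GCP by its geo coordinate pair):

```python
from collections import Counter

# Hypothetical helper: count how many images each GCP appears in,
# keying GCPs by their (geo_x, geo_y) pair.
def images_per_gcp(lines):
    counts = Counter()
    for line in lines[1:]:  # skip the PROJ string on line 1
        parts = line.split()
        if len(parts) == 6:
            counts[(parts[0], parts[1])] += 1
    return counts

sample = [
    "+proj=utm +zone=50 +ellps=WGS84 +datum=WGS84 +units=m +no_defs",
    "114.110670774979 22.2258409817862 5.0 779.5 1714.4 a.tif",
    "114.110670774979 22.2258409817862 5.0 1751.82 1734.34 b.tif",
    "114.12317750048 22.1812293161165 5.0 1724.5 1343.6 c.tif",
]
for gcp, n in images_per_gcp(sample).items():
    print(gcp, n)  # aim for 3-5+ images per GCP
```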

Generally, geo.txt files are easier to generate. The trick here is to use the error in your favor. If you don’t know the exact center of frame, estimate it and estimate the error. The geo.txt supports those error estimates. You can always rubber sheet or affine transform the data after reconstruction in QGIS or similar. You just want your geo values to be good enough and your error estimates to be larger than or equal to your error.
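For instance, a geo.txt along those lines might look like this (values entirely made up; if I remember the docs right, each line after the projection is image_name geo_x geo_y geo_z, optionally followed by three orientation angles and horizontal/vertical accuracy estimates in metres):

```
EPSG:4326
1963_1963-8757_cropped.tif 114.110670 22.225840 35.0 0 0 0 25.0 40.0
1963_1963-8756_cropped.tif 114.112100 22.226100 35.0 0 0 0 25.0 40.0
```

Here the 25.0 and 40.0 at the end would be the generous horizontal and vertical error estimates that let the reconstruction absorb your uncertainty.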

Thank you for articulating the how and why of the geo.txt being more appropriate for this workflow :)

Thanks both for explaining the difference between geo.txt and gcp_list.txt

I managed to solve a few of the issues, but now other errors are popping up. Let’s start with the ones I’ve solved, so people can refer to this if they encounter the same problems.

Solutions

  1. I omitted the --gcp flag, and the programme automatically found the gcp_list.txt file correctly (weirdly this didn’t work yesterday; maybe I did something wrong)

  2. Empty columns in the gcp_list.txt file cannot be filled with NAs; that generates an error almost right away (“ValueError: could not convert string to float: 'NA'”). NaNs worked to some extent, but errors still popped up down the line. I ended up adding height data to avoid errors.

  3. I stupidly copied the wrong PROJ string for WGS84. EPSG:4326 works fine; the local projection (EPSG:2326) doesn’t.
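For what it’s worth, point 2 comes down to how Python parses floats: the IEEE spelling “nan” is accepted in any case, while the R-style “NA” is not:

```python
import math

# "NaN" parses because Python's float() accepts the IEEE spelling...
value = float("NaN")
print(math.isnan(value))  # True

# ...but "NA" is not a recognised spelling and raises the error seen above.
try:
    float("NA")
except ValueError as exc:
    print(exc)  # could not convert string to float: 'NA'
```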

Now to the problems

  1. What’s the proper way to specify empty/nodata columns in the .txt files? It would be nice if this were covered in the documentation.

  2. When I only use GCPs (i.e. no geo.txt), it almost succeeds (in fact a small part started appearing on the orthomosaic with correct georeferencing), but then an error pops up (“IndexError: list index out of range”).

  3. If I include a geo.txt file with three points, another error pops up (“Processing stopped because of strange values in the reconstruction.”). Do I need to specify coordinates for every image? That would be very challenging for the dataset I’m working with.

Error messages for debugging

  1. gcp_list.txt only

File "/code/opendm/types.py", line 346, in run
  self.next_stage.run(outputs)
[Previous line repeated 7 more times]
File "/code/opendm/types.py", line 327, in run
  self.process(self.args, outputs)
File "/code/stages/odm_report.py", line 198, in process
  octx.export_report(os.path.join(tree.odm_report, "report.pdf"), odm_stats, self.rerun())
File "/code/opendm/osfm.py", line 456, in export_report
  pdf_report.generate_report()
File "/code/SuperBuild/install/bin/opensfm/opensfm/report.py", line 637, in generate_report
  self.make_gcp_error_details()
File "/code/SuperBuild/install/bin/opensfm/opensfm/report.py", line 301, in make_gcp_error_details
  self._make_table(column_names, rows)
File "/code/SuperBuild/install/bin/opensfm/opensfm/report.py", line 54, in _make_table
  columns_sizes = [int(self.total_size / len(rows[0]))] * len(rows[0])
IndexError: list index out of range

  2. geo.txt + gcp_list.txt included

File "/code/SuperBuild/install/bin/opensfm/opensfm/matching.py", line 688, in match_flann
  results, dists = index.knnSearch(f2, 2, params=search_params)
cv2.error: OpenCV(4.5.0) /code/SuperBuild/src/opencv/modules/flann/include/opencv2/flann/kdtree_index.h:470: error: (-215:Assertion failed) result.full() in function 'getNeighbors'

terminate called without an active exception
/code/SuperBuild/install/bin/opensfm/bin/opensfm: line 12: 82 Aborted "$PYTHON" "$DIR"/opensfm_main.py "$@"

===== Dumping Info for Geeks (developers need this to fix bugs) =====
Child returned 134
Traceback (most recent call last):

  File "/code/stages/odm_app.py", line 94, in execute
    self.first_stage.run()
  File "/code/opendm/types.py", line 346, in run
    self.next_stage.run(outputs)
  File "/code/opendm/types.py", line 346, in run
    self.next_stage.run(outputs)
  File "/code/opendm/types.py", line 346, in run
    self.next_stage.run(outputs)
  File "/code/opendm/types.py", line 327, in run
    self.process(self.args, outputs)
  File "/code/stages/run_opensfm.py", line 35, in process
    octx.feature_matching(self.rerun())
  File "/code/opendm/osfm.py", line 326, in feature_matching
    self.run('match_features')
  File "/code/opendm/osfm.py", line 34, in run
    system.run('"%s" %s "%s"' %
  File "/code/opendm/system.py", line 106, in run
    raise SubprocessException("Child returned {}".format(retcode), retcode)
opendm.system.SubprocessException: Child returned 134

===== Done, human-readable information to follow… =====

[ERROR] Uh oh! Processing stopped because of strange values in the reconstruction. This is often a sign that the input data has some issues or the software cannot deal with it. Have you followed best practices for data acquisition? See Flying Tips — OpenDroneMap 2.8.0 documentation

Problems:

  1. You should be able to use 0 for any numerical column value you don’t have, like height, for instance.
  2. How are you seeing results in-progress?
  3. Yes, you should specify coordinates for every image because if you don’t you’ll likely end up with multiple reconstructions, one of which will be georeferenced and placed properly, and one which will be at 0N by 0W, and that isn’t going to be valid.

Error Messages:

  1. You might be able to process fully with --skip-report. It looks like it’s failing to make the GCP details table.
  2. This seems to make sense given that some images will be georeferenced and some will not, due to an incomplete geo.txt (failing in FLANN feature matching with getNeighbors).
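A quick way to spot that gap would be to diff the images directory against the geo.txt entries; a sketch (hypothetical helper, assuming a flat images/ folder and a standard geo.txt with the projection on line 1):

```python
import os

# Hypothetical helper: list images that have no entry in geo.txt.
def images_missing_geo(images_dir, geo_path):
    with open(geo_path) as fh:
        lines = fh.read().splitlines()
    # Skip the projection line; the first field of each data line is the image name.
    listed = {line.split()[0] for line in lines[1:] if line.strip()}
    on_disk = {name for name in os.listdir(images_dir)
               if name.lower().endswith((".tif", ".tiff", ".jpg", ".png"))}
    return sorted(on_disk - listed)
```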

Thanks again.

  1. Right, I’ll use 0 for nodata columns in the future then. Wouldn’t that be confused with an actual height of 0, though?
  2. The script created an odm_orthomosaic directory in my project directory, which contains a tif with a few correctly georeferenced pixels.
  3. Got it.

On the second part…

I tried --skip-report, and now it gets to the end (with the very cool ODM text logo), but the orthomosaic still only shows a few pixels. I then tried tweaking the parameters and running it through both Docker and WebODM, but it returned a “We ended up with an empty point cloud” error. Interestingly, without the GCP file, the reconstruction works perfectly with the exact same parameters. It would be nice if the GCPs could be correctly incorporated, but my backup plan is to generate the ungeoreferenced image and put it back in place later using GDAL.
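That backup plan would look roughly like this (untested sketch; the pixel/line and lon/lat pairs below are placeholders, and the input filename depends on what ODM writes out):

```shell
# Untested sketch: tag the ungeoreferenced orthomosaic with GCPs, then warp.
# Pixel/line and lon/lat values are placeholders, not measured points.
gdal_translate -of GTiff \
  -gcp 779.5 1714.4 114.110670 22.225840 \
  -gcp 1724.5 1343.6 114.123177 22.181229 \
  -gcp 2653.2 1727.3 114.115000 22.220000 \
  odm_orthophoto.tif ortho_with_gcps.tif
gdalwarp -t_srs EPSG:4326 -r near ortho_with_gcps.tif ortho_georeferenced.tif
```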

Error message

Estimated depth-maps 35 (79.55%, 9s, ETA 2s)…
Estimated depth-maps 36 (81.82%, 9s, ETA 2s)…
Estimated depth-maps 37 (84.09%, 10s, ETA 1s)…
Estimated depth-maps 38 (86.36%, 10s, ETA 1s)…
Estimated depth-maps 39 (88.64%, 10s, ETA 1s)…
Estimated depth-maps 40 (90.91%, 11s, ETA 1s)…
Estimated depth-maps 41 (93.18%, 11s, ETA 831ms)…
Estimated depth-maps 42 (95.45%, 11s, ETA 555ms)…
Estimated depth-maps 43 (97.73%, 11s, ETA 277ms)…
Estimated depth-maps 44 (100.00%, 12s, ETA 0ms)…
Estimated depth-maps 44 (100%, 12s367ms)

Filtered depth-maps 6 (13.64%, 103ms, ETA 657ms)…
Filtered depth-maps 44 (100%, 162ms)

Fused depth-maps 44 (100%, 25ms)
11:26:35 [App ] Densifying point-cloud completed: 4 points (13s979ms)
11:26:35 [App ] MEMORYINFO: {
11:26:35 [App ] VmPeak: 2745396 kB
11:26:35 [App ] VmSize: 2680228 kB
11:26:35 [App ] } ENDINFO
[INFO] running "/code/SuperBuild/install/bin/OpenMVS/DensifyPointCloud" --filter-point-cloud -1 -i "/var/www/data/a80181ec-8186-40b5-938e-a2eb05b09eb9/opensfm/undistorted/openmvs/scene_dense.mvs" -v 0 --cuda-device -1
11:26:35 [App ] Build date: Feb 16 2022, 02:56:37
11:26:35 [App ] CPU: Intel(R) Core™ i7-9750H CPU @ 2.60GHz (12 cores)
11:26:35 [App ] RAM: 7.68GB Physical Memory 2.00GB Virtual Memory
11:26:35 [App ] OS: Linux 5.10.16.3-microsoft-standard-WSL2 (x86_64)
11:26:35 [App ] SSE & AVX compatible CPU & OS detected
11:26:35 [App ] Command line: --filter-point-cloud -1 -i /var/www/data/a80181ec-8186-40b5-938e-a2eb05b09eb9/opensfm/undistorted/openmvs/scene_dense.mvs -v 0 --cuda-device -1

Point visibility checks 4 (100%, 0ms)
11:26:35 [App ] MEMORYINFO: {
11:26:35 [App ] VmPeak: 158468 kB
11:26:35 [App ] VmSize: 157140 kB
11:26:35 [App ] } ENDINFO
[INFO] Finished openmvs stage
[INFO] Running odm_filterpoints stage
[INFO] Filtering /var/www/data/a80181ec-8186-40b5-938e-a2eb05b09eb9/opensfm/undistorted/openmvs/scene_dense_dense_filtered.ply (statistical, meanK 16, standard deviation 2.5)
[INFO] running "/code/SuperBuild/install/bin/FPCFilter" --input "/var/www/data/a80181ec-8186-40b5-938e-a2eb05b09eb9/opensfm/undistorted/openmvs/scene_dense_dense_filtered.ply" --output "/var/www/data/a80181ec-8186-40b5-938e-a2eb05b09eb9/odm_filterpoints/point_cloud.ply" --concurrency 12 --meank 16 --std 2.5
*** FPCFilter - v0.1 ***

?> Parameters:
input = /var/www/data/a80181ec-8186-40b5-938e-a2eb05b09eb9/opensfm/undistorted/openmvs/scene_dense_dense_filtered.ply
output = /var/www/data/a80181ec-8186-40b5-938e-a2eb05b09eb9/odm_filterpoints/point_cloud.ply
std = 2.5
meanK = 16
boundary = auto
concurrency = 12
verbose = no

→ Setting num_threads to 12

?> Skipping crop

?> Skipping sampling

→ Statistical filtering

?> Done in 0.000731s

→ Writing output

?> Done in 0.0001716s

?> Pipeline done in 0.000931s

[ERROR] Uh oh! We ended up with an empty point cloud. This means that the reconstruction did not succeed. Have you followed best practices for data acquisition? See Flying Tips — OpenDroneMap 2.8.0 documentation

Have you tried generating the geo.txt file and using that instead of the GCP file?

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.