Unable to convert scaled value (1e+32) to int32 (PDAL) (with grayscale colorspace images)

(Ultimate end goal: create a website where the public can view historical aerial imagery layers; these layers would be stitched together and georeferenced by ODM.)

I am using the ODM Docker image on a DigitalOcean Ubuntu 16.04 droplet, following the instructions in the README to run ODM.

Here’s what I’ve done so far:
set up an Ubuntu 16.04 DigitalOcean instance;
secured it;
converted my source images from TIFFs to JPEGs;
created a gcp_list.txt with https://webodm.net/gcpi

Initially, I was receiving notices that my gcp file was not being found, so I did some troubleshooting by:
removing “tiff” from the file names;
removing hyphens from the file names (https://github.com/OpenDroneMap/WebODM/issues/648);
renaming my gcp file to gcp_list.txt;
keeping 2 copies of my gcp_list.txt file: one in my home directory (where I would run the docker command for ODM) and one in my images directory.
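For reference, the gcp_list.txt layout that ODM expects looks like the sketch below; the coordinates here are made up purely for illustration (Cleveland falls in UTM zone 17N). The first line names the coordinate system, and each following line is geo_x geo_y geo_z pixel_x pixel_y image_name:

```
WGS84 UTM 17N
444825.1 4594125.8 210.0 1024 768 14124.jpg
444910.3 4594201.2 210.0 512 640 14125.jpg
447712.6 4596033.4 210.0 2048 1536 15104.jpg
```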

I had also tried:

docker run -it --rm \
    -v "$(pwd)/images:/code/images" \
    -v "$(pwd)/odm_orthophoto:/code/odm_orthophoto" \
    -v "$(pwd)/settings.yaml:/code/settings.yaml" \
    -v "$(pwd)/odm_texturing:/code/odm_texturing" \
    opendronemap/odm --rerun-all --gcp gcp_list.txt

which quickly gave me an error:

AttributeError: 'NoneType' object has no attribute 'gcp_path'

Meanwhile, I also did a couple of things to ensure that I was on the right track:
edited the project_path in settings.yaml to be “/”, as indicated in https://github.com/OpenDroneMap/ODM/blob/d80d0b2992c96d4a336319f46e76b844dfa33e84/docker.settings.yaml;
adjusted other settings.yaml values for creating high-quality orthophotos, per https://docs.opendronemap.org/using.html#creating-high-quality-orthophotos

Finally, I was able to get my gcp_list.txt identified and found by running the command below from my home directory on my DigitalOcean instance; I explicitly mounted gcp_list.txt as a Docker volume instead of passing it to ODM as a parameter:

docker run -it --rm \
    -v "$(pwd)/images:/code/images" \
    -v "$(pwd)/odm_orthophoto:/code/odm_orthophoto" \
    -v "$(pwd)/settings.yaml:/code/settings.yaml" \
    -v "$(pwd)/odm_texturing:/code/odm_texturing" \
    -v "$(pwd)/gcp_list.txt:/code/gcp_list.txt" \
    opendronemap/odm

With this, my GCP file is recognized and the run gets quite far.

The primary error that I receive is at the end of that file (https://gist.github.com/skorasaurus/1b194b8515a4f5a64255c2b2ca54fda5#file-output-L1578), which states:
“PDAL: writers.las: Unable to convert scaled value (1e+32) to int32 for dimension ‘X’ when writing LAS/LAZ file /code/odm_georeferencing/odm_georeferenced_model.decimated.las.”

This error message is very similar to https://github.com/OpenDroneMap/ODM/issues/798, but not exactly the same: in mine, the dimension is X, while in #798, the dimension is Z.

My source images are at:
https://f002.backblazeb2.com/file/publique/2019-webodm-debugging/15104.jpeg
(replace 15104 with the following integers: 15104 to 15107 and 14124 to 14127)
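The substitution above can be scripted if anyone wants to grab the whole sample set, e.g.:

```shell
# Print each sample image URL by substituting the image ID into the base URL.
base="https://f002.backblazeb2.com/file/publique/2019-webodm-debugging"
for id in $(seq 15104 15107) $(seq 14124 14127); do
  echo "$base/$id.jpeg"    # pipe to `xargs -n1 curl -O` to actually download
done
```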

In addition to solving the error message above, I’m also asking about the following:

  • considering these images aren’t georeferenced and I can’t create frame centers
    (from my limited understanding, frame centers only apply when all of the images have exactly the same dimensions; they are the center points of images made from drones, and each is slightly different), are the control points I created (https://webodm.net/gcpi) enough, and is that the best way to reach my goal of stitching all of these images together and then hosting them online? (The entire image set is about 700 images, but I started out with only 6 here to see if I could get a proof of concept working.)

  • Is the following something to be worried about? These messages appear in my logs when I run the docker command:

    rm: cannot remove '/code/odm_orthophoto': Device or resource busy
    rm: cannot remove '/code/odm_texturing': Device or resource busy

(To be safe, I’ve deleted those odm folders that are created each time I run the container; I’ve noticed that the odm_orthophoto and odm_texturing folders are owned by root.)

Thanks for any assistance.

I can address the question of frame centers: the images can absolutely be of different sizes.

For what it’s worth, I was able to determine the root cause of that error:
my images did not have proper EXIF tags. There are several tools and software libraries for determining whether your images have EXIF tags; I used exiftool and https://github.com/ianare/exif-py
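For anyone wanting a quick dependency-free check, a minimal sketch of such a test is below. It only detects whether a JPEG contains an EXIF APP1 segment at all (exiftool or exif-py will show the actual tag contents); `has_exif` is just an illustrative helper name:

```python
import struct

def has_exif(jpeg_bytes: bytes) -> bool:
    """Return True if the JPEG byte stream contains an EXIF APP1 segment."""
    if jpeg_bytes[:2] != b"\xff\xd8":        # must start with the SOI marker
        return False
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:            # lost sync with the marker stream
            break
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:                   # SOS: compressed image data begins
            break
        length = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])[0]
        # APP1 (0xFFE1) payload starts with the "Exif\0\0" identifier
        if marker == 0xE1 and jpeg_bytes[i + 4:i + 10] == b"Exif\x00\x00":
            return True
        i += 2 + length                      # skip marker bytes + segment
    return False
```

Running something like this over a directory of JPEGs quickly shows which files never had EXIF data written at all.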


Following up on my last post, I am getting closer to creating an orthophoto with my sample data set of historical aerial imagery. I believe the cause of the previous errors was missing and/or corrupt EXIF tags in my JPEGs.

But, I still have a new issue that I’ll describe below.

To recap what I’ve done:
create a gcp_list.txt with https://webodm.net/gcpi
convert my source TIF images (which are not georeferenced) into JPEGs using:
for i in *.tif; do nconvert -out jpeg "$i"; done
these JPEGs are what I use in ODM
(you can access them at https://f002.backblazeb2.com/file/publique/2019-webodm-debugging/14124.jpg ;
replace 14124 with a number from 14124 to 14127 or 15104 to 15106.)

I believe these JPEGs have EXIF tags, but they are not georeferenced (this is what I hope ODM can do: create the georeferenced orthophoto).

I have a Docker instance on DigitalOcean, and when I run the docker command:

docker run -it --rm \
    -v "$(pwd)/images:/code/images" \
    -v "$(pwd)/odm_orthophoto:/code/odm_orthophoto" \
    -v "$(pwd)/settings.yaml:/code/settings.yaml" \
    -v "$(pwd)/odm_texturing:/code/odm_texturing" \
    -v "$(pwd)/gcp_list.txt:/code/gcp_list.txt" \
    opendronemap/odm --fast-orthophoto | tee 2019-08-29.log

my output will sit there at

"Read nodata value: -9999"

I eventually cancelled the process; everything that appears after the line
"Read nodata value: -9999"
ONLY appears when I cancel it (which I do by opening a new SSH session and running kill -9 on the PID).

The full output, my GCP file, and settings.yaml are at:

Meanwhile, on the same computer (a DigitalOcean box with 16 GB of RAM), I’m able to successfully process the Sheffield Park data set (Sheffield Park 3) in under an hour.

If I am able to process the Sheffield site in an hour, but after 12 hours my data set is still hanging at “Read nodata value”, I’m wondering what’s still wrong with my data set.

One thing I noticed: my log from processing the Sheffield Park set had the line:
“Extent is (-97.896561, 114.288789), (-83.680695, 93.252555)”

In my log, I have:
“Extent is (-4733165.500000, -4729556.500000), (4203725.500000, 4207346.100000)”

Is this unusual?
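As a quick sanity check on those numbers (assuming each printed pair is (min, max) for one axis), the spans come out to roughly 3.6 km per axis; what stands out compared to the Sheffield log is the enormous offset of the coordinates, not the span:

```python
# Spans of the extent reported in my log (assumed (min, max) per axis).
xmin, xmax = -4733165.5, -4729556.5
ymin, ymax = 4203725.5, 4207346.1
width, height = xmax - xmin, ymax - ymin
print(width, height)   # both roughly 3.6 km
```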

If you could look at my output, gcp file, and settings - https://gist.github.com/skorasaurus/cced51656e7b1caffe2684a84815ea81
and then my images, I would appreciate it.

Your better (and easier) option over GCPs is to set the approximate scene center (including an approximate Z) for each of your images in the EXIF. This then triggers the use of only the nearest images for matching, an optimization trick which will substantially reduce the processing time (this is likely why Sheffield is fast and yours is slow).


Thanks for the tip and explanation Steve!

I’ve progressed and am now able to produce an orthophoto, but my results aren’t very promising:

the resulting orthophoto was about 98% transparent.

To recap:
My settings file is https://gist.github.com/skorasaurus/1b194b8515a4f5a64255c2b2ca54fda5#file-settings-yaml

I tried it out with 3 geotagged photos.

How I geotagged them:
I uploaded the photos to https://www.thexifer.net/
For each photo, I opened it in GIMP and used the guides there to find the exact center point of the image; then I went to bboxfinder.com to obtain the coordinates for the corresponding place on the map (by eyeball and local knowledge; I felt quite confident of them) and entered the center point for each photo on thexifer.net. I wasn’t sure about the elevation, so I put above sea level and an elevation of 100 for each photo.
This process was kind of rote and took a couple of minutes per photo (maybe I’m jumping too far ahead, but I’m skeptical of how this would scale to the entire data set, which is hundreds of photos)…
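If this ever does need to scale to hundreds of photos, the fiddly part is converting decimal-degree center points into the degree/minute/second rationals that EXIF GPS tags store. A small sketch of just that conversion (with a hypothetical helper name; the actual tag writing would still need a tool like exiftool or an EXIF library) might look like:

```python
from fractions import Fraction

def deg_to_dms_rationals(deg: float):
    """Convert a decimal-degree value to (deg, min, sec) rationals,
    the representation EXIF GPSLatitude/GPSLongitude tags use.
    The sign is dropped; it belongs in the Ref tag (N/S or E/W)."""
    deg = abs(deg)
    d = int(deg)
    minutes = (deg - d) * 60
    m = int(minutes)
    s_hundredths = round((minutes - m) * 60 * 100)   # seconds, 2 decimals
    return (Fraction(d), Fraction(m), Fraction(s_hundredths, 100))
```

For example, deg_to_dms_rationals(41.507) gives 41° 30′ 25.2″, with the N/W reference letter coming separately from the original sign.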

(the images are at:
https://f002.backblazeb2.com/file/publique/2019-webodm-debugging/14124.jpg (replace 14124 with 14124 to 14246); they cover areas of University Circle, Little Italy, East Cleveland, and Cleveland Heights.)

Do I need to use more images even for a proof of concept? Do they need to have both vertical and horizontal overlap? (These only had horizontal overlap; that is, the 3 photos were roughly on the same latitude, with the longitude varying slightly from one photo to the next toward the west.)

My docker command was:

docker run -it --rm \
    -v "$(pwd)/geotagged-images:/code/images" \
    -v "$(pwd)/odm_orthophoto:/code/odm_orthophoto" \
    -v "$(pwd)/settings.yaml:/code/settings.yaml" \
    -v "$(pwd)/odm_texturing:/code/odm_texturing" \
    opendronemap/odm --fast-orthophoto --ignore-gsd --texturing-nadir-weight 30 --orthophoto-resolution 15 | tee 2019-09-02-take11.log