Orthorectify.py previous problem solved?

Following up on another of my questions, I got stuck on the orthorectification contribution.
I followed the prescribed steps:
docker run -ti --rm -v "D:/odm/youruser/datasets:/datasets" opendronemap/odm --project-path /datasets project
docker run -ti --rm -v "D:/odm/youruser/datasets:/datasets" --entrypoint /code/contrib/orthorectify/orthorectify.py opendronemap/odm /datasets/project
and I got the following error:

Traceback (most recent call last):
  File "/code/contrib/orthorectify/orthorectify.py", line 15, in <module>
    from opensfm import dataset
ModuleNotFoundError: No module named 'opensfm'

This error is a known issue that has been asked about in the past:

I am also using Docker for Windows. Sadly, the person who asked that question didn't reveal how he solved the two problems he faced.
Is there a known fix for this? Secondly, how can I manually edit the code? I guess the code lives in the opendronemap/odm Docker image, which isn't running in a container.


When you run the webodm.sh update command, does the console indicate that it is fetching and installing OpenSFM?

Not much is going to work in our pipeline if OpenSFM isn’t properly installed in your processing environment.

$ ./webodm.sh update
docker-compose -f docker-compose.yml -f docker-compose.nodeodm.yml -f docker-compose.nodemicmac.yml down --remove-orphans
Container webapp Stopping
Container webapp Stopping
Container webapp Stopped
Container webapp Removing
Container webapp Removed
Container worker Stopping
Container webodm2_node-odm_1 Stopping
Container webodm2_node-odm_1 Stopping
Container worker Stopping
Container webodm2_node-odm_1 Stopped
Container webodm2_node-odm_1 Removing
Container webodm2_node-odm_1 Removed
Container worker Stopped
Container worker Removing
Container worker Removed
Container db Stopping
Container db Stopping
Container broker Stopping
Container broker Stopping
Container broker Stopped
Container broker Removing
Container db Stopped
Container db Removing
Container broker Removed
Container db Removed
Network webodm2_default Removing
Network webodm2_default Removed
Updating WebODM…
git pull origin master
From github.com/OpenDroneMap/WebODM
 * branch            master     -> FETCH_HEAD
Already up-to-date.
docker-compose -f docker-compose.yml -f docker-compose.nodeodm.yml pull
worker Pulling
db Pulling
node-odm Pulling
broker Pulling
webapp Pulling
db Pulled
broker Pulled
webapp Pulled
worker Pulled
node-odm Pulled
Done! You can now start WebODM by running ./webodm.sh start

Thanks for the help, Saijin_Naib. I made a clean install of WebODM, but I don't think that's the problem. I am pretty close to running the orthorectify.py script by building OpenSfM manually.
But the script requires OpenSfM outputs from ODM that my dataset doesn't have. Am I missing something?
The docs say that my project folder should look like this:

|-- images/
    |-- img-1234.jpg
    |-- ...
|-- images_resize/
    |-- img-1234.jpg
    |-- ...
|-- opensfm/
    |-- see mapillary/opensfm repository for more info
    |-- depthmaps/
        |-- merged.ply                  # Dense Point cloud (not georeferenced)
|-- odm_meshing/
    |-- odm_mesh.ply                    # A 3D mesh
    |-- odm_meshing_log.txt             # Output of the meshing task. May point out errors.
|-- odm_texturing/
    |-- odm_textured_model.obj          # Textured mesh
    |-- odm_textured_model_geo.obj      # Georeferenced textured mesh
    |-- texture_N.jpg                   # Associated textured images used by the model
|-- odm_georeferencing/
    |-- odm_georeferenced_model.ply     # A georeferenced dense point cloud
    |-- odm_georeferenced_model.laz     # LAZ format point cloud
    |-- odm_georeferenced_model.csv     # XYZ format point cloud
    |-- odm_georeferencing_log.txt      # Georeferencing log
    |-- odm_georeferencing_transform.txt # Transform used for georeferencing
    |-- odm_georeferencing_utm_log.txt  # Log for the extract_utm portion
|-- odm_orthophoto/
    |-- odm_orthophoto.png              # Orthophoto image (no coordinates)
    |-- odm_orthophoto.tif              # Orthophoto GeoTiff
    |-- odm_orthophoto_log.txt          # Log file
    |-- gdal_translate_log.txt          # Log for georeferencing the png file
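A quick way to compare a project folder against that expected layout is to check which of the top-level output directories are actually present. This is a minimal sketch; `EXPECTED_DIRS` and `missing_outputs` are hypothetical helper names, not part of ODM:

```python
import os

# Top-level folders an ODM run is expected to produce (from the tree above)
EXPECTED_DIRS = [
    "images",
    "opensfm",
    "odm_meshing",
    "odm_texturing",
    "odm_georeferencing",
    "odm_orthophoto",
]

def missing_outputs(project_path):
    """Return the expected ODM output folders that are absent from project_path."""
    return [d for d in EXPECTED_DIRS
            if not os.path.isdir(os.path.join(project_path, d))]

# Example: missing_outputs("/datasets/project")
```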

I guess I’m not understanding exactly what you’re doing.

Does using WebODM work properly in your environment at all if you execute it via the script and process a job from the UI?

Yes, WebODM runs fine. The problem is not in WebODM; the problem is in the orthorectify.py contribution in ODM, and it's a known issue, as I linked above. Sadly, the person who said he solved the issue didn't say how. I built OpenSfM locally to run this script: https://github.com/OpenDroneMap/ODM/tree/master/contrib/orthorectify. But this contribution requires OpenSfM outputs from a task run by WebODM.
I'm trying to get the opensfm output into my project output, but still nothing. Searching a bit, I found this question WebODM output folders in local file system has opensfm folder missing? - #2 by pierotofy suggesting to edit Task.js in the nodeodm folder. I put 'opensfm' in the task outputs, but it doesn't work. Any help will be greatly appreciated!
The opensfm folder isn't there, although the docs expect this output directory:

├── images/
│   ├── img-1234.jpg
│   └── ...
├── opensfm/                            # Tie Points and camera positions here in JSON format
│   ├── config.yaml
│   ├── images/
│   ├── masks/
│   ├── gcp_list.txt
│   ├── metadata/
│   ├── features/
│   ├── matches/
│   ├── tracks.csv
│   ├── reconstruction.json
│   ├── reconstruction.meshed.json
│   ├── undistorted/
│   ├── undistorted_tracks.json
│   ├── undistorted_reconstruction.json
│   └── depthmaps/
│       └── merged.ply                  # Dense Point Cloud
├── odm_meshing/
│   ├── odm_mesh.ply                    # A 3D mesh
│   └── odm_meshing_log.txt             # Output of the meshing task. May point out errors.
├── odm_texturing/
│   ├── odm_textured_model.obj          # Textured mesh
│   ├── odm_textured_model_geo.obj      # Georeferenced textured mesh
│   └── texture_N.jpg                   # Associated textured images used by the model
├── odm_georeferencing/
│   ├── odm_georeferenced_model.ply     # A georeferenced dense point cloud
│   ├── odm_georeferenced_model.laz     # LAZ format point cloud
│   ├── odm_georeferenced_model.csv     # XYZ format point cloud
│   ├── odm_georeferencing_log.txt      # Georeferencing log
│   └── odm_georeferencing_utm_log.txt  # Log for the extract_utm portion
├── odm_orthophoto/
│   ├── odm_orthophoto.png              # Orthophoto image (no coordinates)
│   ├── odm_orthophoto.tif              # Orthophoto GeoTiff
│   ├── odm_orthophoto_log.txt          # Log file
│   └── gdal_translate_log.txt          # Log for georeferencing the png file
└── odm_dem/
    ├── dsm.tif                         # Digital Surface Model GeoTiff - the tops of everything
    └── dtm.tif                         # Digital Terrain Model GeoTiff - the ground
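For the orthorectification script specifically, only a few of these paths matter: the DSM, the georeferencing offset file, and the undistorted OpenSfM data. A minimal sketch to verify they exist before running it (`check_orthorectify_inputs` is a hypothetical helper, not part of ODM):

```python
import os

# Inputs the orthorectify.py contribution reads, relative to the project root
REQUIRED = [
    "odm_dem/dsm.tif",
    "odm_georeferencing/coords.txt",
    "opensfm/undistorted",
]

def check_orthorectify_inputs(project_path):
    """Map each required input to whether it exists under project_path."""
    return {rel: os.path.exists(os.path.join(project_path, rel))
            for rel in REQUIRED}

# Example: check_orthorectify_inputs("/datasets/project")
```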

Ah, thank you. I understand better now.

Unfortunately, I am not familiar at all with this contributed module and how it interacts with the changes we’ve made in OpenDroneMap in the past few months. Perhaps Piero or someone else might have better insight for you.

Any suggestions on how to get the opensfm output folder of a task?

Hello pierotofy, I managed to debug your orthorectify.py contribution. It is hardcoded for now to my local Windows installation since I'm still testing, but I will probably test on Ubuntu soon and make a pull request:

#!/usr/bin/env python3
# Author: Piero Toffanin
# License: AGPLv3

import os
import sys
import rasterio
import numpy as np
import numpy.ma as ma
import multiprocessing
import argparse
import functools
from opensfm import dataset

default_dem_path = "odm_dem/dsm.tif"
default_outdir = "orthorectified"
default_image_list = "img_list.txt"

parser = argparse.ArgumentParser(description='Orthorectification Tool')
parser.add_argument('dataset',
                type=str,
                help='Path to ODM dataset')
parser.add_argument('--dem',
                type=str,
                default=default_dem_path,
                help='Absolute path to DEM to use to orthorectify images. Default: %(default)s')
parser.add_argument('--no-alpha',
                    action='store_true',
                    help="Don't output an alpha channel")
parser.add_argument('--interpolation',
                    type=str,
                    choices=('nearest', 'bilinear'),
                    default='bilinear',
                    help="Type of interpolation to use to sample pixel values. Default: %(default)s")
parser.add_argument('--outdir',
                    type=str,
                    default=default_outdir,
                    help="Output directory where to store results. Default: %(default)s")
parser.add_argument('--image-list',
                    type=str,
                    default=default_image_list,
                    help="Path to file that contains the list of image filenames to orthorectify. By default all images in a dataset are processed. Default: %(default)s")
parser.add_argument('--images',
                    type=str,
                    default="",
                    help="Comma-separated list of filenames to rectify. Use as an alternative to --image-list. Default: process all images.")
parser.add_argument('--threads',
                    type=int,
                    default=multiprocessing.cpu_count(),
                    help="Number of CPU processes to use. Default: %(default)s")

args = parser.parse_args()

dataset_path = args.dataset
dataset_path = 'D:\\odm\\ODMmanual\\grigoro\\project'  # hardcoded for local Windows testing
print(os.path.join(dataset_path, "opensfm"))
dem_path = os.path.join(dataset_path, default_dem_path) if args.dem == default_dem_path else args.dem
interpolation = args.interpolation
with_alpha = not args.no_alpha
image_list = os.path.join(dataset_path, default_image_list) if args.image_list == default_image_list else args.image_list

cwd_path = os.path.join(dataset_path, default_outdir) if args.outdir == default_outdir else args.outdir

if not os.path.exists(cwd_path):
    os.makedirs(cwd_path)

target_images = [] # all

if args.images:
    target_images = list(map(str.strip, args.images.split(",")))
    print("Processing %s images" % len(target_images))
elif args.image_list:
    with open(image_list) as f:
        target_images = list(filter(lambda filename: filename != '', map(str.strip, f.read().split("\n"))))
    print("Processing %s images" % len(target_images))

if not os.path.exists(dem_path):
    print("Whoops! %s does not exist. Provide a path to a valid DEM" % dem_path)
    exit(1)

def bilinear_interpolate(im, x, y):
    x = np.asarray(x)
    y = np.asarray(y)

    x0 = np.floor(x).astype(int)
    x1 = x0 + 1
    y0 = np.floor(y).astype(int)
    y1 = y0 + 1

    x0 = np.clip(x0, 0, im.shape[1]-1)
    x1 = np.clip(x1, 0, im.shape[1]-1)
    y0 = np.clip(y0, 0, im.shape[0]-1)
    y1 = np.clip(y1, 0, im.shape[0]-1)

    Ia = im[ y0, x0 ]
    Ib = im[ y1, x0 ]
    Ic = im[ y0, x1 ]
    Id = im[ y1, x1 ]

    wa = (x1-x) * (y1-y)
    wb = (x1-x) * (y-y0)
    wc = (x-x0) * (y1-y)
    wd = (x-x0) * (y-y0)

    return wa*Ia + wb*Ib + wc*Ic + wd*Id

# Read DEM
print("Reading DEM: %s" % dem_path)
with rasterio.open(dem_path) as dem_raster:
    dem = dem_raster.read()[0]
    dem_has_nodata = dem_raster.profile.get('nodata') is not None

    if dem_has_nodata:
        dem_min_value = ma.array(dem, mask=dem==dem_raster.nodata).min()
    else:
        dem_min_value = dem.min()
    print("DEM Minimum: %s" % dem_min_value)
    h, w = dem.shape

    crs = dem_raster.profile.get('crs')
    dem_offset_x, dem_offset_y = (0, 0)

    if crs:
        print("DEM has a CRS: %s" % str(crs))

        # Read coords.txt
        coords_file = os.path.join(dataset_path, "odm_georeferencing", "coords.txt")
        if not os.path.exists(coords_file):
            print("Whoops! Cannot find %s (we need that!)" % coords_file)
            exit(1)
        with open(coords_file) as f:
            line = f.readline() # discard

            # second line is a northing/easting offset
            line = f.readline().rstrip()
            dem_offset_x, dem_offset_y = map(float, line.split(" "))
        print("DEM offset: (%s, %s)" % (dem_offset_x, dem_offset_y))

    print("DEM dimensions: %sx%s pixels" % (w, h))
    # Read reconstruction
    dataset_path = 'D:\\odm\\ODMmanual\\grigoro\\project'  # hardcoded for local Windows testing
    udata = dataset.UndistortedDataSet(dataset.DataSet(dataset_path),os.path.join(dataset_path, "opensfm/undistorted"))
    reconstructions = udata.load_undistorted_reconstruction()
    if len(reconstructions) == 0:
        raise Exception("No reconstructions available")
    max_workers = args.threads
    max_workers = 1  # hardcoded to a single process while testing
    print("Using %s threads" % max_workers)

    reconstruction = reconstructions[0]
    for shot in reconstruction.shots.values():
        if len(target_images) == 0 or shot.id.replace('.tif',"") in target_images:
            print("Processing %s..." % shot.id)
            shot_image = udata.load_undistorted_image(shot.id)

            r = shot.pose.get_rotation_matrix()
            Xs, Ys, Zs = shot.pose.get_origin()
            a1 = r[0][0]
            b1 = r[0][1]
            c1 = r[0][2]
            a2 = r[1][0]
            b2 = r[1][1]
            c2 = r[1][2]
            a3 = r[2][0]
            b3 = r[2][1]
            c3 = r[2][2]

            print("Camera pose: (%f, %f, %f)" % (Xs, Ys, Zs))

            img_h, img_w, num_bands = shot_image.shape
            print("Image dimensions: %sx%s pixels" % (img_w, img_h))
            f = shot.camera.focal * max(img_h, img_w)
            has_nodata = dem_raster.profile.get('nodata') is not None

            def process_pixels(step):
                imgout = np.full((num_bands, dem_bbox_h, dem_bbox_w), np.nan)
                minx = dem_bbox_w
                miny = dem_bbox_h
                maxx = 0
                maxy = 0

                for j in range(dem_bbox_miny, dem_bbox_maxy + 1):
                    if j % max_workers == step:
                        im_j = j - dem_bbox_miny

                        for i in range(dem_bbox_minx, dem_bbox_maxx + 1):
                            im_i = i - dem_bbox_minx

                            # World coordinates
                            Xa, Ya = dem_raster.xy(j, i)
                            Za = dem[j][i]

                            # Skip nodata
                            if has_nodata and Za == dem_raster.nodata:
                                continue

                            # Remove offset (our cameras don't have the geographic offset)
                            Xa -= dem_offset_x
                            Ya -= dem_offset_y

                            # Colinearity function http://web.pdx.edu/~jduh/courses/geog493f14/Week03.pdf
                            dx = (Xa - Xs)
                            dy = (Ya - Ys)
                            dz = (Za - Zs)

                            den = a3 * dx + b3 * dy + c3 * dz
                            x = (img_w - 1) / 2.0 - (f * (a1 * dx + b1 * dy + c1 * dz) / den)
                            y = (img_h - 1) / 2.0 - (f * (a2 * dx + b2 * dy + c2 * dz) / den)

                            if x >= 0 and y >= 0 and x <= img_w - 1 and y <= img_h - 1:
                                if interpolation == 'bilinear':
                                    xi = img_w - 1 - x
                                    yi = img_h - 1 - y
                                    values = bilinear_interpolate(shot_image, xi, yi)
                                else:
                                    # nearest
                                    xi = img_w - 1 - int(round(x))
                                    yi = img_h - 1 - int(round(y))
                                    values = shot_image[yi][xi]

                                # We don't consider all zero values (pure black)
                                # to be valid sample values. This will sometimes miss
                                # valid sample values.

                                if not np.all(values == 0):
                                    minx = min(minx, im_i)
                                    miny = min(miny, im_j)
                                    maxx = max(maxx, im_i)
                                    maxy = max(maxy, im_j)

                                    for b in range(num_bands):
                                        imgout[b][im_j][im_i] = values[b]

                                # for b in range(num_bands):
                                #     minx = min(minx, im_i)
                                #     miny = min(miny, im_j)
                                #     maxx = max(maxx, im_i)
                                #     maxy = max(maxy, im_j)
                                #     imgout[b][im_j][im_i] = 255
                return (imgout, (minx, miny, maxx, maxy))

            # Compute bounding box of image coverage
            # assuming a flat plane at Z = min Z
            # (Otherwise we have to scan the entire DEM)
            # The Xa,Ya equations are just derived from the colinearity equations
            # solving for Xa and Ya instead of x,y
            def dem_coordinates(cpx, cpy):
                """
                :param cpx principal point X (image coordinates)
                :param cpy principal point Y (image coordinates)
                """
                Za = dem_min_value
                m = (a3*b1*cpy - a1*b3*cpy - (a3*b2 - a2*b3)*cpx - (a2*b1 - a1*b2)*f)
                Xa = dem_offset_x + (m*Xs + (b3*c1*cpy - b1*c3*cpy - (b3*c2 - b2*c3)*cpx - (b2*c1 - b1*c2)*f)*Za - (b3*c1*cpy - b1*c3*cpy - (b3*c2 - b2*c3)*cpx - (b2*c1 - b1*c2)*f)*Zs)/m
                Ya = dem_offset_y + (m*Ys - (a3*c1*cpy - a1*c3*cpy - (a3*c2 - a2*c3)*cpx - (a2*c1 - a1*c2)*f)*Za + (a3*c1*cpy - a1*c3*cpy - (a3*c2 - a2*c3)*cpx - (a2*c1 - a1*c2)*f)*Zs)/m

                y, x = dem_raster.index(Xa, Ya)
                return (x, y)

            dem_ul = dem_coordinates(-(img_w - 1) / 2.0, -(img_h - 1) / 2.0)
            dem_ur = dem_coordinates((img_w - 1) / 2.0, -(img_h - 1) / 2.0)
            dem_lr = dem_coordinates((img_w - 1) / 2.0, (img_h - 1) / 2.0)
            dem_ll = dem_coordinates(-(img_w - 1) / 2.0, (img_h - 1) / 2.0)
            dem_bbox = [dem_ul, dem_ur, dem_lr, dem_ll]
            dem_bbox_x = np.array(list(map(lambda xy: xy[0], dem_bbox)))
            dem_bbox_y = np.array(list(map(lambda xy: xy[1], dem_bbox)))

            dem_bbox_minx = min(w - 1, max(0, dem_bbox_x.min()))
            dem_bbox_miny = min(h - 1, max(0, dem_bbox_y.min()))
            dem_bbox_maxx = min(w - 1, max(0, dem_bbox_x.max()))
            dem_bbox_maxy = min(h - 1, max(0, dem_bbox_y.max()))
            dem_bbox_w = 1 + dem_bbox_maxx - dem_bbox_minx
            dem_bbox_h = 1 + dem_bbox_maxy - dem_bbox_miny

            print("Iterating over DEM box: [(%s, %s), (%s, %s)] (%sx%s pixels)" % (dem_bbox_minx, dem_bbox_miny, dem_bbox_maxx, dem_bbox_maxy, dem_bbox_w, dem_bbox_h))

            if max_workers > 1:
                with multiprocessing.Pool(max_workers) as p:
                    results = p.map(process_pixels, range(max_workers))
            else:
                results = [process_pixels(0)]

            results = list(filter(lambda r: r[1][0] <= r[1][2] and r[1][1] <= r[1][3], results))

            # Merge image
            imgout, _ = results[0]
            for j in range(dem_bbox_miny, dem_bbox_maxy + 1):
                im_j = j - dem_bbox_miny
                resimg, _ = results[j % max_workers]
                for b in range(num_bands):
                    imgout[b][im_j] = resimg[b][im_j]
            # Merge bounds
            minx = dem_bbox_w
            miny = dem_bbox_h
            maxx = 0
            maxy = 0

            for _, bounds in results:
                minx = min(bounds[0], minx)
                miny = min(bounds[1], miny)
                maxx = max(bounds[2], maxx)
                maxy = max(bounds[3], maxy)

            print("Output bounds: (%s, %s), (%s, %s) pixels" % (minx, miny, maxx, maxy))
            if minx <= maxx and miny <= maxy:
                imgout = imgout[:,miny:maxy,minx:maxx]

                if with_alpha:
                    alpha = np.zeros((imgout.shape[1], imgout.shape[2]), dtype=np.uint8)

                    # Set all not-NaN indices to 255
                    alpha[~np.isnan(imgout[0])] = 255

                # Cast
                imgout = imgout.astype(shot_image.dtype)

                dem_transform = dem_raster.profile['transform']
                offset_x, offset_y = dem_raster.xy(dem_bbox_miny + miny, dem_bbox_minx + minx, offset='ul')
                profile = {
                    'driver': 'GTiff',
                    'width': imgout.shape[2],
                    'height': imgout.shape[1],
                    'count': num_bands + 1 if with_alpha else num_bands,
                    'dtype': imgout.dtype.name,
                    'transform': rasterio.transform.Affine(dem_transform[0], dem_transform[1], offset_x,
                                                           dem_transform[3], dem_transform[4], offset_y),
                    'nodata': None,
                    'crs': crs
                }

                outfile = os.path.join(cwd_path, shot.id)
                if not outfile.endswith(".tif"):
                    outfile = outfile + ".tif"

                with rasterio.open(outfile, 'w', **profile) as wout:
                    for b in range(num_bands):
                        wout.write(imgout[b], b + 1)
                    if with_alpha:
                        wout.write(alpha, num_bands + 1)

                print("Wrote %s" % outfile)
            else:
                print("Cannot orthorectify image (is the image inside the DEM bounds?)")
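As a quick sanity check of the bilinear sampling used in the script, the same clamped interpolation logic can be exercised standalone on a tiny 2x2 image (this is a self-contained copy for testing, not a replacement for the script's function):

```python
import numpy as np

def bilinear_interpolate(im, x, y):
    """Sample im at fractional coordinates (x, y), clamping at the borders."""
    x = np.asarray(x)
    y = np.asarray(y)
    x0 = np.floor(x).astype(int); x1 = x0 + 1
    y0 = np.floor(y).astype(int); y1 = y0 + 1
    x0 = np.clip(x0, 0, im.shape[1] - 1); x1 = np.clip(x1, 0, im.shape[1] - 1)
    y0 = np.clip(y0, 0, im.shape[0] - 1); y1 = np.clip(y1, 0, im.shape[0] - 1)
    Ia, Ib, Ic, Id = im[y0, x0], im[y1, x0], im[y0, x1], im[y1, x1]
    wa = (x1 - x) * (y1 - y); wb = (x1 - x) * (y - y0)
    wc = (x - x0) * (y1 - y); wd = (x - x0) * (y - y0)
    return wa * Ia + wb * Ib + wc * Ic + wd * Id

im = np.array([[0.0, 10.0],
               [20.0, 30.0]])
print(bilinear_interpolate(im, 0.5, 0.5))  # midpoint of all four pixels -> 15.0
```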

Great job with the code, by the way. Sadly, I didn't expect that the output photos would be resized and offset. Is there a way to georeference the dataset's source images as-is, or to somehow map pixel coordinates from the source image to DEM pixel coordinates? I'm still studying your code. If you can help, it would be a huge help.
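For reference, the mapping between DEM cells and image pixels in the script goes through the colinearity equations: a world point projects into the undistorted image via the camera rotation R, origin (Xs, Ys, Zs), and focal length f. A minimal standalone sketch of the forward direction, with a toy identity rotation (an assumption for the example, not real pose data):

```python
import numpy as np

def world_to_image(pt, cam_origin, R, f, img_w, img_h):
    """Project a world point to (undistorted) image pixel coordinates
    via the colinearity equations (same formula as the script's inner loop)."""
    d = np.asarray(pt, dtype=float) - np.asarray(cam_origin, dtype=float)
    den = R[2] @ d  # depth along the camera's viewing axis
    x = (img_w - 1) / 2.0 - f * (R[0] @ d) / den
    y = (img_h - 1) / 2.0 - f * (R[1] @ d) / den
    return float(x), float(y)

# Nadir camera 10 units above the origin, identity rotation (toy example):
R = np.eye(3)
x, y = world_to_image((0, 0, 0), (0, 0, 10), R, f=1000.0, img_w=101, img_h=101)
print(x, y)  # the point directly below the camera lands at the principal point: 50.0 50.0
```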
Best regards


This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.