Orthophoto resolution calculation based on desired output .TIFF image resolution and physical area coverage

I purchased “The Missing Guide to ODM”, and I have to say that for anyone wanting to learn more about ODM it was a worthy purchase.

In the book, I found a formula for dem-resolution (which is noted as being the same as orthophoto-resolution) on page 70, shown below:

(100 meters * 100 cm/meter ) / 10 cm/pixel = 1000 pixels

This could also be written as below, with the values shown as variables:

(AREA_WIDTH * 100 cm/meter ) / ODM_RESOLUTION = OUTPUT_TIFF_WIDTH

I want to generate *.tiff image files with roughly the same image resolution every time, even if the number of input images is different or the area’s physical dimensions are different. So I went ahead and rearranged the formula as below:

ODM_RESOLUTION = (AREA_WIDTH * 100 cm/meter ) / OUTPUT_TIFF_WIDTH

So I thought: since I know AREA_WIDTH and I want a desired .TIFF image width of 10,000 px, I can solve for ODM_RESOLUTION and send the resulting value as the orthophoto-resolution parameter to ODM.
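
In code, the calculation I run before each job looks roughly like the sketch below (the function name and the 10,000 px target are just mine, for illustration):

# Minimal sketch of the rearranged formula: AREA_WIDTH is in meters,
# the desired output width is in pixels, and the result is in cm/pixel.
def orthophoto_resolution_cm_per_px(area_width_m, output_tiff_width_px):
    # ODM_RESOLUTION = (AREA_WIDTH * 100 cm/meter) / OUTPUT_TIFF_WIDTH
    return (area_width_m * 100.0) / output_tiff_width_px

# Example: a 1,000 m wide boundary and a 10,000 px target width
# gives an orthophoto-resolution of 10 cm/pixel.
print(orthophoto_resolution_cm_per_px(1000, 10000))  # 10.0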

After trying with a few different data sets of varying physical area sizes, I found that this didn’t work.

It would be great if there were a way to make the output TIFF the same image resolution every time. If anyone knows how to do this, I would love to hear how it can be done.


Welcome! Great analysis and idea.
May I ask for your use-case?

Can you retry but with crop 0? That might be why it isn’t deterministic, as the crop varies based on holes/sinks in the dataset.


Thanks for the welcome and the quick reply!

The basic idea is to provide the best orthophoto-resolution while also limiting the output image resolution. With varying data sets, I’ve found it really difficult to find an orthophoto-resolution that works well with them all. Also, I don’t want to have to manually tweak the parameters myself; instead, it would be good to calculate the parameters at runtime based on the input data.

I just checked my parameters, and I’m already using crop=0. Below are my parameters (with orthophoto-resolution being the calculated value described in the first post):

ignore-gsd=true, orthophoto-resolution=x.yz, crop=0, resize-to=-1, skip-3dmodel=true, fast-orthophoto=true


This will always vary, even between runs of the same images due to the non-deterministic nature of reconstruction. It can be estimated, but not calculated, so you won’t be able to solve the equation precisely.

If you need an image of the same dimensions, could you use ImageMagick (or a similar tool) to crop and resize it to your needs after processing?
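
For example, something roughly like this after the task finishes (just a sketch, not part of ODM; it uses GDAL rather than ImageMagick, since the orthophoto is a GeoTIFF, and the file names are placeholders):

from osgeo import gdal

# Resample the finished orthophoto to a fixed pixel size. gdal.Translate reads
# and writes in blocks, so it doesn't need the whole raster in memory at once.
def resize_orthophoto(src_path, dst_path, width_px=10000, height_px=10000):
    gdal.Translate(dst_path, src_path, width=width_px, height=height_px)

resize_orthophoto("odm_orthophoto.tif", "odm_orthophoto_10k.tif")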


Hi Pierotofy,

Let me explain a little bit about the process:

The drone is flying missions over an area defined by latitude/longitude pairs (polygon). I’m calculating a few things such as minimum/maximum latitude/longitude and using them to determine a boundary containing the polygon. I’m using the Haversine formula to calculate the boundary width and height in meters. It’s not exact, because the photos will reach outside of the boundary a little bit, but it’s not far off.
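
The width/height calculation looks roughly like this (a simplified sketch; the function names and the mid-latitude simplification are mine):

import math

EARTH_RADIUS_M = 6371000  # mean Earth radius, meters

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance between two lat/lon points, in meters.
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def boundary_size_m(lats, lons):
    # Approximate width/height in meters of the bounding box around the polygon.
    min_lat, max_lat = min(lats), max(lats)
    min_lon, max_lon = min(lons), max(lons)
    mid_lat = (min_lat + max_lat) / 2
    width = haversine_m(mid_lat, min_lon, mid_lat, max_lon)   # east-west extent
    height = haversine_m(min_lat, min_lon, max_lat, min_lon)  # north-south extent
    return width, height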

My images don’t contain EXIF metadata, so I’m not sure if that has an impact on ODM (there’s no latitude/longitude in them). I noticed that when I try using GSD I occasionally get some really small orthophotos, so I decided to use the ignore-gsd parameter.

I’ve been trying the other approach you mentioned, where you let ODM do its thing with a static orthophoto-resolution that generates large outputs (I tried values between 0.10 and 0.20). My program uses memory streams (reducing memory usage; data is streamed off storage in smaller chunks) to convert and resize the result into a 10,000 px × 10,000 px JPEG. It works sometimes (e.g. a 1.4 GB orthophoto .TIFF file was successfully converted to a 6.7 MB JPEG image by my program), but due to this ‘non-deterministic’ nature, other times the task fails with out-of-memory errors within ODM itself. Is it possible to change ODM to use streams instead of allocating the full byte array in one go? Although, looking at the error message, it’s actually happening in OpenCV (which might be out of scope for ODM):

OpenCV(4.5.0) /code/SuperBuild/src/opencv/modules/core/src/alloc.cpp:73: error: (-4:Insufficient memory) Failed to allocate 32836847368 bytes in function 'OutOfMemoryError'

These are my computer specifications:

[screenshot of computer specifications]

In my original post I was trying to describe a way to calculate the best orthophoto-resolution for a particular job so that it doesn’t run out of memory and produces a consistent output resolution.

I’m happy to go either way, and thanks for the suggestion. Are there any other parameters I can use to do it?


This is going to further complicate the situation, as scale cannot be estimated by ODM without some georeferencing information… I’m not sure there’s a solution here.
