Image Weighting Options

When rendering orthoimages and 3D textures from images of widely differing quality, perhaps taken with different cameras, are all images given equal weight in the areas where they overlap? If not, what criteria are used for image weighting?

At first glance, inverse ground sampling distance (1/GSD) might be a candidate for image weighting: the smaller the mean GSD of an image, the higher the weight assigned to it.
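As a minimal sketch of the idea (this is illustrative only, not anything the software actually implements), inverse-GSD weighting could look like:

```python
# Hypothetical sketch: normalized inverse-GSD weights for overlapping images.
# GSD values are in cm/pixel; smaller GSD = finer detail = higher weight.

def inverse_gsd_weights(gsd_cm):
    """Return weights proportional to 1/GSD, normalized to sum to 1."""
    inv = [1.0 / g for g in gsd_cm]
    total = sum(inv)
    return [w / total for w in inv]

# Image 1 shot from higher up (4 cm/px), Image 2 flown closer (1 cm/px):
weights = inverse_gsd_weights([4.0, 1.0])
print(weights)  # → [0.2, 0.8] — Image 2 gets 4x the weight of Image 1
```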

It would also be helpful to be able to set image weight manually. For example, we might want to specify that Image 2 has much higher weight than Image 1, below, and effectively ‘overwrites’ it in the areas where they overlap. (Note the ferns growing inside the stone tower, which might be of particular interest.)

Image 1

Image 2

Is there currently a way of managing image weighting options and, if not, could it please be added to a feature wish list? Any links to documentation, comments or suggestions would be greatly appreciated. Thanks!


That’s a bit tricky to do, since you can’t have varying scale/pixel size across an image unless it is distorted.
If a small area or areas show much greater detail due to smaller GSD and that GSD is applied across the whole image, then the areas with larger GSD will appear blurry.
In practice, you really need to image the whole area at the desired GSD of the parts you are interested in.

In the stone tower above, fly the drone at a height that shows the detail you require in the tower across the whole area of interest, and if you want to see more of the internal walls, go inside and take them with the drone (hand-held or flown carefully!) or use another camera with a wide(r)-angle lens, standing as far back as you can. Those images will have a smaller GSD, but should render with the same GSD as the drone images from above, assuming there are significantly fewer of them than drone images from above.

It’s very important to ensure there are enough common features in overlapping images for them all to be tied together.


Thanks for your detailed reply and the sound advice.

Overall image quality depends on more than just GSD. In my stone tower example, Image 2 also has better lighting (using the landing lights of the drone) than Image 1, and this makes a big difference.

I appreciate that incorporating images of different quality into a single model is tricky and involves important tradeoffs. For my present purposes, I could live with lower-quality parts of a model appearing blurry when one zooms in (e.g. with MeshLab) on parts of the model that have higher image quality. Could this be achieved by applying weights to individual pixels or clusters of pixels?

The key issue is to avoid throwing away existing high-quality pixels just because some parts of a scene have been recorded at lower quality. In other words, what I am looking for is a mechanism that keeps the high-quality pixels in the ‘foreground’ rather than diluting them with the overlapping parts of lower-quality images.


Indeed it does, but if your better-lit images with smaller GSD are greatly outnumbered by the higher-flown, less detailed, poorly lit imagery, then there will be little or no benefit gained from them.

I think the only way to apply higher weight to the good images is to give them higher weight numerically, i.e. have more of them and fewer of the lower-quality images.

If you are unable to re-fly the site, then one solution may be to remove some of the higher-flown imagery over the tower, although without knowing what you have, I don’t know whether that is possible while still being able to create a single ortho.

Alternatively, have 2 or more orthophotos, one of the overall site, and more detailed orthos of selected areas.


The numerical weighting sounds like an interesting concept. I shall give it a try and report back on the results. Apologies in advance, though: it may take a while. Many thanks for your help!


You may already be doing so, but make sure to get plenty of non-nadir imagery 🙂


The existing approach uses a few criteria that I think do what you are asking, but perhaps not in the way you expect:

For 3D meshes, images are weighted based on a number of criteria, including:

  • which images are closest to the mesh face
  • priority to pixels from the center, least distorted portion of the image
  • priority to pixels whose viewing direction is closest to the angle of the mesh face

For orthophotos, the criteria are a little different: for the third criterion, priority is instead given to the most vertical available data, so that there is as little perspective change as possible going from photo pixel to mesh pixel to orthophoto pixel.
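As a toy sketch of how multiple criteria like those above might combine into a single per-pixel score (this is purely illustrative; the terms and their weighting are my own assumptions, not the software's real formula):

```python
import math

def texture_weight(cam_to_face_dist, radial_frac, view_angle_deg):
    """Toy score combining the three mesh-texturing criteria above.

    cam_to_face_dist: distance from camera to mesh face (closer is better)
    radial_frac: 0.0 at the image center, 1.0 at a corner (center is less distorted)
    view_angle_deg: angle between the view ray and face normal (0 = head-on)
    """
    closeness = 1.0 / (1.0 + cam_to_face_dist)
    centrality = 1.0 - radial_frac
    # Faces seen edge-on or from behind contribute nothing.
    alignment = max(0.0, math.cos(math.radians(view_angle_deg)))
    return closeness * centrality * alignment

# A close, centered, head-on view scores higher than a distant oblique one:
good = texture_weight(5.0, 0.1, 10.0)
poor = texture_weight(50.0, 0.8, 70.0)
```

Under this kind of scheme, a closer, better-positioned image naturally dominates overlapping areas without any manual weight being set.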

What you might try is using the use-3dmesh flag and seeing whether that gets closer to what you want. Instead of following the orthophoto rules for generating the orthophoto, it uses the textured 3D mesh. In principle, it should prioritize your closer second image over your first, but there will be some cost to the overall quality of the orthophoto.
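For reference, a run with that flag might look like the following (the dataset path and project name are placeholders; adjust to your own setup):

```shell
# Rebuild, generating the orthophoto from the textured 3D mesh
# instead of the default 2.5D surface model.
docker run -ti --rm -v /my/datasets:/datasets opendronemap/odm \
    --project-path /datasets stone_tower \
    --use-3dmesh
```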


Thanks for your very clear and concise explanation of the current weighting criteria. That is extremely helpful and it does indeed do what I had been thinking.

Although the sample images I included above were shot almost from nadir, I am currently most interested in improving the quality of 3D meshes using oblique views, some of which are of considerably better quality than others. Although the criteria you outline should already help me do that, I wonder if it might still be worth trying Gordon’s suggestion of adding additional weight to particularly good images simply by including more of them. Would identical copies of the same high quality image (with different file names) be a way to test this, or would they be detected and discarded at the matching stage?

Never a good idea when dealing with data!

I’m not sure how the software will react, but what happens during matching when every point is a perfect match? It might get bogged down, and it doesn’t add any information or detail to the output product, which is what you are looking for.
