Calculating geocoordinates for image pixels after processing

Hi, we have a pipeline for detecting invasive plants in drone images via colour matching. It runs on individual images, so it takes an image and returns relative pixel coordinates within that image for each detected plant.

We’d like to convert these per-image pixel coordinates into actual geocoordinates so we can provide users with a map of where the invasive plants are. As I understand it, for a given pixel coordinate in the original image, we’d need to track that to its location in the orthorectified image, which I think means we’d need to have access to the orthorectified images or the transformation matrix for the images?
I presume ODM has to calculate all this anyway, so the data presumably exists somewhere, but I couldn't find documentation on how to interpret the numbers in the cameras.json file (if that's the correct place to be looking).

There are a few posts trying to do somewhat similar things (e.g. For a 3d point in pointcloud, How to get Image and corresponding 2d point? - #5 by ThorZ), but I didn't see any that had found a solution to this.

Any suggestions or pointers to possible solutions would be appreciated.
Apologies in advance if I'm missing something obvious.

5 Likes

The easiest way would be to modify ODM/orthorectify.py at master · OpenDroneMap/ODM · GitHub (which is not part of the core ODM pipeline, but a contrib module). Instead of orthorectifying every pixel in the image, you could just choose the pixels you annotated (and perhaps skip the visibility testing). You will need a DSM as input, and you will also need to apply lens undistortion to your images if you've run the detection algorithm on the raw images rather than the ones in opensfm/undistorted/images.
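
To make the idea concrete, here's a rough sketch of the core loop: walk the DSM cells, project each one into the camera, and keep only the cells that land on an annotated pixel. This is not orthorectify.py's actual code; every name and parameter below (pixels_to_geocoords, dsm_to_world, R, t, focal, cx, cy) is a placeholder, and the script itself is the best reference for how ODM actually stores the camera poses and intrinsics.

```python
# Rough sketch (hypothetical names, not ODM's API): project DSM cells into
# the camera and keep the ones that land on a detected pixel. Visibility
# testing is skipped, as suggested above.
import numpy as np

def pixels_to_geocoords(detected_pixels, dsm, dsm_to_world,
                        R, t, focal, cx, cy, img_w, img_h):
    """detected_pixels: (u, v) pixel coords from the detector, measured on
    the *undistorted* image. dsm: 2D elevation array. dsm_to_world(r, c):
    returns (easting, northing) for a DSM cell. R, t: world-to-camera
    rotation and translation. focal, cx, cy: pinhole intrinsics in pixels."""
    detected = set(detected_pixels)
    hits = []
    rows, cols = dsm.shape
    for r in range(rows):
        for c in range(cols):
            z = dsm[r, c]
            if np.isnan(z):
                continue
            east, north = dsm_to_world(r, c)
            p_cam = R @ np.array([east, north, z]) + t  # world -> camera
            if p_cam[2] <= 0:                           # behind the camera
                continue
            u = int(round(focal * p_cam[0] / p_cam[2] + cx))
            v = int(round(focal * p_cam[1] / p_cam[2] + cy))
            if 0 <= u < img_w and 0 <= v < img_h and (u, v) in detected:
                hits.append((east, north, z))           # geocoordinate hit
    return hits
```

Note this assumes a plain pinhole model on the undistorted image, so if the detections were made on the raw images you'd need to undistort the coordinates first, as mentioned above.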

Share your results afterwards? :pray:

3 Likes

This sounds interesting, but I am curious if you could use ODM first to create an ortho then use your pipeline on the rendered image to detect the invasive plants? The rendered ortho would already be georeferenced at that point.

2 Likes

I thought about that as well. My initial thought was that the orthos are likely to distort the original plant images a bit, making an already hard-to-find signal that much harder to detect, but I haven't tested that. The other issue is that the orthos will potentially be too big to easily run through the detector pipeline. But you're right that it might be a better way to go about it, so we might look into sectioning the orthos into smaller pieces and sending those off.
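
If we do go the tiling route, a minimal sketch of the sectioning step (assuming rasterio, and ODM's odm_orthophoto.tif as input; tile size and output paths are arbitrary) could look like this. Keeping each window's geotransform means detections made on a tile can still be mapped straight back to geocoordinates:

```python
# Rough sketch: cut ODM's orthophoto into fixed-size GeoTIFF tiles with
# rasterio, preserving each tile's geotransform. Tile size and the output
# directory are arbitrary choices.
import os
import rasterio
from rasterio.windows import Window

def tile_ortho(ortho_path, out_dir="tiles", tile_size=2048):
    os.makedirs(out_dir, exist_ok=True)
    with rasterio.open(ortho_path) as src:
        for row in range(0, src.height, tile_size):
            for col in range(0, src.width, tile_size):
                window = Window(col, row,
                                min(tile_size, src.width - col),
                                min(tile_size, src.height - row))
                profile = src.profile.copy()
                profile.update(width=window.width, height=window.height,
                               transform=src.window_transform(window))
                out = os.path.join(out_dir, f"tile_{row}_{col}.tif")
                with rasterio.open(out, "w", **profile) as dst:
                    dst.write(src.read(window=window))

# e.g. tile_ortho("odm_orthophoto/odm_orthophoto.tif")
```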

1 Like

Thanks Piero, we’ll look into that and I’ll definitely report back if we get a good pipeline working.

1 Like

That sounds like a really interesting project Tim, can you share more about it?
E.g. what types of plants you’re detecting, what drone & resolution, what software, etc.

1 Like

Both are true. You'll get better detection if you haven't spliced and diced it together as an orthophoto through ODM. It does a good job, but any orthorectification process with any photogrammetry tool is a bit akin to Aykroyd's Bass-o-Matic. This is doubly true for natural features and particularly complex geometries like we see in plants.

Here’s how I’ve thought about doing it in the past:

  • Perform your detection on the undistorted images.
  • Use the per-image pixel coordinates to create classified images.
  • Run those images through orthorectify.py in contrib, per Piero's suggestion. Your surface model can come from a full normal run of OpenDroneMap if needed.
  • With that stack of orthorectified, two-class images, you can post-process to your heart's delight: a simple mosaic of them, a per-cell count as a rough probability of classification (which might be useful for the edges of difficult-to-classify stuff), or some other measure from the stack of true/false classified images you get to play with (one option is sketched below).
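
For the counting idea, a minimal post-processing sketch might look like this, assuming the orthorectified classification rasters have already been resampled onto a common grid (orthorectify.py doesn't guarantee that on its own) and that rasterio is available; paths are placeholders:

```python
# Rough sketch: combine a stack of 0/1 classification rasters into a
# per-cell detection frequency. Assumes all rasters share the same grid
# (same extent and resolution); if not, resample them first.
import glob
import numpy as np
import rasterio

paths = sorted(glob.glob("orthorectified/*_classified.tif"))
count = valid = profile = None
for p in paths:
    with rasterio.open(p) as src:
        band = src.read(1)
        mask = src.read_masks(1) > 0           # valid (non-nodata) cells
        if count is None:
            count = np.zeros(band.shape)
            valid = np.zeros(band.shape)
            profile = src.profile.copy()
        count[mask] += band[mask] > 0          # cell classified as plant
        valid[mask] += 1                       # cell covered by this image

freq = np.where(valid > 0, count / np.maximum(valid, 1), np.nan)
profile.update(dtype="float32", count=1, nodata=np.nan)
with rasterio.open("detection_frequency.tif", "w", **profile) as dst:
    dst.write(freq.astype("float32"), 1)
```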

Unless something has changed that I missed, orthorectify.py isn’t particularly performant, but it is wildly and embarrassingly parallel, so just throw lots of cores at it and you’ll be very happy.

And yes: please share the results. Since you have the classification part already ostensibly done, the rest should be easy and compelling.

2 Likes

So here's a zoomed and cropped example of a plant in the original image and in the ortho.
I do see some artifacts in the ortho due to it being reprojected from a 3D model, but it isn't that bad. These are easy, though, because they are full shrubs; we'd need to check with a flight where we were trying to detect smaller individual flowers, which I think will be a more common scenario.
Original image from drone:
[image]

Same plant in the ortho:
[image]

1 Like

Thanks! We’ll look into that approach and report back.

2 Likes

That’s quite good. Yes, you could probably easily classify directly on the ortho for something this distinct and well behaved. But you do have the challenge that most AI classifiers aren’t built to scale to larger images.

2 Likes

Hi Johnny,

The detection pipeline is being developed by a colleague who runs a software company in rural SE Australia (2piSoftware). They are working with the local city council, which wants to be able to detect and map both invasive and endangered plants. Since we're in Australia, this is a very common need for landcare groups and city councils.

The detection pipeline runs on Amazon cloud services. They started with colour swatches rather than AI since it was an easy workflow to develop, and many of the plants folks need to detect have distinct flowers, so this approach works better than one might expect.

I'm part of a project building a national drone platform for Australian researchers called the Australian Scalable Drone Cloud (ASDC). We're using WebODM for the photogrammetry but have linked it to a Jupyter notebook setup so people can pull their data directly from the WebODM instance into an analytical environment for developing new methods and running existing pipelines.

We thought it would be helpful to link our platform to the weed detection system, so we contracted 2pi to develop their API and a Jupyter notebook that links the two systems. We've just gotten all that working, so now we're exploring what can be done with the system, and mapping the detected image coordinates to a real-world map is an obvious and needed addition to the process.

In terms of ASDC, we're doing a beta minimum-viable-product launch, likely at the end of October, but at this point it will likely be limited to Australian researchers. We're aware this sort of platform would be useful for a wider audience, though, so we're (slowly) working out whether it will be possible to open the platform up more widely, potentially outside of Australia as well. But all of that is TBD since we're in development mode at the moment.

2 Likes

Ok, one more approach that would be more performant and scalable, but has different implications for the output product.

  • Perform your detection on the raw images.
  • Create classified images to use as mask files, à la: Using Image Masks — OpenDroneMap 3.1.7 documentation (see the sketch after this list).
  • Run the ODM pipeline with the raw images and the classified masks.
  • Use the missing data in the final product to determine the extent of the classified area.
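
A minimal sketch of the mask-creation step, assuming (worth verifying against the Using Image Masks doc) that a mask is an image named <original>_mask.png saved alongside the original and that zero-valued pixels are excluded from processing. The function name, detection format, and radius are made up for illustration; this version keeps only the detections, so invert it if you want the opposite polarity per the note below:

```python
# Rough sketch: write an ODM-style mask for each image, keeping only the
# detected areas. As I read the docs, masks are named <image>_mask.<ext>
# and zero-valued (black) pixels are excluded; double-check polarity and
# invert if you want the invasives excluded instead.
import os
import numpy as np
from PIL import Image

def write_mask(image_path, detections, radius=20):
    """detections: list of (x, y) pixel centres from the detector;
    radius: size (in pixels) of the kept disc around each detection."""
    with Image.open(image_path) as im:
        w, h = im.size
    mask = np.zeros((h, w), dtype=np.uint8)       # start fully excluded
    yy, xx = np.mgrid[0:h, 0:w]
    for x, y in detections:
        mask[(xx - x) ** 2 + (yy - y) ** 2 <= radius ** 2] = 255
    root, _ = os.path.splitext(image_path)
    Image.fromarray(mask).save(root + "_mask.png")
```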

Now there are some weird implications: areas masked out won't be included in the final product, so the relative proportion of each image covered would determine whether you want the masks to include or exclude the invasives.

It's a weird hack, but it could work for certain datasets and is certain to run much more quickly on an individual machine, though it won't be as embarrassingly parallel.

2 Likes

This sounds even more interesting now Tim!
What’s the best way to get in touch with you directly?

2 Likes

PM me and we can chat.

2 Likes
