Hi ODM community,
I am a relatively new user and was able to produce some nice results from our recent drone mission. All images had accurate geotag information. To improve the final output, I included 6 GCPs (each marked on six images) in the processing pipeline. However, the results look terrible (see screenshot below). How did this happen? If I interpret the text notifications in the command line correctly, ODM applies the GCPs for georeferencing at the very end, and I guess it tries to derive a 7-parameter transformation based on the input GCP file - is this correct? Did I maybe include too few GCPs?
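For reference, my GCP file follows the usual ODM gcp_list.txt layout: the projection on the first line, then one line per marked image (geo_x geo_y geo_z im_x im_y image_name, optionally a point label). The coordinates and file names below are made-up placeholders, not my real data:

```
+proj=utm +zone=33 +ellps=WGS84 +datum=WGS84 +units=m +no_defs
368100.0 5310950.0 171.2 1024 768 DJI_0001.JPG gcp1
368100.0 5310950.0 171.2 210 1300 DJI_0002.JPG gcp1
368250.0 5311100.0 170.8 900 400 DJI_0003.JPG gcp2
```

Note that the same ground point (same geo coordinates) repeats once for every image it is marked on.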
Ortho without GCPs:
docker run -it --rm -e NVIDIA_VISIBLE_DEVICES=1 -v /data2/stockerec/ODM_projects/DJI_65_8060:/code/images -v /data2/stockerec/ODM_projects/ODM_DJI_65_8060/odm_orthophoto:/code/odm_orthophoto my_odm_image --resize-to -1 --orthophoto-resolution 3 --use-exif
This looks to me like you didn’t get enough overlap for ODM to stitch the images together properly. Do you know what settings you used for your flight?
The flight mission was carried out with 80% forward overlap and 60% side lap - usually, these settings have been sufficient. The interesting thing is that the processing goes perfectly fine when I don't include any GCPs and only use the geotag information. This is the screenshot of the ortho without GCPs:
I had the same issue a couple of days ago with GCPs in one of my surveys, except that mine had also shrunk to about 5% of its original size and was unrecognizable. I thought that I'd messed up the GCPs or a node had crashed or something, so I ran it again today before seeing your post. I'll see how it turns out.
One interesting thing I did notice was that I flew two test surveys that day - the first at 33m and the second at 100m. The Mavic Pro I was using tagged the 100m photos at 5.3m, which is way off both the altitude at ground level and the altitude of the drone. The survey at 33m seemed to have the correct values (~68m; I'm just above sea level). Here's the same part of the park with two widely different altitudes from the drone (I think the DJIs must have a bug):
The survey that came out poorly was the one with the incorrect altitudes. I’m running both again so we’ll see what the results look like. I’ll update tomorrow.
In my experience 60% side overlap is not enough. I have had similar issues without GCPs in the past, and your image with the GCPs looks like some of the results I was getting. You should re-fly the mission with 80/80 to get a better result (90% overlap does even better because you get tie points across several images, but the datasets end up huge).
Another thing you can do is fly a second survey at a slightly higher altitude and process them together. This gives better tying between images without losing resolution. If you flew at 40m, try flying the second run at 50m or even 60m (don't go too high, because the computer won't be able to find matches between the images). This changed my outputs dramatically and made my results really good. Unfortunately I have not experienced this problem with GCPs, only with poor overlaps.
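As a rough sanity check on what 60% vs. 80% side lap means on the ground, the spacing between flight lines can be worked out from the image footprint. The sensor and altitude numbers below are illustrative assumptions (a typical small-drone camera), not values from the original flight:

```python
# Rough flight-planning arithmetic: exposure/line spacing for a given overlap.
# Sensor size, focal length, and altitude below are illustrative assumptions.

def ground_footprint(altitude_m, sensor_mm, focal_mm):
    """Ground coverage of one image dimension at nadir, in meters."""
    return altitude_m * sensor_mm / focal_mm

def spacing(footprint_m, overlap_pct):
    """Distance between image centers that yields the given overlap."""
    return footprint_m * (1 - overlap_pct / 100.0)

# Example: ~6.17 mm sensor width, 4.7 mm focal length, flying at 40 m AGL
across = ground_footprint(40, 6.17, 4.7)   # across-track footprint, ~52.5 m
print(round(spacing(across, 60), 1))  # line spacing at 60% side lap -> 21.0
print(round(spacing(across, 80), 1))  # line spacing at 80% side lap -> 10.5
```

So going from 60% to 80% side lap halves the line spacing, which is why the dataset size grows so quickly.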
How many of the images in your dataset did you mark the GCPs on? Did you mark all the images where the control points are visible? You say 6 points on 6 images each, so that's 36 marked images across your dataset. If you didn't mark a GCP on every image where it is visible, ODM may have excluded that common tie point between the unmarked images - at least, I believe it tries to exclude them. I am not 100% sure about this, though; it is simply a thought.
I don't think that 60% side lap is too little, as the same set of pictures processed nicely without GCPs. I also included a cross flight, carried out at a different altitude, in the processing.
The last point here in the comments could be true - I certainly didn't mark all images where the GCP points were visible. If I find some time, I will give it a try and mark the points on all images where the respective point is visible. On the other hand, I didn't have such big problems with another dataset where I didn't mark all images either… Can someone with more insight into the code comment on this hypothesis?
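Before re-marking everything, a quick way to see how evenly the points are currently marked is to count how many images each GCP appears on in gcp_list.txt. This is just a hypothetical helper sketch (not part of ODM) that assumes the standard file layout with a label in the last column; the sample data is made up:

```python
# Count how many images each GCP in a gcp_list.txt is marked on.
# Assumes the standard layout: projection header, then
# geo_x geo_y geo_z im_x im_y image_name [gcp_label]
from collections import Counter

def gcp_counts(text):
    """Return {gcp_label_or_coords: number_of_image_marks}."""
    lines = [l.strip() for l in text.splitlines() if l.strip()]
    counts = Counter()
    for line in lines[1:]:            # skip the projection header
        parts = line.split()
        # use the label if present, otherwise the geo coordinates as key
        key = parts[6] if len(parts) > 6 else tuple(parts[:3])
        counts[key] += 1
    return dict(counts)

sample = """EPSG:4326
16.37 48.20 171.2 1024 768 DJI_0001.JPG gcp1
16.37 48.20 171.2 210 1300 DJI_0002.JPG gcp1
16.38 48.21 170.8 900 400 DJI_0003.JPG gcp2
"""
print(gcp_counts(sample))  # {'gcp1': 2, 'gcp2': 1}
```

A point that only shows up on one or two images would stand out immediately in the output.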