Same flight one month apart, differently sized (oriented?) orthos

I am not sure where to ask this, but some background:

I did a flight in DroneDeploy back in July, then copied and pasted the flight plan and flew the same mission with the same Phantom 4 Pro in September. I processed the images (I still have the individual images from each mission) in WebODM with the same settings, and the resulting orthos are off, or at least don't match. They almost seem slightly transposed or translated from each other. I'm curious what might have caused this:

My flight?
The processing methods?

Any way I could begin to troubleshoot?

Here is a link to the JuxtaposeJS comparison of the two images, with a slider to see how they differ.

With commodity GPS you can expect (95% of the time) around 5 m of horizontal error. When comparing between flights, each flight contributes its own error, so expect up to 10 m of relative error.
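To make that arithmetic concrete: if each flight carries an independent ~5 m (95%) horizontal error, the worst case is the two offsets adding linearly, while the typical combined error is the root-sum-square. A small sketch of this (my own illustration, not anything from ODM):

```python
import math

def relative_error(err_a_m: float, err_b_m: float) -> dict:
    """Combine two independent horizontal GPS errors (95% circular error).

    Worst case: the two position offsets point in opposite directions
    and add linearly. Typical case: independent errors combine in
    quadrature (root-sum-square).
    """
    return {
        "worst_case_m": err_a_m + err_b_m,
        "rss_m": math.hypot(err_a_m, err_b_m),
    }

# Two flights with a Phantom 4 Pro's commodity GPS, ~5 m each (95%):
errs = relative_error(5.0, 5.0)
# worst case: 10.0 m of relative offset; typical RSS: ~7.07 m
```

So the ~10 m figure above is the worst case; on average the two orthos should land somewhat closer than that, but still metres apart.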

This is a hardware problem, not a software one. However, there are some (nascent) solutions, see e.g.

Looks off in scale… altitude? Without ground control points to establish scale, I would not expect them to overlay exactly.

If you turn on --ignore-gsd, do you get a better result? Just trying to exclude the possibility of a software bug (I think smathermather-cm is correct, however).

I will try this just to be sure.

I was hoping I could build very nice progress photos of construction sites over time without having to get into expensive GCPs. If I really need quality GCPs, it means I need the expensive GPS units to go with them :confused: Seems a bummer that I can't even do quality progression photos without expensive GCP equipment. I guess I didn't realize the GPS in the Phantom 4 Pro was that bad.

I'm still pretty much a rookie here, but my interest is in contour lines and I need the GCP capability. I have teamed up with a local surveyor who is just getting into drones as well. We made some targets that are just 16-inch-square pieces of plywood with a black and white pattern on them. Just thinking out loud, but can you scale one image to the other manually using existing identifiable points in the images? Also, have you seen these devices…
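On the "scale one image to the other manually" idea: if both orthos are north-up (no rotation between them), two matched points, say the same plywood target or building corner visible in each, are enough to solve for a uniform scale and shift. A rough sketch of the math (my own, purely illustrative; real georeferencing in QGIS or GDAL handles rotation and more than two points):

```python
import math

def scale_and_shift(src_pts, dst_pts):
    """Estimate a uniform scale + translation mapping src -> dst from two
    matched points (e.g. the same building corner seen in each ortho).
    Assumes no rotation between the orthos (both are north-up)."""
    (x1, y1), (x2, y2) = src_pts
    (u1, v1), (u2, v2) = dst_pts
    # Scale = ratio of the distances between the two point pairs.
    s = math.dist((u1, v1), (u2, v2)) / math.dist((x1, y1), (x2, y2))
    # Translation chosen so the first matched point maps exactly.
    tx, ty = u1 - s * x1, v1 - s * y1
    return s, tx, ty

def apply_transform(s, tx, ty, pt):
    """Map a point from the first ortho into the second."""
    return (s * pt[0] + tx, s * pt[1] + ty)

# Example: second ortho is at 2x scale and shifted by (5, 5):
s, tx, ty = scale_and_shift([(0, 0), (10, 0)], [(5, 5), (25, 5)])
# s == 2.0, (tx, ty) == (5.0, 5.0)
# apply_transform(s, tx, ty, (10, 0)) -> (25.0, 5.0)
```

With more than two matched points you would fit the transform by least squares instead, which is essentially what the QGIS Georeferencer does for you.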

Okay, how do I do this??!? This sounds like a good enough solution for nice construction progress orthos. I have seen the Emlid one; I would have to see how accurate it can be, though. I am not a surveyor, so while I want accurate maps, I feel like a total hack trying to dink around and purchase Emlid or other such equipment :slight_smile:

I don't know if it's possible to load the composite image into a photo editing program and resize and trim them manually. I'm not a happy camper at the moment… my server on DigitalOcean is down. I had some weird issue with processing my payment and am still trying to get it straightened out.

Is that where you are hosting your instance of WebODM? I had a leftover decent-ish PC; I just slapped Ubuntu on it and am running there on my network (opened it up to the outside too). Seems to work okay. I thought maybe I could do the 'tie points' within WebODM somehow. I could get un-lazy and figure out how to scale the pictures to each other, I guess… I just thought there was a more automagic way to do this.

Yes… I just set it up about a month ago and have been pleased with it for the most part. I have a Django box there as well that is still working at the moment. You can have a server up and running in minutes, with a lot of prebuilt options, and you can resize memory and processors up and down. My friend just got a Mavic 2 Pro, but unfortunately the GPS altitude above sea level is unusable. An upcoming firmware upgrade is supposed to address this issue.

You could do a pretty good job on this by just quickly referencing one ortho to the other in QGIS.

Also, if you’re comfortable on the command line, you might be able to use the split-merge approach in OpenDroneMap:

Addendum: if you’re not comfortable on the command line and can share the images (either widely or just with me), I’d be happy to test if split merge is an approach that would work for this use case.

I also heard it isn’t great for mapping because of its rolling shutter.

I don't mind sharing the images, though there are about 166 of them. I am not sure why split-merge would help, since it's meant for large datasets. I am reading the link on QGIS too; I haven't dabbled much at all. I am comfy on the command line, but my ability to log into that box and do command-line things is hindered until at least the weekend. I could pop both missions up on Google Drive and post links?

This is interesting, but I am not sure it would work with a Phantom 4 Pro. It seems to want to connect to a camera's hot shoe, if I read it right? I need to delve into it more to double-check.

Yes, that might be a bit of a chore to retrofit… I had advertised on Craigslist for a surveyor to set ground control points. The guy I eventually connected with had already purchased a Mavic and had just received it the same day we connected. I was afraid it might have a rolling shutter, as there was no mention of a mechanical shutter as with the P4P. He has since bought a used Inspire 1… I've never seen one in person; I'm heading over this evening to see it fly. It might be a better platform to experiment with: it has been around for a while, and I read that DJI released an SDK for those wishing to write custom apps. Maybe that's just wishful thinking on my part. I've scratch-built quite a few quads that use Betaflight on STM32F4 controllers, but I've not experimented with INAV, which is flight controller software with waypoint capability.

Google Drive would work fine.

This is a proposed off-script use of split-merge. It's true that split-merge's stated purpose is to allow splitting really large datasets. But the problem it actually solves is the co-alignment of datasets.

It’s actually pretty trivial to split datasets up, process them on different nodes and then put them back together. The challenge with the use of commodity GPS, however, is that when that approach is used naively with large datasets, the datasets don’t go back together well, as they don’t match well in the XY and Z axes. So the challenge for split-merge was to maintain alignment of all the input data, while allowing for somewhat independent processing of it.

In your use case, it's a pair of small datasets that need to be aligned but processed separately. This is a very similar problem to one dataset being split into two pieces and processed separately, and the alignment issues in doing so naively are the same in each case.
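To illustrate the split side of the idea in miniature: the dataset gets chopped into submodels that deliberately share images with their neighbours, and those shared images are what let the independently processed pieces be co-registered afterwards. A toy sketch (my own; real ODM split-merge partitions spatially by image GPS positions, not by list order):

```python
def split_with_overlap(images, group_size, overlap):
    """Toy illustration of the 'split' in split-merge: chop an ordered
    image list into submodels that each share `overlap` images with
    the next one, so the reconstructions can later be co-registered
    on those shared images."""
    if overlap >= group_size:
        raise ValueError("overlap must be smaller than group_size")
    step = group_size - overlap
    groups = []
    for start in range(0, len(images), step):
        groups.append(images[start:start + group_size])
        if start + group_size >= len(images):
            break
    return groups

# 10 images, submodels of 4 sharing 2 images with their neighbour:
groups = split_with_overlap(list(range(10)), group_size=4, overlap=2)
# -> [[0, 1, 2, 3], [2, 3, 4, 5], [4, 5, 6, 7], [6, 7, 8, 9]]
```

The shared images play a role loosely analogous to tie points: they pin each submodel's reconstruction to its neighbour's frame when everything is merged back together.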

I would love to see how this works; here are the two datasets. About 166 photos each, Phantom 4 Pro flown with DroneDeploy.



Love to see how this goes. If I can figure out how to make matching orthos over time, I have a decent deliverable, IMHO.

Cool. I’ll take a look this week.
