3 projects processed with Drone2Map
Latest MicMac via NodeMicMac:
aukerman and brighton_beach
For some odd reason, I still can't process sand_key.
Sorry if I missed it, but is the comparison tool/UI up and online?
Yes! Participation in this challenge was overwhelmingly positive. Thank you all for the help.
The UAV Arena is now online: https://opendronemap.github.io/UAVArena/
I cannot get Sand Key to process successfully on my machine. @pierotofy, I think we can count my set as complete for now, since we at least have a DroneMapper result for Sand Key via @dronemapper-io's contribution.
Tried out the UAVArena, and it's freaking awesome! Love it. I left you a GitHub issue already.
Also, I have two tricky datasets from my Solo that bowl/cup even in Pix4D with rolling shutter compensation, so if you want them as a “tough” sample set, let me know. I’d love to see what people can get out of them in the various suites.
UAV Arena is awesome!
A link to download DM processed data directly: https://www.dropbox.com/sh/u3xfwurbfrbfs9q/AAAmDgwKBMbhlbNDP0N2ibo2a?dl=0
There are some slight differences in processing, such as ortho seamline feathering, a new DEM algorithm, etc. Enjoy!
If anyone wants to, feel free to take a look and tell me what's gone wrong with Sand Key -
The projections don't line up, even though they should both be EPSG:32617.
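In case it helps with debugging: a common cause of two "EPSG:32617" rasters not lining up is that one of them is actually still in geographic degrees despite what the tag says. Here's a quick pure-Python heuristic for eyeballing the corner coordinates that `gdalinfo` reports (the coordinate values below are made up for illustration, not taken from the actual Sand Key files):

```python
# Sanity check: given corner coordinates reported for a raster, guess
# whether the values look like geographic degrees or projected UTM
# metres, regardless of what EPSG code the file claims. This is only a
# heuristic: lon/lat always fits within [-180, 180] x [-90, 90], while
# UTM eastings/northings are orders of magnitude larger.
def guess_units(x, y):
    """Return 'degrees' if (x, y) fits lon/lat bounds, else 'metres'."""
    if -180.0 <= x <= 180.0 and -90.0 <= y <= 90.0:
        return "degrees"
    return "metres"

# Hypothetical corner values, the kind gdalinfo prints:
print(guess_units(-82.84, 27.66))        # looks like lon/lat
print(guess_units(515000.0, 3060000.0))  # looks like EPSG:32617 metres
```

If one file turns out to be in degrees, reprojecting it (e.g., with gdalwarp) to the intended UTM zone should make the two line up.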
UAV Arena is SUPER awesome! Well done!
How hard would it be for someone with zero HTML experience (me) to recycle the code with different GeoTIFF or DSM data? Think the same site with a DSM on one side of the slider and the orthophoto on the other, or the same site at different capture dates, all selectable by the user. (I know QGIS has a slider plugin.)
Keep up the good work here folks!
Here is my repository for sample sUAS data from my 3DR Solo with full-spectrum GoPro Hero4 Black:
These two have proven tough to reconstruct without bowling in the past. Take a swing at them if you dare
That comparison tool is addictive! The first thing that comes to my mind when running it is whether some of the differences are due to the settings that were chosen in running the apps, or due to differences in the math used in the pipeline steps. Then I wonder which is “right”, then how would I recognize “right” if I saw it? Too tough to ponder…
Then I want to know how to do this with my own data over different times! AAaaaggghhhh.
To try to stay at least a little on topic: are there legit conclusions to be drawn from this incredible body of work that will hold consistently when comparing one tool with another?
Apologies to OP if this is straying off topic.
I guess that one reason for the questions about what's "right" is the use of the orthomosaics in construction. I've been operating under the assumption that I can take my orthomosaics made with GCPs and pass the red-face test when sitting with contractors and surveyors who want to know if the data is accurate. By that they really mean "Is it the same result that a land surveyor would get?" Now, after running the comparison tool, I don't know what I think about that question.
I’ve done a handful of mapping jobs for construction using DD as well Pix4dMapper, and am in the process of getting up to speed on ODM as my go-to affordable solution. Now I’m uncertain whether I really want to be in this market segment (drone mapping for construction).
Now I’m really pulling off topic, so time to stop.
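For what it's worth, the surveyor question can at least be made measurable: hold back a few surveyed check points that were *not* used as GCPs, then compute the RMSE between their surveyed positions and where they land in the orthomosaic. A minimal sketch of that arithmetic in Python, with entirely made-up coordinates (UTM-style eastings/northings):

```python
import math

# Hypothetical check-point comparison: independently surveyed positions
# vs. the positions measured off the orthomosaic. All values invented
# purely to illustrate the calculation.
surveyed = [(515010.12, 3060020.45), (515100.88, 3060115.02), (515055.30, 3060070.77)]
measured = [(515010.18, 3060020.39), (515100.80, 3060115.11), (515055.36, 3060070.70)]

# Squared horizontal error for each check point
sq_errors = [(xs - xm) ** 2 + (ys - ym) ** 2
             for (xs, ys), (xm, ym) in zip(surveyed, measured)]

# Root-mean-square error over all check points
rmse = math.sqrt(sum(sq_errors) / len(sq_errors))
print(f"Horizontal RMSE: {rmse:.3f} m")
```

A number like that, against check points the solver never saw, is a much easier thing to put in front of a contractor than a visual slider comparison.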
The way the data is collected matters too. None of the jobs we processed would have been good enough to say, "Here's some really accurate data."
But it also depends how accurate they want it!
GCPs and proper flight plans (and other factors) give good absolute data.
Equally, proper flight plans (and other factors) give good relative data.
Agree. I had assumed that when I used best practices and GCPs, my data was accurate. Now I see that when identical data is processed through different engines and the orthomosaics are compared with the slider in the Mapping Arena, they're largely the same, but not exactly the same. So now I'm worried about the accuracy of my deliverables.
I also see that the delivered DEMs don't use the same color palettes. How do those different color schemes get converted into the same contours, or into equivalent inputs for CAD programs? I can see how this comparison data challenge could spawn a bunch of follow-on threads to explore these questions. Oh man! Maybe I should just enjoy playing with the Mapping Arena, not worry about "what does it all mean", accept that everything has variation, and roll with it.
I'm feeling like Columbo with "one more thing". If I wanted to explore some of the differences between deliverables in more detail, can someone point me to a good process for setting up a comparison like pierotofy has done in the Arena? I'd like to spend some time ensuring that I'm comparing identical scales with exact overlays, but programming and GitHub are not my superpowers. Any recommendations are appreciated. Thanks… bob r.
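Not a full answer, but here's the core step once two DEMs are on the same grid. The usual route is to first resample both rasters to an identical extent and cell size (gdalwarp can do this), after which differencing is simple arithmetic. A sketch with made-up values, assuming the grids are already aligned:

```python
# Two small DEM grids, assumed already resampled to the same extent and
# cell size (same rows/cols). All elevation values are invented.
dem_a = [[10.1, 10.3, 10.6],
         [10.2, 10.4, 10.8],
         [10.0, 10.5, 10.9]]
dem_b = [[10.0, 10.4, 10.5],
         [10.3, 10.4, 10.7],
         [10.1, 10.6, 11.1]]

# Cell-by-cell differences between the two surfaces
diffs = [a - b for ra, rb in zip(dem_a, dem_b) for a, b in zip(ra, rb)]

mean_diff = sum(diffs) / len(diffs)   # systematic offset (bias) between surfaces
max_abs = max(abs(d) for d in diffs)  # worst single-cell disagreement
print(f"mean: {mean_diff:+.3f} m, max |diff|: {max_abs:.3f} m")
```

A mean difference near zero with a small spread means the two tools mostly agree on relative shape; a consistent nonzero mean suggests a vertical datum or scaling offset rather than random noise. QGIS's raster calculator can do the same thing interactively if you'd rather avoid code.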
Good morning, and thanks for the exercise. Would it be possible for you to share the set of settings used to process the data in the ODM software? Thank you very much; I look forward to your reply.
Agreed. This is a pretty interesting set of comparisons. I haven’t seen anything like it outside academic photogrammetric comparisons, and certainly not with such a wide swath of drone imagery processing software.
To that end, be sure to compare data in the middle. There are different automated cropping algorithms and different kinds of bias and error along the edges of these datasets, which affects the perception of differences. For these smaller datasets, this is even more apparent.
What a fantastic idea! I love the slider-comparison!
I noticed that Sand Key is still available for Metashape; is that right? I can run it.
Are you guys planning on publishing the results?
What about AliceVision/Meshroom? Should it be included?