Didn’t work well with JPGs straight from the camera, so I looked at the raw files and they’re much better. I found a good free way to batch-convert them to JPG: Adobe DNG Converter and then Darktable.
When I zoom in on the JPGs, it’s clear that there’s way too much compression and filtering going on.
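If anyone wants to script that Adobe DNG Converter → Darktable step, here’s a minimal sketch that batch-exports with darktable-cli. It assumes darktable-cli is on your PATH and the converted DNGs sit in a hypothetical folder; the export uses whatever Darktable’s defaults give you.

```python
from pathlib import Path
import subprocess

# Hypothetical folders: DNGs from Adobe DNG Converter go in, JPGs come out.
SRC = Path("dng_converted")
DST = Path("jpg_export")
DST.mkdir(exist_ok=True)

for dng in sorted(SRC.glob("*.dng")):
    jpg = DST / (dng.stem + ".jpg")
    if jpg.exists():
        continue  # skip files already exported
    # darktable-cli <input> <output> runs Darktable's export non-interactively.
    subprocess.run(["darktable-cli", str(dng), str(jpg)], check=True)
```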
Hm. Okay, I guess I’ll hold off on buying for a bit. I need something I can use from the ground to combine with overhead shots from the drone, in order to get better detail in and around places where I don’t want to fly.
Just photograph with the drone hand-held. I suspect a task would not complete with a mix of images of different dimensions and different distortions; the camera model would be a mess.
I haven’t actually tried using the thing handheld before; that’s an interesting idea.
I have mixed pictures from the drone and from my iPhone in the same task, though, and it hasn’t really caused any issues with distortion or image dimensions. The biggest problem, really, has been GPS accuracy on the phone pictures, which gets them aligned in the wrong place. If there’s repetitive stuff in the scene, say a line of columns, and the phone thinks it’s 50 feet away from where it actually is, it can get mixed up about which columns it was looking at, that sort of thing.
How does the quality report look for the mixed cameras task? If you can put it somewhere publicly accessible, I’d like to see how the mixed cameras were handled.
Wouldn’t the camera GPS accuracy be similar to that of the drone?
It’s actually the other way around: the JPG is compressed, and the raw (generally) isn’t. Sometimes lossless compression can be applied to raw images, as it can be on my Nikon DSLR.
Unfortunately I don’t have the report from that project; hard drive space was getting a bit limited, so the only thing I kept was the point cloud.
And no, the iPhone GPS accuracy is pretty much crap. I’d say 10 m is slightly optimistic for lat/long, and incredibly optimistic for altitude. In my experience it helps to use brute-force feature matching if you want it to work, because even sequential pictures aren’t necessarily in the right order if you go by the GPS positions, never mind in the right location. If you walk a 10 m diameter circle around an object taking pictures, then pull the EXIF and plot lat/long/altitude, you’ll get what looks like a 3D egg with up to a 20 m spread in altitude.
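If you want to see that egg for yourself, here’s a rough sketch of the pull-the-EXIF-and-plot step, assuming the exifread and matplotlib packages and a hypothetical folder of phone JPGs. The tag handling is simplified; not every phone writes all of these fields.

```python
import glob
import exifread
import matplotlib.pyplot as plt

def dms_to_deg(values, ref):
    """Convert EXIF degrees/minutes/seconds rationals to signed decimal degrees."""
    d, m, s = (v.num / v.den for v in values)
    deg = d + m / 60.0 + s / 3600.0
    return -deg if ref in ("S", "W") else deg

lats, lons, alts = [], [], []
for path in sorted(glob.glob("phone_walkaround/*.jpg")):  # hypothetical folder
    with open(path, "rb") as f:
        tags = exifread.process_file(f, details=False)
    try:
        lat = dms_to_deg(tags["GPS GPSLatitude"].values, str(tags["GPS GPSLatitudeRef"]))
        lon = dms_to_deg(tags["GPS GPSLongitude"].values, str(tags["GPS GPSLongitudeRef"]))
        alt_ratio = tags["GPS GPSAltitude"].values[0]
    except KeyError:
        continue  # photo has no (or an incomplete) GPS block
    lats.append(lat)
    lons.append(lon)
    alts.append(alt_ratio.num / alt_ratio.den)

# Plot the reported positions in 3D; a clean circular walk should look like a circle, not an egg.
ax = plt.figure().add_subplot(projection="3d")
ax.scatter(lons, lats, alts)
ax.set_xlabel("longitude")
ax.set_ylabel("latitude")
ax.set_zlabel("altitude (m)")
plt.show()
```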
This should go in the interesting-oddity column. I attempted to add walk-around cell phone photos to improve a 3D rendering; the phone’s GPS was on, but not enabled for the camera app. Here is what happened.
I added them to the project, and the log shows “no gps”, but interestingly…
Amazing! I have many questions: how? What camera? Was a drone camera used? How many photos? Did you take pictures from in the hole? What about lighting? Great detail, I can see the sheet piles and steel ladders, wow.