First post, as I am just starting out with WebODM.
I have a set of images that just won’t process in WebODM, apart from a strip at the western end of the survey. The same image set processes fine in Pix4D, and in Metashape after some error removal. So what is the story here?
For context, the playing field is flat and covered in grass (I appreciate that isn’t great!), and I didn’t use any GCPs. I’ve bumped the number of matches up to 40k and run on ‘Forest’, guessing these settings might help with featureless surveys. Secondly, the mission was flown with Litchi, but I had done and checked all the calculations for spacing and overlap (70%). Finally, I used a Mini 2 (not my usual craft), flying at 60m.
I fully understand that this context is very much not in my favour, which is why I wasn’t surprised WebODM fell over. However, nothing I do seems to work, and other software does.
Does anyone have any thoughts on how I might alter some settings to get something workable? I thought I could find some points on the ortho I made and use them as GCPs in WebODM, but I’m finding the GCP interface a bit of a challenge ATM.
I did wonder if that was an issue. However, I flew a waypoints mission in Litchi (rather than interval shooting), and the drone stopped to take a picture each time. Whether it was stable enough, it is difficult to tell, but the images do not seem very blurry.
I have tried again (no GCP, and I understand there is no MT provision). I can get the first three images to match by selecting, say, the first 8. I can then get pics 6, 7, and 8 to match if I select images 3 to 12. If I do the whole flight line (20 images), I get nothing. If I do two or three lines, then I only get good matches at the western end of the survey (as I do if I run the whole image set). That just seems rather odd, but I don’t really understand all the maths behind SfM!
Secondly, I wondered whether it was simple enough to just check whether the software reads the geotags and aligns cameras OK… I am not even sure it can locate the images correctly.
I’ll look at DroneDB.app now.
PS I have already built models and orthos from two data sets, and those worked fine: one with my fixed wing (eBee), and the other with the Mini 2, stripping frames out of an mp4 video of the side of a building. So I am confident the software is OK… it is just odd that it doesn’t work in this instance (though I acknowledge I haven’t picked the best site!)
The link is exactly what I needed, and the survey looks sane, though we might be lacking overlap/sidelap for such a flat/featureless area… I’ll try to poke at it later today.
Gordon. Yes, I had noticed that. I seem to recall that Litchi doesn’t perform well on the first photo (it might have been resetting the gimbal to nadir?). I have just seen a tip to have a pre-shot taken before you start the flight line.
Sajin… I had calculated on 70% (I thought). I have just rechecked my working out. I had, but for an image format of 4000x3000, not the 4000x2250 these turned out to be. That affects the image footprint on the ground: 4263 sq m where I had assumed 5655 sq m. So yes, that would make the overlap lower than I had thought, I assume?
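The arithmetic above can be sanity-checked with the usual nadir footprint formula (footprint = altitude × sensor dimension / focal length). The Mini 2 sensor and focal-length figures below are my assumptions for illustration, not values from this thread; check your EXIF. The key point survives regardless of exact specs: the 16:9 mode keeps only 2250 of 3000 pixel rows, so the footprint area drops to 75% of the 4:3 figure.

```python
# Ground footprint of a nadir photo: footprint = altitude * sensor_dim / focal_length.
# ASSUMED camera values (illustrative, not from the thread): 1/2.3" sensor,
# ~6.17 mm wide, ~4.5 mm focal length. Verify against your own EXIF.

def footprint_m(alt_m, sensor_mm, focal_mm):
    """Ground distance covered by one sensor dimension at a given altitude."""
    return alt_m * sensor_mm / focal_mm

ALT = 60.0        # flight altitude (m)
FOCAL = 4.5       # assumed focal length (mm)
SENSOR_W = 6.17   # assumed sensor width (mm)

w = footprint_m(ALT, SENSOR_W, FOCAL)   # across the long side of the frame
h_43 = w * 3 / 4                        # full 4:3 sensor (4000x3000)
h_169 = w * 9 / 16                      # 16:9 crop (4000x2250)

area_43 = w * h_43
area_169 = w * h_169
print(f"4:3 footprint:  {w:.0f} x {h_43:.0f} m = {area_43:.0f} m^2")
print(f"16:9 footprint: {w:.0f} x {h_169:.0f} m = {area_169:.0f} m^2")
print(f"area ratio: {area_169 / area_43:.2f}")  # 0.75 -- same 25% loss as 2250/3000
```

Whatever the exact camera specs, that 0.75 ratio matches the drop you noticed (4263 vs 5655 sq m), so the overlap really was lower than planned.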
Also, am I getting my image width/height mixed up? As I usually fly fixed wing, my photos are always in the same orientation, with the shorter side parallel to the direction of travel. Here, it seems the Mini 2 flew to its first waypoint and then crabbed across the sky, so the long side of the image follows the direction of travel. I worked out my waypoints based on a site that was c.400m wide and c.150m high; is that incorrect? For photos in this orientation, is my survey area in fact 150m wide and 400m high? But, as I set the lateral and longitudinal overlap to be the same, would that make any difference?
I have just noticed that I had the Mission Setting set to ‘Custom’ heading mode. So, definitely, the longest side of the sensor was running parallel to the survey line, which is bad practice (and also knackers my calculations).
So, if I redesign with better overlap, and make sure the drone is always rotated towards the next waypoint (rotating before taking the picture at the start of each line), I should get better results!!
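The heading-mode mistake above can be put in numbers. Line spacing for a given sidelap is just the across-track footprint times (1 − sidelap), so if the long side of the frame runs along the flight line, the short side is across-track and the lines must be flown much closer together. The footprint dimensions here are illustrative placeholders, not this survey's exact numbers:

```python
# Line spacing for a given sidelap: spacing = across_track_footprint * (1 - sidelap).

def line_spacing_m(footprint_across_m, sidelap):
    """Distance between adjacent flight lines for the requested sidelap."""
    return footprint_across_m * (1.0 - sidelap)

SIDELAP = 0.70
# ASSUMED 16:9 footprint at 60 m, for illustration only:
long_side, short_side = 82.0, 46.0   # metres

# 'Custom' heading (long side along-track) -> short side across-track:
print(f"{line_spacing_m(short_side, SIDELAP):.1f} m between lines")
# Normal orientation (short side along-track) -> long side across-track:
print(f"{line_spacing_m(long_side, SIDELAP):.1f} m between lines")
```

With these placeholder numbers the wrong orientation needs lines roughly 14 m apart instead of 25 m, so a plan laid out for the normal orientation ends up well short of 70% sidelap.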
I’m sorry, I never interact with WebODM directly; I usually just try things in OpenSfM (which is used internally by ODM to compute camera positions).
Normally, there should be a folder somewhere on your machine containing config.yaml and camera_models.json files. Also, if you have the ODM log, that could help me troubleshoot what’s happening.
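If you're not sure where those files live, a quick recursive search works. The starting directory below is a placeholder (on a default WebODM install the task folders sit under WebODM's media/data directory, but that varies by setup), so point it at wherever your install keeps task data:

```python
# Find the OpenSfM project files (config.yaml / camera_models.json) that ODM
# writes for each task. Search root is a placeholder -- adjust for your install.

import os

def find_opensfm_files(root):
    """Return paths of any config.yaml / camera_models.json under root."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name in ("config.yaml", "camera_models.json"):
                hits.append(os.path.join(dirpath, name))
    return hits

for path in find_opensfm_files("."):   # replace "." with your WebODM data dir
    print(path)
```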
Looking at the log, it seems that there’s something wrong with SIFT_GPU; there are clearly not enough features. Using the feature_type: "HAHOG" option override should solve the issue until we figure out what’s wrong with SIFT_GPU.
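For anyone applying this, the override goes in the OpenSfM config.yaml mentioned above. The second key is what I believe OpenSfM calls its minimum-features setting, so treat it as an assumption and verify the name against your own config.yaml:

```yaml
# OpenSfM config.yaml override -- switch the feature extractor off SIFT_GPU
feature_type: HAHOG
# Assumed key name for the 40k minimum-features setting -- verify in your config.yaml:
feature_min_frames: 40000
```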
It does seem like a bug in the SIFT_GPU code: the min-num-features parameter is not being respected (strange), so you get a small number of features. Using --feature-type hahog until we figure this out is a good workaround, as Yan suggested.