850 images, 24 hours, still going, something wrong?

The logs seem to suggest that every image gets matched with every other image. Is that right? The images are multispectral TIFs (with EXIF GPS tags) of a crop, so I set matcher-distance to 80 metres. I thought that would mean matching only happens between photos within 80 metres of each other?

matcher-neighbours is the default, 8.

Any help is greatly appreciated.


Your understanding of it is correct.

That’s a long time for 850 images, and probably the result of setting the matcher distance to 80 metres. If you want to ensure you have enough matcher-neighbors, I would bump that number up slightly (say 12 or so) rather than increasing the matcher distance, which may match more images than you anticipate.
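
To put rough numbers on why: this is a back-of-envelope sketch, not ODM’s actual matcher code, and the figure of ~40 images inside an 80 m radius is an assumption for illustration. With k nearest neighbours each image proposes at most k partners, so the candidate pair count grows linearly with the image count; a distance radius instead pulls in however many images happen to fall inside it, which for dense crop flights can be far more than 8.

```python
# Back-of-envelope matcher workload -- a sketch, not ODM's real code.
# k-nearest-neighbour matching: each of n images proposes at most k
# partners, so candidate pairs are bounded by n * k.
# Radius matching: if the radius happens to cover m images, you get
# roughly n * m / 2 unordered pairs instead.

def neighbour_pairs_upper_bound(n, k):
    return n * k

def radius_pairs_estimate(n, m):
    return n * m // 2

print(neighbour_pairs_upper_bound(850, 8))   # 6800 candidate pairs
print(radius_pairs_estimate(850, 40))        # 17000, if 80 m covers ~40 images
```

So bumping matcher-neighbors from 8 to 12 adds a predictable ~50% more pairs, while widening the radius scales with however dense the flight happens to be.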

How many cores are you running on?


The images are about 20 metres apart, so an 80 metre max distance should only be finding between 12 and 16 other images to match with. I’m fairly sure it’s doing the same thing without matcher-distance set, so I’m not sure matcher-neighbors is working either.

I’ve got 8 cores and 10 GB RAM in a VM, but I can only successfully run with max-concurrency=2; otherwise it runs out of RAM and starts paging like crazy. That was going to be my next question: the docs say 1 GB per concurrent task should be enough, but I seem to need over 4 GB.

I’ll do some smaller tests and report back on what effect matcher-distance and matcher-neighbors are or aren’t having.

Ok, never mind, everything is working as it should with a fresh install. I’m not sure how my installation got messed up, but I’ll explain in case it helps someone else.

I had installed WebODM a fair while ago (a year, maybe), by cloning the git repo and running webodm.sh start. A couple of weeks ago I updated with a git pull and webodm.sh pull (I think that does the git pull anyway). Since then I’ve had the problems.

The job I was running yesterday eventually filled the VM’s disk space and died. I managed to stop ODM and take the docker containers down (webodm.sh down), then docker system prune cleared up a little space. docker system df told me there were 33GB of unused volumes, so I ran docker volume prune. That got all my disk space back, but it seems to have wiped everything; it was like a fresh install when I started up again.

I’ve done some tests with 75 images using matcher-distance and matcher-neighbors, both are working as expected.


Excellent! Thanks for the update.

Argh, it’s back! I tried again with 852 images, a mix of NIR and red-edge images from the same flight, and it’s the same thing: it’s matching every image. I’m trying to figure out whether it’s a setting I’m using or the combination of images I’m using.

I think I might have an image or two which don’t have exif gps tags, would that cause the matcher to do this? (I’m doing a test now with 152 images of the 852, same settings, and it’s working.)

Aha, that might indeed trigger matching all pairs. Is it doing Bag of Words matching (does it have “Words” or “BOW” in the log output)?

Yeah the image missing gps tags is the problem:

2020-08-06 11:30:26,172 WARNING: Not all images have GPS info. Disabling matching_gps_neighbors.
2020-08-06 11:30:27,565 INFO: Matching 363378 image pairs

The good log says:

2020-08-06 11:54:31,968 INFO: Matching 1536 image pairs

“BOW” is mentioned in both good and bad logs:

[INFO]    Multi-camera setup, using BOW matching
[WARNING] Using BOW matching, will use HAHOG feature type, not SIFT
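
For reference, the bad pair count is exactly what an all-pairs fallback produces: matching every unordered pair of n images means n(n−1)/2 pairs. A quick check with Python’s math.comb:

```python
import math

# All-pairs fallback cost: n images -> n*(n-1)/2 unordered pairs.
for n in (152, 852, 853):
    print(n, math.comb(n, 2))
# 152 -> 11476, 852 -> 362526, 853 -> 363378
```

363,378 is n(n−1)/2 for n = 853, so the pipeline evidently saw one more image than the 852 mentioned, but the point stands: with GPS neighbours disabled it was matching essentially every possible pair.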

Thanks for the help. I should have checked the log more thoroughly first!

The reason some of my images are missing GPS tags is a bug in the script I wrote to add GPS tags to the TIFs that the Parrot Sequoia produces. Only the RGB images contain the GPS tags, so I wrote a script to copy them into the multispectral images for the same shot. I’ll fix it up and might post it here for anyone else using a Sequoia.
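
Here’s the shape of the fix, sketched in Python. The Sequoia filename pattern (IMG_date_time_seq_BAND) and the exact band names are assumptions from my setup, so adjust the regex to match your files; the sketch only builds the exiftool commands, it doesn’t run them.

```python
# Sketch: copy GPS tags from each Sequoia RGB frame into its
# multispectral siblings via exiftool. The filename pattern below
# (IMG_<date>_<time>_<seq>_<BAND>.TIF) is an assumption -- adjust it.
import re
from pathlib import Path

BAND_RE = re.compile(r"^(IMG_\d+_\d+_\d+)_(GRE|RED|REG|NIR)\.TIF$", re.IGNORECASE)

def gps_copy_commands(folder):
    """Yield one exiftool command per band TIF; nothing is executed here."""
    folder = Path(folder)
    for tif in sorted(folder.glob("*.TIF")):
        m = BAND_RE.match(tif.name)
        if not m:
            continue
        rgb = folder / f"{m.group(1)}_RGB.JPG"
        if not rgb.exists():
            print(f"warning: no RGB source for {tif.name}")
            continue
        # -tagsFromFile copies tags from the RGB frame; -gps:all
        # restricts the copy to the GPS group only
        yield ["exiftool", "-overwrite_original",
               "-tagsFromFile", str(rgb), "-gps:all", str(tif)]
```

Feeding each yielded command to subprocess.run would do the actual copying; keeping generation and execution separate makes it easy to dry-run first.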


I posted my script to a separate general help topic, Parrot Sequoia GPS Exif tags.


This is awesome. Thank you.