Hi everyone,
I’ve been trying to track down a problem I’ve had with the Windows native ODM for a few months now: the low performance of the native version compared to the Docker version. I’m aware that I’m not the first user to post about this topic, but those threads are now closed due to age and lack of activity (I’ll include links to some of them at the bottom of this post). I was hoping that ODM 3.0 would somehow change this, but sadly I still see the same kind of issues.
Some more details:
The native version was installed using the installer that was given to people who purchased it back in 2021. I’ve really liked the native version because of the problems I had with Docker involving different drives and virtualization. I was able to work around them, but it was quite bothersome compared to the native version.
The WebODM version is 1.9.18 and the ODM version is 3.0.2, running on Windows 10 Pro 21H2 (build 19044.2604) with up-to-date drivers. The system specs are:
Ryzen 9 5900X
64 GB DDR4 RAM
RTX 3060
WebODM installed on drive E:, a 4 TB SATA SSD
My datasets were captured using:
DJI Mavic Mini, Air 2, and Air 2S, at full-resolution JPEG.
I’ve noticed the difference with datasets of all sizes, using the default preset as well as others. Hardware usage is low, especially compared to the Docker version: the CPU usually sits around 10%, RAM around 25% (with other applications in the background), and the GPU around 15% (checked in Task Manager and OCCT). SSD activity is basically 0–1%.
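In case anyone wants to reproduce the measurement the same way, this is roughly how I can log utilization over a run using Windows’ built-in performance counters (a sketch; counter paths are localized, so the names assume an English-language system):

```shell
# PowerShell: sample overall CPU and disk utilization every 5 seconds during processing.
# Ctrl+C stops the sampling. Counter names may differ on non-English Windows.
Get-Counter -Counter '\Processor(_Total)\% Processor Time', '\PhysicalDisk(_Total)\% Disk Time' -SampleInterval 5 -Continuous
```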
I noticed a possible speed-up when changing matcher-type from flann to bow, but I got inconsistent results, as noted in the docs, so this might just be a fluke.
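For clarity, the change is just the matcher-type option. On the ODM command line it would look something like this (the project path and dataset name are placeholders; in WebODM the same option is set in the task’s options panel):

```shell
# Sketch: running ODM with the bow matcher instead of the default flann.
# --matcher-type accepts flann (default), bow, or bruteforce.
python run.py --project-path C:\odm_projects --matcher-type bow my_dataset
```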
GPU acceleration could be a factor as well, but since I’ve experienced this problem for a few years, before GPU acceleration was implemented on the native version, I don’t believe this is the cause.
Something interesting I noticed is that I was able to process much bigger datasets than with Docker. I don’t have the data to prove this point, but if memory serves, I was only able to run datasets of at most 3k photos on Docker. On native, I processed a dataset of 6,385 photos with the bow matcher and otherwise default settings, with OK results after 32 hours.
Right now, as a Hail Mary, I’m running my biggest dataset, made of 20,137 photos, since I won’t be using the computer for a bit. After 36 hours it’s still on the matching step and still showing activity in the task output. This is just to satisfy my curiosity, since I wasn’t able to figure out the cause of the poor performance.
I’m just a hobbyist, so there’s always the chance that I’m missing something obvious. Since the computer being used is my main desktop, it’s far from the clean environment that pros might use for this kind of thing. But I don’t see what could be causing this issue besides an inherent problem/difference in the native version.
Thanks for reading the post; any insight will be greatly appreciated. Let me know if there’s any relevant info I might have forgotten to include here.
Some questions you might have:
Q: Why not use docker then?
A: For me, Docker is clunky; dealing with Hyper-V, WSL2, and all of that has taken quite a bit of my time and even forced me to reinstall Windows a few times. Native is almost perfect; the only problem is the performance.
Q: Could you provide examples of the difference by running the same thing on both versions?
A: I could do this if it would help you find the problem, but since a test takes a few hours or days to run, and there are already examples that others have provided, I figured it wouldn’t be necessary.
Q: What settings have you tried changing from the defaults?
A: As mentioned before, I changed matcher-type. I also tried using split to help with the bigger datasets, but there was no performance difference (I imagine it just helps with RAM problems). I tested no-gpu as well; I did notice increased CPU usage, but it was still quite slow.
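For reference, the split runs were along these lines (the group size and overlap values here are just examples, not necessarily the exact ones I used):

```shell
# Sketch: split-merge plus the matcher change mentioned above.
# --split is the rough number of images per submodel; --split-overlap is in meters.
python run.py --project-path C:\odm_projects --split 400 --split-overlap 150 --matcher-type bow my_dataset
```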
Q: Do you really expect to be able to finish the 20k dataset?
A: No, I just wanted to give it a try.
Q: Have you tested other software with the same dataset?
A: Yes, I’ve tried Metashape with success. It’s way faster, but that’s to be expected from a commercial product, and I like trying the open-source options when available.
Q: What other hardware have you tested on?
A: I also tested this on my laptop (i7-7700HQ, GTX 1060, 16 GB of RAM), and I’ve had different hardware in my desktop over the past few years. But beyond the expected differences in hardware, the difference between versions persisted.
Previous posts with similar concerns: