I would like to offer thanks in advance!
I am a new user/purchaser of WebODM from OpenDroneMap. I purchased the Windows install. I have installed it and I would like to make some setting changes in Docker. I was told that this version of WebODM installs Docker. I have seen videos that show Docker Desktop linked into Chrome. I cannot find Docker anywhere on my system, and when I run WebODM it seems to run in its own application, rather than within Chrome as I was led to believe it would.
Could someone please explain to me:
- Does Docker Desktop get installed with this application?
- Does WebODM run within the browser, or is it running natively?
- Where can I go to configure the settings I saw in Docker, such as CPU and storage limits?
I appreciate your help!
I just purchased the installer as well. My system resource utilization is very low. I’d really like to speed up the processing. Any ideas?
Windows 11 Pro
13900KS (32 threads)
64 GB of DDR5 7000
4 TB of 7,000 MB/s SSD
Thank you for any help you can offer.
The native version does not use Docker. For resource allocation it relies upon Windows’ logic.
Keep in mind that not every step of every dataset will saturate your system.
Thanks for your response!
Thank you for your reply. Yep, I understood that you moved away from Docker and that resource allocation is handled by Windows.
Do you find Windows’ logic to be good? Does it allocate with performance as a priority? Is it trying to preserve system usability while processing? Does it do a good job of making the tasks as fast as possible?
What do you see as my best way forward to increase performance? Switch to the GitHub/Docker version? Is that a good idea? Do you have any other ideas?
Thanks very much for your help!
On almost all configurations we see, yes, Windows’ logic seems to be just fine.
And no, it doesn’t seem to worry much about the machine being usable during processing. When saturated, the machine gets pretty hard to use, so not much is left on the table, so to speak.
You have one of the mixed-topology Intel CPUs that use Thread Director, which seem to be hit or miss for resource usage. You certainly can try the Docker build, but I’m not sure if it will improve things. We are still collecting feedback on these newer CPUs and how they split up work between the P and E cores (none of which we have any control over, on Windows or Linux/Docker [this is left entirely to the OS and Thread Director, as well as the CPU microcode/firmware]).
I really appreciate your reply. Thank you.
Is there any way to look at the progress of a task to be sure it has not stalled? My resource usage is so low that it seems like nothing is happening.
Did you have any thoughts on these questions: “What do you see as my best way forward to increase performance? Switch to the GitHub/Docker version? Is that a good idea? Do you have any other ideas?”
You can use Task Manager to see what is going on, or expand the Console to see what messages are being printed.
In general however, if the Task has not failed, it is safe to say it is processing.
There really isn’t much I can recommend to improve processing performance as I don’t have a good picture of what’s going on, and I’m also not sure there is actually a problem with how your machine is performing.
It is entirely possible, and even likely, that on a fast machine you simply won’t see high utilization consistently, whereas slower, weaker machines like mine saturate easily and stay that way for pretty much the entire run.
If you process something like the Brighton Beach dataset on Defaults and report back with a screenshot of the dashboard for the Task so we can see how long it took, we can get a sense of how fast you’re processing.
Oh great! I found the “console”. I am sorry for being such a noob.
Thank you for the comments and for explaining that some points in the process may not have much utilization.
Sure, will absolutely do Brighton Beach when this one is complete.
Hello again. Here is the result. What are your thoughts?
More than 3x as fast as my machine, so you’re on the right track.
Try adding the entire WebODM directory to the Windows Defender exclusion list for Real-Time Scanning. That may help a bit more, as well.
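One way to add that exclusion is from an elevated PowerShell prompt using the built-in Defender cmdlets. The path below is a placeholder, not the actual install location from this thread; substitute wherever WebODM is installed on your machine:

```shell
# Run from an elevated (Administrator) PowerShell prompt.
# "C:\WebODM" is a placeholder -- use your actual WebODM install directory.
Add-MpPreference -ExclusionPath "C:\WebODM"

# Verify the exclusion was recorded:
Get-MpPreference | Select-Object -ExpandProperty ExclusionPath
```

You can also do the same thing through Windows Security → Virus & threat protection → Exclusions if you prefer the GUI.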
Oh, thank you for that. Will do.
Still trying to figure out why Lightning nodes are so much faster.
AMD EPYC probably isn’t hurting them
Lol. Bear with me though… each core on the fastest EPYC is 40% slower than each of my 8 fast cores. And the 16 slow cores on this chip are no slouch either.
You were clear and helpful when you explained that the system will not be saturated. But that leaves me wondering what is so great about 96 cores on an EPYC.
It must be the RAM, right? Am I losing time to swapping? But the Brighton Beach test wouldn’t swap with 64 GB of RAM, would it? I will run that test on Lightning to see what it does…
Any other thoughts in the meantime?
Ah ha! So when there are just 18 images, and likely no swapping, I get this:
My system is twice as fast in that scenario. So that probably bears out the 40% slower single-thread performance on EPYC. Can I conclude that the swapping is the reason that 56 45MP images cause Lightning to be twice as fast?
(Lightning is on top)
Certainly could be! Swap is orders of magnitude slower than working in RAM.
What sensor are those images coming from? If it’s a quad-Bayer sensor, you can safely resize to 1/4 resolution (half on each axis) and lose no actual detail.