I am trying to understand the sometimes significant differences in processing time for the same project with approximately the same total number of images. For example, I have been collecting the same data set, which generally yields about the same number of images, but the time to complete processing is sometimes significantly different. The latest example is the data set I am currently processing. Yesterday I had to stop the processing after 12 hours because the progress bar never moved. I reran the process and am currently at 8.5 hours, and the progress bar still has not moved. The normal processing time is approximately 4.5 to 5 hours. Am I doing something wrong for this to happen?
Generally, I am using the default settings, with the exception of not resizing the images for any of the processes. I am also using the desktop software.
My impression is that the matching process can take longer if the images contain more detail. Additionally, repetitive patterns in the images could cause problems with the triangulation calculations.
There are many factors influencing the processing time.
Some 500-image datasets take half an hour; other datasets with 500 images can take 2-3 hours.
If you tweak the settings, you can even extend that to a couple of days.
Being short on memory can be another reason for things moving along really slowly.
If you are interested in knowing exactly what happened, you can share some more details, such as the console.txt and what hardware you are using.
Ayoub is headed in the same direction I would have advised. Feature extraction and matching are not deterministic, and they are also highly influenced by the actual data.
It finally finished, with a grand total of almost 22 hours to complete. I understand the points you are making. But how could the exact same mission, flown twice before, take only about 5 hours each time, while this one took almost 22 hours? I have attached my system setup. Where can I find the console.txt file? I will share that too. Thanks for the help.
Expand the Task details, turn on the Console, and click the download button at the bottom right of the Console screen to get the log (or find it in your all.zip export, or download it from the Task assets list).
What version of WebODM/ODM were the prior Tasks processed under? We’ve made some adjustments that vastly improve the fidelity of reconstruction, but they can make reconstruction take longer. You can try adding --pc-skip-geometric to reduce some processing at the end and to more closely emulate older behavior.
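In WebODM the flag should show up as an option when you edit the Task (search the Options list for pc-skip-geometric). If you ever run ODM directly instead, here is a minimal sketch of where the flag goes, assuming the standard opendronemap/odm Docker image and a project folder under /path/to/datasets (both are placeholders, adjust to your setup):

```bash
# Sketch only: run ODM from the command line with the flag appended.
# Assumes the opendronemap/odm Docker image and a dataset folder named
# "my_project" inside /path/to/datasets (both placeholders).
docker run -ti --rm \
  -v /path/to/datasets:/datasets \
  opendronemap/odm \
  --project-path /datasets my_project \
  --pc-skip-geometric
```

Either way it is the same underlying ODM option, so the effect on processing should be the same.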
Also, again, the subject of the images can change, thus the number of features, thus the amount of time spent in this part of processing. The more features, the longer that part takes, and it can be significant. If you fly the same mission Winter, Fall, Spring, and Summer, you’ll have vastly different subject material and features for each re-fly.
The version of software is the same for all three projects. I am using the paid version of WebODM 1.9.16.
As far as the subject of the images goes, if you saw them I think you would question the time difference too. In my opinion, there is very little change in the subject matter from January to March 22 that would warrant a 20-hour difference in processing.
The other very interesting thing is that the Lightning server processed it in an hour and a half. Same data set, and no image resizing.
I’m not sure how to follow this suggestion: “You can try adding --pc-skip-geometric to reduce some processing at the end and to more closely emulate older behavior.”
As @ayoubft pointed out, the matching process makes up the bulk of the processing time.
Quickly looking over the console.txt, the --pc-skip-geometric setting will not influence processing time much in this case, especially since you are using a CUDA-capable graphics card.
What seems a bit odd to me is that the matching times vary from a couple of seconds to more than 10 seconds per match. Are you using an HDD or an SSD for storage?
Though I still have to study what exactly happens during the matching phase.
Otherwise I have the same experience as @saijin_naib describes: even the “exact” same dataset can take several times longer. One day the light exposure is different and ODM finds many times more features, while on another day the light was such that it had a hard time computing anything at all.
I am using SSDs in my system. It just seems so incredibly odd for it to jump from 5 hours to over 22 hours! And yet the Lightning server did it in 1.5 hours. Go figure. I’m stumped.
Interestingly, the matching phase is often the part that takes the longest.
It’s a pity that the progress bar does not give any indication, other than simply not moving right at the start. Having used WebODM for a while now, I know that the matching phase comes at the beginning, and I think that is also why @ayoubft was so quick with his diagnosis.
When that happens to me, I download the console.txt once in a while during processing and run `grep -e "DEBUG: Matching " Downloads/console* -c` on it to count the matched pairs. At the beginning of the matching phase, the log states how many pairs will be matched. Just now I ran a small batch for which some 30,000 pairs need to be matched.
So I know that roughly 10,000 are done and 20,000 are still to go.
It would actually be easy to write something for that, but … time goes quickly and life passes so fast.
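If anyone wants a starting point, here is a rough sketch along those lines. It assumes the log keeps printing one “DEBUG: Matching ” line per pair (as in the grep above) and that you read the total pair count off the start of the matching phase yourself; both may differ between versions:

```bash
#!/usr/bin/env bash
# Rough matching-progress estimate from a downloaded console.txt.
# Usage: ./matching_progress.sh [path/to/console.txt] [total_pairs]
LOG="${1:-$HOME/Downloads/console.txt}"
TOTAL="${2:-30000}"   # total pairs announced at the start of the matching phase
DONE=$(grep -c -e "DEBUG: Matching " "$LOG")
echo "$DONE of $TOTAL pairs matched (~$(( 100 * DONE / TOTAL ))%)"
```

Re-downloading console.txt and re-running it now and then gives a crude progress bar for the matching phase.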
Guys, I am very stumped over the processing times for the exact same project with the same number of images. There are no significant changes to the area that would make it more difficult to stitch. I have followed the suggestions above with zero change in outcome. The last attempt took 21:45. The Lightning server did it in 1:23. The current run is up to 23:30. At what point do I presume the processing is hosed up?
These projects are run with no image resizing and the standard automatic selections.
Any help to speed the process is greatly appreciated.
So at the moment it might be better to use WebODM with Docker, or maybe even switch to Linux and the Docker installation. Under Linux it certainly still runs well.
Though I think the devs are aware of the situation now, and there will soon be a solution for the native Windows installation.
For anyone that is having issues with performance on Windows: you need to uninstall any AV software (especially anything other than Windows Defender) before you start comparing performance against Linux times. Whitelisting WebODM/ODM is not sufficient because WebODM is not a normal Windows application, it’s a wrapper around a rather large set of tools that get invoked separately and you can be sure that AV software is (annoyingly) sandboxing/scanning everything it does, destroying performance.
My recommendation is to try to run benchmarks on a clean installation of Windows, with no third-party software installed (not even the one that comes preinstalled from your computer vendor), test the numbers, then install the software you need. If processing is very slow, this should improve performance quite a lot, but don’t expect to beat Linux.
If running AV software is a must, I suggest offloading processing to a separate machine or to the cloud.