Peer-reviewed paper comparing Metashape, Pix4D, WebODM, Correlator3D

Hi all

A new paper with a comparison between Metashape, Pix4D, WebODM and Correlator3D has been published in the open access journal Drones. I haven’t read it yet but it seems interesting.

Pell T, Li JYQ, Joyce KE. Demystifying the Differences between Structure-from-Motion Software Packages for Pre-Processing Drone Data. Drones. 2022; 6(1):24.


Awesome find, thanks for sharing! On my reading list now :slight_smile:

1 Like

Just skimmed through it. Besides the obvious point that WebODM is slower due to using the CPU, I find that many of the differences come down to settings.

It would have been more interesting if they had used an RTK drone.

1 Like

Yeah, but they have a reasonable justification for using all-defaults, so it makes sense. If nothing else, it gives me more to think about for what our defaults should be.

I’m a bit more concerned that they didn’t specify exactly what releases they used for each as that can have a massive impact on each suite

I think a higher value for pc-filter, like 6, would make things look better. And matching-neighbours should be much higher; 8 is very low. An overlap of 80/70 is normal and gives you 39 possible neighbours; at 70/60 it’s 17!
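For anyone wanting to try these tweaks, a sketch of how they could be passed on the ODM command line (the flag names come from ODM’s `--help`; the specific values here are just the suggestions above, not project defaults, and the dataset path is a placeholder):

```shell
# Run ODM via Docker with a higher point-cloud filter value and more
# matching neighbours than the defaults. /my/datasets and my_project
# are placeholders for your own data.
docker run -ti --rm -v /my/datasets:/datasets opendronemap/odm \
    --project-path /datasets my_project \
    --pc-filter 6 \
    --matching-neighbors 16
```

Note that raising `--matching-neighbors` increases matching time and memory use, which circles back to the defaults trade-off discussed below.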

That’s a start.

1 Like

Yes, but I have to balance that against the wide range of hardware people use with OpenDroneMap, and we already have a ton of folks getting Out-Of-Memory with default settings 🤷 Not an easy problem to solve, but I’m looking at what we can do.

With this kind of software the user has a lot of responsibility. It’s the same with games: if it runs slowly, you have to upgrade your PC or lower the quality.

There’s also the option of letting the software check the computer specs and propose settings.


I would set the defaults to work well on a modern standard desktop: 4 cores, 16 GB of RAM, and an SSD at a minimum.

It’s interesting to see how some people out there think of hardware requirements. If it’s Metashape/Pix4D/etc. they say “I’m gonna need a powerful machine for this”, but if it’s ODM it’s more like “hmm… is this open source? So maybe I can grab that 10-year-old laptop and put it to some use…”

I reckon it is indeed a hard task to balance the default settings for everyone out there. Maybe we could have processing presets like “high-end HW” and “low-end HW”?

Or have the startup script gather some information about the machine (RAM, GPU, etc.) so a proper warning could be displayed in the Dashboard, taking the project settings and number of images into account? (Like “you don’t have enough RAM to process 9999 images”.)

1 Like

I believe the program would be able to calculate what resources are needed based on image resolution and image count.
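As a rough illustration of what such a pre-flight check could look like, here is a minimal Python sketch. The GB-per-image heuristic constant and the function names are assumptions for illustration only, not an actual ODM formula; the RAM lookup uses `os.sysconf`, which works on Linux/macOS and falls back gracefully elsewhere:

```python
import os

GB = 1024 ** 3


def estimated_ram_gb(num_images: int, megapixels: float,
                     gb_per_image_at_12mp: float = 0.1) -> float:
    """Very rough peak-RAM estimate for a reconstruction job.

    The 0.1 GB per 12 MP image constant is an illustrative assumption,
    not a measured ODM requirement.
    """
    return num_images * gb_per_image_at_12mp * (megapixels / 12.0)


def available_ram_gb() -> float:
    """Total physical RAM via POSIX sysconf; returns 0.0 if unknown."""
    try:
        return os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES") / GB
    except (ValueError, OSError, AttributeError):
        return 0.0  # non-POSIX platform; skip the check


def preflight_check(num_images: int, megapixels: float) -> str:
    """Build a Dashboard-style warning string if RAM looks insufficient."""
    need = estimated_ram_gb(num_images, megapixels)
    have = available_ram_gb()
    if have and need > have:
        return (f"Warning: ~{need:.0f} GB estimated for {num_images} images, "
                f"but only {have:.0f} GB installed.")
    return "Looks OK (or RAM unknown on this platform)."


print(preflight_check(500, 20))
```

The heuristic would of course need calibration against real runs at each quality preset before it could drive actual warnings.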

1 Like

Another problem here is that the platform runs inside a Docker container; if the test was run with default settings, that is also a limiting factor. Also, the fastest products may be using CUDA to accelerate processing; right now that is an option in WebODM.

1 Like

Or a native ODM.

1 Like

Yes. An issue with this kind of paper is when the authors don’t explicitly mention all the settings/versions they used.

1 Like

In this case they stated that they used:

"we opted to follow the suggested workflow of each package,
based on the assumptions that many users are likely to opt for default settings at least
initially and that the default settings have been selected by the manufacturer as producing
the most consistent and hopefully optimal outcomes"

As @Saijin_Naib stated, the priority for the default settings in WebODM is to enable more people to get reasonable results on their hardware. That deviates from the authors’ assumption that the default configuration will yield the optimal outcome, at least in WebODM.
This affected the coverage, resolution, and who knows what else (I’m just starting to use and learn WebODM).


Just started reading it for real. They do mention “Web Open Drone Map Version 2.6.4”

also “It is likely that the slow performance of WebODM for large datasets was due to it using the CPU for processing, while the other three packages are able to access the GPU for higher

1 Like

The thing about comparing these types of programs is:

  1. Is the user processing on a server they don’t own?
  2. A comparison of these types of programs should be split into two categories, a. self-hosted and b. non-self-hosted, to have any meaning.

The only comparison that can be made against all the other non-self-hosted software is settings and run times on equivalent hardware.

With self-hosted platforms you have to have some amount of sysadmin skills to benefit from fine-tuning the software and hardware to your specification and what you want to achieve.
For example, with WebODM: fail2ban and other security software, updating the OS and software when required, etc.

If you’re installing on a hypervisor: passing through the GPU, whether or not to use memory ballooning, etc.

Would it not be possible to write a memory manager that watches the upper limit of the memory pool?

This could then enable options for writing files to a page file, to be transferred into main working memory when it becomes free, effectively caching the next images to be processed?

Just thinking out loud…

I think it might be best to leave that to the host OS and Python. I’m not sure we should be doing direct memory management that overrides them.

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.