ODM Benchmarks Update

Hey, everybody. If you have some spare cycles we could use some more entries in our OpenDroneMap benchmarks data. Kudos to @Ichsan for contributing this week! You can see the current data organized by dataset, or by ODM version.

It’s super easy. Just download the sample data, run the dataset with the default config, and send me the number of hours + minutes it took to process. If you’re feeling fancy, try a few different configs with the same dataset (default, 3D model, High Res, etc.). You can update the benchmarks data directly if GitHub pull requests are your thing, but really, sending your results to me is fine.
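If you’re running from the command line, a run can be timed with something like the sketch below. The Docker invocation is commented out and only illustrative (the dataset folder name and mount layout are assumptions, not the exact paths from the benchmarks README); the second part just converts elapsed seconds into the “hours + minutes” format the benchmarks use.

```shell
# Time a default-config ODM run via Docker (sketch only; the folder name
# "aukerman" and the mount layout are assumptions for illustration):
# time docker run -ti --rm -v "$PWD/aukerman:/datasets/code" \
#     opendronemap/odm --project-path /datasets

# The benchmarks want hours + minutes, so convert elapsed seconds:
elapsed=21463   # example elapsed time in seconds (5h 57m 43s)
printf '%dh %02dm\n' $((elapsed / 3600)) $((elapsed % 3600 / 60))
```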

I’ll need to know some info about your computer setup too, for the benchmark data. Those questions are in the benchmarks README, or I can just ask you when you send me your results.

Datasets for this are all linked from the benchmarks repo. If you can run a couple I’d be grateful. These are my favorites:

  • Aukerman
  • Banana
  • Bellus
  • Lewis
  • Shitan
  • Toledo
  • Tuniu 1
  • Zoo
  • Quarry

Thanks!

Pondering a bit on our sample data… Is it worth discussing a relocation of some or all of the ODMData repo over to DroneDB?

:thinking: makes sense to me.

Is there an easy way to download all the files in a dataset on github? Loading each individual image and saving is not really feasible.

I tried tuniu_tw_1 on Google Drive, which has zipped files of all the images, but initially got this:

> This page isn’t working
>
> doc-0o-4c-docs.googleusercontent.com redirected you too many times.
>
> • [Try clearing your cookies]
>
> ERR_TOO_MANY_REDIRECTS

Fortunately, clearing my cookies allowed the download to start.

Yes! The download as Zip button just above the source code on the right side:

Ahh, it’s in a drop down menu after clicking on ‘Code’ in Chrome.

Thanks! :slight_smile:
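For scripted downloads, GitHub also serves that same zip at a predictable archive URL, so curl or a shallow clone works too. A sketch (the repo and branch names below are examples, so substitute the actual dataset repo you want):

```shell
# Build the archive URL for a dataset repo (names are placeholders):
REPO="OpenDroneMap/odm_data_aukerman"
BRANCH="master"
URL="https://github.com/${REPO}/archive/refs/heads/${BRANCH}.zip"
echo "$URL"

# curl -L -o "${REPO##*/}.zip" "$URL"                    # uncomment to download
# git clone --depth 1 "https://github.com/${REPO}.git"   # or a shallow clone
```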

I’ll have a go at a couple of them after my current 1320-image job at Ultra quality finishes, which might be a while, as it has only just started incremental reconstruction after 48 hours.

Another neat trick is to import them when using the Cloud Import Plugin. When you go to add images to a task, click it, choose GitHub, and then paste the GitHub link:

We also have the very new & shiny DroneDB plugin:

In the “by ODM version” data there are values for version 2.7.2, but the latest tag in the ODM repo is 2.7.1.

How can this be? Is it a typo?

Interesting… those 2.7.2 additions are mine, and I was going by the self-described version number within the processing node info screen. I just updated yesterday via “./webodm.sh update” so maybe the tag has yet to be applied.
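For anyone curious how to check what a node self-reports, here’s a sketch. It assumes a NodeODM-style node on the default port; the JSON string below is a placeholder standing in for a live response, and the sed pattern just pulls out the `version` field.

```shell
# Ask the processing node what version it thinks it is (sketch;
# assumes a NodeODM-style node on the default port):
# curl -s http://localhost:3000/info

# Placeholder response standing in for a live one; extract "version":
info='{"version":"2.7.2","taskQueueCount":0}'
version=$(printf '%s' "$info" | sed -n 's/.*"version":"\([^"]*\)".*/\1/p')
echo "$version"
```

This reported version can be ahead of the latest tagged release on GitHub, which would explain the mismatch.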

I also had problems downloading the biggest dataset, Ziegeleipark, and ended up using JDownloader 2. I’m currently doing a benchmark run on this dataset and will provide the results if (when) it’s done.

Sometimes we will have a testing version deployed that hasn’t been marked stable and released within WebODM for Windows native and/or tagged/compiled on GitHub for general consumption.

I hope this is OK to post here.

**Benchmark 1**

| Field | Description | Value |
| --- | --- | --- |
| ID | Benchmark number | 1 |
| DATASET | Dataset name | Aukerman |
| PROCESSING_TIME | Time taken to process the dataset | 36m 15s |
| PROCESSING_SUCCESS | Confirm if the process was successful | Y |
| ERROR_TYPE | Mention error, if any occurred | `INFO: Altitude is negative (-119.3956715829223): viewing directions are probably divergent. Using default altitude of 1.0`<br>`08:29:32,477 INFO: Altitude for orientation based matching 1.0` |
| RAM_SIZE | Size of RAM allocated | 96 GB |
| RAM_CLOCK_SPEED | RAM frequency of the machine | 2933 MT/s |
| CPU_TYPE | Make and model of the CPU | Intel i7 10700K |
| CPU_CLOCK_SPEED | Clock speed of the CPU | 3.8 GHz |
| CPU_NUM_CORES | Number of cores of the CPU | 8 |
| STORAGE_TYPE | Storage type of the system | NVMe SSD |
| OS | Operating system of the machine | Windows 10 Pro |
| VM_TYPE | Virtual machine ODM is running on | Native Windows |
| ODM_VERSION | Version of ODM used to benchmark | 2.7.1 |
| ODM_CLUSTER | Declaration of usage of ClusterODM | N |
| CONFIG_NAME | Configuration in which the dataset was processed | High Resolution |
| CONFIG_RESIZE | Image size, if images were resized | - |
| CONFIG_OTHER | Other configuration options worth mentioning | auto-boundary: true, dem-resolution: 2.0, dsm: true, feature-quality: ultra, orthophoto-resolution: 2.0, pc-quality: high |
| TEST_DATE | Date of test | 2022-01-30 |
| TEST_BY | Information on the individual who ran the test | Gordon Garradd |
| INCLUDE_IN_SUMMARY | Check if data has been included in summary | Y |
| NOTES | Additional notes | - |

**Benchmark 2**

| Field | Description | Value |
| --- | --- | --- |
| ID | Benchmark number | 2 |
| DATASET | Dataset name | Tuniu River 1 |
| PROCESSING_TIME | Time taken to process the dataset | 5h 57m 43s |
| PROCESSING_SUCCESS | Confirm if the process was successful | Y |
| ERROR_TYPE | Mention error, if any occurred | - |
| RAM_SIZE | Size of RAM allocated | 96 GB |
| RAM_CLOCK_SPEED | RAM frequency of the machine | 2933 MT/s |
| CPU_TYPE | Make and model of the CPU | Intel i7 10700K |
| CPU_CLOCK_SPEED | Clock speed of the CPU | 3.8 GHz |
| CPU_NUM_CORES | Number of cores of the CPU | 8 |
| STORAGE_TYPE | Storage type of the system | NVMe SSD |
| OS | Operating system of the machine | Windows 10 Pro |
| VM_TYPE | Virtual machine ODM is running on | Native Windows |
| ODM_VERSION | Version of ODM used to benchmark | 2.7.1 |
| ODM_CLUSTER | Declaration of usage of ClusterODM | N |
| CONFIG_NAME | Configuration in which the dataset was processed | 3D |
| CONFIG_RESIZE | Image size, if images were resized | - |
| CONFIG_OTHER | Other configuration options worth mentioning | auto-boundary: true, mesh-octree-depth: 12, mesh-size: 300000, pc-geometric: true, pc-quality: high, use-3dmesh: true |
| TEST_DATE | Date of test | 2022-01-30 |
| TEST_BY | Information on the individual who ran the test | Gordon Garradd |
| INCLUDE_IN_SUMMARY | Check if data has been included in summary | Y |
| NOTES | Additional notes | GPU feature extraction |

Are you still interested in Benchmark info for tasks that have produced messy results?

Bellus Rd camera distribution appears to be all over the place, and the PC is broken, half of the ground appears to be at an angle of about 15 degrees to the other part.

1h 03m 14s
Options: dem-resolution: 2.0, dsm: true, orthophoto-resolution: 2.0, pc-quality: high, pc-rectify: true

EDIT (since I can’t post more than 3 times in a row):

Build 47 gave an even worse result than build 51 above.
56m 37s
Options: dem-resolution: 2.0, dsm: true, orthophoto-resolution: 2.0, pc-quality: high, pc-rectify: true

The 3D model is a complete mess, with nothing recognisable.

Just did a pull request for data from 3 runs of the Aukerman dataset on my Acer Aspire 5 laptop. It always seemed fast, but I’m blown away by the results for an $1800 (NZ dollars) laptop. I suspect it’s the fast DDR4 RAM that makes the difference: 5, 9 and 30 minutes for the Fast Ortho, Default and High Res runs respectively.

Thanks Gordon, Ralley, Daniel, and Bernarde. I will incorporate your updates. A couple of people also submitted pull requests directly, and I will merge those. Really appreciate everyone taking a few cycles and submitting their results here. This data is super helpful for the dev team and the community.

Gordon, re: messy datasets. Yes that’s helpful also. That sort of thing may point to issues in the code or the dataset itself. I’m going to try to reproduce on my end and see if I can tell what’s going on.

Gordon, could you post or send me the benchmark data for the run you did on build 51? I get a very similar result on my end, and I think that’s just because the original flight plan coverage wasn’t great. My older benchmarks on Bellus have terrible 2D and 3D outputs too. I think it’s just a difficult dataset. We really ought to consider taking it out of the recommended datasets, but for now the timing data is still useful.

I think this would be a very good discussion (and it’s been on my mind for a while…).

I think the missing piece in DroneDB at the moment (among many other features) is the ability to have organization accounts. The system is designed to support it, but all the parts (permissions, owners, collaborators, etc.) haven’t been rolled out yet.

I’ve started tracking this in Support organization accounts · Issue #242 · DroneDB/Registry · GitHub

Finally diving in with the moderately monstrous machine affectionately known as the SpaceHeater in my house…

I deleted all the associated files last week to save disk space, so I’ve just run it again, with similar settings. The report, task_output and log.json are here -
https://drive.google.com/drive/folders/1C9BenPJEufyZwd_wyyuTmr3EShuZ0NUF?usp=sharing

EDIT - additional
I’ve just tried to run it again with these settings -
Options: dem-resolution: 2, feature-quality: ultra, matcher-neighbors: 12, mesh-octree-depth: 12, mesh-size: 300000, orthophoto-resolution: 2, pc-quality: high, pc-rectify: true, texturing-data-term: area, use-3dmesh: true, rerun-from: dataset

and only 6 seconds in it apparently ran out of memory, despite using only 15 of 96 GB of RAM (238 GB incl. virtual):

```
2022-02-16 14:52:26,577 INFO: Reading data for image IMG_1391_RGB.jpg (queue-size=1
2022-02-16 14:52:26,580 INFO: Reading data for image IMG_1298_RGB.jpg (queue-size=1
2022-02-16 14:52:26,580 INFO: Extracting ROOT_SIFT_GPU features for image IMG_1390_RGB.jpg
2022-02-16 14:52:26,581 INFO: Extracting ROOT_SIFT_GPU features for image IMG_1297_RGB.jpg
d:\a\odm\odm\superbuild\build\pypopsift_deps\popsift-src\src\popsift\sift_octave.cu:208
Could not allocate Blur level array: out of memory
```