Processing time over 22 hrs now - I would like some clarification

Hi everyone,
I purchased the assisted installation for WebODM to run natively on an Alienware laptop: Intel i9, 32 GB RAM, 1 TB of storage, an RTX 3080 graphics card, and Windows 11. The installation went well and I uploaded 198 images taken from a Matrice 300 with a Zenmuse P1 camera. The P1 captures 45 MP images.
Default settings were used for my first processing/stitching attempt. Now that I am over 22 hours into processing, I'm not sure if I am on the right track.
I can see that WebODM is using on average 7,000 to 8,500 MB of RAM while processing. In the WebODM dashboard I used the Tools tab and can see that the data directory is being updated with new information.

If anyone can let me know whether to just wait it out or to stop the processing and adjust presets, I would really appreciate it. Thanks in advance.


Welcome! You’re likely waiting on virtual memory or pagefile access at that size of imagery and RAM.

Increasing RAM or lowering image resolution during Task creation should greatly help speed things along.
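To put rough numbers on how much a resize helps, here is a quick sketch in Python. The 8192 x 5460 px sensor dimensions are an assumption based on the P1's published 45 MP resolution:

```python
# Rough illustration: how much resizing the long edge to 2048 px shrinks
# the per-image data volume (assumed P1 dimensions: 8192 x 5460 px).
full_w, full_h = 8192, 5460
target_long_edge = 2048

scale = target_long_edge / max(full_w, full_h)
full_mp = full_w * full_h / 1e6
resized_mp = (full_w * scale) * (full_h * scale) / 1e6

print(f"{full_mp:.1f} MP -> {resized_mp:.1f} MP "
      f"(about {resized_mp / full_mp:.0%} of the original pixels)")
```

Feature extraction and matching costs scale with pixel count, which is why memory usage falls so sharply after a resize like this.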

Thank you for your prompt response. I do see that the task output was truncated to 500 lines. Processing continues; however, I see differing results in the task output: success: True vs. success: False.
My goal is to use this dataset, the settings used, and the total processing time as a baseline for my expectations going forward.
I will attempt to process the same dataset under default settings, but with the images resized to 2048 px.
Would you mind recommending an image size to process at in order to maintain optimal quality from the 45 MP camera?


Update: I cancelled the last attempt to process the images at full resolution. That was most likely not the best use case; my hardware was not maxed out during processing, but based on the lines completed in the task output file I can safely assume it would have taken an additional two days to finish this project.

I uploaded the same dataset of 198 images to a second test project, resizing the images to 2048 px and leaving all other settings at their defaults as a baseline for reference. Memory (RAM) usage dropped from 7,500 MB down to 1,500-2,500 MB for this second attempt.
After only 20 minutes I am already seeing the progress bar move along smoothly. I hope my trials help others in the community while I explore the platform.

Happy New Year, Saijin! Thank you for keeping up with our questions and concerns.


Update: Processing failed due to strange values found in the reconstruction.

I ran this dataset through a Pix4D trial and the results came back. I'm not sure what to try next from here. I'll go ahead and delete this project, check for any updates, and restart my computer before attempting again.


From prior community posts, I'm under the impression that I need to upload a specific .json file for the Zenmuse P1 camera I used during data capture. I do appreciate that WebODM was updated to accommodate the smart oblique capture feature that is integrated into the P1. So far I'm unable to find a .json file to upload into the custom settings prior to processing. Help with this would be appreciated.


Hi again, and thanks for following along with this post. Here's another update after adjusting some custom settings and re-processing the same dataset. I am still unsure what a .json file would do for me in regard to the Zenmuse P1 payload I am collecting data with, given that smart oblique capture has already been accounted for in the latest WebODM update.
Here are the results from my latest attempt:

auto-boundary: true, feature-quality: ultra, force-gps: true, gps-accuracy: 5, matcher-type: bruteforce, max-concurrency: 22, mesh-octree-depth: 12, mesh-size: 300000, orthophoto-cutline: true, orthophoto-resolution: 3, pc-filter: 1.5, pc-quality: ultra, sfm-algorithm: triangulation, use-3dmesh: true, rerun-from: odm_postprocess
Average GSD: 1.77 cm
Area: 17,114.51 m²
Reconstructed Points: 23,153,221


Has your processor got 11 or more cores? Max concurrency should be equal to, or less than (to reduce memory requirements), your number of logical processors.
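For anyone unsure of their logical processor count, a quick way to check from Python (just a convenience sketch; Task Manager's Performance tab shows the same number on Windows):

```python
import os

# os.cpu_count() reports the number of logical processors visible to the OS
# (physical cores x threads per core, e.g. 2x with Hyper-Threading).
logical = os.cpu_count()

# max-concurrency should be <= this value; setting it lower trades
# some speed for reduced memory pressure.
print(f"logical processors: {logical}; use max-concurrency <= {logical}")
```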

I’m currently over 3 days into running this dataset, and still on matching:

[Κιλκίς - Σταθμός Μουριών - 17/11/2022]

from here: Multiple ground "layers" in reconstruction

171 images, 73:57:13 elapsed

|Created on:|30/12/2022, 11:15:52|
|Options:|auto-boundary: true, crop: 5, dem-resolution: 2.0, dtm: true, feature-quality: ultra, gps-accuracy: 6, min-num-features: 20000, orthophoto-resolution: 1.0, pc-quality: ultra, pc-rectify: true, use-3dmesh: true|

Using an i7 3.8 GHz, 96 GB RAM + 293 GB virtual memory on NVMe (getting plenty of use)

I expect matching to finish within 6 hours, and maybe I'll be halfway through by then!

You need a LOT of computing power to process full-size 45 MP images; with 32 GB RAM, you'll be stuck working in virtual memory much of the time.


You can tune the image resolution to the Ground Sample Distance (GSD) you need; nothing more is really necessary. So if you collected at 1 cm/px and you only need 5 cm/px, resize accordingly and save yourself resource consumption and time!
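That rule of thumb is easy to turn into arithmetic: GSD scales inversely with image width, so the width needed for a target GSD is the original width scaled by the ratio of the two GSDs. A sketch (the function name is mine, not a WebODM setting):

```python
def resize_width_for_gsd(original_width_px: int,
                         collected_gsd_cm: float,
                         target_gsd_cm: float) -> int:
    """Image width (px) needed so the resampled GSD hits the target."""
    return round(original_width_px * collected_gsd_cm / target_gsd_cm)

# Example with the numbers above: imagery 8192 px wide collected at
# 1 cm/px, but only 5 cm/px needed for the deliverable.
print(resize_width_for_gsd(8192, 1.0, 5.0))  # -> 1638
```

Anything beyond that width is resolution the deliverable can't show, so it only costs processing time and memory.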

As for the failure to reconstruct, it usually indicates lower overlap and sidelap than optimal for the given scenery, and/or poor image quality (blur, over- or under-exposure, etc.). You can try bumping feature-quality and min-num-features up to see if that mitigates the issue.

You shouldn’t need the lens calibration file, we should be able to work out the calibration coefficients without issue.


Hey Saijin and Gordon,
Another update for the same 198-image dataset. I resized the images to 2730 px, as Saijin suggested in another forum post for a P1 camera payload. The settings I used are below; processing time was only 35 minutes and the GSD was 1.3 cm. As you can tell, only 210,357 points were reconstructed versus the over 23 million from the previous settings, since I requested a fast orthophoto. This affected the 3D model output significantly; however, the quality of the ortho is better after increasing the resize from 2048 px to 2730 px. Next run I'll process a 3D model with roughly the same parameters and continue to increase the pixel size to understand what my computer can handle. Ultimately I'm trying to determine the maximum I can run natively, and when I should look into WebODM Lightning for datasets processed for clients.

|Options:|auto-boundary: true, dem-resolution: 2, fast-orthophoto: true, feature-quality: ultra, mesh-size: 300000, orthophoto-resolution: 1.5, pc-quality: high|
|Average GSD:|1.3 cm|
|Area:|17,113.3 m²|
|Reconstructed Points:|210,357|


Agree, you shouldn't need the cameras.json calibration file; the included P1 profile has been working for me in general. I was having problems reconstructing with the reverse angles (and effectively upside-down photos) created by smart oblique missions in ODM 2.9, but it sounds like I should give it another try with 3.0.


Hey SummitBri, definitely give it another go and use the smart oblique data you capture. I was able to do another run in right around 3 hours of total processing time and got the results I was looking for. I'm really happy with how this turned out; I'll share the presets I used below. For me, being able to go out and fly for an hour, come back and process for 3 hours, and have a deliverable for a client that night is really what I was aiming for. Next I'll be exploring QGIS for when I want to hand over a professional deliverable for volumetric surveys and topography maps at mining sites.

I'm so glad I didn't fork over $3,500 for another software package or a $350 monthly subscription. Maybe that will make sense in the future, but for now this is exactly what I needed to get started and understand the fundamentals of how this is really supposed to work. On my next data collection I may give WebODM Lightning a shot and see how it can increase my capabilities for clients. It's been a great learning experience with WebODM so far, and I'm really glad to see Saijin_Naib still around to support us.

auto-boundary: true, camera-lens: brown, dem-resolution: 2, dsm: true, feature-quality: ultra, mesh-size: 300000, orthophoto-resolution: 2, pc-quality: ultra
Average GSD: 1.3 cm
Area: 17,112.23 m²
Reconstructed Points: 166,762,656

Note that these options gave me close to 50,000 fewer reconstructed points than a previous run noted in the comments above. However, my average GSD dropped from 1.7 cm to 1.3 cm (thumbs up!).


Awesome, ODM + QGIS are great together. The only area where I don't yet have complete confidence is the QGIS deliverable transformations from WGS84 (ITRF2014/2020) to NAD83 (2010) / State Plane. A different topic…


Hi guys!

Interesting thread. Have you processed any larger datasets from the P1?
I'm having trouble processing even a few hundred, as I usually get this error message:

HTTPSConnectionPool(host='', port=443): Read timed out. (read timeout=5)

Saijin_Naib, what settings would you recommend for larger (>300 images) datasets from the P1?
I think I have plenty of computing power, with 128 GB RAM and a 16-core AMD Ryzen 9 5950X.
One of the issues seems to be that almost no resources are used during the first stages related to camera calibration.


It looks like you’re sending that Task to WebODM Lightning, so your network connection is going to be the most important link and you shouldn’t see much resource utilization locally at all.

I think I’ve processed up to around 1200 P1 images on a single node. I haven’t tried more than that yet.


What are your typical settings? I can't process any more than a few hundred images without it taking almost a day, and I have plenty of processing power.

Hi, I have a project I mapped with a PH4 RTK, 670 pictures. When I try to process them all in one Task it will not complete.
I started the project again using 60-70 images at a time, and they complete fast and fine. Once all the images are done, can the results be viewed as one workable image with all the tools? It is a 2D project, and I particularly need contours showing. Cheers


We don't support compositing Tasks like that; you'd need to have all the images in one Task and use the --split parameter to sub-divide it.
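For reference, a hypothetical split-merge invocation might look like the following (paths and submodel sizes are made up; in WebODM you would instead enter split and split-overlap as Task options):

```shell
# Sketch of ODM's split-merge workflow via the Docker image.
# --split is the average number of images per submodel;
# --split-overlap is the overlap radius between submodels, in meters.
docker run -ti --rm -v /my/project:/datasets/code \
  opendronemap/odm \
  --project-path /datasets \
  --split 200 \
  --split-overlap 150
```

Each submodel is reconstructed separately and then merged, so all 670 images stay in a single Task and produce one orthophoto (and one contour-ready DTM) at the end.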