Fine-tuning DEMs

Now then. Having got WebODM running natively in Windows (and loving it!), the next stage, perhaps inevitably, is trying to fine-tune some of the settings to get the most out of it - and herein lies my problem.
When I processed a data set with WebODM, the buildings came out a bit crinkly round the edges compared to the same data set processed in MapsMadeEasy (see images attached).

Despite reading several threads on similar issues and referring to the most excellent “Missing Guide”, it seems that any tweak I make results in WebODM tripping up and giving an error message that the data set cannot be processed.

Is there anybody who can guide me through the tweaking process to optimize the settings for me, please? And also, why is WebODM tripping up? Is it my set up or am I just pushing it too hard?

Data set is c.550 images from Phantom 4 Pro. Using Intel i9 - 10900K with 128GB RAM on Windows 10 Pro.

Any pointers would be gratefully appreciated.

MapsMadeEasy

WebODM

Andy,

Looks great so far to me!

Could you please post your processing parameters for the above image so we can get an idea of what you’re trying already?

I don’t think it is your setup, but admittedly, you can make almost anything run out of memory if you go nuts with the parameters. Ask Stephen what I did to his server :rofl:

I do know that some members here have experimented with manually cleaning the tiepoints and point cloud outside of the processing pipeline in something like CloudCompare/MeshLab/Blender, and then passing that cleaned cloud back in for the rest of the pipeline.

The results were frankly astounding.

Process/results:

Workflow overview:

If you look at the top image, the roof lines are crisp and the trees show sharp delineation of the foliage. That is from Mapsmadeeasy.

Compare that with the roof lines and vegetation in the bottom image, which is from WebODM, and you can see the lines are not so crisp. That is using WebODM’s default settings.

The parameters I have tweaked were: depthmap-resolution 1200, dem-resolution 2, feature-quality high, and ignore-gsd - too much, do you think?
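For reference, those tweaks would correspond to the following options if the same dataset were run through ODM on the command line (a sketch; the docker volume path and project name are placeholders):

```shell
# Hypothetical ODM run mirroring the tweaked WebODM settings above
docker run -ti --rm -v /my/datasets:/datasets opendronemap/odm \
  --project-path /datasets myproject \
  --depthmap-resolution 1200 \
  --dem-resolution 2 \
  --feature-quality high \
  --ignore-gsd
```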

I would drop the ignore-gsd option for now, and maybe try all defaults except pc-quality ultra and use-3dmesh to start.

See what you get and post it here, if you could.
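In CLI terms, that suggestion would look something like this (a sketch with placeholder paths; every other option left at its default):

```shell
# All defaults except point cloud quality and using the 3D mesh for the DEM
docker run -ti --rm -v /my/datasets:/datasets opendronemap/odm \
  --project-path /datasets myproject \
  --pc-quality ultra \
  --use-3dmesh
```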

Thank you - I will try that and see where it goes!

Did you resize your images? If so, that might affect results. Instead of using depthmap-resolution, use the newer pc-quality (set it to ultra).

Yes - I allowed the images to resize.

That would explain it :slight_smile: Skip the image resize and you’ll probably notice big improvements.

Ok. I’ll try that too. Just trying Saijin_Naib’s suggestion at the moment. If I forego image resizing, should I still implement pc-quality ultra, or might that fry my computer’s brain?

Only one way to find out… :slight_smile: :boom:

Tweak 1. 554 images processed in 1:53 (images resized). HUGE improvement - thank you!

tweak1

Now going to rerun that sequence, but this time keep images at their original size - wish me luck!

That should help a ton, as Piero hinted at.

Keep us posted!

So Windows decided to do an update overnight, and I had to restart the process this morning. Boy, this does seem to be stressing the computer - 8 hours in, and the computer fan is blowing hard, with 100% CPU usage - is this normal/expected behaviour?

image

If you didn’t reduce the max-concurrency and/or reduce the Process Priority, then yeah, at certain parts of the processing pipeline it will absolutely saturate your CPU and get it cookin’.

Maths are hard :sunglasses:
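For illustration, capping concurrency on the command line might look like this (hypothetical value and paths; the i9-10900K has 10 cores / 20 threads, so 10 leaves some headroom):

```shell
# Limit the number of parallel processes ODM may spawn
docker run -ti --rm -v /my/datasets:/datasets opendronemap/odm \
  --project-path /datasets myproject \
  --max-concurrency 10
```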

Eeeek! Is there a way that can be done in WebODM? Frantically thumbing through the Bible now! (the Missing Guide book)

It is in your task options:
image

However, looking at your screenshot of Task Manager, I don’t think you need to be worried.

It is running above its base clock, so thermals aren’t a concern for the CPU.

You are talking a different language to me, but your words sound suitably reassuring! :slightly_smiling_face:

Most newer processors have a nominal speed (what’s marked on the box), and a variable speed envelope where they can run lower and/or higher depending upon system utilization/load, thermals (temperature), etc.

So, if your processor is running higher than its nominal, that means that it isn’t hot enough that the processor needs to slow itself down to be “safe”. So, it has plenty of cooling/airflow to run hard without worry.

For caution’s sake, you can keep an eye on the thermals with something like LibreHardwareMonitor, but again, if it isn’t thermal throttling (running consistently slower than nominal at full utilization), I’d just let it keep cooking along.
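If you want a quick sanity check without installing anything, PowerShell can report the rated (nominal) clock alongside the current one (values in MHz; a sketch, and note WMI readings can lag behind the real-time clock, so a dedicated monitor like LibreHardwareMonitor is more accurate):

```shell
# Compare the current clock speed to the rated (nominal) speed on Windows
powershell -Command "Get-CimInstance Win32_Processor | Select-Object Name, MaxClockSpeed, CurrentClockSpeed"
```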

It’s there. image

Thank you - for future reference, what should I set it to to prevent my computer causing a Chernobyl-style melt-down? :thinking: :rofl:
