Optimal Docker CPU setting on an Intel CPU

I’m running ODM on a laptop with an 8-core/16-thread CPU, and execution time is shorter when I limit the CPUs allocated to Docker to 6 rather than 14.
This may be due to the frequency-scaling behaviour of Intel CPUs (SpeedStep/Turbo Boost), which lowers the clock speed when many threads are running.
If possible, I would like to use as many of the available CPU cores as possible to finish processing faster. Is there a good way to set this up?
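For reference, the CPU limit can also be set per container with Docker's `--cpus` flag instead of (or in addition to) the Docker Desktop resource slider. A sketch, assuming the standard `opendronemap/odm` image and quickstart-style arguments; the paths are placeholders, so double-check against the ODM version you actually run:

```shell
# Cap this container at 6 CPUs, regardless of the Docker Desktop default.
# /path/to/project is a placeholder -- point it at your own dataset folder.
docker run -ti --rm \
  --cpus="6" \
  -v /path/to/project:/datasets \
  opendronemap/odm \
  --project-path /datasets
```

`--cpus` is a standard Docker resource constraint, so it works the same on macOS, Linux, and Windows hosts.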

Test results: MacBook Pro 15-inch 2019, i9-9980HK
(Memory allocated to Docker: 14 GB)

| CPU threads | Execution time |
| --- | --- |
| 2 | 299 s |
| 3 | 243 s |
| 6 | 227 s |
| 8 | 235 s |
| 10 | 241 s |
| 14 | 278 s |

Do you have access to a laptop cooling dock?

It should not be throttling that severely unless it is getting too warm.

You have 8 physical cores; the other 8 threads are “just” hyperthreads. This means that each physical core can switch between two pipelines. This is useful in cases where a pipeline gets blocked by, e.g., long-lasting load operations. Usually the core has to wait for the data to arrive, but with hyperthreads it can work on the other pipeline, where the data has hopefully already arrived and work is therefore pending. Hyperthreading is not real parallel computing.
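You can see this effect yourself with a generic CPU-bound workload (this is not ODM, just a minimal sketch of the same scaling behaviour): time a fixed amount of work split across an increasing number of worker processes, and watch the speedup flatten out once you pass the physical core count.

```python
import multiprocessing as mp
import os
import time

def burn(n):
    # CPU-bound busy work: sum of squares 0..n-1.
    total = 0
    for i in range(n):
        total += i * i
    return total

def time_workers(workers, chunks=16, n=200_000):
    # Split a fixed total workload across `workers` processes and time it.
    start = time.perf_counter()
    with mp.Pool(workers) as pool:
        results = pool.map(burn, [n] * chunks)
    return time.perf_counter() - start, sum(results)

if __name__ == "__main__":
    logical = os.cpu_count()  # logical CPUs, i.e. including hyperthreads
    print(f"logical CPUs: {logical}")
    for w in (1, 2, 4, logical):
        elapsed, _ = time_workers(w)
        print(f"{w:2d} workers: {elapsed:.2f} s")
```

On a machine like yours, the jump from 1 to 8 workers should be close to linear, while going from 8 to 16 buys little or nothing, for exactly the reasons above.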

Several parts of ODM use highly optimized code. This means, the code is optimized to

  • store the data in a meaningful way so that fetch and load operations are minimized
  • keep the processing unit under load as long as possible
  • use architecture-specific instructions like vector registers, AVX, SSE, …

This together results in workloads that tend to keep your compute cores busy as long as possible. And thus, there is no real benefit from hyperthreading. On the contrary, using more threads than there are actual compute cores introduces communication overhead as well as overhead from switching pipelines. Having that in mind, the theoretical optimum for your setup with 8 cores is 8 threads.

According to your measurements (btw, kudos for actually measuring to find the sweet spot), 6 threads are faster than 8 threads. This can have several reasons. I expect the most prominent to be that your operating system needs time to do its own work. This causes the same problem a hyperthread would: one core can’t work solely on one highly optimized task, but has to switch between tasks. Another reason can be that highly optimized code usually generates quite some heat, as it uses more parts of the processor at the same time. There are processors that throttle their frequency when they encounter vector instructions (e.g. AVX on Intel). Others, like yours, increase their speed as long as they aren’t too hot. So keeping one or two cores out of the heavy computation can be beneficial.

Given your specs and your operating system, I expect the sweet spot to be around 7 cores used for ODM.


Thank you very much for your detailed explanation of hyperthreading.
I found a way to disable hyperthreading on my machine, a MacBook Pro 15 (2019, i9-9980HK), and benchmarked it using the following dataset.

The only calculation option I specified was “radiometric-calibration camera”.

| No. of CPUs | HT on | HT off |
| --- | --- | --- |
| 1 | 1249 s | 1246 s |
| 2 | 773 s | 767 s |
| 3 | 623 s | 624 s |
| 4 | 561 s | 561 s |
| 5 | 527 s | 530 s |
| 6 | 514 s | 513 s |
| 7 | 516 s | 514 s |
| 8 | 527 s | 518 s |
| 9 | 537 s | – |
| 10 | 547 s | – |
| 12 | 572 s | – |
| 14 | 617 s | – |
| 16 | 642 s | – |

(Docker 3.4.0 (65384), Memory 14 GB, Swap 1GB, Disk image size 120GB)

On my machine, the best setting seems to be 6 CPUs, so I’ll use this setting for a while until I get a faster desktop machine.


Great stuff. But don’t forget to re-enable hyperthreading, as most applications benefit from it.


This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.