What is the ideal machine? New Intel/AMD desktop or older dual-Xeon server?

I have been using WebODM for several weeks now and I’ve had great success using an old 6th-gen Intel quad-core i5-6600K at 3.5GHz with 64GB DDR4 and an M.2 drive. Maps build well, and while it can take some time, the results are impressive enough to keep me searching for the optimal flight/ODM settings. Bravo to the developers! Thank you for building such a useful piece of software. The cost of the commercial solutions is prohibitive for someone like myself who is just getting his business off the ground. Now on to my question:

What would you consider the “ideal” computer system for WebODM? I’m considering buying or building a dedicated system that will build maps and models full time for my business. I can see two different routes. First is the obvious: build the best desktop system possible with a new 12th-gen Intel i9, go for high core count/frequency, and deal with the 128GB RAM limitation on a Z690/Z790 chipset. Or an AMD Threadripper on a TRX40 chipset. Costs more, but you get 256GB RAM, and from my observations RAM is certainly a big concern.

Second approach is to pick up an old server off eBay. Maybe an older dual E5 Xeon system with 20+ cores and up to 512GB memory. Downside here is the lack of M.2 drives. It’s possible to set up the system with SSDs, but the big advantage is that these older 1U/2U rack servers can be had for $1,000 USD, and that’s half or a quarter of the two fast desktop systems I listed first. Rack space ain’t an issue. I’ve got 2 full racks in the garage.

I understand the GPU isn’t a big deal; CPU, RAM, and drive speed are much more important factors. That said, where is my money best spent? What system will build maps and models best? I’m looking at maybe a max of 2,000-3,000 images, but most of the stuff I’ll probably build will come in around 1,000-1,500. I typically fly an M300 with the 45MP P1 or an Autel EVO II v3 (20MP).

I’m sorry if this question has been asked and answered, and I searched and couldn’t find anything new or close to the question I need answered. Thanks in advance.


Hi. I’ve been using WebODM for a similar amount of time as you. I feel new here. I had to choose equipment a few days ago and after analyzing the threads found on this forum about equipment, I chose:

  • processor AMD Ryzen 7 7700X,

  • memory: Kingston Fury Beast Black 64GB [2x32GB 5200MHz DDR5 CL40 DIMM]

  • no GPU

Maybe it’s best to start a separate thread on the forum where we test different hardware configurations against one shared project (or two or three) with specific parameter settings in WebODM? What do you think about it? Would you join?



Below is just my opinion, don’t take it as anything authoritative.

I’ll ask you some questions I ask most folks:

  1. How many images at what resolution? (you answered this already)
  2. Do you need a particular turnaround time?
  3. What is your budget?

It seems like an older server would suit you best, especially if you don’t need the absolute fastest single thread performance. 2x Xeon with 512GB RAM should suit your needs for a very long time, especially for those 45MP images, and if you get into cranking quality up.


Thanks for the reply. I’d be curious what kind of build time you get with a given number of images on that system. Yes! I’d absolutely love to join and contribute to any forum with this kind of information. Thanks for the suggestion; please give me a link =)


Thanks for the reply! I appreciate the input. I had some suspicion that the older Xeon server systems might have some value in this scenario. To answer your questions:

  1. Typically I expect to use anywhere between 500 and 3,000 images, and yes, I do expect to need higher quality and low GSD. Right now I’m playing around with between 1 and 0.5 cm GSD, as my drones can generate the data. The results are blowing me away. Amazing details. I am seriously impressed. My business is going to focus on inspections and construction progression, so low-GSD detail is an interest.

  2. Turnaround time: that’s the reason why I’m asking the questions. Right now with my existing system, 300 images takes 7-8 hours; 700 images took 17 hours. But the results are beautiful. I’m still playing around with params, but this is the set I used last night for data off the Autel 6K 20MP, with 695 images in a double grid flown per the recommended method: auto-boundary: true, dsm: true, feature-quality: high, ignore-gsd: true, mesh-size: 400000, orthophoto-resolution: 0.5, pc-quality: high, rolling-shutter: true, use-fixed-camera-params: true, use-hybrid-bundle-adjustment: true. Bottom line here: the faster I can build maps the better. For business deliverables I expect less than 48 hours turnaround, but less than 24 is of course better.

  3. Budget: well, if I can get 3 Xeon systems for the price of one Threadripper… then the Threadripper would have to be at least 3X as fast as the old Xeon to make any kind of sense. Thing is, this is going to be a fully dedicated machine, and I’ve spent the last 25 years in IT systems management, so building out a rack server isn’t a big issue. I’ve got plenty of space for it and I know how to deal with a production server system. (Heck, I could use an extra heater in the garage; it’s been a cold winter =P) Bottom line on budget: I will spend what it takes, but I don’t want to overspend. If I can get away with a $1,500 Xeon server, then I would need a real performance reason to spend $2K or $5K+ on an Intel or AMD desktop system.
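As an aside, for anyone wanting to reproduce a run like the one above: the same option set can be passed to a NodeODM node from Python via the pyodm client. This is just a sketch; the host/port and image filenames are placeholders, not from my actual setup.

```python
# The WebODM/ODM task options from the run described above, expressed as
# a NodeODM options dict for the pyodm client (pip install pyodm).
options = {
    "auto-boundary": True,
    "dsm": True,
    "feature-quality": "high",
    "ignore-gsd": True,  # note: a later reply recommends removing this flag
    "mesh-size": 400000,
    "orthophoto-resolution": 0.5,
    "pc-quality": "high",
    "rolling-shutter": True,
    "use-fixed-camera-params": True,
    "use-hybrid-bundle-adjustment": True,
}

# Uncomment to submit against a running NodeODM instance
# (host, port, and image names below are placeholders):
# from pyodm import Node
# node = Node("localhost", 3000)
# task = node.create_task(["DJI_0001.JPG", "DJI_0002.JPG"], options)
# task.wait_for_completion()
```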

My last questions then would come down to system priority. How would you rank these components in order of importance?

  1. CPU Core count
  2. Core Frequency
  3. Amount of RAM
  4. Disk type (M.2, SATA SSD, SAS)

Just as an example, this is a typical system off eBay: Dell PowerEdge R430 Server | 2x E5-2695 v3 = 28 cores | 384GB RAM | 4x 8TB SAS ~ $1,500 USD


Final question: would WebODM have any issue utilizing all 28 cores in that server, seeing that it’s a dual-CPU setup? How would the performance stack up against a 16-core Threadripper or a similar 16-core i9? Is there any advantage to be had with ECC memory over non-ECC?

Sorry for the text wall. I do appreciate the feedback more than you know!


I’m on mobile so forgive the brevity.

You need RAM, and lots of it, if you want to do 3000 image tasks at less than 1cm GSD.

Please do not use ignore-gsd. It causes issues, disables optimizations, and at best will let you upsample data, or interpolate it, or when I’m being most honest, make stuff up from nothing.

Our GSD estimation should be quite good. Put something really low like 0.1 for dem-resolution and orthophoto-resolution and remove ignore-gsd.
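To make the GSD numbers in this thread concrete, here is the usual back-of-the-envelope formula for ground sample distance from camera and flight parameters. This is a sketch, not WebODM’s internal estimator; the P1-like sensor figures in the example are my assumptions.

```python
# Rough GSD estimate from camera/flight parameters.
def gsd_cm_per_px(sensor_width_mm, focal_mm, image_width_px, altitude_m):
    """Ground sample distance in cm/pixel across the image width."""
    return (sensor_width_mm * altitude_m * 100.0) / (focal_mm * image_width_px)

# Example: a full-frame P1-like camera (assumed 35.9 mm sensor width,
# 35 mm lens, 8192 px image width) flown at 80 m gives roughly 1 cm/px:
print(round(gsd_cm_per_px(35.9, 35.0, 8192, 80.0), 2))
```

Halving the altitude halves the GSD, which is why hitting 0.5 cm/px pushes image counts (and RAM needs) up so quickly.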

The most helpful metric I’ve found for assessing CPU performance is IPC, or Instructions Per Clock. This normalizes performance and lets you know how fast it actually is, not just how parallel or how high the clock is cranked. Find the highest IPC CPU you can afford, and it should be very well balanced.


A thread with a similar question, where somebody with a 16-core machine posted their experience, went up just a few minutes ago:

I had a similar question when considering what hardware would be best to buy. A few months ago I ran some tests on what hardware affects the process in what way. I don’t have much time now, but I want to share the following with you:

  • fast storage can be a serious upgrade
    You mention M.2 drives; I think you may mean NVMe drives?! M.2 is just the slot, and M.2 drives can also use the SATA protocol. But NVMe drives can have a serious effect on your processing time. SAS or anything else can’t compete with NVMe, and NVMe got so cheap in the last months. I would even rather consider a mainboard with 3 M.2 slots than going with an old server. There are also some very capable PCIe cards to add multiple M.2 slots to any machine with free PCIe slots.

  • memory usage
    Depending on the settings, even 500-750 pictures at 20 megapixels or higher can use >320 GB of memory (physical and swap combined). Though for most of the process 128GB will do, and for the short time more is needed, using an NVMe as a swap (virtual memory/pagefile) drive can do an amazing job. You can use a super fast 2TB NVMe (ideally PCIe 4.0) and have a kind of “unlimited” amount of memory available.

  • CPU cores
    If I could choose between 48 cores @ 2.5 GHz or 16 cores @ 4 GHz+, I would definitely choose the 4 GHz+. When monitoring the processor during an ODM run, the processor is used in short bursts, and in between only a few cores are loaded. For me that’s an indicator that if the individual cores run at higher frequencies, it speeds up the overall process.
    When I did some benchmarking running the same task with 4 and then with 8 cores enabled (double that for available threads), the overall processing time was not much different. And when deactivating some cores, the remaining cores can boost to much higher frequencies.

  • GPU
    If you plan on computing with very low GSD and point cloud quality high or even ultra, definitely get yourself a graphics card. It does not need to be anything major; even a 2000-series or 1600-series GeForce card has very capable CUDA cores (only NVIDIA has CUDA, btw). And in case you think you would need a lot of video memory: nope, you don’t.

If you are interested, you can find more details about hardware usage in the following post:


Excellent advice. Thank you! I will re-run my dataset with your suggested settings. So if I read your reply right, you would rank:

  1. RAM
  2. cores
  3. clock
  4. drive speed/bus

Any advantages to ECC?


Shiva, thank you so very much for the reply. That was exactly what I was hoping to find. When I say M.2 I suppose I mean NVMe, M.2 being the slot. But in the end, yes, I suppose it’s all NVMe of different variations.
I guess I had envisioned the 128GB of physical RAM on a Z690/Z790 chipset being a limiting factor, but from your reply, that’s not as big of an issue when you can rely on NVMe for swap without a significant penalty.
As far as the video card, I didn’t realize it was that big of a deal for larger datasets. I will spec out a system and link it here for your feedback. Thank you again for the very thoughtful and helpful reply. I appreciate your time and wisdom!


Yeah, pretty much! Though 2 and 3 would be equal for me since I look at IPC primarily.

ECC prevents errors when reading and writing to RAM. It can incur a negligible speed penalty due to the checks.

If you can use it, I’d go for it.


I just ran the question through ChatGPT for fun and this is what it came up with.

LLM-Derived Information (Use At Your Own Risk)

Here are some optimized hardware setups for WebODM:

  1. Small Projects:

For small projects with a limited number of images, a desktop computer with the following specifications can be used:

  • CPU: Intel Core i7 or AMD Ryzen 7 processor
  • RAM: 16 GB DDR4
  • Storage: 256 GB SSD
  • GPU: NVIDIA GeForce GTX 1060 or higher

  2. Medium Projects:

For medium-sized projects with more images and higher resolution, a dedicated server with the following specifications is recommended:

  • CPU: Intel Xeon or AMD Ryzen Threadripper processor
  • RAM: 32 GB DDR4
  • Storage: 500 GB SSD or larger
  • GPU: NVIDIA GeForce RTX 2060 or higher

  3. Large Projects:

For large-scale projects with high-resolution images and complex 3D models, a high-end server with the following specifications is recommended:

  • CPU: Dual Intel Xeon or AMD EPYC processors
  • RAM: 64 GB DDR4 ECC
  • Storage: 1 TB SSD or larger
  • GPU: NVIDIA Quadro RTX 8000 or higher

Additional considerations:

  1. Cooling: Running WebODM can generate a lot of heat, so adequate cooling is essential for all setups. Make sure your hardware has proper cooling systems in place, such as fans or liquid cooling.
  2. Network connectivity: WebODM requires a fast and reliable internet connection to process and upload images. Make sure your network connection is fast and stable.
  3. Power supply: Make sure your hardware setup has an adequate power supply to handle the processing load.

By following these optimized hardware setups, users can maximize the processing power and efficiency of WebODM, resulting in faster and more accurate mapping and modeling.


We don’t have any policies on generative AI use for answering questions, but I would defer to experts before generative text.


I can chew through a 4,000-image job in about 24 hours.

All from eBay:
Supermicro 6018U-TR4+
2x Xeon E5-2699 v4 (44 cores / 88 threads)
512GB DDR4
NVIDIA Tesla P40
Hacked BIOS to enable boot from NVMe

The ability to do huge jobs with zero hassle is the biggest plus for me.


Clearly it doesn’t really know what’s best :wink:

Re AI for answers, I think Brett has previously commented words to the effect of: sounds ok, but is wrong


ChatGPT spat out this bit of legal protection for me. Very happy with it. Feel free to use it for your projects.

“Disclaimer: Our drone mapping business provides high-quality and accurate data using advanced drone technology and data processing techniques. However, we would like to emphasize that the data generated by our service is not intended to be used for survey-grade applications or legal purposes. While our data can provide valuable insights and information, it is important to note that it is not a substitute for professional land surveying or other forms of precise measurement. Therefore, we strongly recommend that you seek the services of a licensed land surveyor or other qualified professionals for any legal or survey-grade applications.”


If I wouldn’t violate my own guidelines for the Community, I’d post my complete thoughts :rofl:


OK, thanks for that. This is almost exactly the system I was considering before. That’s a 2016 processor and a boatload of RAM. I’m not sure how much you paid for that system, but today I priced out a 24-core Threadripper desktop system with 256GB of non-ECC DDR4 RAM: almost $5K for the full setup. Let’s just pretend you paid $2,500 for that setup (my guess is you paid a little less than that). To make the Threadripper make sense, it would have to chew through the same 4,000-image dataset in 12 hours or less.

So for my part that’s the question. Does anyone have any experience with a high end new system like a Threadripper 5965WX with 256GB RAM? And if so, can it produce enough speed to justify the price point? I seriously suspect there is substantial value in older server setups but I have no data to prove the hunch.

I’d love to hear more real-world cases like this. Paying $5K for a high-end system isn’t outside my realm of possibility, but I hate like hell to spend that kind of money on a single system unless it can chew through big datasets very, very quickly. In that case it’s worth it. But in my mind I’d rather have three older Xeon systems for the same price as a single high-end desktop if the tradeoff isn’t significant.

I REALLY appreciate what you just added to the discussion. I’d love to learn more about this system and a lot more details about what you did to build it, how you hacked the BIOS, and what you have to say about the P40. That is a very interesting system to me. Thanks for jumping in the thread!!


Spot on! $2409 :slight_smile: $2200ish if you don’t include the 24U enclosure.

As far as the BIOS goes, I never planned on running this off of NVMe until I started using this server for photogrammetry, so I got lucky in that there were others who had successfully injected NVMe DXE drivers into their BIOS and posted detailed instructions. If I were to do this over, I would just find a newer generation of server that supported it natively.

I was worried about airflow with the P40, but I have yet to see it rise above 54°C when running WebODM.

For those thinking about going a similar route just know that enterprise hardware can be extremely loud compared to a PC, especially a 1U server. There are mods you can do with 2 & 4U servers with Noctua fans but there is no getting over the fact that servers love tons of airflow and are designed that way.

I have mine in my basement office and I love it because it drowns out my tinnitus :slight_smile:

I hope that more of the ODM pipeline can be directed to the GPU in the future with multi-GPU capability being the end goal. I would upgrade to a multi-GPU server if that were the case.


Thanks for the reply. You are 100% correct about the noise issue. I’ve spent a fair amount of time in data centers, and one thing is certain: unless you’ve spent time around rack servers, you probably have no clue how loud they are, especially when they first boot up! Not anything you’d want indoors in a residential setting. Years ago I got lucky and pulled a dual 84U APC cabinet off a job site for free. It sold originally for almost $10K =P. Nice unit: removable panels, smoked acrylic vented doors. But you can’t argue with the price!

I got to looking last night, and it seems like a few years ago dual-Xeon workstations got popular briefly. Who knew? Some of the low-core-count processors can be had for less than a six-pack of lite beer, and cheap motherboards from China can be found all over eBay for $100-$250. Examples like this are easy to find: https://www.ebay.com/itm/285110240848?hash=item4261e50650

These come with native M.2 slots and 8 DDR4 slots in 4 channels. Not bad, considering a pair of 22-core Xeon E5-2699 v4 processors can be had for around $300 if you get lucky. The Chinese-market 2696 chips are even less. If you could get away with 256GB of DDR4, I bet you could go all-in, minus the P40, for close to a grand. I just have to wonder about the performance vs. a new 12-core R9 or 8-core i9… No way you could get into an R9/i9 system for $1K, but the 128GB max memory with those chipsets is what concerns me.

I wish someone running big datasets off a new top-of-the-line desktop system would chime in and share their experience. If an R9/i9 128GB system can chew through 4,000 images in under 12 hours, then that starts to justify spending the extra money and gives me some confidence that 128GB is enough physical memory and that NVMe can handle swap for the rest of what’s needed. By the time you toss in a video card and all the bells and BS that go into building a new system, it’s $2,000 at a bare minimum for an R9/i9 system, and like I mentioned before, if you want to get into the Threadripper game it’s easily double that, maybe more.

I do really appreciate you sharing your experience. I had a feeling that older dual-Xeon setups were viable, but the proof is really in the doing. I’d love to hear more about what kind of work you’re doing, the size of images you’re sending to it, the ODM parameters you’re using, and what issues, if any, you’ve encountered along the way. Cheers!


My PC is a 5900X system with 128GB RAM and a 3090 FE. It ran out of RAM in a hurry. In fact, the server was running out of RAM at 256GB, so I doubled it to the 512GB it has now. That’s the only issue that I’ve experienced with it.

I’m feeding it 20MP images from my Mavic 3E.

I run it through WebODM on the server and then bring it over to the PC for post-processing, QGIS, etc… That way the PC is freed up for other work and the server can be cranking away on the compute side of things.

The reason I bought the server in the first place was Chia (cryptocurrency) farming. That’s why they were popular recently. Just so happens to also be an excellent machine for photogrammetry :smiley: