I can only speak to my personal experience. I have 128GB RAM, a 16-core AMD 3950X, an RTX 3090 (24GB) for CUDA, and fast 8TB PCIe 4.0 M.2 drives. This is using Docker Desktop on Windows 10 via WSL2. In the WSL2 config, I dedicate 14 cores, 122GB RAM, and a 256GB swap file on a non-system drive to Docker. I leave 2 free cores and 6GB RAM for Windows, but throw everything else I can get away with at Docker.
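For reference, that WSL2 allocation lives in a `.wslconfig` file in your Windows user profile. A sketch matching my numbers (the drive letter and swap filename here are just examples, not something WSL requires):

```ini
; %UserProfile%\.wslconfig — restart WSL with `wsl --shutdown` to apply
[wsl2]
processors=14                 ; leave 2 cores for Windows
memory=122GB                  ; leave ~6GB for Windows
swap=256GB
swapfile=D:\\wsl-swap.vhdx    ; put swap on a non-system drive
```

Docker Desktop's WSL2 backend then uses whatever that config allows, rather than the resource sliders from the old Hyper-V backend.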
My images are 20MP in 3:4 (not 16:9) from a DJI Air 2S with 85% horizontal and vertical overlap. I normally fly a crosshatch pattern: a lawnmower pattern one way, then turn 90 degrees and fly another lawnmower pattern. So I get multiple images from multiple angles with very good overlap of each piece of land.
Up to about 800 images or so, it's fine on high-quality settings. I can do the ortho and 3D model in a single run without it hitting the swap file, in less than 8 hours.
Once I hit about 1,100 images, it maxes out RAM and hits the swap file hard. At that point I separate the steps: do the ortho in one run, wipe the VM, then do the 3D model on a fresh instance. The 3D portion still hits the swap file pretty damn hard after bumping some settings up (detected features, max points) while staying on high quality (not ultra — I wish I could do ultra). That can take about 60 hours, and with it hitting swap so hard, the machine is barely accessible. It can take hours for the logs to update — once, over 12 hours. I'm glad I didn't shut it off, as it actually completed; otherwise I would have had to start over, which is time consuming with these large image sets.
I am no expert in this software, and there are a ton of options available, some with drastic effects on memory and processing requirements. I'm still very much learning. I'm hoping to get the highest-quality orthos and 3D models I can of this project we're working on, which is a hobby for us, just documenting our work. I have 1 ft-resolution LiDAR data from one of my state's universities, so I don't really need a DEM from the images. That has saved a ton of time, and it's a lot better for our use case: we want ground-level elevation contours, not treetop-level, since this is heavily forested land.
I've considered offloading to some of the various cloud services, but none of them seem to have a pricing model that really works for my occasional but fairly large datasets. I'm still experimenting with splitting the images up, but that is fairly new for me and I haven't dialed in the settings (overlap, etc.) yet.
In short, I think 64GB RAM and a 250GB SSD are underpowered for the larger end of what you're wanting to do. I wouldn't consider the platter drive/HDD an option at all. I suspect 256GB RAM might handle it, but it's hard to say for sure — it's on the edge, I think.
I look forward to others' answers, as this info is kind of hard to come by. A good deal of the hardware requirement comes down to your settings, and there are a lot of them to play with.