I’ve got a good test case I’ve been running for performance. Thought people might like to see some numbers for GPU vs No-GPU. These are for the same dataset with the same processing parameters selected: the main settings are ‘ultra’ pc-quality with pc-classify turned on, and default feature parameters:
m5.4xlarge - 1hr 37sec
g5.4xlarge - 28min 10sec
That’s a significant time savings for a single GPU system. How would a bigger, 4 GPU system handle it?
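To put a number on that savings, here’s a small sketch that converts the two wall-clock times above to seconds and computes the speedup (the times are transcribed from this thread exactly as posted; the helper function is mine):

```python
# Quick speedup calculation from the benchmark times above.
def to_seconds(h=0, m=0, s=0):
    """Convert hours/minutes/seconds to total seconds."""
    return h * 3600 + m * 60 + s

m5 = to_seconds(h=1, s=37)    # m5.4xlarge (CPU-only), as posted
g5 = to_seconds(m=28, s=10)   # g5.4xlarge (single GPU)

speedup = m5 / g5
print(f"GPU speedup: {speedup:.2f}x")
```

A 4-GPU machine wouldn’t necessarily multiply this further, since only part of the pipeline is GPU-accelerated.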
Both produced beautiful output. Next thing to noodle on is how to get auto-scaling using the GPU-enabled machines; as it stands now, auto-scaling launches the standard (non-GPU) nodes, I believe. The other thing of interest is getting a “use captured LIDAR for PC” switch.
Here’s a little of the Lidar PC for that same shot. Demonstrates why we’d like to combine the photogrammetry and Lidar capture to save a lot of time on trees, etc. during reality capture work.
Your model looks good, what were the options you used?
There is already a new thread with a suggestion about the lidar options. Lidar support - #67 by Maurice.Sobiera
Wow, great job. Can you please update the ODM benchmarks with that testing (GPU vs No-GPU)? Since GPU support is a new feature, there are few comparisons for it.
That seems to be a big jump in performance. I did some benchmarks of CPU vs GPU and the performance gain was only up to about 20%. As far as I know, the GPU is only used in parts of the process (early in the pipeline), and there’s a glitch being investigated where ODM thinks GPU memory is insufficient. With the settings I used, I don’t think the GPU is used at all for the point cloud.
Started with the High Res template, then set the following options - auto-boundary: true, camera-lens: brown, dem-resolution: 2.0, dsm: true, dtm: true, orthophoto-compression: JPEG, orthophoto-resolution: 2.0, pc-classify: true, pc-filter: 0, pc-quality: ultra, pc-rectify: true, use-3dmesh: true
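For anyone running this outside the web UI, the option list above maps onto ODM’s command-line flags of the same names. Here’s a hedged sketch of that mapping (the dict and the conversion helper are mine, not part of ODM; boolean `true` options become bare switches):

```python
# Options from the post, as a dict keyed by ODM option name.
opts = {
    "auto-boundary": True, "camera-lens": "brown", "dem-resolution": 2.0,
    "dsm": True, "dtm": True, "orthophoto-compression": "JPEG",
    "orthophoto-resolution": 2.0, "pc-classify": True, "pc-filter": 0,
    "pc-quality": "ultra", "pc-rectify": True, "use-3dmesh": True,
}

def to_cli_args(options):
    """Turn the option dict into a flat list of CLI arguments."""
    args = []
    for name, value in options.items():
        if value is True:
            args.append(f"--{name}")              # boolean switch
        else:
            args.extend([f"--{name}", str(value)])  # flag with a value
    return args

print(" ".join(to_cli_args(opts)))
```

The resulting argument string can be appended to however you invoke ODM (e.g. a `docker run opendronemap/odm …` command line).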
I have been following the work in that Lidar usage thread.
Agreed. I had an early, unrepeatable run on the M5 machine that finished in 43+ minutes, so that’s possibly more realistic. Of course there are many factors I’m probably not considering, which I guess is the point of collecting a large set of benchmarks and looking at them statistically. Although I’m not certain which parts of the pipeline can use the GPU, I’m fairly certain that feature extraction (SIFT) is a primary focus.
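Looking at repeated runs statistically could be as simple as the sketch below (the run times here are illustrative placeholders, not real benchmark data):

```python
# Summarize repeated benchmark runs for one machine type.
from statistics import mean, stdev

m5_runs_min = [88.0, 43.0, 97.0]   # hypothetical repeated M5 timings, minutes

avg = mean(m5_runs_min)
spread = stdev(m5_runs_min)
print(f"mean {avg:.1f} min, stdev {spread:.1f} min")
```

A large stdev relative to the mean would flag exactly the kind of unrepeatable run described above.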
I’ll learn something by studying the ODM Benchmarks, and I’ll try more runs in the future. For our use cases, we need to apply whatever processing support is necessary to keep the total time under 1 hr. If “use Lidar PC” is ever implemented, I’m guessing the speed focus will shift to mesh generation.
Quick update: reran the dataset on the M5 machine: 1 hr 28 min. I’ll check the CPU on each machine.