I’ve got a good test case I’ve been running for performance. Thought people might like to see some numbers for GPU vs. no-GPU. These are for the same dataset with the same processing parameters — the main settings are ‘ultra’ point cloud quality with point cloud classification turned on, and default feature parameters:
m5.4xlarge - 1hr 37sec
g5.4xlarge - 28min 10sec
That’s a significant time savings for a single-GPU system. How would a bigger, 4-GPU system handle it?
Both produced beautiful output. The next thing to noodle on is how to get auto-scaling to use the GPU-enabled machines — as it stands, auto-scaling launches the standard nodes, I believe. The other item of interest is a “use captured LIDAR for PC” switch.
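For the auto-scaling piece, one way to steer the group toward GPU instances is to publish a new launch template version with the GPU instance type and point the Auto Scaling group at it. A minimal sketch with the AWS CLI — the template name `processing-node` and group name `processing-workers` are hypothetical placeholders for whatever the cluster actually uses:

```shell
# Sketch: swap the Auto Scaling group's instance type to a GPU node.
# Assumes an existing launch template ("processing-node") and Auto Scaling
# group ("processing-workers") -- both names are placeholders.

# 1. Create a new launch template version that only changes the instance type.
aws ec2 create-launch-template-version \
    --launch-template-name processing-node \
    --source-version 1 \
    --launch-template-data '{"InstanceType":"g5.4xlarge"}'

# 2. Point the Auto Scaling group at the latest template version, so newly
#    scaled-out nodes come up as g5.4xlarge instead of the standard type.
aws autoscaling update-auto-scaling-group \
    --auto-scaling-group-name processing-workers \
    --launch-template LaunchTemplateName=processing-node,Version='$Latest'
```

Existing standard nodes aren’t replaced by this change; only instances launched after the update use the GPU type, so an instance refresh (or scale-in/scale-out cycle) would be needed to roll the whole group over.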
Here’s a little of the LIDAR point cloud for that same shot. It demonstrates why we’d like to combine photogrammetry and LIDAR capture — to save a lot of time on trees, etc., during reality capture work.