These links are somewhat outdated.
developer.nvidia.com/cuda/wsl shows:
The NVIDIA CUDA on WSL driver brings NVIDIA CUDA and AI together with the ubiquitous Microsoft Windows platform to deliver machine learning capabilities across numerous industry segments and application domains.
Developers can now leverage the NVIDIA software stack on Microsoft Windows WSL environment using the NVIDIA drivers available today.
The NVIDIA Windows GeForce or Quadro production (x86) driver that NVIDIA offers comes with CUDA and DirectML support for WSL and can be downloaded from below.
And the button simply redirects to the generic driver download URL
www.nvidia.com/Download/index.aspx?lang=en-us
I.e., the GeForce Experience driver update system on Windows (I use my machine for gaming) already delivers the necessary driver package.
I have also run ./webodm.sh update just to be sure.
Additionally, the new Docker Desktop installer asks you directly to enable WSL2, right there in the installer. I did not see this mentioned in the WebODM install docs; it only appeared further down the list as an option.
This URL
hub.docker.com/r/opendronemap/nodeodm
shows this command:
docker run --rm --gpus all nvidia/cuda:10.0-base nvidia-smi
which DOES return a positive result:
$ docker run --rm --gpus all nvidia/cuda:10.0-base nvidia-smi
Mon May 23 17:44:50 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.68.02    Driver Version: 512.77       CUDA Version: 11.6     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...   On  | 00000000:01:00.0  On |                  N/A |
| 35%   33C    P8     7W / 160W |    967MiB /  6144MiB |      1%      Default |
I see an older thread
community.opendronemap.org/t/gpu-isn-t-used/10524
that describes similar issues, which is why I was posting in the first place.
edit
I did figure out how to start the separate GPU node with:
docker run -p 3000:3000 --gpus all opendronemap/nodeodm:GPU
I can open the page and view the node directly, but I can't figure out how to change the ports so that I can add it to the Processing Node list. I have a job that I have already run on my local CPU node and on the Lightning service, and I want to test the GPU node for benchmarking.
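In case it helps: Docker's -p flag maps HOST_PORT:CONTAINER_PORT, so if the default port 3000 is already taken (e.g. by the built-in WebODM node), the GPU node can be published on a different host port and then registered under that port. A sketch, assuming 3001 is a free port on the host (the port choice here is my assumption, not from the NodeODM docs):

```shell
# Publish the GPU node on host port 3001; the container still
# listens on 3000 internally (left side = host, right = container).
# 3001 is an arbitrary free port chosen for this example.
docker run -d -p 3001:3000 --gpus all opendronemap/nodeodm:GPU

# Then, in WebODM -> Processing Nodes -> Add New, register it as:
#   hostname: localhost (or the Docker host's IP)
#   port:     3001
```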