Process for a GPU node?

New to ODM and Docker, but NOT new to VMs, Linux, and Windows.

What is the process for adding a GPU Processing Node in Windows?

Found this

And my RTX 2060 does have CUDA support as per the CLI check.
However, running WebODM with the --gpu flag shows: Warning: GPU support is not available for Windows

I see this:

- Command line docker using the opendronemap/odm:gpu image (Linux)
- WebODM by running (Linux): ./webodm.sh start --gpu

Does this mean it is 100 percent certain that you can't run a CPU node and a GPU node in Windows Docker? I would like to see this confirmed so I can stop spinning my wheels.
Thanks!

1 Like

Welcome!

Can you give this post a read-through and try?

1 Like

It might be possible, but the webodm.sh script does not currently have an automated workflow to set up GPU support on Windows via Docker.

2 Likes

These links are somewhat outdated.

developer.nvidia.com/cuda/wsl shows:

The NVIDIA CUDA on WSL driver brings NVIDIA CUDA and AI together with the ubiquitous Microsoft Windows platform to deliver machine learning capabilities across numerous industry segments and application domains.

Developers can now leverage the NVIDIA software stack on Microsoft Windows WSL environment using the NVIDIA drivers available today.

The NVIDIA Windows GeForce or Quadro production (x86) driver that NVIDIA offers comes with CUDA and DirectML support for WSL and can be downloaded from below.

And the button simply redirects to the generic driver download URL

www.nvidia.com/Download/index.aspx?lang=en-us

i.e., the GeForce Experience Windows package (I use my machine for gaming) already delivers the necessary driver through its update system.

I have also done a ./webodm.sh update just to be sure

Additionally, the new Docker install process directly asks you to enable WSL 2, right there in the installer. I did not see this mentioned in the WebODM install docs; it was listed further down as an option.

This URL
hub.docker.com/r/opendronemap/nodeodm
shows this command:
docker run --rm --gpus all nvidia/cuda:10.0-base nvidia-smi
which DOES return a positive result:

```
$ docker run --rm --gpus all nvidia/cuda:10.0-base nvidia-smi
Mon May 23 17:44:50 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.68.02    Driver Version: 512.77       CUDA Version: 11.6     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...   On  | 00000000:01:00.0  On |                  N/A |
| 35%   33C    P8     7W / 160W |    967MiB /  6144MiB |      1%      Default |
+-------------------------------+----------------------+----------------------+
```

I see an older thread
community.opendronemap.org/t/gpu-isn-t-used/10524
that shows issues with this and that’s why I was posting in the first place.

Edit:
I did figure out how to start the separate GPU node from here:
docker run -p 3000:3000 --gpus all opendronemap/nodeodm:gpu

I can open the page and view the node job directly, but I can’t seem to figure out how to change the ports so that I can load it into the Processing Node list. I have a job that I have run on my local CPU node, and the Lightning system, and I want to test GPU for benchmarking.
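In case it helps anyone else stuck on the port piece: Docker's -p flag maps host:container, so the GPU node can keep listening on 3000 inside the container while being published on a free host port. A minimal sketch of the idea; the host port 3001 and the host.docker.internal hostname are my assumptions here, not anything from the docs above:

```shell
# Assumption: WebODM's bundled CPU node already occupies host port 3000,
# so we publish the GPU node on a different host port (3001 here).
HOST_PORT=3001

# Build the command; -p maps host:container. The container still listens
# on 3000 internally, so only the host-side port changes.
CMD="docker run -d -p ${HOST_PORT}:3000 --gpus all opendronemap/nodeodm:gpu"
echo "$CMD"
```

Then in WebODM, under Processing Nodes, add a new node pointing at host.docker.internal (or the machine's LAN IP) with port 3001; the node's internal port stays 3000 on the container side.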

1 Like

I am mostly there. I discovered how to add a separate GPU node and need to get it into the node list.

2 Likes

That’s awesome!

If you figure it out, would you consider opening a pull request on GitHub to update the webodm.sh script? :pray: I'm sure lots of other people would benefit from it.

2 Likes

If I get it working, that is. I'm amazed I haven't seen more on this. I can check the Container Stats and see CPU and RAM, but I can't tell for sure whether it's hitting the GPU. The CPU node doesn't seem to be using much when it should be maxed out. For instance, my test mesh takes 58 minutes on CPU, 20 minutes on GPU, and 8 minutes on Lightning, but I'm not 100 percent sure I'm using my own CPU and GPU properly, so it's not exactly a great benchmark.

1 Like

The base script is OK for basic service startups. You definitely need the NodeODM CPU node to be separate, which is great because the Docker script downloads it and runs it automatically. I come from VirtualBox, Hyper-V, and VMware, where you have to do a bit more work to get things going. Not much more, but a little.

1 Like

Sorry, meant GPU node. I don’t see an edit here for some reason.

1 Like

From the WSL command line you should be able to run this to see if the GPU is being used (-l 1 makes it update every second):

nvidia-smi -l 1
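If you'd rather log values than watch the full dashboard, nvidia-smi also has a query mode. A small sketch; the particular field list is my own choice, not something from this thread:

```shell
# Assumption: nvidia-smi is on PATH inside WSL once the Windows driver is installed.
# --query-gpu selects specific fields, --format=csv,noheader gives bare values,
# and -l 1 polls every second, same as above.
QUERY="utilization.gpu,memory.used"
CMDLINE="nvidia-smi --query-gpu=${QUERY} --format=csv,noheader -l 1"
echo "$CMDLINE"
```

Run the printed command while a task is processing; nonzero GPU utilization and climbing memory use are a good sign the node is actually on the GPU.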

1 Like

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.