Nvidia-smi not found in PATH, using CPU

Hello,

I’m trying to use GPU acceleration to process my drone images in WebODM, but I keep getting this message in the log: nvidia-smi not found in PATH, using CPU

Here’s what I’ve done so far:

  1. Install Docker Desktop (Windows 11)
  2. Install Ubuntu in WSL2
  3. Install NVIDIA drivers for Windows 11
  4. Configure NVIDIA CUDA driver on Ubuntu in WSL2
  5. Configure NVIDIA Container Toolkit on Ubuntu in WSL2

When I run the nvidia-smi command in the Ubuntu terminal I get this output:

user@machine:~/Drone/WebODM$ nvidia-smi
Fri Jul 29 13:23:54 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 515.57       Driver Version: 516.59       CUDA Version: 11.7     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  On   | 00000000:01:00.0 Off |                  N/A |
| N/A   47C    P8    10W /  N/A |    176MiB /  8192MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

My environment:

  • Laptop: MSI GE76 Raider
  • CPU: Intel i7 12700H
  • GPU: NVIDIA RTX 3070 Ti (8 GB VRAM)
  • RAM: 32 GB
  • OS: Windows 11, with Ubuntu installed in WSL2

The problem
Although CUDA seems correctly configured, when I run WebODM from Ubuntu with the command ./webodm.sh start --gpu, the NVIDIA GPU always stays at 0% utilization and I keep getting this output:

[INFO]    nvidia-smi not found in PATH, using CPU

How can I use GPU acceleration with my NVIDIA card to process my drone images with WebODM?

Thanks in advance

Try this procedure. There’s no need for all that work with Ubuntu; you just need to change two files. I think they already changed the NVIDIA one, but I’m not sure. Make a copy of both before changing them, because you’ll need them for updates.

Dear MarkoM

Could you provide more details about your previous procedure for someone who is not very familiar with docker?

For example, I can successfully run the cuda-sample command:

E:/Github/WebODM/> docker run --gpus all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark
Run "nbody -benchmark [-numbodies=<numBodies>]" to measure performance.
        -fullscreen       (run n-body simulation in fullscreen mode)
        -fp64             (use double precision floating point values for simulation)
        -hostmem          (stores simulation data in host memory)
        -benchmark        (run benchmark to measure performance)
        -numbodies=<N>    (number of bodies (>= 1) to run in simulation)
        -device=<d>       (where d=0,1,2.... for the CUDA device to use)
        -numdevices=<i>   (where i=(number of CUDA devices > 0) to use for simulation)
        -compare          (compares simulation results running once on the default GPU and once on the CPU)
        -cpu              (run n-body simulation on the CPU)
        -tipsy=<file.bin> (load a tipsy model file for simulation)

NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.

> Windowed mode
> Simulation data stored in video memory
> Single precision floating point simulation
> 1 Devices used for simulation
GPU Device 0: "Ampere" with compute capability 8.6

> Compute 8.6 CUDA device: [NVIDIA GeForce RTX 3090]
83968 bodies, total time for 10 iterations: 73.218 ms
= 962.963 billion interactions per second
= 19259.254 single-precision GFLOP/s at 20 flops per interaction

And then, how do I know what I should change in start.sh and nvidia.yml? (Where are the start.sh and xxxx.yml generated by cuda-sample that I can compare against?)

Thanks for your patient explanation

It’s not generated. The files you need to change are WebODM files.
From your log I see it’s in E:/Github/WebODM/. Find those two files and change them accordingly. Make a backup of the files first. It should work.

What works for me is to run WebODM with default settings (“webodm.sh start”), install a separate instance of the nodeODM:GPU, add it as a processing node via the WebODM GUI, and configure the project task to use the GPU node rather than the default non-GPU node.
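
In case it helps, here is roughly what that looks like on the command line (just a sketch; port 3000 and the opendronemap/nodeodm:gpu image are the usual defaults, adjust them if your setup differs):

    # start a standalone GPU-enabled NodeODM instance on port 3000
    docker run -p 3000:3000 --gpus all opendronemap/nodeodm:gpu

    # start WebODM normally, without the --gpu flag
    ./webodm.sh start

Then, in the WebODM web interface, add a processing node pointing at localhost:3000 and select that node when creating the task.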

After it failed on Windows, I decided to do a clean install of Pop!_OS, an Ubuntu-based distribution.

When I run

docker run --gpus all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark

I get this error:

$ docker run --gpus all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark
docker: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: exit status 1, stdout: , stderr: Auto-detected mode as 'legacy'
nvidia-container-cli: initialization error: load library failed: libnvidia-ml.so.1: cannot open shared object file: no such file or directory: unknown.
ERRO[0000] error waiting for container: context canceled 

When I run ./webodm.sh start --gpu, I get a similar error, but only with the --gpu flag. Without the flag there is no error and WebODM launches normally.

Creating webodm_node-odm_1 ... error
Creating broker            ... done
Creating db                ... done
Creating worker            ... done

ERROR: for webodm_node-odm_1  Cannot start service node-odm: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: exit status 1, stdout: , stderr: Auto-detected mode as 'legacy'
nvidia-container-cli: initialization error: load library failed: libnvidia-ml.so.1: cannot open shared object file: no such file or directory: unknown

ERROR: for node-odm  Cannot start service node-odm: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: exit status 1, stdout: , stderr: Auto-detected mode as 'legacy'
nvidia-container-cli: initialization error: load library failed: libnvidia-ml.so.1: cannot open shared object file: no such file or directory: unknown
ERROR: Encountered errors while bringing up the project.

I’ve been trying to solve this for 12 hours… please help!

NVIDIA drivers are up to date, as are CUDA, nvidia-docker2, and the nvidia-container-toolkit.

The procedure described in that link works with Docker on Windows. I can’t help you with the Linux version…

Dear MarkoM,

Apologies, I did not describe the question well…

I could definitely find E:/Github/WebODM/start.sh and E:/Github/WebODM/docker-compose.nodeodm.gpu.nvidia.yml.

But the question is, I don’t know which lines to change. I don’t know what the “correct answer” is. I guess there should be some file from cuda-sample:nbody to use as a reference, but I could not find it on my PC.

Thanks again for your patient reply.

REST API to Access ODM - OpenDroneMap

Step by step on how to get the GPU working…

I finally got it working, except for one thing, by running the following command in Git Bash and adding it as a new processing node in the WebODM web interface:

docker run -p 3000:3000 --gpus all opendronemap/nodeodm:gpu

But how come it takes twice as long for the same data set with my RTX 3070 Ti as with my CPU only (Intel i7-12700H)?

With GPU: 4 min 44 sec (detected features: 2,361)
With CPU: 1 min 54 sec (detected features: 2,772)

I was expecting the processing to be faster when exploiting an NVIDIA GPU…
Can you explain this? Or am I doing something wrong?

Steps 9 and 10 of this thread show what needs to be changed. Make a copy of the files first… NodeODM:GPU exited with strange error code - #15 by ichsan2895


Thank you for your kind help!

Solution to all my problems

After a lot of reading, testing, and help from you guys, here are the solutions to the problems I encountered.

Before you read, know that the tests were made on:

  • OS: Windows 11, Ubuntu 22.04 LTS, Pop!_OS 22.04 LTS

  • Laptop: MSI GE76 Raider

  • CPU: Intel 12th gen i7-12700H

  • RAM: 32 GB

  • GPU: RTX 3070 Ti

Problem 1

Description

At first, on Windows 11, I used Docker and Ubuntu in the Windows Subsystem for Linux (WSL) to run WebODM, but my NVIDIA GPU was not recognized or used to process the data. The log showed the following:

[INFO] nvidia-smi not found in PATH, using CPU

On Linux, I got the same error.

Solution

  • Installing the right OS and Kernel version
  • Installing NVIDIA drivers
  • Installing docker and docker-compose
  • Installing NVIDIA-container-toolkit
  • Check if everything is ready
  • Running WebODM with GPU

Installing the right OS and Kernel version

Windows

If you choose to work on Windows, installing it is a pretty straightforward process. Just go to the Microsoft website and follow the instructions.

Ubuntu
  1. If you choose Ubuntu, go to the download page of the Ubuntu website and download the .iso file.
  2. Create a bootable USB (e.g., using Etcher)
  3. Boot from your USB drive
  4. Follow the on-screen instructions for the installation. If you check the box “Download updates during installation”, it will also install your NVIDIA drivers.

BUT! I realized that Ubuntu 22.04 comes with Linux kernel version 5.15. From what I read on the internet (and it was also my case), there seems to be a bug with this version that prevents my Wi-Fi card from being recognized… Upgrading the kernel solved the issue (I was connected with an Ethernet cable in the meantime…). To upgrade it, I used the “mainline” utility. To install mainline:

sudo add-apt-repository ppa:cappelikan/ppa
sudo apt update
sudo apt install mainline

To list the kernel versions available, use

mainline --list

Then, install the one you want to use

mainline --install <version number>

Or to install the latest version,

mainline --install-latest
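
For example (the version number below is only an illustration; pick one actually returned by mainline --list):

mainline --install 5.19.8   # illustrative version, use one from the list above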

After upgrading the kernel, my Wi-Fi adapter appeared again in "Settings"… but now the NVIDIA drivers were no longer recognized by the nvidia-smi command…

I had to choose between having internet and having my NVIDIA driver work for WebODM…

I finally decided to install Pop!_OS instead.

Pop!_OS

To install Pop!_OS:

  1. Go to the download page and choose the version with NVIDIA drivers preloaded
  2. Create a bootable USB (e.g., using Etcher)
  3. Boot from your USB drive
  4. Follow the instructions on the screen for the installation

The installation is pretty smooth. Pop!_OS recognized my Wi-Fi card (since it ships a more recent kernel) and the NVIDIA drivers were already installed.

I still recommend updating your system using

sudo apt update
sudo apt full-upgrade

Installing NVIDIA drivers

Windows

To install the latest NVIDIA driver, use the installer provided by NVIDIA

Ubuntu

If you chose to download the updates during installation, the drivers are probably already installed.

If not, you have two choices (a possible shortcut is also sketched below):

  1. Install the drivers following NVIDIA documentation
  2. Install the drivers from the NVIDIA driver page
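
As a possible shortcut on Ubuntu (assuming the ubuntu-drivers tool is present, which it is on a default desktop install), you can let the system pick and install the recommended driver:

sudo ubuntu-drivers devices       # list detected GPUs and the recommended driver
sudo ubuntu-drivers autoinstall   # install the recommended driver
sudo reboot
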
Pop!_OS

If you chose the .iso file with the NVIDIA drivers preloaded, you have nothing to do. The drivers are already installed.

In any case, you can check whether they’re correctly installed by running nvidia-smi in a terminal. You should see something like this:

user@machine:~/Drone/WebODM$ nvidia-smi
Fri Jul 29 13:23:54 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 515.57       Driver Version: 516.59       CUDA Version: 11.7     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  On   | 00000000:01:00.0 Off |                  N/A |
| N/A   47C    P8    10W /  N/A |    176MiB /  8192MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

Installing docker and docker-compose

Windows
  1. Install Docker Desktop by running the installer. Get it from the official website.
  2. Check that the installation works correctly by running the following command in a terminal (no sudo needed on Windows):

     docker run hello-world

Pop!_OS

Install the Docker engine following the documentation. It’s a matter of copy-pasting a couple of commands; just follow the steps.

Also install docker-compose by running

sudo apt install docker-compose

Check that the installation works correctly by running the following command in a terminal:

sudo docker run hello-world
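
Optionally, to run docker without typing sudo every time (a standard Docker post-installation step, not something WebODM requires), add your user to the docker group:

sudo usermod -aG docker $USER
# log out and back in (or run: newgrp docker) for the group change to take effect
docker run hello-world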

Installing NVIDIA-container-toolkit

To be able to use CUDA through Docker, you’ll need to install the nvidia-container-toolkit. You don’t have to do it on Windows, though.

Ubuntu

Install it from the NVIDIA documentation

  1. Set up Docker

     curl https://get.docker.com | sh \
       && sudo systemctl --now enable docker

  2. Set up NVIDIA container toolkit

     distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
        && curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
        && curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.list | \
              sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
              sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

     sudo apt-get update

     sudo apt-get install -y nvidia-docker2

     sudo systemctl restart docker

  3. Test it

     sudo docker run --rm --gpus all nvidia/cuda:11.0.3-base-ubuntu20.04 nvidia-smi

     The output should be:

    +-----------------------------------------------------------------------------+
    | NVIDIA-SMI 450.51.06    Driver Version: 450.51.06    CUDA Version: 11.0     |
    |-------------------------------+----------------------+----------------------+
    | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
    | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
    |                               |                      |               MIG M. |
    |===============================+======================+======================|
    |   0  Tesla T4            On   | 00000000:00:1E.0 Off |                    0 |
    | N/A   34C    P8     9W /  70W |      0MiB / 15109MiB |      0%      Default |
    |                               |                      |                  N/A |
    +-------------------------------+----------------------+----------------------+
    
    +-----------------------------------------------------------------------------+
    | Processes:                                                                  |
    |  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
    |        ID   ID                                                   Usage      |
    |=============================================================================|
    |  No running processes found                                                 |
    +-----------------------------------------------------------------------------+
    
Pop!_OS

Because the nvidia-container-toolkit is only supported on a handful of distributions (see this page), a few extra manipulations are needed to install it on Pop!_OS. You can also find them here:

distribution="ubuntu20.04" \
   && curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add - \
   && curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list

Then create /etc/apt/preferences.d/nvidia-docker-pin-1002 (e.g. with sudo vi /etc/apt/preferences.d/nvidia-docker-pin-1002) with the following content:

Package: *
Pin: origin nvidia.github.io
Pin-Priority: 1002

Then install and test:

sudo apt-get update
sudo apt-get install -y nvidia-docker2
sudo systemctl restart docker
sudo docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi
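
If you still hit the libnvidia-ml.so.1 error I ran into earlier in this thread, a few generic checks can help narrow things down (standard driver/toolkit commands, nothing WebODM-specific):

# is the driver loaded and working on the host?
nvidia-smi

# does the dynamic linker know about the NVIDIA management library?
ldconfig -p | grep libnvidia-ml

# verbose diagnostics from the container toolkit
sudo nvidia-container-cli -k -d /dev/tty info

If libnvidia-ml.so.1 is not listed by ldconfig, the container runtime cannot find the driver libraries, which usually points to a driver installation problem rather than a Docker one.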

Check if everything is ready

  1. Test NVIDIA driver detection

     nvidia-smi

  2. Test Docker

     sudo docker run hello-world

  3. Test CUDA through Docker

     sudo docker run --rm --gpus all nvidia/cuda:11.0.3-base-ubuntu20.04 nvidia-smi

Running WebODM with GPU

Windows
  1. Open Docker Desktop
  2. Open a terminal (e.g. Git Bash, not Ubuntu in WSL)
  3. Go to the WebODM folder
  4. Run

    ./webodm.sh start --gpu

Pop!_OS
  1. Open a terminal
  2. Go to the WebODM folder
  3. Run

    ./webodm.sh start --gpu

Then, when you process a project, you should see something like “nvidia-smi was detected” at the beginning of the log.
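
If you want to confirm that the GPU is really being used while a task runs (rather than relying only on that log line), you can watch the utilization from another terminal (a standard command, nothing WebODM-specific):

watch -n 1 nvidia-smi   # GPU-Util should rise above 0% during the GPU-accelerated steps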

Problem 2

Description

The processing was faster with the CPU only than with GPU acceleration… which did not seem logical.

Solution

In my first tests, I used the fast-ortho option to process 120 images. In that case, the CPU-only run was faster (about 1 min 54 sec vs. 4 min for the GPU).

Then, by selecting the Default option, with the 3D rendering and DSM, processing was a lot faster with GPU acceleration (about 9 min vs. 16 min with CPU only).

I suppose that the GPU is more efficient for more complex map processing.

I hope this helps. 🚁

