Cannot initialize CUDA

I’m using a WebODM Docker image to generate orthophotos. Although the orthophoto data was generated successfully, the processing did not use GPU acceleration. I see this message in log.json: “Cannot initialize CUDA, nvidia-smi detected”.
I checked the WebODM source code and it seems to be built against CUDA 11.2, while my CUDA version is 12.9. So, is the problem that the CUDA versions are inconsistent?
Looking forward to your reply.

The attachment is the complete log.json
log.json (75.2 KB)

What graphics card are you using?


I get the same result when I run nvidia-smi in the Docker container and on my local machine.

Yes, it’s because the versions are inconsistent. Docker still hands off to the system’s NVIDIA driver, so you will have to match the versions.
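
For reference, a quick way to compare the two sides is to look at the CUDA version the host driver reports and at what the container sees (just a sketch; the container name below is a placeholder, and nvcc is only present if the CUDA toolkit is actually installed in the image):

    # On the host: the top-right of the output shows the highest CUDA version the driver supports
    nvidia-smi

    # Inside the running NodeODM container (name is a placeholder)
    docker exec -it nodeodm-gpu nvidia-smi
    docker exec -it nodeodm-gpu nvcc --version   # only works if the toolkit is installed in the image

If the two sides report different CUDA versions, that is the inconsistency being described above.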

Finally, I found the solution in the following resources: Cannot initialize CUDA even if nvidia-smi detected by fri-K · Pull Request #1813 · OpenDroneMap/ODM · GitHub and Opendronemap/nodeodm:gpu - nvidia-smi detected - cannot initialize CUDA - #5 by jamesgriffiths.

Following the instructions there, I removed all environment configuration related to cuda-11-2, rebuilt the Docker image, and the issue was resolved.
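
For anyone following along, the rebuild itself was along these lines (an illustrative sketch only; the exact environment lines to remove are described in the linked PR, and the GPU Dockerfile name and image tag below are assumptions):

    # Clone ODM, edit the GPU Dockerfile to drop the cuda-11-2-specific environment entries,
    # then rebuild and check that the resulting container can see the GPU
    git clone https://github.com/OpenDroneMap/ODM.git
    cd ODM
    # ... edit the GPU Dockerfile as described in the linked PR ...
    docker build -t my-odm:gpu -f gpu.Dockerfile .
    docker run --rm --gpus all my-odm:gpu nvidia-smi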

But does GPU acceleration only work at the OpenMVS stage?


It’s used infrequently in the code. I tried to build the Docker image with OpenCV’s GPU support enabled, but the image uses an old version of Ubuntu and the build errored out. The build step is convoluted since it tries to build everything in the “SuperBuild”, and does so in parallel, so the errors are very confusing.

Anyway, most of the code does not use the GPU, so it only accelerates a few areas. I use nvtop to watch GPU usage. It does do a lot of multi-threaded calculations, so that is an area where you can speed things up, but there are definitely a few bottlenecks that are single-threaded.

Also, watch out for long-running containers: they can lose the GPU silently and degrade to non-GPU processing.
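
A cheap sanity check for that (just a sketch; the container name is a placeholder) is to periodically ask the running container whether it can still see the device:

    # If this errors or lists no devices, the container has silently lost the GPU
    docker exec nodeodm-gpu nvidia-smi

    # One workaround is to restart the container
    docker restart nodeodm-gpu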


OpenSfM and OpenMVS are the two stages where it is used, as far as I remember.

As you mentioned, I ran watch -n 1 nvidia-smi on my device to monitor the GPU-Util value. I noticed that the GPU was only active for less than 10 seconds while DensifyPointCloud was running, and that stage took approximately 13 minutes in total.
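
If you want a record over the whole run rather than watching the screen, nvidia-smi can sample utilization for you (a small sketch; adjust the interval and output file as you like):

    # Log GPU utilization and memory use once per second to a CSV file
    nvidia-smi --query-gpu=timestamp,utilization.gpu,memory.used --format=csv,noheader -l 1 >> gpu_usage.csv

That makes it easy to see afterwards which stages (e.g. DensifyPointCloud) actually touched the GPU and for how long.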
