The command torch.cuda.is_available() checks whether PyTorch can use your computer's GPU.
If it returns False, you can follow the procedure below to troubleshoot.
Step 1: confirm hardware support. Check whether your GPU supports CUDA (that is, whether PyTorch can use it).
1. Determine whether the computer has a dedicated graphics card and whether it is an NVIDIA card. You can check the card's model in Task Manager or Device Manager.
- Task Manager > Performance > GPU
- My Computer > Properties > Device Manager > Display adapters
An entry such as "NVIDIA GeForce 840M" indicates a dedicated graphics card.
2. Go to NVIDIA's list of CUDA-enabled GPUs and check whether your card's model is listed. If it is, your graphics card supports CUDA and can be used by PyTorch.
Step 2: check the graphics card's driver version and update it if necessary.
1. Open a command line and run
nvidia-smi
then check the "Driver Version" field in the output.
Note: if you see the following error:
'nvidia-smi' is not recognized as an internal or external command, operable program or batch file
please refer to "Possible problems" at the end of the article.
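If you would rather read the driver version programmatically than eyeball the nvidia-smi table, a minimal sketch like the following works. The helper name and the regex are my own, not part of any NVIDIA tooling:

```python
import re
import shutil
import subprocess

def query_driver_version():
    """Return the NVIDIA driver version string, or None if nvidia-smi
    is not on PATH (see "Possible problems" for the PATH fix)."""
    if shutil.which("nvidia-smi") is None:
        return None
    out = subprocess.run(["nvidia-smi"], capture_output=True, text=True).stdout
    # The header line of nvidia-smi's table looks like:
    # "| NVIDIA-SMI 441.22   Driver Version: 441.22   CUDA Version: 10.2 |"
    match = re.search(r"Driver Version:\s*([\d.]+)", out)
    return match.group(1) if match else None

print(query_driver_version())
```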
Generally speaking, each CUDA version requires a minimum NVIDIA driver version: the graphics driver must not be older than what your installed CUDA version requires.
CUDA's minimum driver requirements are listed in NVIDIA's CUDA release notes. For example, I installed PyTorch 1.5 with CUDA 9.2, which requires the computer's graphics driver to be version 398.26 or newer.
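This comparison can be sketched in a few lines; the 398.26 minimum below is just the CUDA 9.2 example from the text, and the function name is hypothetical:

```python
def driver_satisfies(driver_version: str, minimum: str) -> bool:
    """True when the installed driver is at least the required version.

    Compares dotted version strings component by component, so that
    "441.22" correctly counts as newer than "398.26".
    """
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(driver_version) >= as_tuple(minimum)

print(driver_satisfies("441.22", "398.26"))  # True: new enough for CUDA 9.2
print(driver_satisfies("390.77", "398.26"))  # False: driver needs updating
```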
2. If the driver version is too low, download the latest driver from NVIDIA's official driver download page and install it.
Select the appropriate graphics card model, operating system, download type, and language ("Notebooks" denotes the laptop versions of the cards).
Then click Search, download the latest driver, and install it following the instructions.
Step 3: verify the driver version and that the GPU can be used.
1. In a terminal window, run
nvidia-smi
again to confirm that the latest version was installed successfully.
2. Enter the Python environment:
conda activate pytorch
python
3. In the Python environment:
import torch
torch.cuda.is_available()
Check whether the result is True.
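The Step 3 check can be expanded into a small diagnostic that also reports the most common cause of a False result, a CPU-only PyTorch build. torch.version.cuda is None in CPU-only builds; the messages below are my own phrasing:

```python
import torch

if torch.cuda.is_available():
    print("CUDA is available, device:", torch.cuda.get_device_name(0))
elif torch.version.cuda is None:
    # A CPU-only PyTorch build can never see the GPU, no matter how
    # new the driver is; reinstall a CUDA-enabled build instead.
    print("This PyTorch build is CPU-only.")
else:
    print("Built with CUDA", torch.version.cuda,
          "but no usable GPU or driver was found.")
```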
Possible problems
1. When running the nvidia-smi command, you are told that 'nvidia-smi' is neither an internal or external command nor an operable program.
Reason: the shell cannot find the command. nvidia-smi is an executable (nvidia-smi.exe), which normally lives in the following folder:
C:\Program Files\NVIDIA Corporation\NVSMI
So unless that folder is on your PATH, you can only run the command from inside it.
Solution: add the folder to the PATH environment variable.
Right-click My Computer > Properties > Advanced system settings > Environment Variables > System variables > Path > Edit > New, then add the folder that contains nvidia-smi.exe:
C:\Program Files\NVIDIA Corporation\NVSMI
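The Environment Variables dialog is the permanent fix; for the current session only, you can also append the folder from Python. This is a sketch assuming the standard Windows install location above, and the helper name is mine:

```python
import os
import shutil

# Default location of nvidia-smi.exe installed by the Windows driver.
NVSMI_DIR = r"C:\Program Files\NVIDIA Corporation\NVSMI"

def ensure_nvidia_smi_on_path(extra_dir: str = NVSMI_DIR):
    """Return the full path to nvidia-smi if found, else None.

    Checks the current PATH first; if the tool is missing and
    extra_dir exists, appends extra_dir to PATH for this process
    only and checks again.
    """
    found = shutil.which("nvidia-smi")
    if found is None and os.path.isdir(extra_dir):
        os.environ["PATH"] += os.pathsep + extra_dir
        found = shutil.which("nvidia-smi")
    return found

print(ensure_nvidia_smi_on_path())
```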