To enable GPU acceleration in TensorFlow, you need to follow these steps:
- Install TensorFlow with GPU support: For current TensorFlow 2.x releases, the standard package already includes GPU support, so you can install it by running the following command in your terminal (the separate tensorflow-gpu package is deprecated and no longer maintained):
pip install tensorflow
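As a quick sanity check (a minimal sketch; tf.test.is_built_with_cuda only confirms the installed build was compiled with CUDA, not that a GPU is actually usable), you can run:
import tensorflow as tf
# Show which TensorFlow version is installed.
print(tf.__version__)
# True if this build of TensorFlow was compiled with CUDA support.
print(tf.test.is_built_with_cuda())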
- Verify that you have a compatible NVIDIA GPU: recent TensorFlow releases require an NVIDIA GPU with CUDA compute capability 3.5 or higher (some older releases accepted 3.0).
- Install the NVIDIA GPU drivers: Visit the NVIDIA website and download the appropriate GPU drivers for your system.
- Install CUDA toolkit and cuDNN library: CUDA is a parallel computing platform that allows you to use NVIDIA GPUs for general-purpose computing. cuDNN is a library that provides GPU-accelerated primitives for deep neural networks. Visit the NVIDIA website to download the CUDA toolkit and cuDNN library that are compatible with your version of TensorFlow.
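If you are unsure which CUDA and cuDNN versions to download, one rough check is to ask TensorFlow itself which versions its build was compiled against (this assumes a TensorFlow 2.x build; the exact keys in the returned dictionary can vary by platform):
import tensorflow as tf
# Report the CUDA and cuDNN versions this TensorFlow build was compiled against.
build_info = tf.sysconfig.get_build_info()
print(build_info.get("cuda_version"))
print(build_info.get("cudnn_version"))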
- Verify that TensorFlow detects the GPU: Once you have installed TensorFlow with GPU support and all the necessary drivers and libraries, you can check that TensorFlow sees the GPU by running the following code:
import tensorflow as tf
print(tf.config.list_physical_devices('GPU'))
This prints the list of GPUs TensorFlow can see; an empty list means the GPU setup is not being picked up.
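If you also want to confirm the GPU model and its compute capability, the experimental device-details API (available in recent TensorFlow 2.x releases) can serve as an additional check:
import tensorflow as tf
# Print the name and compute capability of each GPU TensorFlow detects.
for gpu in tf.config.list_physical_devices('GPU'):
    details = tf.config.experimental.get_device_details(gpu)
    print(gpu.name, details.get('device_name'), details.get('compute_capability'))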
- Run your TensorFlow code with GPU acceleration: By default, TensorFlow 2.x places operations on a GPU automatically when one is available, so no session setup is needed. To place operations on a specific device explicitly, wrap them in a tf.device context:
import tensorflow as tf
with tf.device('/device:GPU:0'):
    # Your TensorFlow code here
    pass
In this example, the operations inside the block are placed on the first GPU. If you have multiple GPUs, you can target a different one using /device:GPU:n, where n is the index of the GPU you want to use.
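Putting it together, here is a minimal end-to-end sketch, assuming a TensorFlow 2.x installation with at least one visible GPU; it logs device placement so you can see that the matrix multiplication actually runs on GPU:0:
import tensorflow as tf
# Log the device on which each operation runs.
tf.debugging.set_log_device_placement(True)
# Explicitly place this computation on the first GPU.
with tf.device('/device:GPU:0'):
    a = tf.random.uniform((1000, 1000))
    b = tf.random.uniform((1000, 1000))
    c = tf.matmul(a, b)
# The tensor's device string should end in GPU:0.
print(c.device)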
