Build and run a DevContainer with Python 3, CUDA 11.8, and cuDNN. This is a better way to run TensorFlow/AutoKeras on Windows with GPU support, without frustrating installation and compatibility issues. Both `.py` and `.ipynb` scripts are supported, with no need to install Anaconda or Jupyter Notebook.
- An amd64 (x64) machine with a CUDA-compatible NVIDIA graphics card
- Docker Engine or Docker Desktop (on Windows, configure `.wslconfig` to allocate more cores and memory than the defaults)
- NVIDIA graphics card driver
- NVIDIA Container Toolkit (already included with Docker Desktop on Windows)
- Visual Studio Code with the Dev Containers extension installed
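On Windows, resource limits for the WSL 2 VM that backs Docker Desktop are set in `.wslconfig` in your user profile directory. A minimal sketch is below; the core, memory, and swap values are illustrative and should be adjusted to your machine:

```ini
# %UserProfile%\.wslconfig -- resource limits for the WSL 2 VM
[wsl2]
processors=8   # logical cores exposed to WSL 2 (example value)
memory=16GB    # RAM available to WSL 2 (example value)
swap=8GB       # swap size (example value)
```

Run `wsl --shutdown` and restart Docker Desktop for the changes to take effect.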
See here for more detailed hardware and system requirements for running TensorFlow.
Be warned that some deep learning models require more GPU memory than others and may cause the Python kernel to crash. You may need to set a smaller batch size for training.
Modify `requirements.txt` to include the packages you'd like to install. `ipykernel` is required for executing IPython notebook cells in VS Code.
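As a sketch, a `requirements.txt` for this setup might look like the following; the packages and pins beyond `ipykernel` are examples, not requirements of the container:

```text
tensorflow==2.12.*   # example pin; pick a release built against CUDA 11.8
autokeras            # example package
ipykernel            # required for running notebook cells in VS Code
```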
Open the folder in VS Code, press `F1` to bring up the Command Palette, and select `Dev Containers: Open Folder in Container...`
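For reference, GPU passthrough for a container like this is configured in `.devcontainer/devcontainer.json`. The sketch below shows the general shape; the name, Dockerfile path, and post-create command are assumptions about this repo, not guaranteed to match it:

```jsonc
{
  "name": "python-cuda",
  "build": { "dockerfile": "Dockerfile" },
  // Expose all host GPUs to the container (requires NVIDIA Container Toolkit)
  "runArgs": ["--gpus", "all"],
  // Install Python dependencies after the container is created (assumed step)
  "postCreateCommand": "pip3 install -r requirements.txt"
}
```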
Wait until the DevContainer is up and running, then test whether TensorFlow can detect the GPU correctly:

```shell
python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
```
Do a test run using the example file:

```shell
python3 autokeras-test.py
```

Or open `autokeras-test.ipynb` and run the cells.
After that, simply start Docker, then open the directory in VS Code to use the built container.
- Developing inside a Container
- NVIDIA cuDNN Installation Guide
- Setup a NVIDIA DevContainer with GPU Support for Tensorflow/Keras on Windows
See here for the latest versions of `libcudnn8` and `libcudnn8-dev` used in `install-dev-tools.sh`.
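In `install-dev-tools.sh`, the cuDNN packages are typically pinned to a build compatible with CUDA 11.8. A sketch of that step is below; `8.x.x.x` is a placeholder, and the real version string must be taken from NVIDIA's package listing:

```shell
# Install pinned cuDNN runtime and dev packages for CUDA 11.8.
# Replace 8.x.x.x with the latest version from the package index.
apt-get update
apt-get install -y --no-install-recommends \
    libcudnn8=8.x.x.x-1+cuda11.8 \
    libcudnn8-dev=8.x.x.x-1+cuda11.8
```

Pinning both packages to the same version avoids apt resolving them to mismatched releases.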