# How To Install DeepLabCut 2.0

DeepLabCut can be run on Windows, Linux, or macOS (see more details in the technical considerations below).

## Step 1: Decide how you want to install DeepLabCut

There are several modes of installation. You can choose a system-wide installation (see note below), an Anaconda-environment-based installation (recommended), or the supplied Docker container (recommended for advanced Ubuntu users). One can of course use Python distributions other than Anaconda, but Anaconda is the easiest route.

## Step 2: Install Python 3 (we highly recommend Anaconda)

You need to have Python 3 installed, and we highly recommend using Anaconda to do so. Simply download the appropriate files here:

Anaconda is perhaps the easiest way to install Python and additional packages across various operating systems. With Anaconda, you install all of the dependencies into an environment on your machine.

## Step 3: Easy install for Windows and macOS (use our supplied Anaconda environments)

Please click here:

For Linux (Ubuntu 16.04 or 18.04 LTS) users, we recommend using Docker for the GPU-based steps. The Docker files are here: and here is a quick video tutorial showing how to use Docker + Anaconda together for DLC:

Otherwise, here is how you can create a tailored Anaconda environment:

conda create -n nameyourenvironment python=3.6
conda activate nameyourenvironment

(On older versions of conda on Windows, use activate nameyourenvironment instead.)

Once the environment is activated you can install DeepLabCut, wxPython, and TensorFlow.

In the terminal, type (for Ubuntu 18.04, just change the distribution in the wxPython link below):

pip install deeplabcut
pip install <wxPython wheel link for your distribution>

Install TensorFlow with GPU support or CPU support:

Because users may run DeepLabCut on either a GPU or CPUs, TensorFlow is not installed by the command pip install deeplabcut. Here is more information on how to best install TensorFlow with pip:

For CPU support:

pip install tensorflow

For GPU support:

pip install tensorflow-gpu
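If you are unsure which package applies to a given machine, one rough heuristic (our suggestion, not part of the official instructions) is to check whether the NVIDIA driver tooling is on the PATH:

```python
# Heuristic sketch: pick the TensorFlow pip package based on whether the
# NVIDIA driver utility `nvidia-smi` is visible on PATH.
# NOTE: finding nvidia-smi does NOT guarantee a working CUDA/cuDNN stack;
# it only suggests an NVIDIA driver is installed.
import shutil

def tensorflow_pip_package():
    """Return 'tensorflow-gpu' if an NVIDIA driver seems present, else 'tensorflow'."""
    return "tensorflow-gpu" if shutil.which("nvidia-smi") else "tensorflow"

print(tensorflow_pip_package())
```

The function name here is hypothetical; the point is simply that the GPU package is only worth installing once the driver stack (below) is in place.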

*If you have a GPU, you should FIRST install the NVIDIA CUDA package and an appropriate driver for your specific GPU, and only then install tensorflow-gpu. Please follow the instructions found here:, and see more tips below. The order of operations matters.

In the Nature Neuroscience paper, we used TensorFlow 1.0 with CUDA 8.0. Some other TensorFlow versions have also been tested (namely 1.2, 1.4, 1.8, and 1.10-1.13), but they might require different CUDA versions! Please check your driver/cuDNN/CUDA/TensorFlow version compatibility in this StackOverflow post.

To install a specific version, run, for example: pip install tensorflow-gpu==1.12
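To see which TensorFlow version (if any) is currently installed in your environment, you can query the package metadata from Python. This is a generic sketch using the standard-library importlib.metadata (Python 3.8+); on older Pythons, pip show tensorflow-gpu gives the same information:

```python
# Sketch: report the installed version of a pip-installed package,
# or None if it is not installed. Requires Python 3.8+.
from importlib import metadata

def installed_version(package):
    try:
        return metadata.version(package)
    except metadata.PackageNotFoundError:
        return None

# After `pip install tensorflow-gpu==1.12`, installed_version("tensorflow-gpu")
# would return the pinned version string.
```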

Here is an example of how to install the GPU driver + CUDA + TensorFlow 1.8:

FIRST, install a driver for your GPU (we recommend the 384.xx series). Find the driver here:

SECOND, install CUDA (9.0 in this example) and cuDNN:

THIRD, install TensorFlow:

Package for pip install (with GPU support; Ubuntu and Windows):

pip install tensorflow-gpu==1.8

Note: the version is pinned by appending ==1.8.

FOURTH, please check your CUDA and TensorFlow installation with the lines below.

Start a Python session (e.g. ipython), then run:

import tensorflow as tf

# log_device_placement=True makes TensorFlow print which device (CPU or GPU)
# each operation is placed on; look for your GPU in the output.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))

You can test that your GPU is being properly engaged with these additional tips.


TensorFlow: here are some additional resources that users have found helpful (posted without endorsement):



You're ready to run DeepLabCut!

Now you can run all the code from Jupyter Notebooks or Spyder; to train, just use the terminal.

## System-wide considerations

If you perform a system-wide installation and the computer has other Python packages or TensorFlow versions installed that conflict, this will overwrite them. If you have a dedicated machine for DeepLabCut, this is fine. If other applications require different versions of these libraries, a system-wide installation could break them. The solution is to create a virtual environment: a self-contained directory that contains a Python installation for a particular version of Python, plus additional packages. One way to manage virtual environments is to use conda environments (for which you need Anaconda installed).
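Conda environments are one option; the Python standard library also ships a venv module that creates exactly such a self-contained directory. A minimal sketch (the environment name and location below are arbitrary, for illustration only):

```python
# Minimal sketch: create a self-contained virtual environment with the
# standard-library `venv` module (the non-conda route).
import os
import tempfile
import venv

env_dir = os.path.join(tempfile.mkdtemp(), "dlc-env")  # hypothetical path
venv.create(env_dir, with_pip=False)  # with_pip=False keeps creation fast
print(os.path.isdir(env_dir))  # the environment directory now exists
```

In practice you would create the environment in a path of your choosing, activate it (e.g. source dlc-env/bin/activate on Linux/macOS), and only then run pip install deeplabcut so the packages stay isolated from the system Python.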

## Technical Considerations

Return to readme.