This page contains a list of the essential functions of DeepLabCut as well as demos. Each function has many optional parameters, which you can find described here. For additional assistance, you can use the help function to better understand what each function does.

Overview of the workflow:

Option 1: Demo Notebooks:

We also provide Jupyter notebooks for using DeepLabCut on both a pre-labeled dataset and on the end user's own dataset. See all the demos here!

Option 2: Using the Terminal, Start Python:

Open an ipython session and import the package by typing in the terminal (please note: if you are using macOS, you must use pythonw rather than ipython; also, GUIs are not supported in Jupyter on macOS, so please follow the instructions below!):


import deeplabcut

TIP: for every function there is an associated help document that can be viewed by adding a ? after the function name, i.e. deeplabcut.create_new_project?. To exit this help screen, type q.

Create a New Project:

deeplabcut.create_new_project('Name of the project', 'Name of the experimenter', ['Full path of video 1', 'Full path of video 2', 'Full path of video 3'], working_directory='Full path of the working directory', copy_videos=True/False)

(more details here)

Configure the Project:

(PLEASE see more details here)

mini-demo: create project and edit the yaml file

Select Frames to Label:

deeplabcut.extract_frames(config_path, 'automatic/manual', 'uniform/kmeans', crop=True/False)

(more details here) *update: as of 2.0.5, checkcropping=True is dropped; you now have the option to directly draw a rectangle over the image to crop before extraction (i.e. there is no need to manually change config.yaml and then check the cropping).
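
With the kmeans option, frames are clustered by visual appearance so that the extracted frames cover diverse poses and scenes rather than consecutive, near-identical images. The following is a minimal sketch of that idea on a toy array, not DeepLabCut's implementation; the function name and the synthetic "video" are invented for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_diverse_frames(frames, n_select, seed=0):
    """Cluster flattened frames and return one representative index per cluster.

    Visually similar frames fall into the same cluster, so picking one
    frame per cluster yields a diverse set for labeling.
    """
    flat = frames.reshape(len(frames), -1).astype(float)
    km = KMeans(n_clusters=n_select, n_init=10, random_state=seed).fit(flat)
    selected = []
    for c in range(n_select):
        members = np.flatnonzero(km.labels_ == c)
        # pick the member closest to the cluster centre
        d = np.linalg.norm(flat[members] - km.cluster_centers_[c], axis=1)
        selected.append(int(members[np.argmin(d)]))
    return sorted(selected)

# toy "video": 100 tiny grayscale frames whose content drifts over time
rng = np.random.default_rng(0)
frames = rng.normal(size=(100, 8, 8)) + np.linspace(0, 5, 100)[:, None, None]
idx = select_diverse_frames(frames, n_select=5)
print(idx)  # 5 frame indices spread across the video
```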

Label Frames:

deeplabcut.label_frames(config_path)

(more details here)

mini-demo: using the GUI to label

Check Annotated Frames:

deeplabcut.check_labels(config_path)

(more details here)

Create Training Dataset:

deeplabcut.create_training_dataset(config_path)

(more details here)

Train The Network:

deeplabcut.train_network(config_path)

(more details here)

Evaluate the Trained Network:

deeplabcut.evaluate_network(config_path, shuffle=[1], plotting=True)

(more details here)
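
Evaluation reports train and test errors in pixels, i.e. how far the network's predictions fall from the human labels. The underlying metric is essentially the mean Euclidean distance between labeled and predicted keypoints; here is a sketch of that computation with made-up coordinates (not DeepLabCut's evaluation code):

```python
import numpy as np

# Hypothetical ground-truth and predicted keypoints in pixels, shape (frames, 2)
truth = np.array([[10., 20.], [30., 40.], [50., 60.]])
pred  = np.array([[11., 21.], [29., 42.], [52., 59.]])

# Mean Euclidean pixel error: distance per frame, averaged over frames
err = np.linalg.norm(pred - truth, axis=1).mean()
print(round(err, 2))  # 1.96
```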

Video Analysis and Plotting Results:

deeplabcut.analyze_videos(config_path, ['/fullpath/project/videos/reachingvideo1.avi'], shuffle=1, save_as_csv=True)

deeplabcut.create_labeled_video(config_path, ['/analysis/project/videos/reachingvideo1.avi', '/fullpath/project/videos/reachingvideo2.avi'])
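
The predictions saved by analyze_videos use a three-level pandas column index (scorer, bodyparts, coords), with one row per video frame and x, y, likelihood per bodypart. The snippet below mocks that layout with made-up numbers to show how to pull out a trajectory; the scorer name and bodyparts are hypothetical:

```python
import numpy as np
import pandas as pd

# Mock of the table analyze_videos writes: columns are a three-level
# MultiIndex (scorer, bodyparts, coords), one row per video frame.
scorer = 'DLC_resnet50_demo'  # hypothetical scorer name
bodyparts = ['hand', 'finger1']
coords = ['x', 'y', 'likelihood']
cols = pd.MultiIndex.from_product([[scorer], bodyparts, coords],
                                  names=['scorer', 'bodyparts', 'coords'])
rng = np.random.default_rng(1)
df = pd.DataFrame(rng.random((4, len(cols))), columns=cols)

# Pull out one bodypart's x/y trace, keeping only confident frames:
hand = df[scorer]['hand']
confident = hand[hand['likelihood'] > 0.5][['x', 'y']]
print(confident)
```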


You can also filter the points by:

deeplabcut.filterpredictions(config_path, ['/fullpath/project/videos/reachingvideo1.avi'], shuffle=1)

Note, this creates a file ending in filtered.h5 that you can use for further analysis. This filtering step has many parameters, so please see the full docstring by typing: deeplabcut.filterpredictions?

(more details here)
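
Among the filtering options is a median filter, which is well suited to removing single-frame tracking glitches while leaving smooth motion untouched. The toy example below illustrates that effect on a made-up coordinate trace; it is not DeepLabCut code:

```python
import numpy as np
from scipy.signal import medfilt

# A smooth x-coordinate trace with one tracking glitch at frame 5.
x = np.array([10., 10.5, 11., 11.5, 12., 95., 13., 13.5, 14., 14.5, 15.])
x_filt = medfilt(x, kernel_size=3)  # sliding median over 3 frames

# The glitch (95.0) is replaced by the median of its window [12, 95, 13].
print(x[5], x_filt[5])  # 95.0 13.0
```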

Optional Refinement: Extract Outlier Frames:

deeplabcut.extract_outlier_frames(config_path, ['/fullpath/project/videos/reachingvideo1.avi'])

(more details here)

Optional Refinement of the Labels with the GUI:

(refinement and augmentation of the training dataset)

deeplabcut.refine_labels(config_path)

mini-demo: using the refinement GUI, a user can load the file then zoom, pan, and edit and/or remove points:

When done editing the labels, merge:

deeplabcut.merge_datasets(config_path)

(more details here)


In ipython/Jupyter notebook:

deeplabcut.nameofthefunction?

In Python:

help(deeplabcut.nameofthefunction)

Return to readme.