Lectures#
Today we have lectures on several topics: 1️⃣ how to track multiple animals with DeepLabCut (maDeepLabCut), 2️⃣ how to share models with the community and how we create SuperModels (Model Zoo), 3️⃣ how to use DeepLabCut in closed-loop behavior experiments (DeepLabCut-live), and 4️⃣ how to combine multiple camera views for 3D pose estimation (3D DeepLabCut).
Part 1: maDeepLabCut (⏳50 min)#
Multi-animal pose estimation with DeepLabCut by Prof. Alexander Mathis
For further information, check out: [LZY+22].
We also note that we recently developed a novel approach for dealing with crowded scenes: Rethinking pose estimation in crowds: overcoming the detection information-bottleneck and ambiguity.
Check out this 5-minute video summarizing the approach: [YouTube Video]
We are currently integrating this into DeepLabCut 3, which will be released in the fall! Stay tuned.
Part 2: Model Zoo (⏳12 min)#
DeepLabCut Model Zoo by Prof. Mackenzie Mathis
For further information, check out: [YMM22].
Further reading (recommended within the scope of this course):
How to contribute data + labels 🙏 - https://contrib.deeplabcut.org
How to use models: browse available models.
Part 3: DeepLabCut-live (⏳15 min)#
DeepLabCut provides additional software packages that allow you to record and stream camera data and run DeepLabCut models in real time. You can get a quick overview by watching this short clip.
For more details, please watch 👀 Gary Kane’s DLC-live Neuromatch talk. The recording also includes audience questions, so you can stop watching after minute 7.
For more information on this work, please check out [KLS+20]. Regarding inference speeds, please also check out the DLC Inference Speed Benchmark, where you can also contribute inference speeds for additional hardware.
DLC-live also provides a convenient API for combining animal pose estimation with other tasks. For example, DLC-live was used here in combination with MegaDetector to estimate the pose of animals detected in camera trap data. You can watch a demo that also uses the Model Zoo here and inspect the code here.
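To make the closed-loop idea concrete, here is a minimal, library-free sketch of the pattern DLC-live supports: a processor object receives each pose estimate as it arrives and decides whether to trigger feedback. The class and method names below (`ThresholdProcessor`, `process`) are illustrative assumptions, not the dlclive API; in practice the poses would come from a live DLC model rather than a hard-coded list.

```python
# Sketch of a closed-loop feedback processor (illustrative, not dlclive API).
from dataclasses import dataclass, field


@dataclass
class ThresholdProcessor:
    """Fires a feedback event when a keypoint crosses an x-threshold."""
    x_threshold: float
    events: list = field(default_factory=list)

    def process(self, pose, frame_index):
        # pose: list of (x, y, confidence) tuples, one per keypoint
        x, y, conf = pose[0]  # track the first keypoint only
        if conf > 0.9 and x > self.x_threshold:
            # Stand-in for real feedback, e.g. a TTL pulse to hardware.
            self.events.append(frame_index)
        return pose


# Simulated stream of pose estimates (in real use: frames from a camera,
# poses from a live model, one call per frame).
proc = ThresholdProcessor(x_threshold=100.0)
stream = [
    [(80.0, 50.0, 0.95)],   # below threshold: no event
    [(120.0, 52.0, 0.97)],  # crosses threshold with high confidence: event
    [(130.0, 55.0, 0.40)],  # low confidence: ignored
]
for i, pose in enumerate(stream):
    proc.process(pose, i)

print(proc.events)  # → [1]
```

The key design point is that the processor is decoupled from pose estimation, so the same feedback logic can be reused across cameras, models, and experiments.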
Part 4: 3D DeepLabCut (⏳13 min)#
For further information, check out: [NMC+19], [JCM+21], [WTZ+21].
If you want to expand your knowledge, check out these materials:
Excellent textbook on Multiple View Geometry in Computer Vision by Richard Hartley and Andrew Zisserman
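At the core of multi-camera 3D pose estimation is triangulation: recovering a 3D point from its 2D projections in calibrated views, as covered in Hartley & Zisserman. Below is a minimal sketch of linear (DLT) triangulation for two views; the camera matrices are toy examples, not from a real DeepLabCut calibration.

```python
# Sketch of linear (DLT) triangulation from two calibrated views.
import numpy as np


def triangulate(P1, P2, pt1, pt2):
    """Recover a 3D point from its projections in two views.

    P1, P2: 3x4 camera projection matrices; pt1, pt2: (x, y) pixel
    coordinates. Each view contributes two rows to A, and the
    homogeneous system A X = 0 is solved by SVD (least squares).
    """
    A = np.vstack([
        pt1[0] * P1[2] - P1[0],
        pt1[1] * P1[2] - P1[1],
        pt2[0] * P2[2] - P2[0],
        pt2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize


def project(P, X):
    """Project a 3D point into a view (perspective division)."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]


# Two toy cameras: one at the origin, one shifted 1 unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.5, 0.2, 2.0])
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(np.allclose(X_est, X_true))  # → True
```

With noisy 2D detections the linear solution is typically only a starting point; real pipelines refine it (e.g. by minimizing reprojection error) and use intrinsics/extrinsics obtained from camera calibration.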
References#
Daniel Joska, Liam Clark, Naoya Muramatsu, Ricardo Jericevich, Fred Nicolls, Alexander Mathis, Mackenzie W Mathis, and Amir Patel. AcinoSet: a 3D pose estimation dataset and baseline models for cheetahs in the wild. In 2021 IEEE International Conference on Robotics and Automation (ICRA), 13901–13908. IEEE, 2021.
Gary A Kane, Goncalo Lopes, Jonny L Saunders, Alexander Mathis, and Mackenzie W Mathis. Real-time, low-latency closed-loop feedback using markerless posture tracking. eLife, December 2020. URL: https://doi.org/10.7554/elife.61909, doi:10.7554/elife.61909.
Jessy Lauer, Mu Zhou, Shaokai Ye, William Menegas, Steffen Schneider, Tanmay Nath, Mohammed Mostafizur Rahman, Valentina Di Santo, Daniel Soberanes, Guoping Feng, Venkatesh N. Murthy, George Lauder, Catherine Dulac, Mackenzie Weygandt Mathis, and Alexander Mathis. Multi-animal pose estimation, identification and tracking with DeepLabCut. Nature Methods, 19(4):496–504, April 2022. URL: https://doi.org/10.1038/s41592-022-01443-0, doi:10.1038/s41592-022-01443-0.
Tanmay Nath, Alexander Mathis, An Chi Chen, Amir Patel, Matthias Bethge, and Mackenzie Weygandt Mathis. Using DeepLabCut for 3D markerless pose estimation across species and behaviors. Nature Protocols, 14(7):2152–2176, June 2019. URL: https://rdcu.be/bHpHN, doi:10.1038/s41596-019-0176-0.
Jinbao Wang, Shujie Tan, Xiantong Zhen, Shuo Xu, Feng Zheng, Zhenyu He, and Ling Shao. Deep 3D human pose estimation: a review. Computer Vision and Image Understanding, 210:103225, 2021.
Shaokai Ye, Alexander Mathis, and Mackenzie Weygandt Mathis. Panoptic animal pose estimators are zero-shot performers. arXiv preprint arXiv:2203.07436, 2022.