Anipose is an open-source toolkit for robust, markerless 3D pose estimation of animal behavior from multiple camera views. It leverages the machine learning toolbox DeepLabCut to track keypoints in 2D, then triangulates across camera views to estimate 3D pose.
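The triangulation step can be illustrated with a minimal sketch of the classic Direct Linear Transform (DLT): given each camera's 3x4 projection matrix and the tracked 2D keypoint in that view, solve for the 3D point. This is a generic DLT example with made-up camera matrices, not Anipose's actual implementation.

```python
import numpy as np

def triangulate(points_2d, proj_mats):
    """DLT triangulation of one 3D point from >= 2 camera views."""
    rows = []
    for (x, y), P in zip(points_2d, proj_mats):
        # Each view contributes two linear constraints on the homogeneous 3D point
        rows.append(x * P[2] - P[0])
        rows.append(y * P[2] - P[1])
    A = np.array(rows)
    # The solution is the right singular vector with the smallest singular value
    _, _, vh = np.linalg.svd(A)
    X = vh[-1]
    return X[:3] / X[3]  # de-homogenize

# Two toy cameras: an identity view and one shifted 1 unit along x (illustrative values)
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.2, -0.1, 4.0])
# Project the ground-truth point into each view to get the "tracked" 2D keypoints
pts = [P @ np.append(X_true, 1.0) for P in (P1, P2)]
pts = [(p[0] / p[2], p[1] / p[2]) for p in pts]
X_est = triangulate(pts, [P1, P2])
```

With more than two views the same least-squares formulation simply gains extra rows per camera, which is what makes multi-camera setups more robust to occlusion.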
Check out the Anipose preprint for more information.
The name Anipose comes from Animal Pose, but it also sounds like "any pose".
Up-to-date documentation may be found at anipose.org.
Videos of flies by Evyn Dickinson (slowed 5x), Tuthill Lab
Videos of hand by Katie Rupp
Here are some references for DeepLabCut and other things this project relies upon:
- Mathis et al, 2018, "DeepLabCut: markerless pose estimation of user-defined body parts with deep learning"
- Romero-Ramirez et al, 2018, "Speeded up detection of squared fiducial markers"
How to launch Anipose with TensorFlow 2+ on a GPU under Linux (tested on AWS with a Tesla GPU)
I followed the instructions from the Anipose installation guide. Moving from CPU to a Tesla GPU provided a 15x speed boost.
Installing tensorflow 2.12.* failed because Python 3.7 is too old, so I selected 2.11.0 from the suggested versions.
```
conda install -c conda-forge cudatoolkit=11.8.0
python3 -m pip install nvidia-cudnn-cu11==8.6.0.163 tensorflow==2.11.0
mkdir -p
python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
```
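The version fallback described above can be sketched as a small helper. The compatibility rule here (TensorFlow 2.12 requires Python 3.8+, so Python 3.7 falls back to 2.11.0) is an assumption drawn from the note above, not an authoritative compatibility matrix.

```python
def pick_tf_version(python_minor):
    """Return a pip version spec for TensorFlow given the Python 3.x minor version.

    Assumption from the note above: 2.12.* dropped Python 3.7 support,
    so on 3.7 we fall back to the suggested 2.11.0.
    """
    return "tensorflow==2.12.*" if python_minor >= 8 else "tensorflow==2.11.0"
```

For example, `pick_tf_version(7)` yields the pin used in the install command above.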