diff --git a/.circleci/test.yml b/.circleci/test.yml index e9d92f92e9..305544a94e 100644 --- a/.circleci/test.yml +++ b/.circleci/test.yml @@ -135,7 +135,7 @@ workflows: branches: ignore: - dev-1.x - - 1.x + - main pr_stage_test: when: not: @@ -147,7 +147,7 @@ workflows: branches: ignore: - dev-1.x - - 1.x + - main - build_cpu: name: minimum_version_cpu torch: 1.7.1 @@ -189,3 +189,4 @@ workflows: branches: only: - dev-1.x + - main diff --git a/README.md b/README.md index 28e82fda15..ba653d997a 100644 --- a/README.md +++ b/README.md @@ -18,19 +18,19 @@ </div> <div> </div> -[![Documentation](https://readthedocs.org/projects/mmpose/badge/?version=latest)](https://mmpose.readthedocs.io/en/1.x/?badge=latest) +[![Documentation](https://readthedocs.org/projects/mmpose/badge/?version=latest)](https://mmpose.readthedocs.io/en/latest/?badge=latest) [![actions](https://github.com/open-mmlab/mmpose/workflows/build/badge.svg)](https://github.com/open-mmlab/mmpose/actions) -[![codecov](https://codecov.io/gh/open-mmlab/mmpose/branch/1.x/graph/badge.svg)](https://codecov.io/gh/open-mmlab/mmpose) +[![codecov](https://codecov.io/gh/open-mmlab/mmpose/branch/latest/graph/badge.svg)](https://codecov.io/gh/open-mmlab/mmpose) [![PyPI](https://img.shields.io/pypi/v/mmpose)](https://pypi.org/project/mmpose/) [![LICENSE](https://img.shields.io/github/license/open-mmlab/mmpose.svg)](https://github.com/open-mmlab/mmpose/blob/master/LICENSE) [![Average time to resolve an issue](https://isitmaintained.com/badge/resolution/open-mmlab/mmpose.svg)](https://github.com/open-mmlab/mmpose/issues) [![Percentage of issues still open](https://isitmaintained.com/badge/open/open-mmlab/mmpose.svg)](https://github.com/open-mmlab/mmpose/issues) -[📘Documentation](https://mmpose.readthedocs.io/en/1.x/) | -[🛠️Installation](https://mmpose.readthedocs.io/en/1.x/installation.html) | -[👀Model Zoo](https://mmpose.readthedocs.io/en/1.x/model_zoo.html) | -[📜Papers](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/algorithms.html) | -[🆕Update News](https://mmpose.readthedocs.io/en/1.x/notes/changelog.html) | +[📘Documentation](https://mmpose.readthedocs.io/en/latest/) | +[🛠️Installation](https://mmpose.readthedocs.io/en/latest/installation.html) | +[👀Model Zoo](https://mmpose.readthedocs.io/en/latest/model_zoo.html) | +[📜Papers](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/algorithms.html) | +[🆕Update News](https://mmpose.readthedocs.io/en/latest/notes/changelog.html) | [🤔Reporting Issues](https://github.com/open-mmlab/mmpose/issues/new/choose) | [🔥RTMPose](/projects/rtmpose/) @@ -97,16 +97,9 @@ https://user-images.githubusercontent.com/15977946/124654387-0fd3c500-ded1-11eb- ## What's New -- We are excited to release **RTMPose**, a real-time pose estimation framework including: +- We are excited to release **YOLOX-Pose**, a One-Stage multi-person pose estimation model based on YOLOX. Checkout our [project page](/projects/yolox-pose/) for more details. - - A family of lightweight pose estimation models with state-of-the-art performance - - Inference APIs for Python, C++, C#, Java, etc. Easy to integrate into your applications and empower real-time stable pose estimation - - Cross-platform deployment with various backends - - A step-by-step guide to training and deploying your own models - - Checkout our [project page](/projects/rtmpose/) and [technical report](https://arxiv.org/abs/2303.07399) for more information! 
- -![rtmpose_intro](https://user-images.githubusercontent.com/13503330/219269619-935499e5-bdd9-49ea-8104-3c7796dbd862.png) +![yolox-pose_intro](https://user-images.githubusercontent.com/26127467/226655503-3cee746e-6e42-40be-82ae-6e7cae2a4c7e.jpg) - Welcome to [*projects of MMPose*](/projects/README.md), where you can access to the latest features of MMPose, and share your ideas and codes with the community at once. Contribution to MMPose will be simple and smooth: @@ -115,138 +108,160 @@ https://user-images.githubusercontent.com/15977946/124654387-0fd3c500-ded1-11eb- - Build individual projects with full power of MMPose but not bound up with heavy frameworks - Checkout new projects: - [RTMPose](/projects/rtmpose/) - - [YOLOX-Pose (coming soon)](<>) - - [MMPose4AIGC (coming soon)](<>) + - [YOLOX-Pose](/projects/yolox_pose/) + - [MMPose4AIGC](/projects/mmpose4aigc/) - Become a contributors and make MMPose greater. Start your journey from the [example project](/projects/example_project/) <br/> -- 2022-03-15: MMPose [v1.0.0rc1](https://github.com/open-mmlab/mmpose/releases/tag/v1.0.0rc1) is released. Major updates include: +- 2022-04-06: MMPose [v1.0.0](https://github.com/open-mmlab/mmpose/releases/tag/v1.0.0) is officially released, with the main updates including: - - Release [RTMPose](/projects/rtmpose/), a high-performance real-time pose estimation framework based on MMPose - - Support [ViTPose](/configs/body_2d_keypoint/topdown_heatmap/coco/vitpose_coco.md) (NeurIPS'22), [CID](/configs/body_2d_keypoint/cid/coco/hrnet_coco.md) (CVPR'22) and [DEKR](/configs/body_2d_keypoint/dekr/) (CVPR'21) - - Add [*Inferencer*](/docs/en/user_guides/inference.md#out-of-the-box-inferencer), a convenient interface for inference and visualization + - Release of [YOLOX-Pose](/projects/yolox-pose/), a One-Stage multi-person pose estimation model based on YOLOX + - Development of [MMPose for AIGC](/projects/mmpose4aigc/) based on RTMPose, generating high-quality skeleton images for Pose-guided AIGC projects + - Support for OpenPose-style skeleton visualization + - More complete and user-friendly [documentation and tutorials](https://mmpose.readthedocs.io/en/latest/overview.html) - See the full [release note](https://github.com/open-mmlab/mmpose/releases/tag/v1.0.0rc1) for more exciting updates brought by MMPose v1.0.0rc1! + Please refer to the [release notes](https://github.com/open-mmlab/mmpose/releases/tag/v1.0.0) for more updates brought by MMPose v1.0.0! ## Installation -Please refer to [installation.md](https://mmpose.readthedocs.io/en/1.x/installation.html) for more detailed installation and dataset preparation. +Please refer to [installation.md](https://mmpose.readthedocs.io/en/latest/installation.html) for more detailed installation and dataset preparation. 
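As a quick orientation before the full guide, the block below sketches the MIM-based setup that installation.md describes; the version pins and the choice of installing MMPose from PyPI rather than from source are assumptions, so treat the official installation guide as authoritative.

```shell
# Minimal environment sketch (assumed version bounds; see installation.md for the exact steps)
pip install -U openmim           # OpenMMLab's package/toolkit manager
mim install mmengine             # training engine required by MMPose 1.x
mim install "mmcv>=2.0.0"        # assumed lower bound for the 1.x series
mim install "mmpose>=1.0.0"      # or clone the repo and `pip install -e .` for development
```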
## Getting Started We provided a series of tutorials about the basic usage of MMPose for new users: -- [A 20 Minute Guide to MMPose](https://mmpose.readthedocs.io/en/1.x/guide_to_framework.html) -- [About Configs](https://mmpose.readthedocs.io/en/1.x/user_guides/configs.html) -- [Add New Dataset](https://mmpose.readthedocs.io/en/1.x/user_guides/prepare_datasets.html) -- [Keypoint Encoding & Decoding](https://mmpose.readthedocs.io/en/1.x/user_guides/codecs.html) -- [Inference with Existing Models](https://mmpose.readthedocs.io/en/1.x/user_guides/inference.html) -- [Train and Test](https://mmpose.readthedocs.io/en/1.x/user_guides/train_and_test.html) -- [Visualization Tools](https://mmpose.readthedocs.io/en/1.x/user_guides/visualization.html) -- [Other Useful Tools](https://mmpose.readthedocs.io/en/1.x/user_guides/how_to.html) +1. For the basic usage of MMPose: + + - [A 20-minute Tour to MMPose](https://mmpose.readthedocs.io/en/latest/guide_to_framework.html) + - [Demos](https://mmpose.readthedocs.io/en/latest/demos.html) + - [Inference](https://mmpose.readthedocs.io/en/latest/user_guides/inference.html) + - [Configs](https://mmpose.readthedocs.io/en/latest/user_guides/configs.html) + - [Prepare Datasets](https://mmpose.readthedocs.io/en/latest/user_guides/prepare_datasets.html) + - [Train and Test](https://mmpose.readthedocs.io/en/latest/user_guides/train_and_test.html) + +2. For developers who wish to develop based on MMPose: + + - [Learn about Codecs](https://mmpose.readthedocs.io/en/latest/advanced_guides/codecs.html) + - [Dataflow in MMPose](https://mmpose.readthedocs.io/en/latest/advanced_guides/dataflow.html) + - [Implement New Models](https://mmpose.readthedocs.io/en/latest/advanced_guides/implement_new_models.html) + - [Customize Datasets](https://mmpose.readthedocs.io/en/latest/advanced_guides/customize_datasets.html) + - [Customize Data Transforms](https://mmpose.readthedocs.io/en/latest/advanced_guides/customize_transforms.html) + - [Customize Optimizer](https://mmpose.readthedocs.io/en/latest/advanced_guides/customize_optimizer.html) + - [Customize Logging](https://mmpose.readthedocs.io/en/latest/advanced_guides/customize_logging.html) + - [How to Deploy](https://mmpose.readthedocs.io/en/latest/advanced_guides/how_to_deploy.html) + - [Model Analysis](https://mmpose.readthedocs.io/en/latest/advanced_guides/model_analysis.html) + - [Migration Guide](https://mmpose.readthedocs.io/en/latest/migration.html) + +3. For researchers and developers who are willing to contribute to MMPose: + + - [Contribution Guide](https://mmpose.readthedocs.io/en/latest/contribution_guide.html) + +4. For some common issues, we provide a FAQ list: + + - [FAQ](https://mmpose.readthedocs.io/en/latest/faq.html) ## Model Zoo Results and models are available in the **README.md** of each method's config directory. -A summary can be found in the [Model Zoo](https://mmpose.readthedocs.io/en/1.x/model_zoo.html) page. +A summary can be found in the [Model Zoo](https://mmpose.readthedocs.io/en/latest/model_zoo.html) page. 
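As a hedged illustration of how a checkpoint listed in the Model Zoo can be fetched, the sketch below uses MIM's download command; the config name is only an example borrowed from the HRNet-UDP table further down, and the command assumes `openmim` is already installed.

```shell
# Fetch a config and its pretrained weights by config name
# (example name taken from the UDP tables below; any Model Zoo config name works the same way)
mim download mmpose \
    --config td-hm_hrnet-w32_udp-regress-8xb64-210e_coco-256x192 \
    --dest ./checkpoints
```

The per-method README files under `configs/` also list direct `ckpt`/`log` download links, as in the UDP tables further down.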
<details close> <summary><b>Supported algorithms:</b></summary> -- [x] [DeepPose](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/algorithms.html#deeppose-cvpr-2014) (CVPR'2014) -- [x] [CPM](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/backbones.html#cpm-cvpr-2016) (CVPR'2016) -- [x] [Hourglass](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/backbones.html#hourglass-eccv-2016) (ECCV'2016) -- [ ] [SimpleBaseline3D](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/algorithms.html#simplebaseline3d-iccv-2017) (ICCV'2017) -- [ ] [Associative Embedding](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/algorithms.html#associative-embedding-nips-2017) (NeurIPS'2017) -- [x] [SimpleBaseline2D](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/algorithms.html#simplebaseline2d-eccv-2018) (ECCV'2018) -- [x] [DSNT](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/algorithms.html#dsnt-2018) (ArXiv'2021) -- [x] [HRNet](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/backbones.html#hrnet-cvpr-2019) (CVPR'2019) -- [x] [IPR](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/algorithms.html#ipr-eccv-2018) (ECCV'2018) -- [ ] [VideoPose3D](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/algorithms.html#videopose3d-cvpr-2019) (CVPR'2019) -- [x] [HRNetv2](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/backbones.html#hrnetv2-tpami-2019) (TPAMI'2019) -- [x] [MSPN](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/backbones.html#mspn-arxiv-2019) (ArXiv'2019) -- [x] [SCNet](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/backbones.html#scnet-cvpr-2020) (CVPR'2020) -- [ ] [HigherHRNet](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/backbones.html#higherhrnet-cvpr-2020) (CVPR'2020) -- [x] [RSN](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/backbones.html#rsn-eccv-2020) (ECCV'2020) -- [ ] [InterNet](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/algorithms.html#internet-eccv-2020) (ECCV'2020) -- [ ] [VoxelPose](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/algorithms.html#voxelpose-eccv-2020) (ECCV'2020) -- [x] [LiteHRNet](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/backbones.html#litehrnet-cvpr-2021) (CVPR'2021) -- [x] [ViPNAS](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/backbones.html#vipnas-cvpr-2021) (CVPR'2021) -- [x] [Debias-IPR](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/algorithms.html#debias-ipr-iccv-2021) (ICCV'2021) -- [x] [SimCC](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/algorithms.html#simcc-eccv-2022) (ECCV'2022) +- [x] [DeepPose](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/algorithms.html#deeppose-cvpr-2014) (CVPR'2014) +- [x] [CPM](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/backbones.html#cpm-cvpr-2016) (CVPR'2016) +- [x] [Hourglass](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/backbones.html#hourglass-eccv-2016) (ECCV'2016) +- [ ] [SimpleBaseline3D](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/algorithms.html#simplebaseline3d-iccv-2017) (ICCV'2017) +- [ ] [Associative Embedding](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/algorithms.html#associative-embedding-nips-2017) (NeurIPS'2017) +- [x] [SimpleBaseline2D](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/algorithms.html#simplebaseline2d-eccv-2018) (ECCV'2018) +- [x] [DSNT](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/algorithms.html#dsnt-2018) (ArXiv'2021) +- [x] 
[HRNet](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/backbones.html#hrnet-cvpr-2019) (CVPR'2019) +- [x] [IPR](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/algorithms.html#ipr-eccv-2018) (ECCV'2018) +- [ ] [VideoPose3D](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/algorithms.html#videopose3d-cvpr-2019) (CVPR'2019) +- [x] [HRNetv2](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/backbones.html#hrnetv2-tpami-2019) (TPAMI'2019) +- [x] [MSPN](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/backbones.html#mspn-arxiv-2019) (ArXiv'2019) +- [x] [SCNet](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/backbones.html#scnet-cvpr-2020) (CVPR'2020) +- [ ] [HigherHRNet](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/backbones.html#higherhrnet-cvpr-2020) (CVPR'2020) +- [x] [RSN](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/backbones.html#rsn-eccv-2020) (ECCV'2020) +- [ ] [InterNet](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/algorithms.html#internet-eccv-2020) (ECCV'2020) +- [ ] [VoxelPose](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/algorithms.html#voxelpose-eccv-2020) (ECCV'2020) +- [x] [LiteHRNet](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/backbones.html#litehrnet-cvpr-2021) (CVPR'2021) +- [x] [ViPNAS](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/backbones.html#vipnas-cvpr-2021) (CVPR'2021) +- [x] [Debias-IPR](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/algorithms.html#debias-ipr-iccv-2021) (ICCV'2021) +- [x] [SimCC](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/algorithms.html#simcc-eccv-2022) (ECCV'2022) </details> <details close> <summary><b>Supported techniques:</b></summary> -- [ ] [FPN](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/techniques.html#fpn-cvpr-2017) (CVPR'2017) -- [ ] [FP16](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/techniques.html#fp16-arxiv-2017) (ArXiv'2017) -- [ ] [Wingloss](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/techniques.html#wingloss-cvpr-2018) (CVPR'2018) -- [ ] [AdaptiveWingloss](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/techniques.html#adaptivewingloss-iccv-2019) (ICCV'2019) -- [x] [DarkPose](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/techniques.html#darkpose-cvpr-2020) (CVPR'2020) -- [x] [UDP](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/techniques.html#udp-cvpr-2020) (CVPR'2020) -- [ ] [Albumentations](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/techniques.html#albumentations-information-2020) (Information'2020) -- [ ] [SoftWingloss](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/techniques.html#softwingloss-tip-2021) (TIP'2021) -- [x] [RLE](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/techniques.html#rle-iccv-2021) (ICCV'2021) +- [ ] [FPN](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/techniques.html#fpn-cvpr-2017) (CVPR'2017) +- [ ] [FP16](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/techniques.html#fp16-arxiv-2017) (ArXiv'2017) +- [ ] [Wingloss](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/techniques.html#wingloss-cvpr-2018) (CVPR'2018) +- [ ] [AdaptiveWingloss](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/techniques.html#adaptivewingloss-iccv-2019) (ICCV'2019) +- [x] [DarkPose](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/techniques.html#darkpose-cvpr-2020) (CVPR'2020) +- [x] 
[UDP](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/techniques.html#udp-cvpr-2020) (CVPR'2020) +- [ ] [Albumentations](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/techniques.html#albumentations-information-2020) (Information'2020) +- [ ] [SoftWingloss](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/techniques.html#softwingloss-tip-2021) (TIP'2021) +- [x] [RLE](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/techniques.html#rle-iccv-2021) (ICCV'2021) </details> <details close> -<summary><b>Supported <a href="https://mmpose.readthedocs.io/en/1.x/dataset_zoo.html">datasets</a>:</b></summary> - -- [x] [AFLW](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/datasets.html#aflw-iccvw-2011) \[[homepage](https://www.tugraz.at/institute/icg/research/team-bischof/lrs/downloads/aflw/)\] (ICCVW'2011) -- [x] [sub-JHMDB](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/datasets.html#jhmdb-iccv-2013) \[[homepage](http://jhmdb.is.tue.mpg.de/dataset)\] (ICCV'2013) -- [x] [COFW](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/datasets.html#cofw-iccv-2013) \[[homepage](http://www.vision.caltech.edu/xpburgos/ICCV13/)\] (ICCV'2013) -- [x] [MPII](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/datasets.html#mpii-cvpr-2014) \[[homepage](http://human-pose.mpi-inf.mpg.de/)\] (CVPR'2014) -- [x] [Human3.6M](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/datasets.html#human3-6m-tpami-2014) \[[homepage](http://vision.imar.ro/human3.6m/description.php)\] (TPAMI'2014) -- [x] [COCO](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/datasets.html#coco-eccv-2014) \[[homepage](http://cocodataset.org/)\] (ECCV'2014) -- [x] [CMU Panoptic](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/datasets.html#cmu-panoptic-iccv-2015) \[[homepage](http://domedb.perception.cs.cmu.edu/)\] (ICCV'2015) -- [x] [DeepFashion](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/datasets.html#deepfashion-cvpr-2016) \[[homepage](http://mmlab.ie.cuhk.edu.hk/projects/DeepFashion/LandmarkDetection.html)\] (CVPR'2016) -- [x] [300W](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/datasets.html#300w-imavis-2016) \[[homepage](https://ibug.doc.ic.ac.uk/resources/300-W/)\] (IMAVIS'2016) -- [x] [RHD](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/datasets.html#rhd-iccv-2017) \[[homepage](https://lmb.informatik.uni-freiburg.de/resources/datasets/RenderedHandposeDataset.en.html)\] (ICCV'2017) -- [x] [CMU Panoptic HandDB](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/datasets.html#cmu-panoptic-handdb-cvpr-2017) \[[homepage](http://domedb.perception.cs.cmu.edu/handdb.html)\] (CVPR'2017) -- [x] [AI Challenger](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/datasets.html#ai-challenger-arxiv-2017) \[[homepage](https://github.com/AIChallenger/AI_Challenger_2017)\] (ArXiv'2017) -- [x] [MHP](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/datasets.html#mhp-acm-mm-2018) \[[homepage](https://lv-mhp.github.io/dataset)\] (ACM MM'2018) -- [x] [WFLW](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/datasets.html#wflw-cvpr-2018) \[[homepage](https://wywu.github.io/projects/LAB/WFLW.html)\] (CVPR'2018) -- [x] [PoseTrack18](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/datasets.html#posetrack18-cvpr-2018) \[[homepage](https://posetrack.net/users/download.php)\] (CVPR'2018) -- [x] [OCHuman](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/datasets.html#ochuman-cvpr-2019) \[[homepage](https://github.com/liruilong940607/OCHumanApi)\] (CVPR'2019) 
-- [x] [CrowdPose](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/datasets.html#crowdpose-cvpr-2019) \[[homepage](https://github.com/Jeff-sjtu/CrowdPose)\] (CVPR'2019) -- [x] [MPII-TRB](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/datasets.html#mpii-trb-iccv-2019) \[[homepage](https://github.com/kennymckormick/Triplet-Representation-of-human-Body)\] (ICCV'2019) -- [x] [FreiHand](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/datasets.html#freihand-iccv-2019) \[[homepage](https://lmb.informatik.uni-freiburg.de/projects/freihand/)\] (ICCV'2019) -- [x] [Animal-Pose](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/datasets.html#animal-pose-iccv-2019) \[[homepage](https://sites.google.com/view/animal-pose/)\] (ICCV'2019) -- [x] [OneHand10K](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/datasets.html#onehand10k-tcsvt-2019) \[[homepage](https://www.yangangwang.com/papers/WANG-MCC-2018-10.html)\] (TCSVT'2019) -- [x] [Vinegar Fly](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/datasets.html#vinegar-fly-nature-methods-2019) \[[homepage](https://github.com/jgraving/DeepPoseKit-Data)\] (Nature Methods'2019) -- [x] [Desert Locust](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/datasets.html#desert-locust-elife-2019) \[[homepage](https://github.com/jgraving/DeepPoseKit-Data)\] (Elife'2019) -- [x] [Grévy’s Zebra](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/datasets.html#grevys-zebra-elife-2019) \[[homepage](https://github.com/jgraving/DeepPoseKit-Data)\] (Elife'2019) -- [x] [ATRW](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/datasets.html#atrw-acm-mm-2020) \[[homepage](https://cvwc2019.github.io/challenge.html)\] (ACM MM'2020) -- [x] [Halpe](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/datasets.html#halpe-cvpr-2020) \[[homepage](https://github.com/Fang-Haoshu/Halpe-FullBody/)\] (CVPR'2020) -- [x] [COCO-WholeBody](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/datasets.html#coco-wholebody-eccv-2020) \[[homepage](https://github.com/jin-s13/COCO-WholeBody/)\] (ECCV'2020) -- [x] [MacaquePose](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/datasets.html#macaquepose-biorxiv-2020) \[[homepage](http://www.pri.kyoto-u.ac.jp/datasets/macaquepose/index.html)\] (bioRxiv'2020) -- [x] [InterHand2.6M](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/datasets.html#interhand2-6m-eccv-2020) \[[homepage](https://mks0601.github.io/InterHand2.6M/)\] (ECCV'2020) -- [x] [AP-10K](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/datasets.html#ap-10k-neurips-2021) \[[homepage](https://github.com/AlexTheBad/AP-10K)\] (NeurIPS'2021) -- [x] [Horse-10](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/datasets.html#horse-10-wacv-2021) \[[homepage](http://www.mackenziemathislab.org/horse10)\] (WACV'2021) +<summary><b>Supported datasets:</b></summary> + +- [x] [AFLW](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/datasets.html#aflw-iccvw-2011) \[[homepage](https://www.tugraz.at/institute/icg/research/team-bischof/lrs/downloads/aflw/)\] (ICCVW'2011) +- [x] [sub-JHMDB](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/datasets.html#jhmdb-iccv-2013) \[[homepage](http://jhmdb.is.tue.mpg.de/dataset)\] (ICCV'2013) +- [x] [COFW](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/datasets.html#cofw-iccv-2013) \[[homepage](http://www.vision.caltech.edu/xpburgos/ICCV13/)\] (ICCV'2013) +- [x] [MPII](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/datasets.html#mpii-cvpr-2014) 
\[[homepage](http://human-pose.mpi-inf.mpg.de/)\] (CVPR'2014) +- [x] [Human3.6M](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/datasets.html#human3-6m-tpami-2014) \[[homepage](http://vision.imar.ro/human3.6m/description.php)\] (TPAMI'2014) +- [x] [COCO](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/datasets.html#coco-eccv-2014) \[[homepage](http://cocodataset.org/)\] (ECCV'2014) +- [x] [CMU Panoptic](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/datasets.html#cmu-panoptic-iccv-2015) \[[homepage](http://domedb.perception.cs.cmu.edu/)\] (ICCV'2015) +- [x] [DeepFashion](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/datasets.html#deepfashion-cvpr-2016) \[[homepage](http://mmlab.ie.cuhk.edu.hk/projects/DeepFashion/LandmarkDetection.html)\] (CVPR'2016) +- [x] [300W](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/datasets.html#300w-imavis-2016) \[[homepage](https://ibug.doc.ic.ac.uk/resources/300-W/)\] (IMAVIS'2016) +- [x] [RHD](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/datasets.html#rhd-iccv-2017) \[[homepage](https://lmb.informatik.uni-freiburg.de/resources/datasets/RenderedHandposeDataset.en.html)\] (ICCV'2017) +- [x] [CMU Panoptic HandDB](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/datasets.html#cmu-panoptic-handdb-cvpr-2017) \[[homepage](http://domedb.perception.cs.cmu.edu/handdb.html)\] (CVPR'2017) +- [x] [AI Challenger](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/datasets.html#ai-challenger-arxiv-2017) \[[homepage](https://github.com/AIChallenger/AI_Challenger_2017)\] (ArXiv'2017) +- [x] [MHP](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/datasets.html#mhp-acm-mm-2018) \[[homepage](https://lv-mhp.github.io/dataset)\] (ACM MM'2018) +- [x] [WFLW](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/datasets.html#wflw-cvpr-2018) \[[homepage](https://wywu.github.io/projects/LAB/WFLW.html)\] (CVPR'2018) +- [x] [PoseTrack18](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/datasets.html#posetrack18-cvpr-2018) \[[homepage](https://posetrack.net/users/download.php)\] (CVPR'2018) +- [x] [OCHuman](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/datasets.html#ochuman-cvpr-2019) \[[homepage](https://github.com/liruilong940607/OCHumanApi)\] (CVPR'2019) +- [x] [CrowdPose](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/datasets.html#crowdpose-cvpr-2019) \[[homepage](https://github.com/Jeff-sjtu/CrowdPose)\] (CVPR'2019) +- [x] [MPII-TRB](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/datasets.html#mpii-trb-iccv-2019) \[[homepage](https://github.com/kennymckormick/Triplet-Representation-of-human-Body)\] (ICCV'2019) +- [x] [FreiHand](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/datasets.html#freihand-iccv-2019) \[[homepage](https://lmb.informatik.uni-freiburg.de/projects/freihand/)\] (ICCV'2019) +- [x] [Animal-Pose](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/datasets.html#animal-pose-iccv-2019) \[[homepage](https://sites.google.com/view/animal-pose/)\] (ICCV'2019) +- [x] [OneHand10K](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/datasets.html#onehand10k-tcsvt-2019) \[[homepage](https://www.yangangwang.com/papers/WANG-MCC-2018-10.html)\] (TCSVT'2019) +- [x] [Vinegar Fly](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/datasets.html#vinegar-fly-nature-methods-2019) \[[homepage](https://github.com/jgraving/DeepPoseKit-Data)\] (Nature Methods'2019) +- [x] [Desert 
Locust](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/datasets.html#desert-locust-elife-2019) \[[homepage](https://github.com/jgraving/DeepPoseKit-Data)\] (Elife'2019) +- [x] [Grévy’s Zebra](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/datasets.html#grevys-zebra-elife-2019) \[[homepage](https://github.com/jgraving/DeepPoseKit-Data)\] (Elife'2019) +- [x] [ATRW](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/datasets.html#atrw-acm-mm-2020) \[[homepage](https://cvwc2019.github.io/challenge.html)\] (ACM MM'2020) +- [x] [Halpe](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/datasets.html#halpe-cvpr-2020) \[[homepage](https://github.com/Fang-Haoshu/Halpe-FullBody/)\] (CVPR'2020) +- [x] [COCO-WholeBody](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/datasets.html#coco-wholebody-eccv-2020) \[[homepage](https://github.com/jin-s13/COCO-WholeBody/)\] (ECCV'2020) +- [x] [MacaquePose](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/datasets.html#macaquepose-biorxiv-2020) \[[homepage](http://www.pri.kyoto-u.ac.jp/datasets/macaquepose/index.html)\] (bioRxiv'2020) +- [x] [InterHand2.6M](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/datasets.html#interhand2-6m-eccv-2020) \[[homepage](https://mks0601.github.io/InterHand2.6M/)\] (ECCV'2020) +- [x] [AP-10K](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/datasets.html#ap-10k-neurips-2021) \[[homepage](https://github.com/AlexTheBad/AP-10K)\] (NeurIPS'2021) +- [x] [Horse-10](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/datasets.html#horse-10-wacv-2021) \[[homepage](http://www.mackenziemathislab.org/horse10)\] (WACV'2021) </details> <details close> <summary><b>Supported backbones:</b></summary> -- [x] [AlexNet](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/backbones.html#alexnet-neurips-2012) (NeurIPS'2012) -- [x] [VGG](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/backbones.html#vgg-iclr-2015) (ICLR'2015) -- [x] [ResNet](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/backbones.html#resnet-cvpr-2016) (CVPR'2016) -- [x] [ResNext](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/backbones.html#resnext-cvpr-2017) (CVPR'2017) -- [x] [SEResNet](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/backbones.html#seresnet-cvpr-2018) (CVPR'2018) -- [x] [ShufflenetV1](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/backbones.html#shufflenetv1-cvpr-2018) (CVPR'2018) -- [x] [ShufflenetV2](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/backbones.html#shufflenetv2-eccv-2018) (ECCV'2018) -- [x] [MobilenetV2](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/backbones.html#mobilenetv2-cvpr-2018) (CVPR'2018) -- [x] [ResNetV1D](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/backbones.html#resnetv1d-cvpr-2019) (CVPR'2019) -- [x] [ResNeSt](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/backbones.html#resnest-arxiv-2020) (ArXiv'2020) -- [x] [Swin](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/backbones.html#swin-cvpr-2021) (CVPR'2021) -- [x] [HRFormer](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/backbones.html#hrformer-nips-2021) (NIPS'2021) -- [x] [PVT](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/backbones.html#pvt-iccv-2021) (ICCV'2021) -- [x] [PVTV2](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/backbones.html#pvtv2-cvmj-2022) (CVMJ'2022) +- [x] [AlexNet](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/backbones.html#alexnet-neurips-2012) (NeurIPS'2012) +- [x] 
[VGG](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/backbones.html#vgg-iclr-2015) (ICLR'2015) +- [x] [ResNet](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/backbones.html#resnet-cvpr-2016) (CVPR'2016) +- [x] [ResNext](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/backbones.html#resnext-cvpr-2017) (CVPR'2017) +- [x] [SEResNet](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/backbones.html#seresnet-cvpr-2018) (CVPR'2018) +- [x] [ShufflenetV1](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/backbones.html#shufflenetv1-cvpr-2018) (CVPR'2018) +- [x] [ShufflenetV2](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/backbones.html#shufflenetv2-eccv-2018) (ECCV'2018) +- [x] [MobilenetV2](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/backbones.html#mobilenetv2-cvpr-2018) (CVPR'2018) +- [x] [ResNetV1D](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/backbones.html#resnetv1d-cvpr-2019) (CVPR'2019) +- [x] [ResNeSt](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/backbones.html#resnest-arxiv-2020) (ArXiv'2020) +- [x] [Swin](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/backbones.html#swin-cvpr-2021) (CVPR'2021) +- [x] [HRFormer](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/backbones.html#hrformer-nips-2021) (NIPS'2021) +- [x] [PVT](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/backbones.html#pvt-iccv-2021) (ICCV'2021) +- [x] [PVTV2](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/backbones.html#pvtv2-cvmj-2022) (CVMJ'2022) </details> @@ -256,7 +271,7 @@ We will keep up with the latest progress of the community, and support more popu ## Contributing -We appreciate all contributions to improve MMPose. Please refer to [CONTRIBUTING.md](https://mmpose.readthedocs.io/en/1.x/contribution_guide.html) for the contributing guideline. +We appreciate all contributions to improve MMPose. Please refer to [CONTRIBUTING.md](https://mmpose.readthedocs.io/en/latest/contribution_guide.html) for the contributing guideline. 
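For contributors following the pointer above, a rough sketch of the usual development setup is given here; it assumes the repository ships pre-commit hooks as described in the contribution guide, and CONTRIBUTING.md remains the source of truth.

```shell
# Assumed contributor setup; see CONTRIBUTING.md for the authoritative workflow
git clone https://github.com/<your-username>/mmpose.git   # clone your fork
cd mmpose
pip install -e .                 # editable install for local development
pip install pre-commit
pre-commit install               # enable the style hooks on every commit (assumes hooks are configured)
```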
## Acknowledgement diff --git a/README_CN.md b/README_CN.md index aed7e53942..53a6126890 100644 --- a/README_CN.md +++ b/README_CN.md @@ -18,19 +18,19 @@ </div> <div> </div> -[![Documentation](https://readthedocs.org/projects/mmpose/badge/?version=latest)](https://mmpose.readthedocs.io/en/1.x/?badge=latest) +[![Documentation](https://readthedocs.org/projects/mmpose/badge/?version=latest)](https://mmpose.readthedocs.io/en/latest/?badge=latest) [![actions](https://github.com/open-mmlab/mmpose/workflows/build/badge.svg)](https://github.com/open-mmlab/mmpose/actions) -[![codecov](https://codecov.io/gh/open-mmlab/mmpose/branch/1.x/graph/badge.svg)](https://codecov.io/gh/open-mmlab/mmpose) +[![codecov](https://codecov.io/gh/open-mmlab/mmpose/branch/latest/graph/badge.svg)](https://codecov.io/gh/open-mmlab/mmpose) [![PyPI](https://img.shields.io/pypi/v/mmpose)](https://pypi.org/project/mmpose/) [![LICENSE](https://img.shields.io/github/license/open-mmlab/mmpose.svg)](https://github.com/open-mmlab/mmpose/blob/master/LICENSE) [![Average time to resolve an issue](https://isitmaintained.com/badge/resolution/open-mmlab/mmpose.svg)](https://github.com/open-mmlab/mmpose/issues) [![Percentage of issues still open](https://isitmaintained.com/badge/open/open-mmlab/mmpose.svg)](https://github.com/open-mmlab/mmpose/issues) -[📘文档](https://mmpose.readthedocs.io/zh_CN/1.x/) | -[🛠️安装](https://mmpose.readthedocs.io/zh_CN/1.x/installation.html) | -[👀模型库](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo.html) | -[📜论文库](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/algorithms.html) | -[🆕更新日志](https://mmpose.readthedocs.io/zh_CN/1.x/notes/changelog.html) | +[📘文档](https://mmpose.readthedocs.io/zh_CN/latest/) | +[🛠️安装](https://mmpose.readthedocs.io/zh_CN/latest/installation.html) | +[👀模型库](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo.html) | +[📜论文库](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/algorithms.html) | +[🆕更新日志](https://mmpose.readthedocs.io/zh_CN/latest/notes/changelog.html) | [🤔报告问题](https://github.com/open-mmlab/mmpose/issues/new/choose) | [🔥RTMPose](/projects/rtmpose/) @@ -95,16 +95,9 @@ https://user-images.githubusercontent.com/15977946/124654387-0fd3c500-ded1-11eb- ## 最新进展 -- 我们发布了 **RTMPose**,一个高性能实时多人姿态检测框架。具体包括: +- 我们发布了 **YOLOX-Pose**,一个基于 YOLOX 的 One-Stage 多人姿态估计模型。更多信息敬请参阅 YOLOX-Pose [项目主页](/projects/yolox_pose/) - - 一组新的轻量化姿态估计模型,在不同算力条件下达到 SOTA 的精度性能 - - 支持多语言(Python, C++, C#, Java, etc)的模型推理接口,可以轻松集成到您的应用中以支持实时、稳定的姿态估计 - - 跨平台,多后端的模型部署支持 - - 提供极易上手的教程,帮助您训练和部署自己的模型 - - 更多信息敬请参阅 RTMPose [项目主页](/projects/rtmpose/) 和 [技术报告](https://arxiv.org/abs/2303.07399) - -![rtmpose_intro](https://user-images.githubusercontent.com/13503330/219269619-935499e5-bdd9-49ea-8104-3c7796dbd862.png) +![yolox-pose_intro](https://user-images.githubusercontent.com/26127467/226655503-3cee746e-6e42-40be-82ae-6e7cae2a4c7e.jpg) - 欢迎使用 [*MMPose 项目*](/projects/README.md)。在这里,您可以发现 MMPose 中的最新功能和算法,并且可以通过最快的方式与社区分享自己的创意和代码实现。向 MMPose 中添加新功能从此变得简单丝滑: @@ -113,138 +106,160 @@ https://user-images.githubusercontent.com/15977946/124654387-0fd3c500-ded1-11eb- - 通过独立项目的形式,利用 MMPose 的强大功能,同时不被代码框架所束缚 - 最新添加的项目包括: - [RTMPose](/projects/rtmpose/) - - [YOLOX-Pose (coming soon)](<>) - - [MMPose4AIGC (coming soon)](<>) + - [YOLOX-Pose](/projects/yolox_pose/) + - [MMPose4AIGC](/projects/mmpose4aigc/) - 从简单的 [示例项目](/projects/example_project/) 开启您的 MMPose 代码贡献者之旅吧,让我们共同打造更好用的 MMPose! 
<br/> -- 2022-03-15: MMPose [v1.0.0rc1](https://github.com/open-mmlab/mmpose/releases/tag/v1.0.0rc1) 正式发布了,主要更新包括: +- 2022-04-06:MMPose [v1.0.0](https://github.com/open-mmlab/mmpose/releases/tag/v1.0.0) 正式发布了,主要更新包括: - - 发布了 [RTMPose](/projects/rtmpose/),一个高性能实时多人姿态估计算法框架 - - 支持了多个新算法: [ViTPose](/configs/body_2d_keypoint/topdown_heatmap/coco/vitpose_coco.md) (NeurIPS'22), [CID](/configs/body_2d_keypoint/cid/coco/hrnet_coco.md) (CVPR'22) and [DEKR](/configs/body_2d_keypoint/dekr/) (CVPR'21) - - 增加了 [*Inferencer*](/docs/en/user_guides/inference.md#out-of-the-box-inferencer),一个非常便捷的模型推理接口,通过 1 行代码完成模型选择、权重加载、模型推理和结果可视化。 + - 发布了 [YOLOX-Pose](/projects/yolox-pose/),一个基于 YOLOX 的 One-Stage 多人姿态估计模型 + - 基于 RTMPose 开发的 [MMPose for AIGC](/projects/mmpose4aigc/),生成高质量骨架图片用于 Pose-guided AIGC 项目 + - 支持 OpenPose 风格的骨架可视化 + - 更加完善、友好的 [文档和教程](https://mmpose.readthedocs.io/zh_CN/latest/overview.html) - 请查看完整的 [版本说明](https://github.com/open-mmlab/mmpose/releases/tag/v1.0.0rc1) 以了解更多 MMPose v1.0.0rc1 带来的更新! + 请查看完整的 [版本说明](https://github.com/open-mmlab/mmpose/releases/tag/v1.0.0) 以了解更多 MMPose v1.0.0 带来的更新! ## 安装 -关于安装的详细说明请参考[安装文档](https://mmpose.readthedocs.io/zh_CN/1.x/installation.html)。 +关于安装的详细说明请参考[安装文档](https://mmpose.readthedocs.io/zh_CN/latest/installation.html)。 ## 教程 我们提供了一系列简明的教程,帮助 MMPose 的新用户轻松上手使用: -- [20 分钟了解 MMPose 架构设计](https://mmpose.readthedocs.io/zh_CN/1.x/guide_to_framework.html) -- [学习配置文件](https://mmpose.readthedocs.io/zh_CN/1.x/user_guides/configs.html) -- [准备数据集](https://mmpose.readthedocs.io/zh_CN/1.x/user_guides/prepare_datasets.html) -- [关键点编码、解码机制](https://mmpose.readthedocs.io/zh_CN/1.x/user_guides/codecs.html) -- [使用现有模型推理](https://mmpose.readthedocs.io/zh_CN/1.x/user_guides/inference.html) -- [模型训练和测试](https://mmpose.readthedocs.io/zh_CN/1.x/user_guides/train_and_test.html) -- [可视化工具](https://mmpose.readthedocs.io/zh_CN/1.x/user_guides/visualization.html) -- [其他实用工具](https://mmpose.readthedocs.io/zh_CN/1.x/user_guides/how_to.html) +1. MMPose 的基本使用方法: + + - [20 分钟上手教程](https://mmpose.readthedocs.io/zh_CN/latest/guide_to_framework.html) + - [Demos](https://mmpose.readthedocs.io/zh_CN/latest/demos.html) + - [模型推理](https://mmpose.readthedocs.io/zh_CN/latest/user_guides/inference.html) + - [配置文件](https://mmpose.readthedocs.io/zh_CN/latest/user_guides/configs.html) + - [准备数据集](https://mmpose.readthedocs.io/zh_CN/latest/user_guides/prepare_datasets.html) + - [训练与测试](https://mmpose.readthedocs.io/zh_CN/latest/user_guides/train_and_test.html) + +2. 对于希望基于 MMPose 进行开发的研究者和开发者: + + - [编解码器](https://mmpose.readthedocs.io/zh_CN/latest/advanced_guides/codecs.html) + - [数据流](https://mmpose.readthedocs.io/zh_CN/latest/advanced_guides/dataflow.html) + - [实现新模型](https://mmpose.readthedocs.io/zh_CN/latest/advanced_guides/implement_new_models.html) + - [自定义数据集](https://mmpose.readthedocs.io/zh_CN/latest/advanced_guides/customize_datasets.html) + - [自定义数据变换](https://mmpose.readthedocs.io/zh_CN/latest/advanced_guides/customize_transforms.html) + - [自定义优化器](https://mmpose.readthedocs.io/zh_CN/latest/advanced_guides/customize_optimizer.html) + - [自定义日志](https://mmpose.readthedocs.io/zh_CN/latest/advanced_guides/customize_logging.html) + - [模型部署](https://mmpose.readthedocs.io/zh_CN/latest/advanced_guides/how_to_deploy.html) + - [模型分析工具](https://mmpose.readthedocs.io/zh_CN/latest/advanced_guides/model_analysis.html) + - [迁移指南](https://mmpose.readthedocs.io/zh_CN/latest/migration.html) + +3. 
对于希望加入开源社区,向 MMPose 贡献代码的研究者和开发者: + + - [参与贡献代码](https://mmpose.readthedocs.io/zh_CN/latest/contribution_guide.html) + +4. 对于使用过程中的常见问题: + + - [FAQ](https://mmpose.readthedocs.io/zh_CN/latest/faq.html) ## 模型库 各个模型的结果和设置都可以在对应的 config(配置)目录下的 **README.md** 中查看。 -整体的概况也可也在 [模型库](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo.html) 页面中查看。 +整体的概况也可也在 [模型库](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo.html) 页面中查看。 <details close> <summary><b>支持的算法</b></summary> -- [x] [DeepPose](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/algorithms.html#deeppose-cvpr-2014) (CVPR'2014) -- [x] [CPM](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/backbones.html#cpm-cvpr-2016) (CVPR'2016) -- [x] [Hourglass](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/backbones.html#hourglass-eccv-2016) (ECCV'2016) -- [ ] [SimpleBaseline3D](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/algorithms.html#simplebaseline3d-iccv-2017) (ICCV'2017) -- [ ] [Associative Embedding](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/algorithms.html#associative-embedding-nips-2017) (NeurIPS'2017) -- [x] [SimpleBaseline2D](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/algorithms.html#simplebaseline2d-eccv-2018) (ECCV'2018) -- [x] [DSNT](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/algorithms.html#dsnt-2018) (ArXiv'2021) -- [x] [HRNet](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/backbones.html#hrnet-cvpr-2019) (CVPR'2019) -- [x] [IPR](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/algorithms.html#ipr-eccv-2018) (ECCV'2018) -- [ ] [VideoPose3D](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/algorithms.html#videopose3d-cvpr-2019) (CVPR'2019) -- [x] [HRNetv2](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/backbones.html#hrnetv2-tpami-2019) (TPAMI'2019) -- [x] [MSPN](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/backbones.html#mspn-arxiv-2019) (ArXiv'2019) -- [x] [SCNet](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/backbones.html#scnet-cvpr-2020) (CVPR'2020) -- [ ] [HigherHRNet](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/backbones.html#higherhrnet-cvpr-2020) (CVPR'2020) -- [x] [RSN](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/backbones.html#rsn-eccv-2020) (ECCV'2020) -- [ ] [InterNet](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/algorithms.html#internet-eccv-2020) (ECCV'2020) -- [ ] [VoxelPose](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/algorithms.html#voxelpose-eccv-2020) (ECCV'2020) -- [x] [LiteHRNet](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/backbones.html#litehrnet-cvpr-2021) (CVPR'2021) -- [x] [ViPNAS](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/backbones.html#vipnas-cvpr-2021) (CVPR'2021) -- [x] [Debias-IPR](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/algorithms.html#debias-ipr-iccv-2021) (ICCV'2021) -- [x] [SimCC](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/algorithms.html#simcc-eccv-2022) (ECCV'2022) +- [x] [DeepPose](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/algorithms.html#deeppose-cvpr-2014) (CVPR'2014) +- [x] [CPM](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/backbones.html#cpm-cvpr-2016) (CVPR'2016) +- [x] [Hourglass](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/backbones.html#hourglass-eccv-2016) (ECCV'2016) +- [ ] 
[SimpleBaseline3D](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/algorithms.html#simplebaseline3d-iccv-2017) (ICCV'2017) +- [ ] [Associative Embedding](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/algorithms.html#associative-embedding-nips-2017) (NeurIPS'2017) +- [x] [SimpleBaseline2D](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/algorithms.html#simplebaseline2d-eccv-2018) (ECCV'2018) +- [x] [DSNT](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/algorithms.html#dsnt-2018) (ArXiv'2021) +- [x] [HRNet](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/backbones.html#hrnet-cvpr-2019) (CVPR'2019) +- [x] [IPR](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/algorithms.html#ipr-eccv-2018) (ECCV'2018) +- [ ] [VideoPose3D](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/algorithms.html#videopose3d-cvpr-2019) (CVPR'2019) +- [x] [HRNetv2](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/backbones.html#hrnetv2-tpami-2019) (TPAMI'2019) +- [x] [MSPN](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/backbones.html#mspn-arxiv-2019) (ArXiv'2019) +- [x] [SCNet](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/backbones.html#scnet-cvpr-2020) (CVPR'2020) +- [ ] [HigherHRNet](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/backbones.html#higherhrnet-cvpr-2020) (CVPR'2020) +- [x] [RSN](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/backbones.html#rsn-eccv-2020) (ECCV'2020) +- [ ] [InterNet](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/algorithms.html#internet-eccv-2020) (ECCV'2020) +- [ ] [VoxelPose](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/algorithms.html#voxelpose-eccv-2020) (ECCV'2020) +- [x] [LiteHRNet](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/backbones.html#litehrnet-cvpr-2021) (CVPR'2021) +- [x] [ViPNAS](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/backbones.html#vipnas-cvpr-2021) (CVPR'2021) +- [x] [Debias-IPR](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/algorithms.html#debias-ipr-iccv-2021) (ICCV'2021) +- [x] [SimCC](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/algorithms.html#simcc-eccv-2022) (ECCV'2022) </details> <details close> <summary><b>支持的技术</b></summary> -- [ ] [FPN](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/techniques.html#fpn-cvpr-2017) (CVPR'2017) -- [ ] [FP16](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/techniques.html#fp16-arxiv-2017) (ArXiv'2017) -- [ ] [Wingloss](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/techniques.html#wingloss-cvpr-2018) (CVPR'2018) -- [ ] [AdaptiveWingloss](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/techniques.html#adaptivewingloss-iccv-2019) (ICCV'2019) -- [x] [DarkPose](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/techniques.html#darkpose-cvpr-2020) (CVPR'2020) -- [x] [UDP](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/techniques.html#udp-cvpr-2020) (CVPR'2020) -- [ ] [Albumentations](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/techniques.html#albumentations-information-2020) (Information'2020) -- [ ] [SoftWingloss](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/techniques.html#softwingloss-tip-2021) (TIP'2021) -- [x] [RLE](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/techniques.html#rle-iccv-2021) (ICCV'2021) +- [ ] 
[FPN](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/techniques.html#fpn-cvpr-2017) (CVPR'2017) +- [ ] [FP16](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/techniques.html#fp16-arxiv-2017) (ArXiv'2017) +- [ ] [Wingloss](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/techniques.html#wingloss-cvpr-2018) (CVPR'2018) +- [ ] [AdaptiveWingloss](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/techniques.html#adaptivewingloss-iccv-2019) (ICCV'2019) +- [x] [DarkPose](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/techniques.html#darkpose-cvpr-2020) (CVPR'2020) +- [x] [UDP](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/techniques.html#udp-cvpr-2020) (CVPR'2020) +- [ ] [Albumentations](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/techniques.html#albumentations-information-2020) (Information'2020) +- [ ] [SoftWingloss](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/techniques.html#softwingloss-tip-2021) (TIP'2021) +- [x] [RLE](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/techniques.html#rle-iccv-2021) (ICCV'2021) </details> <details close> -<summary><b>支持的<a href="https://mmpose.readthedocs.io/zh_CN/1.x/datasets.html">数据集</a></b></summary> - -- [x] [AFLW](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/datasets.html#aflw-iccvw-2011) \[[主页](https://www.tugraz.at/institute/icg/research/team-bischof/lrs/downloads/aflw/)\] (ICCVW'2011) -- [x] [sub-JHMDB](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/datasets.html#jhmdb-iccv-2013) \[[主页](http://jhmdb.is.tue.mpg.de/dataset)\] (ICCV'2013) -- [x] [COFW](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/datasets.html#cofw-iccv-2013) \[[主页](http://www.vision.caltech.edu/xpburgos/ICCV13/)\] (ICCV'2013) -- [x] [MPII](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/datasets.html#mpii-cvpr-2014) \[[主页](http://human-pose.mpi-inf.mpg.de/)\] (CVPR'2014) -- [x] [Human3.6M](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/datasets.html#human3-6m-tpami-2014) \[[主页](http://vision.imar.ro/human3.6m/description.php)\] (TPAMI'2014) -- [x] [COCO](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/datasets.html#coco-eccv-2014) \[[主页](http://cocodataset.org/)\] (ECCV'2014) -- [x] [CMU Panoptic](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/datasets.html#cmu-panoptic-iccv-2015) (ICCV'2015) -- [x] [DeepFashion](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/datasets.html#deepfashion-cvpr-2016) \[[主页](http://mmlab.ie.cuhk.edu.hk/projects/DeepFashion/LandmarkDetection.html)\] (CVPR'2016) -- [x] [300W](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/datasets.html#300w-imavis-2016) \[[主页](https://ibug.doc.ic.ac.uk/resources/300-W/)\] (IMAVIS'2016) -- [x] [RHD](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/datasets.html#rhd-iccv-2017) \[[主页](https://lmb.informatik.uni-freiburg.de/resources/datasets/RenderedHandposeDataset.en.html)\] (ICCV'2017) -- [x] [CMU Panoptic](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/datasets.html#cmu-panoptic-iccv-2015) \[[主页](http://domedb.perception.cs.cmu.edu/)\] (ICCV'2015) -- [x] [AI Challenger](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/datasets.html#ai-challenger-arxiv-2017) \[[主页](https://github.com/AIChallenger/AI_Challenger_2017)\] (ArXiv'2017) -- [x] [MHP](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/datasets.html#mhp-acm-mm-2018) \[[主页](https://lv-mhp.github.io/dataset)\] (ACM MM'2018) -- [x] 
[WFLW](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/datasets.html#wflw-cvpr-2018) \[[主页](https://wywu.github.io/projects/LAB/WFLW.html)\] (CVPR'2018) -- [x] [PoseTrack18](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/datasets.html#posetrack18-cvpr-2018) \[[主页](https://posetrack.net/users/download.php)\] (CVPR'2018) -- [x] [OCHuman](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/datasets.html#ochuman-cvpr-2019) \[[主页](https://github.com/liruilong940607/OCHumanApi)\] (CVPR'2019) -- [x] [CrowdPose](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/datasets.html#crowdpose-cvpr-2019) \[[主页](https://github.com/Jeff-sjtu/CrowdPose)\] (CVPR'2019) -- [x] [MPII-TRB](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/datasets.html#mpii-trb-iccv-2019) \[[主页](https://github.com/kennymckormick/Triplet-Representation-of-human-Body)\] (ICCV'2019) -- [x] [FreiHand](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/datasets.html#freihand-iccv-2019) \[[主页](https://lmb.informatik.uni-freiburg.de/projects/freihand/)\] (ICCV'2019) -- [x] [Animal-Pose](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/datasets.html#animal-pose-iccv-2019) \[[主页](https://sites.google.com/view/animal-pose/)\] (ICCV'2019) -- [x] [OneHand10K](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/datasets.html#onehand10k-tcsvt-2019) \[[主页](https://www.yangangwang.com/papers/WANG-MCC-2018-10.html)\] (TCSVT'2019) -- [x] [Vinegar Fly](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/datasets.html#vinegar-fly-nature-methods-2019) \[[主页](https://github.com/jgraving/DeepPoseKit-Data)\] (Nature Methods'2019) -- [x] [Desert Locust](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/datasets.html#desert-locust-elife-2019) \[[主页](https://github.com/jgraving/DeepPoseKit-Data)\] (Elife'2019) -- [x] [Grévy’s Zebra](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/datasets.html#grevys-zebra-elife-2019) \[[主页](https://github.com/jgraving/DeepPoseKit-Data)\] (Elife'2019) -- [x] [ATRW](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/datasets.html#atrw-acm-mm-2020) \[[主页](https://cvwc2019.github.io/challenge.html)\] (ACM MM'2020) -- [x] [Halpe](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/datasets.html#halpe-cvpr-2020) \[[主页](https://github.com/Fang-Haoshu/Halpe-FullBody/)\] (CVPR'2020) -- [x] [COCO-WholeBody](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/datasets.html#coco-wholebody-eccv-2020) \[[主页](https://github.com/jin-s13/COCO-WholeBody/)\] (ECCV'2020) -- [x] [MacaquePose](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/datasets.html#macaquepose-biorxiv-2020) \[[主页](http://www.pri.kyoto-u.ac.jp/datasets/macaquepose/index.html)\] (bioRxiv'2020) -- [x] [InterHand2.6M](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/datasets.html#interhand2-6m-eccv-2020) \[[主页](https://mks0601.github.io/InterHand2.6M/)\] (ECCV'2020) -- [x] [AP-10K](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/datasets.html#ap-10k-neurips-2021) \[[主页](https://github.com/AlexTheBad/AP-10K)\] (NeurIPS'2021) -- [x] [Horse-10](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/datasets.html#horse-10-wacv-2021) \[[主页](http://www.mackenziemathislab.org/horse10)\] (WACV'2021) +<summary><b>支持的数据集</b></summary> + +- [x] [AFLW](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/datasets.html#aflw-iccvw-2011) \[[主页](https://www.tugraz.at/institute/icg/research/team-bischof/lrs/downloads/aflw/)\] (ICCVW'2011) +- [x] 
[sub-JHMDB](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/datasets.html#jhmdb-iccv-2013) \[[主页](http://jhmdb.is.tue.mpg.de/dataset)\] (ICCV'2013) +- [x] [COFW](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/datasets.html#cofw-iccv-2013) \[[主页](http://www.vision.caltech.edu/xpburgos/ICCV13/)\] (ICCV'2013) +- [x] [MPII](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/datasets.html#mpii-cvpr-2014) \[[主页](http://human-pose.mpi-inf.mpg.de/)\] (CVPR'2014) +- [x] [Human3.6M](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/datasets.html#human3-6m-tpami-2014) \[[主页](http://vision.imar.ro/human3.6m/description.php)\] (TPAMI'2014) +- [x] [COCO](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/datasets.html#coco-eccv-2014) \[[主页](http://cocodataset.org/)\] (ECCV'2014) +- [x] [CMU Panoptic](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/datasets.html#cmu-panoptic-iccv-2015) (ICCV'2015) +- [x] [DeepFashion](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/datasets.html#deepfashion-cvpr-2016) \[[主页](http://mmlab.ie.cuhk.edu.hk/projects/DeepFashion/LandmarkDetection.html)\] (CVPR'2016) +- [x] [300W](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/datasets.html#300w-imavis-2016) \[[主页](https://ibug.doc.ic.ac.uk/resources/300-W/)\] (IMAVIS'2016) +- [x] [RHD](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/datasets.html#rhd-iccv-2017) \[[主页](https://lmb.informatik.uni-freiburg.de/resources/datasets/RenderedHandposeDataset.en.html)\] (ICCV'2017) +- [x] [CMU Panoptic](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/datasets.html#cmu-panoptic-iccv-2015) \[[主页](http://domedb.perception.cs.cmu.edu/)\] (ICCV'2015) +- [x] [AI Challenger](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/datasets.html#ai-challenger-arxiv-2017) \[[主页](https://github.com/AIChallenger/AI_Challenger_2017)\] (ArXiv'2017) +- [x] [MHP](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/datasets.html#mhp-acm-mm-2018) \[[主页](https://lv-mhp.github.io/dataset)\] (ACM MM'2018) +- [x] [WFLW](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/datasets.html#wflw-cvpr-2018) \[[主页](https://wywu.github.io/projects/LAB/WFLW.html)\] (CVPR'2018) +- [x] [PoseTrack18](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/datasets.html#posetrack18-cvpr-2018) \[[主页](https://posetrack.net/users/download.php)\] (CVPR'2018) +- [x] [OCHuman](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/datasets.html#ochuman-cvpr-2019) \[[主页](https://github.com/liruilong940607/OCHumanApi)\] (CVPR'2019) +- [x] [CrowdPose](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/datasets.html#crowdpose-cvpr-2019) \[[主页](https://github.com/Jeff-sjtu/CrowdPose)\] (CVPR'2019) +- [x] [MPII-TRB](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/datasets.html#mpii-trb-iccv-2019) \[[主页](https://github.com/kennymckormick/Triplet-Representation-of-human-Body)\] (ICCV'2019) +- [x] [FreiHand](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/datasets.html#freihand-iccv-2019) \[[主页](https://lmb.informatik.uni-freiburg.de/projects/freihand/)\] (ICCV'2019) +- [x] [Animal-Pose](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/datasets.html#animal-pose-iccv-2019) \[[主页](https://sites.google.com/view/animal-pose/)\] (ICCV'2019) +- [x] [OneHand10K](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/datasets.html#onehand10k-tcsvt-2019) 
\[[主页](https://www.yangangwang.com/papers/WANG-MCC-2018-10.html)\] (TCSVT'2019) +- [x] [Vinegar Fly](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/datasets.html#vinegar-fly-nature-methods-2019) \[[主页](https://github.com/jgraving/DeepPoseKit-Data)\] (Nature Methods'2019) +- [x] [Desert Locust](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/datasets.html#desert-locust-elife-2019) \[[主页](https://github.com/jgraving/DeepPoseKit-Data)\] (Elife'2019) +- [x] [Grévy’s Zebra](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/datasets.html#grevys-zebra-elife-2019) \[[主页](https://github.com/jgraving/DeepPoseKit-Data)\] (Elife'2019) +- [x] [ATRW](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/datasets.html#atrw-acm-mm-2020) \[[主页](https://cvwc2019.github.io/challenge.html)\] (ACM MM'2020) +- [x] [Halpe](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/datasets.html#halpe-cvpr-2020) \[[主页](https://github.com/Fang-Haoshu/Halpe-FullBody/)\] (CVPR'2020) +- [x] [COCO-WholeBody](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/datasets.html#coco-wholebody-eccv-2020) \[[主页](https://github.com/jin-s13/COCO-WholeBody/)\] (ECCV'2020) +- [x] [MacaquePose](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/datasets.html#macaquepose-biorxiv-2020) \[[主页](http://www.pri.kyoto-u.ac.jp/datasets/macaquepose/index.html)\] (bioRxiv'2020) +- [x] [InterHand2.6M](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/datasets.html#interhand2-6m-eccv-2020) \[[主页](https://mks0601.github.io/InterHand2.6M/)\] (ECCV'2020) +- [x] [AP-10K](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/datasets.html#ap-10k-neurips-2021) \[[主页](https://github.com/AlexTheBad/AP-10K)\] (NeurIPS'2021) +- [x] [Horse-10](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/datasets.html#horse-10-wacv-2021) \[[主页](http://www.mackenziemathislab.org/horse10)\] (WACV'2021) </details> <details close> <summary><b>支持的骨干网络</b></summary> -- [x] [AlexNet](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/backbones.html#alexnet-neurips-2012) (NeurIPS'2012) -- [x] [VGG](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/backbones.html#vgg-iclr-2015) (ICLR'2015) -- [x] [ResNet](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/backbones.html#resnet-cvpr-2016) (CVPR'2016) -- [x] [ResNext](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/backbones.html#resnext-cvpr-2017) (CVPR'2017) -- [x] [SEResNet](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/backbones.html#seresnet-cvpr-2018) (CVPR'2018) -- [x] [ShufflenetV1](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/backbones.html#shufflenetv1-cvpr-2018) (CVPR'2018) -- [x] [ShufflenetV2](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/backbones.html#shufflenetv2-eccv-2018) (ECCV'2018) -- [x] [MobilenetV2](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/backbones.html#mobilenetv2-cvpr-2018) (CVPR'2018) -- [x] [ResNetV1D](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/backbones.html#resnetv1d-cvpr-2019) (CVPR'2019) -- [x] [ResNeSt](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/backbones.html#resnest-arxiv-2020) (ArXiv'2020) -- [x] [Swin](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/backbones.html#swin-cvpr-2021) (CVPR'2021) -- [x] [HRFormer](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/backbones.html#hrformer-nips-2021) (NIPS'2021) -- [x] 
[PVT](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/backbones.html#pvt-iccv-2021) (ICCV'2021) -- [x] [PVTV2](https://mmpose.readthedocs.io/zh_CN/1.x/model_zoo_papers/backbones.html#pvtv2-cvmj-2022) (CVMJ'2022) +- [x] [AlexNet](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/backbones.html#alexnet-neurips-2012) (NeurIPS'2012) +- [x] [VGG](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/backbones.html#vgg-iclr-2015) (ICLR'2015) +- [x] [ResNet](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/backbones.html#resnet-cvpr-2016) (CVPR'2016) +- [x] [ResNext](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/backbones.html#resnext-cvpr-2017) (CVPR'2017) +- [x] [SEResNet](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/backbones.html#seresnet-cvpr-2018) (CVPR'2018) +- [x] [ShufflenetV1](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/backbones.html#shufflenetv1-cvpr-2018) (CVPR'2018) +- [x] [ShufflenetV2](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/backbones.html#shufflenetv2-eccv-2018) (ECCV'2018) +- [x] [MobilenetV2](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/backbones.html#mobilenetv2-cvpr-2018) (CVPR'2018) +- [x] [ResNetV1D](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/backbones.html#resnetv1d-cvpr-2019) (CVPR'2019) +- [x] [ResNeSt](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/backbones.html#resnest-arxiv-2020) (ArXiv'2020) +- [x] [Swin](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/backbones.html#swin-cvpr-2021) (CVPR'2021) +- [x] [HRFormer](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/backbones.html#hrformer-nips-2021) (NIPS'2021) +- [x] [PVT](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/backbones.html#pvt-iccv-2021) (ICCV'2021) +- [x] [PVTV2](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/backbones.html#pvtv2-cvmj-2022) (CVMJ'2022) </details> @@ -254,7 +269,7 @@ https://user-images.githubusercontent.com/15977946/124654387-0fd3c500-ded1-11eb- ## 参与贡献 -我们非常欢迎用户对于 MMPose 做出的任何贡献,可以参考 [贡献指南](https://mmpose.readthedocs.io/zh_CN/1.x/contribution_guide.html) 文件了解更多细节。 +我们非常欢迎用户对于 MMPose 做出的任何贡献,可以参考 [贡献指南](https://mmpose.readthedocs.io/zh_CN/latest/contribution_guide.html) 文件了解更多细节。 ## 致谢 diff --git a/configs/body_2d_keypoint/topdown_heatmap/coco/cspnext_udp_coco.md b/configs/body_2d_keypoint/topdown_heatmap/coco/cspnext_udp_coco.md index a5798b144c..213a4669a2 100644 --- a/configs/body_2d_keypoint/topdown_heatmap/coco/cspnext_udp_coco.md +++ b/configs/body_2d_keypoint/topdown_heatmap/coco/cspnext_udp_coco.md @@ -64,6 +64,6 @@ Results on COCO val2017 with detector having human AP of 56.4 on COCO val2017 da | [pose_cspnext_m_udp_aic_coco](/configs/body_2d_keypoint/topdown_heatmap/coco/cspnext-m_udp_8xb256-210e_aic-coco-256x192.py) | 256x192 | 0.748 | 0.925 | 0.818 | 0.777 | 0.933 | [ckpt](https://download.openmmlab.com/mmpose/v1/projects/rtmpose/cspnext-m_udp-aic-coco_210e-256x192-f2f7d6f6_20230130.pth) | [log](https://download.openmmlab.com/mmpose/v1/projects/rtmpose/cspnext-m_udp-aic-coco_210e-256x192-f2f7d6f6_20230130.json) | | [pose_cspnext_l_udp_aic_coco](/configs/body_2d_keypoint/topdown_heatmap/coco/cspnext-l_udp_8xb256-210e_aic-coco-256x192.py) | 256x192 | 0.772 | 0.936 | 0.839 | 0.799 | 0.943 | [ckpt](https://download.openmmlab.com/mmpose/v1/projects/rtmpose/cspnext-l_udp-aic-coco_210e-256x192-273b7631_20230130.pth) | 
[log](https://download.openmmlab.com/mmpose/v1/projects/rtmpose/cspnext-l_udp-aic-coco_210e-256x192-273b7631_20230130.json) | -Note that, UDP also adopts the unbiased encoding/decoding algorithm of [DARK](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/techniques.html#darkpose-cvpr-2020). +Note that, UDP also adopts the unbiased encoding/decoding algorithm of [DARK](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/techniques.html#darkpose-cvpr-2020). Flip test and detector is not used in the result of aic-coco training. diff --git a/configs/body_2d_keypoint/topdown_heatmap/coco/hrnet_udp_coco.md b/configs/body_2d_keypoint/topdown_heatmap/coco/hrnet_udp_coco.md index 34e05740fb..2b85d85a25 100644 --- a/configs/body_2d_keypoint/topdown_heatmap/coco/hrnet_udp_coco.md +++ b/configs/body_2d_keypoint/topdown_heatmap/coco/hrnet_udp_coco.md @@ -60,4 +60,4 @@ Results on COCO val2017 with detector having human AP of 56.4 on COCO val2017 da | [pose_hrnet_w48_udp](/configs/body_2d_keypoint/topdown_heatmap/coco/td-hm_hrnet-w48_udp-8xb32-210e_coco-384x288.py) | 384x288 | 0.773 | 0.911 | 0.836 | 0.821 | 0.946 | [ckpt](https://download.openmmlab.com/mmpose/v1/body_2d_keypoint/topdown_heatmap/coco/td-hm_hrnet-w48_udp-8xb32-210e_coco-384x288-70d7ab01_20220913.pth) | [log](https://download.openmmlab.com/mmpose/v1/body_2d_keypoint/topdown_heatmap/coco/td-hm_hrnet-w48_udp-8xb32-210e_coco-384x288_20220913.log) | | [pose_hrnet_w32_udp_regress](/configs/body_2d_keypoint/topdown_heatmap/coco/td-hm_hrnet-w32_udp-regress-8xb64-210e_coco-256x192.py) | 256x192 | 0.759 | 0.907 | 0.827 | 0.813 | 0.943 | [ckpt](https://download.openmmlab.com/mmpose/v1/body_2d_keypoint/topdown_heatmap/coco/td-hm_hrnet-w32_udp-regress-8xb64-210e_coco-256x192-9c0b77b4_20220926.pth) | [log](https://download.openmmlab.com/mmpose/v1/body_2d_keypoint/topdown_heatmap/coco/td-hm_hrnet-w32_udp-regress-8xb64-210e_coco-256x192_20220226.log) | -Note that, UDP also adopts the unbiased encoding/decoding algorithm of [DARK](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/techniques.html#darkpose-cvpr-2020). +Note that, UDP also adopts the unbiased encoding/decoding algorithm of [DARK](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/techniques.html#darkpose-cvpr-2020). diff --git a/demo/docs/2d_animal_demo.md b/demo/docs/2d_animal_demo.md index 8b2e2d3ed7..997f182087 100644 --- a/demo/docs/2d_animal_demo.md +++ b/demo/docs/2d_animal_demo.md @@ -15,7 +15,7 @@ python demo/topdown_demo_with_mmdet.py \ [--device ${GPU_ID or CPU}] ``` -The pre-trained animal pose estimation model can be found from [model zoo](https://mmpose.readthedocs.io/en/1.x/model_zoo/animal_2d_keypoint.html). +The pre-trained animal pose estimation model can be found from [model zoo](https://mmpose.readthedocs.io/en/latest/model_zoo/animal_2d_keypoint.html). Take [animalpose model](https://download.openmmlab.com/mmpose/animal/hrnet/hrnet_w32_animalpose_256x256-1aa7f075_20210426.pth) as an example: ```shell diff --git a/demo/docs/2d_face_demo.md b/demo/docs/2d_face_demo.md index 60cbf39d9c..e1940cd243 100644 --- a/demo/docs/2d_face_demo.md +++ b/demo/docs/2d_face_demo.md @@ -16,7 +16,7 @@ python demo/topdown_demo_with_mmdet.py \ [--kpt-thr ${KPT_SCORE_THR}] [--bbox-thr ${BBOX_SCORE_THR}] ``` -The pre-trained face keypoint estimation models can be found from [model zoo](https://mmpose.readthedocs.io/en/1.x/model_zoo/face_2d_keypoint.html). 
+The pre-trained face keypoint estimation models can be found from [model zoo](https://mmpose.readthedocs.io/en/latest/model_zoo/face_2d_keypoint.html). Take [aflw model](https://download.openmmlab.com/mmpose/face/hrnetv2/hrnetv2_w18_aflw_256x256-f2bbc62b_20210125.pth) as an example: ```shell diff --git a/demo/docs/2d_hand_demo.md b/demo/docs/2d_hand_demo.md index d2971be234..63f35de5c6 100644 --- a/demo/docs/2d_hand_demo.md +++ b/demo/docs/2d_hand_demo.md @@ -17,7 +17,7 @@ python demo/topdown_demo_with_mmdet.py \ ``` -The pre-trained hand pose estimation model can be downloaded from [model zoo](https://mmpose.readthedocs.io/en/1.x/model_zoo/hand_2d_keypoint.html). +The pre-trained hand pose estimation model can be downloaded from [model zoo](https://mmpose.readthedocs.io/en/latest/model_zoo/hand_2d_keypoint.html). Take [onehand10k model](https://download.openmmlab.com/mmpose/hand/hrnetv2/hrnetv2_w18_onehand10k_256x256-30bc9c6b_20210330.pth) as an example: ```shell diff --git a/demo/docs/2d_human_pose_demo.md b/demo/docs/2d_human_pose_demo.md index 855717565f..a2e3cf59dd 100644 --- a/demo/docs/2d_human_pose_demo.md +++ b/demo/docs/2d_human_pose_demo.md @@ -18,7 +18,7 @@ python demo/image_demo.py \ If you use a heatmap-based model and set argument `--draw-heatmap`, the predicted heatmap will be visualized together with the keypoints. -The pre-trained human pose estimation models can be downloaded from [model zoo](https://mmpose.readthedocs.io/en/1.x/model_zoo/body_2d_keypoint.html). +The pre-trained human pose estimation models can be downloaded from [model zoo](https://mmpose.readthedocs.io/en/latest/model_zoo/body_2d_keypoint.html). Take [coco model](https://download.openmmlab.com/mmpose/top_down/hrnet/hrnet_w48_coco_256x192-b9e0b3ab_20200708.pth) as an example: ```shell diff --git a/demo/docs/2d_wholebody_pose_demo.md b/demo/docs/2d_wholebody_pose_demo.md index e8cc8d9b08..ddd4cbd13d 100644 --- a/demo/docs/2d_wholebody_pose_demo.md +++ b/demo/docs/2d_wholebody_pose_demo.md @@ -14,7 +14,7 @@ python demo/image_demo.py \ [--draw_heatmap] ``` -The pre-trained hand pose estimation models can be downloaded from [model zoo](https://mmpose.readthedocs.io/en/1.x/model_zoo/2d_wholebody_keypoint.html). +The pre-trained hand pose estimation models can be downloaded from [model zoo](https://mmpose.readthedocs.io/en/latest/model_zoo/2d_wholebody_keypoint.html). Take [coco-wholebody_vipnas_res50_dark](https://download.openmmlab.com/mmpose/top_down/vipnas/vipnas_res50_wholebody_256x192_dark-67c0ce35_20211112.pth) model as an example: ```shell diff --git a/demo/docs/webcam_api_demo.md b/demo/docs/webcam_api_demo.md index 69d1fb92b7..ffc8922b13 100644 --- a/demo/docs/webcam_api_demo.md +++ b/demo/docs/webcam_api_demo.md @@ -63,7 +63,7 @@ Detailed configurations can be found in the config file. ``` - **Configure pose estimation models** - In this demo we use two [top-down](https://github.com/open-mmlab/mmpose/tree/1.x/configs/body_2d_keypoint/topdown_heatmap) pose estimation models for humans and animals respectively. Users can choose models from the [MMPose Model Zoo](https://mmpose.readthedocs.io/en/1.x/modelzoo.html). To apply different pose models on different instance types, you can add multiple pose estimator nodes with `cls_names` set accordingly. + In this demo we use two [top-down](https://github.com/open-mmlab/mmpose/tree/latest/configs/body_2d_keypoint/topdown_heatmap) pose estimation models for humans and animals respectively. 
Users can choose models from the [MMPose Model Zoo](https://mmpose.readthedocs.io/en/latest/modelzoo.html). To apply different pose models on different instance types, you can add multiple pose estimator nodes with `cls_names` set accordingly. ```python # 'TopdownPoseEstimatorNode': diff --git a/docker/Dockerfile b/docker/Dockerfile index 633c23886b..347af89ca8 100644 --- a/docker/Dockerfile +++ b/docker/Dockerfile @@ -28,7 +28,7 @@ RUN mim install mmengine "mmcv>=2.0.0rc1" RUN conda clean --all RUN git clone https://github.com/open-mmlab/mmpose.git /mmpose WORKDIR /mmpose -RUN git checkout 1.x +RUN git checkout main ENV FORCE_CUDA="1" RUN pip install -r requirements/build.txt RUN pip install --no-cache-dir -e . diff --git a/docs/en/advanced_guides/codecs.md b/docs/en/advanced_guides/codecs.md index 9a478f61e1..610bd83a57 100644 --- a/docs/en/advanced_guides/codecs.md +++ b/docs/en/advanced_guides/codecs.md @@ -1,4 +1,4 @@ -# Codecs +# Learn about Codecs In the keypoint detection task, depending on the algorithm, it is often necessary to generate targets in different formats, such as normalized coordinates, vectors and heatmaps, etc. Similarly, for the model outputs, a decoding process is required to transform them into coordinates. diff --git a/docs/en/conf.py b/docs/en/conf.py index 30ed8357b8..4359aa46e9 100644 --- a/docs/en/conf.py +++ b/docs/en/conf.py @@ -77,7 +77,7 @@ def get_version(): 'menu': [ { 'name': 'GitHub', - 'url': 'https://github.com/open-mmlab/mmpose/tree/1.x' + 'url': 'https://github.com/open-mmlab/mmpose/tree/main' }, ], # Specify the language of the shared menu diff --git a/docs/en/index.rst b/docs/en/index.rst index 754d070899..61bc1706b6 100644 --- a/docs/en/index.rst +++ b/docs/en/index.rst @@ -22,8 +22,8 @@ You can change the documentation language at the lower-left corner of the page. user_guides/inference.md user_guides/configs.md - user_guides/train_and_test.md user_guides/prepare_datasets.md + user_guides/train_and_test.md .. toctree:: :maxdepth: 1 diff --git a/docs/en/installation.md b/docs/en/installation.md index 9c1af6d2eb..0f8707b77d 100644 --- a/docs/en/installation.md +++ b/docs/en/installation.md @@ -75,8 +75,7 @@ mim install "mmdet>=3.0.0rc6" To develop and run mmpose directly, install it from source: ```shell -git clone https://github.com/open-mmlab/mmpose.git -b 1.x -# "-b 1.x" means checkout to the `1.x` branch. +git clone https://github.com/open-mmlab/mmpose.git cd mmpose pip install -r requirements.txt pip install -v -e . @@ -138,7 +137,7 @@ model = init_model(config_file, checkpoint_file, device='cpu') # or device='cud results = inference_topdown(model, 'demo.jpg') ``` -The `demo.jpg` can be downloaded from [Github](https://raw.githubusercontent.com/open-mmlab/mmpose/1.x/tests/data/coco/000000000785.jpg). +The `demo.jpg` can be downloaded from [Github](https://raw.githubusercontent.com/open-mmlab/mmpose/main/tests/data/coco/000000000785.jpg). The inference results will be a list of `PoseDataSample`, and the predictions are in the `pred_instances`, indicating the detected keypoint locations and scores. @@ -199,7 +198,7 @@ thus we only need to install MMEngine, MMCV and MMPose with the following comman **Step 2.** Install MMPose from the source. ```shell -!git clone https://github.com/open-mmlab/mmpose.git -b 1.x +!git clone https://github.com/open-mmlab/mmpose.git %cd mmpose !pip install -e . 
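# Optional sanity check (a suggested addition, not part of the original guide):
# if the editable install above succeeded, importing mmpose in the same Colab
# runtime should print its version number.
!python -c "import mmpose; print(mmpose.__version__)"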
``` diff --git a/docs/en/merge_docs.sh b/docs/en/merge_docs.sh index a3b55f3e84..9dd222d3d0 100644 --- a/docs/en/merge_docs.sh +++ b/docs/en/merge_docs.sh @@ -2,7 +2,7 @@ # Copyright (c) OpenMMLab. All rights reserved. sed -i '$a\\n' ../../demo/docs/*_demo.md -cat ../../demo/docs/*_demo.md | sed "s/^## 2D\(.*\)Demo/##\1Estimation/" | sed "s/md###t/html#t/g" | sed '1i\# Demos\n' | sed 's=](/docs/en/=](/=g' | sed 's=](/=](https://github.com/open-mmlab/mmpose/tree/1.x/=g' >demos.md +cat ../../demo/docs/*_demo.md | sed "s/^## 2D\(.*\)Demo/##\1Estimation/" | sed "s/md###t/html#t/g" | sed '1i\# Demos\n' | sed 's=](/docs/en/=](/=g' | sed 's=](/=](https://github.com/open-mmlab/mmpose/tree/main/=g' >demos.md # remove /docs/ for link used in doc site sed -i 's=](/docs/en/=](=g' overview.md @@ -18,14 +18,14 @@ sed -i 's=](/docs/en/=](=g' ./notes/*.md sed -i 's=](/docs/en/=](=g' ./projects/*.md -sed -i 's=](/=](https://github.com/open-mmlab/mmpose/tree/1.x/=g' overview.md -sed -i 's=](/=](https://github.com/open-mmlab/mmpose/tree/1.x/=g' installation.md -sed -i 's=](/=](https://github.com/open-mmlab/mmpose/tree/1.x/=g' quick_run.md -sed -i 's=](/=](https://github.com/open-mmlab/mmpose/tree/1.x/=g' migration.md -sed -i 's=](/=](https://github.com/open-mmlab/mmpose/tree/1.x/=g' ./advanced_guides/*.md -sed -i 's=](/=](https://github.com/open-mmlab/mmpose/tree/1.x/=g' ./model_zoo/*.md -sed -i 's=](/=](https://github.com/open-mmlab/mmpose/tree/1.x/=g' ./model_zoo_papers/*.md -sed -i 's=](/=](https://github.com/open-mmlab/mmpose/tree/1.x/=g' ./user_guides/*.md -sed -i 's=](/=](https://github.com/open-mmlab/mmpose/tree/1.x/=g' ./dataset_zoo/*.md -sed -i 's=](/=](https://github.com/open-mmlab/mmpose/tree/1.x/=g' ./notes/*.md -sed -i 's=](/=](https://github.com/open-mmlab/mmpose/tree/1.x/=g' ./projects/*.md +sed -i 's=](/=](https://github.com/open-mmlab/mmpose/tree/main/=g' overview.md +sed -i 's=](/=](https://github.com/open-mmlab/mmpose/tree/main/=g' installation.md +sed -i 's=](/=](https://github.com/open-mmlab/mmpose/tree/main/=g' quick_run.md +sed -i 's=](/=](https://github.com/open-mmlab/mmpose/tree/main/=g' migration.md +sed -i 's=](/=](https://github.com/open-mmlab/mmpose/tree/main/=g' ./advanced_guides/*.md +sed -i 's=](/=](https://github.com/open-mmlab/mmpose/tree/main/=g' ./model_zoo/*.md +sed -i 's=](/=](https://github.com/open-mmlab/mmpose/tree/main/=g' ./model_zoo_papers/*.md +sed -i 's=](/=](https://github.com/open-mmlab/mmpose/tree/main/=g' ./user_guides/*.md +sed -i 's=](/=](https://github.com/open-mmlab/mmpose/tree/main/=g' ./dataset_zoo/*.md +sed -i 's=](/=](https://github.com/open-mmlab/mmpose/tree/main/=g' ./notes/*.md +sed -i 's=](/=](https://github.com/open-mmlab/mmpose/tree/main/=g' ./projects/*.md diff --git a/docs/en/notes/changelog.md b/docs/en/notes/changelog.md index c91aab1188..1d1be738e3 100644 --- a/docs/en/notes/changelog.md +++ b/docs/en/notes/changelog.md @@ -99,7 +99,7 @@ Built upon the new [training engine](https://github.com/open-mmlab/mmengine). - **Unified interfaces**. As a part of the OpenMMLab 2.0 projects, MMPose 1.x unifies and refactors the interfaces and internal logics of train, testing, datasets, models, evaluation, and visualization. All the OpenMMLab 2.0 projects share the same design in those interfaces and logics to allow the emergence of multi-task/modality algorithms. -- **More documentation and tutorials**. We add a bunch of documentation and tutorials to help users get started more smoothly. Read it [here](https://mmpose.readthedocs.io/en/1.x/). 
+- **More documentation and tutorials**. We add a bunch of documentation and tutorials to help users get started more smoothly. Read it [here](https://mmpose.readthedocs.io/en/latest/). **Breaking Changes** diff --git a/docs/en/overview.md b/docs/en/overview.md index 90f0521ee6..b6e31dd239 100644 --- a/docs/en/overview.md +++ b/docs/en/overview.md @@ -32,20 +32,35 @@ MMPose consists of **8** main components: We have prepared detailed guidelines for all types of users: 1. For installation instrunctions: + - [Installation](./installation.md) + 2. For the basic usage of MMPose: - - [Quick Run](./quick_run.md) + + - [A 20-minute Tour to MMPose](./guide_to_framework.md) + - [Demos](./demos.md) - [Inference](./user_guides/inference.md) -3. For users who want to learn more about components of MMPose: - [Configs](./user_guides/configs.md) - [Prepare Datasets](./user_guides/prepare_datasets.md) - - [Codecs](./user_guides/codecs.md) - - [Train & Test](./user_guides/train_and_test.md) - - [Visualization](./user_guides/visualization.md) - - [How to](./user_guides/how_to.md) -4. For developers who wish to develop based on MMPose: + - [Train and Test](./user_guides/train_and_test.md) + +3. For developers who wish to develop based on MMPose: + + - [Learn about Codecs](./advanced_guides/codecs.md) + - [Dataflow in MMPose](./advanced_guides/dataflow.md) + - [Implement New Models](./advanced_guides/implement_new_models.md) + - [Customize Datasets](./advanced_guides/customize_datasets.md) + - [Customize Data Transforms](./advanced_guides/customize_transforms.md) + - [Customize Optimizer](./advanced_guides/customize_optimizer.md) + - [Customize Logging](./advanced_guides/customize_logging.md) + - [How to Deploy](./advanced_guides/how_to_deploy.md) + - [Model Analysis](./advanced_guides/model_analysis.md) - [Migration Guide](./migration.md) -5. For researchers and developers who are willing to contribute to MMPose: + +4. For researchers and developers who are willing to contribute to MMPose: + - [Contribution Guide](./contribution_guide.md) -6. For some common issues, we provide a FAQ list: + +5. For some common issues, we provide a FAQ list: + - [FAQ](./faq.md) diff --git a/docs/en/switch_language.md b/docs/en/switch_language.md index 7875c9717a..a0a6259bee 100644 --- a/docs/en/switch_language.md +++ b/docs/en/switch_language.md @@ -1,3 +1,3 @@ -## <a href='https://mmpose.readthedocs.io/en/1.x/'>English</a> +## <a href='https://mmpose.readthedocs.io/en/latest/'>English</a> -## <a href='https://mmpose.readthedocs.io/zh_CN/1.x/'>简体中文</a> +## <a href='https://mmpose.readthedocs.io/zh_CN/latest/'>简体中文</a> diff --git a/docs/en/user_guides/inference.md b/docs/en/user_guides/inference.md index fdf1f4d3ab..de61a7b446 100644 --- a/docs/en/user_guides/inference.md +++ b/docs/en/user_guides/inference.md @@ -1,11 +1,11 @@ # Inference with existing models -MMPose provides a wide variety of pre-trained models for pose estimation, which can be found in the [Model Zoo](https://mmpose.readthedocs.io/en/1.x/modelzoo.html). +MMPose provides a wide variety of pre-trained models for pose estimation, which can be found in the [Model Zoo](https://mmpose.readthedocs.io/en/latest/model_zoo.html). This guide will demonstrate **how to perform inference**, or running pose estimation on provided images or videos using trained models. For instructions on testing existing models on standard datasets, refer to this [guide](./train_and_test.md#test). 
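
Below is a minimal sketch of that inference workflow, assuming the `init_model` and `inference_topdown` APIs shown in the installation guide earlier in this patch, together with the HRNet config and checkpoint recommended in the next paragraph; treat it as an illustrative example rather than part of the original guide.

```python
from mmpose.apis import inference_topdown, init_model

# HRNet-W32 top-down config and checkpoint recommended by this guide
config_file = 'configs/body_2d_keypoint/topdown_heatmap/coco/td-hm_hrnet-w32_8xb64-210e_coco-256x192.py'
checkpoint_file = 'https://download.openmmlab.com/mmpose/v1/body_2d_keypoint/topdown_heatmap/coco/td-hm_hrnet-w32_8xb64-210e_coco-256x192-81c58e40_20220909.pth'

model = init_model(config_file, checkpoint_file, device='cpu')  # or device='cuda:0'

# inference_topdown returns a list of PoseDataSample; predictions are stored in pred_instances
results = inference_topdown(model, 'demo.jpg')
keypoints = results[0].pred_instances.keypoints        # keypoint coordinates per detected instance
scores = results[0].pred_instances.keypoint_scores     # confidence score per keypoint
```

Here `demo.jpg` is the sample image referenced in the installation guide above, and the relative config path assumes the script is run from the mmpose repository root.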
-In MMPose, a model is defined by a configuration file, while its pre-existing parameters are stored in a checkpoint file. You can find the model configuration files and corresponding checkpoint URLs in the [Model Zoo](https://mmpose.readthedocs.io/en/1.x/modelzoo.html). We recommend starting with the HRNet model, using [this configuration file](/configs/body_2d_keypoint/topdown_heatmap/coco/td-hm_hrnet-w32_8xb64-210e_coco-256x192.py) and [this checkpoint file](https://download.openmmlab.com/mmpose/v1/body_2d_keypoint/topdown_heatmap/coco/td-hm_hrnet-w32_8xb64-210e_coco-256x192-81c58e40_20220909.pth). +In MMPose, a model is defined by a configuration file, while its pre-existing parameters are stored in a checkpoint file. You can find the model configuration files and corresponding checkpoint URLs in the [Model Zoo](https://mmpose.readthedocs.io/en/latest/modelzoo.html). We recommend starting with the HRNet model, using [this configuration file](/configs/body_2d_keypoint/topdown_heatmap/coco/td-hm_hrnet-w32_8xb64-210e_coco-256x192.py) and [this checkpoint file](https://download.openmmlab.com/mmpose/v1/body_2d_keypoint/topdown_heatmap/coco/td-hm_hrnet-w32_8xb64-210e_coco-256x192-81c58e40_20220909.pth). ## Inferencer: a Unified Inference Interface diff --git a/docs/en/user_guides/prepare_datasets.md b/docs/en/user_guides/prepare_datasets.md index 8695243f7d..a14a9601cd 100644 --- a/docs/en/user_guides/prepare_datasets.md +++ b/docs/en/user_guides/prepare_datasets.md @@ -1,6 +1,6 @@ # Prepare Datasets -MMPose supports multiple tasks and corresponding datasets. You can find them in [dataset zoo](https://mmpose.readthedocs.io/en/1.x/dataset_zoo.html). Please follow the corresponding guidelines for data preparation. +MMPose supports multiple tasks and corresponding datasets. You can find them in [dataset zoo](https://mmpose.readthedocs.io/en/latest/dataset_zoo.html). Please follow the corresponding guidelines for data preparation. <!-- TOC --> diff --git a/docs/en/user_guides/train_and_test.md b/docs/en/user_guides/train_and_test.md index ae0d459da2..95b3540003 100644 --- a/docs/en/user_guides/train_and_test.md +++ b/docs/en/user_guides/train_and_test.md @@ -162,7 +162,7 @@ CUDA_VISIBLE_DEVICES=-1 python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [ | ARGS | Description | | ------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `CONFIG_FILE` | The path to the config file. | -| `CHECKPOINT_FILE` | The path to the checkpoint file (It can be a http link, and you can find checkpoints [here](https://MMPose.readthedocs.io/en/1.x/modelzoo_statistics.html)). | +| `CHECKPOINT_FILE` | The path to the checkpoint file (It can be a http link, and you can find checkpoints [here](https://MMPose.readthedocs.io/en/latest/model_zoo.html)). | | `--work-dir WORK_DIR` | The directory to save the file containing evaluation metrics. | | `--out OUT` | The path to save the file containing evaluation metrics. | | `--dump DUMP` | The path to dump all outputs of the model for offline evaluation. 
| @@ -181,12 +181,12 @@ We provide a shell script to start a multi-GPUs task with `torch.distributed.lau bash ./tools/dist_test.sh ${CONFIG_FILE} ${CHECKPOINT_FILE} ${GPU_NUM} [PY_ARGS] ``` -| ARGS | Description | -| ----------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| `CONFIG_FILE` | The path to the config file. | -| `CHECKPOINT_FILE` | The path to the checkpoint file (It can be a http link, and you can find checkpoints [here](https://mmpose.readthedocs.io/en/1.x/modelzoo_statistics.html)). | -| `GPU_NUM` | The number of GPUs to be used. | -| `[PYARGS]` | The other optional arguments of `tools/test.py`, see [here](#test-with-your-pc). | +| ARGS | Description | +| ----------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------- | +| `CONFIG_FILE` | The path to the config file. | +| `CHECKPOINT_FILE` | The path to the checkpoint file (It can be a http link, and you can find checkpoints [here](https://mmpose.readthedocs.io/en/latest/model_zoo.html)). | +| `GPU_NUM` | The number of GPUs to be used. | +| `[PYARGS]` | The other optional arguments of `tools/test.py`, see [here](#test-with-your-pc). | You can also specify extra arguments of the launcher by environment variables. For example, change the communication port of the launcher to 29666 by the below command: @@ -242,13 +242,13 @@ If you run MMPose on a cluster managed with [slurm](https://slurm.schedmd.com/), Here are the argument descriptions of the script. -| ARGS | Description | -| ----------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| `PARTITION` | The partition to use in your cluster. | -| `JOB_NAME` | The name of your job, you can name it as you like. | -| `CONFIG_FILE` | The path to the config file. | -| `CHECKPOINT_FILE` | The path to the checkpoint file (It can be a http link, and you can find checkpoints [here](https://MMPose.readthedocs.io/en/1.x/modelzoo_statistics.html)). | -| `[PYARGS]` | The other optional arguments of `tools/test.py`, see [here](#test-with-your-pc). | +| ARGS | Description | +| ----------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------- | +| `PARTITION` | The partition to use in your cluster. | +| `JOB_NAME` | The name of your job, you can name it as you like. | +| `CONFIG_FILE` | The path to the config file. | +| `CHECKPOINT_FILE` | The path to the checkpoint file (It can be a http link, and you can find checkpoints [here](https://MMPose.readthedocs.io/en/latest/model_zoo.html)). | +| `[PYARGS]` | The other optional arguments of `tools/test.py`, see [here](#test-with-your-pc). | Here are the environment variables that can be used to configure the slurm job. diff --git a/docs/src/papers/algorithms/rtmpose.md b/docs/src/papers/algorithms/rtmpose.md index a2a285fe20..04a3fb0a22 100644 --- a/docs/src/papers/algorithms/rtmpose.md +++ b/docs/src/papers/algorithms/rtmpose.md @@ -25,7 +25,7 @@ <!-- [ABSTRACT] --> -Recent studies on 2D pose estimation have achieved excellent performance on public benchmarks, yet its application in the industrial community still suffers from heavy model parameters and high latency. 
In order to bridge this gap, we empirically explore key factors in pose estimation including paradigm, model architecture, training strategy, and deployment, and present a high-performance real-time multi-person pose estimation framework, RTMPose, based on MMPose. Our RTMPose-m achieves 75.8% AP on COCO with 90+ FPS on an Intel i7-11700 CPU and 430+ FPS on an NVIDIA GTX 1660 Ti GPU, and RTMPose-l achieves 67.0% AP on COCO-WholeBody with 130+ FPS. To further evaluate RTMPose’s capability in critical real-time applications, we also report the performance after deploying on the mobile device. Our RTMPoses achieves 72.2% AP on COCO with 70+ FPS on a Snapdragon 865 chip, outperforming existing open-source libraries. Code and models are released at https:// github.com/open-mmlab/mmpose/tree/1.x/projects/rtmpose. +Recent studies on 2D pose estimation have achieved excellent performance on public benchmarks, yet its application in the industrial community still suffers from heavy model parameters and high latency. In order to bridge this gap, we empirically explore key factors in pose estimation including paradigm, model architecture, training strategy, and deployment, and present a high-performance real-time multi-person pose estimation framework, RTMPose, based on MMPose. Our RTMPose-m achieves 75.8% AP on COCO with 90+ FPS on an Intel i7-11700 CPU and 430+ FPS on an NVIDIA GTX 1660 Ti GPU, and RTMPose-l achieves 67.0% AP on COCO-WholeBody with 130+ FPS. To further evaluate RTMPose’s capability in critical real-time applications, we also report the performance after deploying on the mobile device. Our RTMPoses achieves 72.2% AP on COCO with 70+ FPS on a Snapdragon 865 chip, outperforming existing open-source libraries. Code and models are released at https://github.com/open-mmlab/mmpose/tree/main/projects/rtmpose. <!-- [IMAGE] --> diff --git a/docs/zh_cn/index.rst b/docs/zh_cn/index.rst index c786092b38..e38ed72df4 100644 --- a/docs/zh_cn/index.rst +++ b/docs/zh_cn/index.rst @@ -22,8 +22,8 @@ You can change the documentation language at the lower-left corner of the page. user_guides/inference.md user_guides/configs.md - user_guides/train_and_test.md user_guides/prepare_datasets.md + user_guides/train_and_test.md .. toctree:: :maxdepth: 1 diff --git a/docs/zh_cn/installation.md b/docs/zh_cn/installation.md index 1c566b9f53..e0917a2e3c 100644 --- a/docs/zh_cn/installation.md +++ b/docs/zh_cn/installation.md @@ -75,8 +75,7 @@ mim install "mmdet>=3.0.0rc0" 如果基于 MMPose 框架开发自己的任务,需要添加新的功能,比如新的模型或是数据集,或者使用我们提供的各种工具。从源码按如下方式安装 mmpose: ```shell -git clone https://github.com/open-mmlab/mmpose.git -b 1.x -# "-b 1.x" 表示切换到 `1.x` 分支. +git clone https://github.com/open-mmlab/mmpose.git cd mmpose pip install -r requirements.txt pip install -v -e . @@ -139,7 +138,7 @@ model = init_model(config_file, checkpoint_file, device='cpu') # or device='cud results = inference_topdown(model, 'demo.jpg') ``` -示例图片 `demo.jpg` 可以从 [Github](https://raw.githubusercontent.com/open-mmlab/mmpose/1.x/tests/data/coco/000000000785.jpg) 下载。 +示例图片 `demo.jpg` 可以从 [Github](https://raw.githubusercontent.com/open-mmlab/mmpose/main/tests/data/coco/000000000785.jpg) 下载。 推理结果是一个 `PoseDataSample` 列表,预测结果将会保存在 `pred_instances` 中,包括检测到的关键点位置和置信度。 ## 自定义安装 @@ -205,7 +204,7 @@ MMPose 可以仅在 CPU 环境中安装,在 CPU 模式下,您可以完成训 **第 2 步** 从源码安装 mmpose ```shell -!git clone https://github.com/open-mmlab/mmpose.git -b 1.x +!git clone https://github.com/open-mmlab/mmpose.git %cd mmpose !pip install -e . 
``` diff --git a/docs/zh_cn/merge_docs.sh b/docs/zh_cn/merge_docs.sh index 45a9b594bd..3b9f8f0e1b 100644 --- a/docs/zh_cn/merge_docs.sh +++ b/docs/zh_cn/merge_docs.sh @@ -2,7 +2,7 @@ # Copyright (c) OpenMMLab. All rights reserved. sed -i '$a\\n' ../../demo/docs/*_demo.md -cat ../../demo/docs/*_demo.md | sed "s/^## 2D\(.*\)Demo/##\1Estimation/" | sed "s/md###t/html#t/g" | sed '1i\# Demos\n' | sed 's=](/docs/en/=](/=g' | sed 's=](/=](https://github.com/open-mmlab/mmpose/tree/1.x/=g' >demos.md +cat ../../demo/docs/*_demo.md | sed "s/^## 2D\(.*\)Demo/##\1Estimation/" | sed "s/md###t/html#t/g" | sed '1i\# Demos\n' | sed 's=](/docs/en/=](/=g' | sed 's=](/=](https://github.com/open-mmlab/mmpose/tree/main/=g' >demos.md # remove /docs/ for link used in doc site sed -i 's=](/docs/zh_cn/=](=g' overview.md @@ -18,14 +18,14 @@ sed -i 's=](/docs/zh_cn/=](=g' ./notes/*.md sed -i 's=](/docs/zh_cn/=](=g' ./projects/*.md -sed -i 's=](/=](https://github.com/open-mmlab/mmpose/tree/1.x/=g' overview.md -sed -i 's=](/=](https://github.com/open-mmlab/mmpose/tree/1.x/=g' installation.md -sed -i 's=](/=](https://github.com/open-mmlab/mmpose/tree/1.x/=g' quick_run.md -sed -i 's=](/=](https://github.com/open-mmlab/mmpose/tree/1.x/=g' migration.md -sed -i 's=](/=](https://github.com/open-mmlab/mmpose/tree/1.x/=g' ./advanced_guides/*.md -sed -i 's=](/=](https://github.com/open-mmlab/mmpose/tree/1.x/=g' ./model_zoo/*.md -sed -i 's=](/=](https://github.com/open-mmlab/mmpose/tree/1.x/=g' ./model_zoo_papers/*.md -sed -i 's=](/=](https://github.com/open-mmlab/mmpose/tree/1.x/=g' ./user_guides/*.md -sed -i 's=](/=](https://github.com/open-mmlab/mmpose/tree/1.x/=g' ./dataset_zoo/*.md -sed -i 's=](/=](https://github.com/open-mmlab/mmpose/tree/1.x/=g' ./notes/*.md -sed -i 's=](/=](https://github.com/open-mmlab/mmpose/tree/1.x/=g' ./projects/*.md +sed -i 's=](/=](https://github.com/open-mmlab/mmpose/tree/main/=g' overview.md +sed -i 's=](/=](https://github.com/open-mmlab/mmpose/tree/main/=g' installation.md +sed -i 's=](/=](https://github.com/open-mmlab/mmpose/tree/main/=g' quick_run.md +sed -i 's=](/=](https://github.com/open-mmlab/mmpose/tree/main/=g' migration.md +sed -i 's=](/=](https://github.com/open-mmlab/mmpose/tree/main/=g' ./advanced_guides/*.md +sed -i 's=](/=](https://github.com/open-mmlab/mmpose/tree/main/=g' ./model_zoo/*.md +sed -i 's=](/=](https://github.com/open-mmlab/mmpose/tree/main/=g' ./model_zoo_papers/*.md +sed -i 's=](/=](https://github.com/open-mmlab/mmpose/tree/main/=g' ./user_guides/*.md +sed -i 's=](/=](https://github.com/open-mmlab/mmpose/tree/main/=g' ./dataset_zoo/*.md +sed -i 's=](/=](https://github.com/open-mmlab/mmpose/tree/main/=g' ./notes/*.md +sed -i 's=](/=](https://github.com/open-mmlab/mmpose/tree/main/=g' ./projects/*.md diff --git a/docs/zh_cn/notes/changelog.md b/docs/zh_cn/notes/changelog.md index 9601568e1f..942d3d515b 100644 --- a/docs/zh_cn/notes/changelog.md +++ b/docs/zh_cn/notes/changelog.md @@ -14,7 +14,7 @@ It also provide a general semi-supervised object detection framework, and more s - **Unified interfaces**. As a part of the OpenMMLab 2.0 projects, MMPose 1.x unifies and refactors the interfaces and internal logics of train, testing, datasets, models, evaluation, and visualization. All the OpenMMLab 2.0 projects share the same design in those interfaces and logics to allow the emergence of multi-task/modality algorithms. -- **More documentation and tutorials**. We add a bunch of documentation and tutorials to help users get started more smoothly. 
Read it [here](https://mmpose.readthedocs.io/en/1.x/). +- **More documentation and tutorials**. We add a bunch of documentation and tutorials to help users get started more smoothly. Read it [here](https://mmpose.readthedocs.io/en/latest/). **Breaking Changes** diff --git a/docs/zh_cn/overview.md b/docs/zh_cn/overview.md index c41edd82a4..a790cd3be2 100644 --- a/docs/zh_cn/overview.md +++ b/docs/zh_cn/overview.md @@ -47,26 +47,30 @@ MMPose 由 **8** 个主要部分组成,apis、structures、datasets、codecs 2. MMPose 的基本使用方法: - - [快速上手](./quick_run.md) + - [20 分钟上手教程](./guide_to_framework.md) + - [Demos](./demos.md) - [模型推理](./user_guides/inference.md) - -3. 对于希望了解 MMPose 各个组件的用户: - - [配置文件](./user_guides/configs.md) - [准备数据集](./user_guides/prepare_datasets.md) - - [编解码器](./user_guides/codecs.md) - [训练与测试](./user_guides/train_and_test.md) - - [可视化](./user_guides/visualization.md) - - [How to](./user_guides/how_to.md) - -4. 对于希望将自己的项目迁移到 MMPose 的开发者: +3. 对于希望基于 MMPose 进行开发的研究者和开发者: + + - [编解码器](./advanced_guides/codecs.md) + - [数据流](./advanced_guides/dataflow.md) + - [实现新模型](./advanced_guides/implement_new_models.md) + - [自定义数据集](./advanced_guides/customize_datasets.md) + - [自定义数据变换](./advanced_guides/customize_transforms.md) + - [自定义优化器](./advanced_guides/customize_optimizer.md) + - [自定义日志](./advanced_guides/customize_logging.md) + - [模型部署](./advanced_guides/how_to_deploy.md) + - [模型分析工具](./advanced_guides/model_analysis.md) - [迁移指南](./migration.md) -5. 对于希望加入开源社区,向 MMPose 贡献代码的研究者和开发者: +4. 对于希望加入开源社区,向 MMPose 贡献代码的研究者和开发者: - [参与贡献代码](./contribution_guide.md) -6. 对于使用过程中的常见问题: +5. 对于使用过程中的常见问题: - [FAQ](./faq.md) diff --git a/docs/zh_cn/switch_language.md b/docs/zh_cn/switch_language.md index 3483122d1c..05688a9530 100644 --- a/docs/zh_cn/switch_language.md +++ b/docs/zh_cn/switch_language.md @@ -1,3 +1,3 @@ -## <a href='https://mmpose.readthedocs.io/zh_CN/1.x/'>简体中文</a> +## <a href='https://mmpose.readthedocs.io/zh_CN/latest/'>简体中文</a> -## <a href='https://mmpose.readthedocs.io/en/1.x/'>English</a> +## <a href='https://mmpose.readthedocs.io/en/latest/'>English</a> diff --git a/docs/zh_cn/user_guides/prepare_datasets.md b/docs/zh_cn/user_guides/prepare_datasets.md index 892b3fc5e9..a10a7e4836 100644 --- a/docs/zh_cn/user_guides/prepare_datasets.md +++ b/docs/zh_cn/user_guides/prepare_datasets.md @@ -1,6 +1,6 @@ # 准备数据集 -MMPose 目前已支持了多个任务和相应的数据集。您可以在 [数据集](https://mmpose.readthedocs.io/zh_CN/1.x/dataset_zoo.html) 找到它们。请按照相应的指南准备数据。 +MMPose 目前已支持了多个任务和相应的数据集。您可以在 [数据集](https://mmpose.readthedocs.io/zh_CN/latest/dataset_zoo.html) 找到它们。请按照相应的指南准备数据。 <!-- TOC --> diff --git a/mmpose/models/pose_estimators/base.py b/mmpose/models/pose_estimators/base.py index b97232b344..666e4628a6 100644 --- a/mmpose/models/pose_estimators/base.py +++ b/mmpose/models/pose_estimators/base.py @@ -24,7 +24,7 @@ class BasePoseEstimator(BaseModel, metaclass=ABCMeta): metainfo (dict): Meta information for dataset, such as keypoints definition and properties. If set, the metainfo of the input data batch will be overridden. For more details, please refer to - https://mmpose.readthedocs.io/en/1.x/user_guides/ + https://mmpose.readthedocs.io/en/latest/user_guides/ prepare_datasets.html#create-a-custom-dataset-info- config-file-for-the-dataset. 
Defaults to ``None`` """ diff --git a/mmpose/models/pose_estimators/topdown.py b/mmpose/models/pose_estimators/topdown.py index 521827ff2a..89b332893f 100644 --- a/mmpose/models/pose_estimators/topdown.py +++ b/mmpose/models/pose_estimators/topdown.py @@ -30,7 +30,7 @@ class TopdownPoseEstimator(BasePoseEstimator): metainfo (dict): Meta information for dataset, such as keypoints definition and properties. If set, the metainfo of the input data batch will be overridden. For more details, please refer to - https://mmpose.readthedocs.io/en/1.x/user_guides/ + https://mmpose.readthedocs.io/en/latest/user_guides/ prepare_datasets.html#create-a-custom-dataset-info- config-file-for-the-dataset. Defaults to ``None`` """ diff --git a/projects/README.md b/projects/README.md index e882e8b0b9..3505f96f41 100644 --- a/projects/README.md +++ b/projects/README.md @@ -16,11 +16,11 @@ If you're not sure where to start, check out our [example project](./example_pro We also provide some documentation listed below to help you get started: -- [New Model Guide](https://mmpose.readthedocs.io/en/1.x/guide_to_framework.html#step3-model) +- [New Model Guide](https://mmpose.readthedocs.io/en/latest/guide_to_framework.html#step3-model) A guide to help you add new models to MMPose. -- [Contribution Guide](https://mmpose.readthedocs.io/en/1.x/contribution_guide.html) +- [Contribution Guide](https://mmpose.readthedocs.io/en/latest/contribution_guide.html) A guide for new contributors on how to add their projects to MMPose. diff --git a/projects/example_project/README.md b/projects/example_project/README.md index 68f58b5a01..d355741aa4 100644 --- a/projects/example_project/README.md +++ b/projects/example_project/README.md @@ -37,7 +37,7 @@ export PYTHONPATH=`pwd`:$PYTHONPATH ### Data Preparation -Prepare the COCO dataset according to the [instruction](https://mmpose.readthedocs.io/en/1.x/dataset_zoo/2d_body_keypoint.html#coco). +Prepare the COCO dataset according to the [instruction](https://mmpose.readthedocs.io/en/dev-1.x/dataset_zoo/2d_body_keypoint.html#coco). ### Training commands @@ -149,7 +149,7 @@ to MMPose projects. - [ ] Unit tests - > Unit tests for the major module are required. [Example](https://github.com/open-mmlab/mmpose/blob/1.x/tests/test_models/test_heads/test_heatmap_heads/test_heatmap_head.py) + > Unit tests for the major module are required. [Example](https://github.com/open-mmlab/mmpose/blob/dev-1.x/tests/test_models/test_heads/test_heatmap_heads/test_heatmap_head.py) - [ ] Code polishing diff --git a/projects/mmpose4aigc/README.md b/projects/mmpose4aigc/README.md index 6e5335d2a8..1c9d268093 100644 --- a/projects/mmpose4aigc/README.md +++ b/projects/mmpose4aigc/README.md @@ -25,7 +25,7 @@ Run the following commands to prepare the project: ```shell # install mmpose mmdet pip install openmim -git clone -b 1.x https://github.com/open-mmlab/mmpose.git +git clone https://github.com/open-mmlab/mmpose.git cd mmpose mim install -e . mim install "mmdet>=3.0.0rc6" diff --git a/projects/mmpose4aigc/README_CN.md b/projects/mmpose4aigc/README_CN.md index 8e639fbed1..bdb943ec17 100644 --- a/projects/mmpose4aigc/README_CN.md +++ b/projects/mmpose4aigc/README_CN.md @@ -25,7 +25,7 @@ ```shell # install mmpose mmdet pip install openmim -git clone -b 1.x https://github.com/open-mmlab/mmpose.git +git clone https://github.com/open-mmlab/mmpose.git cd mmpose mim install -e . 
mim install "mmdet>=3.0.0rc6" diff --git a/projects/rtmpose/README.md b/projects/rtmpose/README.md index a4638d9241..cd1477643b 100644 --- a/projects/rtmpose/README.md +++ b/projects/rtmpose/README.md @@ -150,7 +150,7 @@ Feel free to join our community group for more help: **Notes** - Since all models are trained on multi-domain combined datasets for practical applications, results are **not** suitable for academic comparison. -- More results of RTMPose on public benchmarks can refer to [Model Zoo](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/algorithms.html) +- More results of RTMPose on public benchmarks can refer to [Model Zoo](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/algorithms.html) - Flip test is used. - Inference speed measured on more hardware platforms can refer to [Benchmark](./benchmark/README.md) - If you have datasets you would like us to support, feel free to [contact us](https://docs.google.com/forms/d/e/1FAIpQLSfzwWr3eNlDzhU98qzk2Eph44Zio6hi5r0iSwfO9wSARkHdWg/viewform?usp=sf_link)/[联系我们](https://uua478.fanqier.cn/f/xxmynrki). @@ -385,7 +385,7 @@ Result is as follows: ## 👨🏫 How to Train [🔝](#-table-of-contents) -Please refer to [Train and Test](https://mmpose.readthedocs.io/en/1.x/user_guides/train_and_test.html). +Please refer to [Train and Test](https://mmpose.readthedocs.io/en/latest/user_guides/train_and_test.html). **Tips**: @@ -403,7 +403,7 @@ Here is a basic example of deploy RTMPose with [MMDeploy-1.x](https://github.com Before starting the deployment, please make sure you install MMPose-1.x and MMDeploy-1.x correctly. -- Install MMPose-1.x, please refer to the [MMPose-1.x installation guide](https://mmpose.readthedocs.io/en/1.x/installation.html). +- Install MMPose-1.x, please refer to the [MMPose-1.x installation guide](https://mmpose.readthedocs.io/en/latest/installation.html). - Install MMDeploy-1.x, please refer to the [MMDeploy-1.x installation guide](https://mmdeploy.readthedocs.io/en/1.x/get_started.html#installation). Depending on the deployment backend, some backends require compilation of custom operators, so please refer to the corresponding document to ensure the environment is built correctly according to your needs: diff --git a/projects/rtmpose/README_CN.md b/projects/rtmpose/README_CN.md index efebf147a8..f22787a2a3 100644 --- a/projects/rtmpose/README_CN.md +++ b/projects/rtmpose/README_CN.md @@ -142,7 +142,7 @@ RTMPose 是一个长期优化迭代的项目,致力于业务场景下的高性 - 此处提供的模型采用了多数据集联合训练以提高性能,模型指标不适用于学术比较。 - 表格中为开启了 Flip Test 的测试结果。 -- RTMPose 在更多公开数据集上的性能指标可以前往 [Model Zoo](https://mmpose.readthedocs.io/en/1.x/model_zoo_papers/algorithms.html) 查看。 +- RTMPose 在更多公开数据集上的性能指标可以前往 [Model Zoo](https://mmpose.readthedocs.io/en/latest/model_zoo_papers/algorithms.html) 查看。 - RTMPose 在更多硬件平台上的推理速度可以前往 [Benchmark](./benchmark/README_CN.md) 查看。 - 如果你有希望我们支持的数据集,欢迎[联系我们](https://uua478.fanqier.cn/f/xxmynrki)/[Google Questionnaire](https://docs.google.com/forms/d/e/1FAIpQLSfzwWr3eNlDzhU98qzk2Eph44Zio6hi5r0iSwfO9wSARkHdWg/viewform?usp=sf_link)! 
@@ -361,7 +361,7 @@ example\cpp\build\Release ### MMPose demo 脚本 -通过 MMPose 提供的 demo 脚本可以基于 Pytorch 快速进行[模型推理](https://mmpose.readthedocs.io/en/1.x/user_guides/inference.html)和效果验证。 +通过 MMPose 提供的 demo 脚本可以基于 Pytorch 快速进行[模型推理](https://mmpose.readthedocs.io/en/latest/user_guides/inference.html)和效果验证。 ```shell # 前往 mmpose 目录 @@ -383,7 +383,7 @@ python demo/topdown_demo_with_mmdet.py \ ## 👨🏫 模型训练 [🔝](#-table-of-contents) -请参考 [训练与测试](https://mmpose.readthedocs.io/en/1.x/user_guides/train_and_test.html) 进行 RTMPose 的训练。 +请参考 [训练与测试](https://mmpose.readthedocs.io/en/latest/user_guides/train_and_test.html) 进行 RTMPose 的训练。 **提示**: @@ -401,7 +401,7 @@ python demo/topdown_demo_with_mmdet.py \ 在开始部署之前,首先你需要确保正确安装了 MMPose, MMDetection, MMDeploy,相关安装教程如下: -- [安装 MMPose 与 MMDetection](https://mmpose.readthedocs.io/zh_CN/1.x/installation.html) +- [安装 MMPose 与 MMDetection](https://mmpose.readthedocs.io/zh_CN/latest/installation.html) - [安装 MMDeploy](https://mmdeploy.readthedocs.io/zh_CN/1.x/04-supported-codebases/mmpose.html) 根据部署后端的不同,有的后端需要对自定义算子进行编译,请根据需求前往对应的文档确保环境搭建正确: diff --git a/projects/yolox-pose/README.md b/projects/yolox-pose/README.md index 6b632c00ff..e880301ae6 100644 --- a/projects/yolox-pose/README.md +++ b/projects/yolox-pose/README.md @@ -32,7 +32,7 @@ python demo/inferencer_demo.py $INPUTS \ [--show] [--vis-out-dir $VIS_OUT_DIR] [--pred-out-dir $PRED_OUT_DIR] ``` -For more information on using the inferencer, please see [this document](https://mmpose.readthedocs.io/en/1.x/user_guides/inference.html#out-of-the-box-inferencer). +For more information on using the inferencer, please see [this document](https://mmpose.readthedocs.io/en/latest/user_guides/inference.html#out-of-the-box-inferencer). Here's an example code: @@ -51,7 +51,7 @@ This will create an output image `vis_results/000000000785.jpg`, which appears l #### Data Preparation -Prepare the COCO dataset according to the [instruction](https://mmpose.readthedocs.io/en/1.x/dataset_zoo/2d_body_keypoint.html#coco). +Prepare the COCO dataset according to the [instruction](https://mmpose.readthedocs.io/en/latest/dataset_zoo/2d_body_keypoint.html#coco). #### Commands