Demo App on Hugging Face Spaces 🤗
Facetorch is a Python library that detects faces and analyzes facial features using deep neural networks. The goal is to gather open-source face analysis models from the community, optimize them for performance using TorchScript, and combine them into a face analysis tool that one can:
- configure using Hydra (OmegaConf)
- reproduce with conda-lock and Docker
- accelerate on CPU and GPU with TorchScript
- extend by uploading a model file to Google Drive and adding a config yaml file to the repository
Please use the library responsibly and with caution, and follow the European Commission's Ethics Guidelines for Trustworthy AI. The models are not perfect and may be biased.
```
pip install facetorch
```

or with conda:

```
conda install -c conda-forge facetorch
```
Docker Compose provides an easy way of building a working facetorch environment with a single command.
- CPU:

  ```
  docker compose run facetorch python ./scripts/example.py
  ```

- GPU:

  ```
  docker compose run facetorch-gpu python ./scripts/example.py analyzer.device=cuda
  ```
Check the data/output directory for the resulting images with bounding boxes and facial 3D landmarks.
The project is configured by the files located in conf, with conf/config.yaml as the main file. One can easily add or remove modules from the configuration.
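Hydra composes the final configuration from small yaml files, one per component. The fragment below is a purely illustrative sketch of such a defaults-list composition; the group and option names are assumptions modeled on the conf/analyzer/... paths used elsewhere in this README, and the shipped conf/config.yaml is authoritative:

```yaml
# Illustrative only — see conf/config.yaml for the real composition.
defaults:
  - analyzer/reader: default                    # hypothetical option name
  - analyzer/detector: retinaface               # hypothetical option name
  - analyzer/predictor/fer: efficientnet_b2_8   # file referenced in this README
```

Any composed value can then be overridden from the command line with Hydra's dotted syntax, as in the `analyzer.device=cuda` override shown in the Docker examples.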
FaceAnalyzer is the main class of facetorch as it is the orchestrator responsible for initializing and running the following components:
- Reader - reads the image and returns an ImageData object containing the image tensor.
- Detector - wrapper around a neural network that detects faces.
- Unifier - processor that unifies sizes of all faces and normalizes them between 0 and 1.
- Predictor dict - set of wrappers around neural networks trained to analyze facial features.
- Utilizer dict - set of wrappers around any functionality that requires the output of the neural networks, e.g. drawing bounding boxes or facial landmarks.
```
analyzer
├── reader
├── detector
├── unifier
├── predictor
│   ├── embed
│   ├── verify
│   ├── fer
│   ├── deepfake
│   └── align
└── utilizer
    ├── align
    ├── draw
    └── save
```
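The orchestration above can be sketched in plain Python. This is an illustrative stand-in, not facetorch's actual API — the class names, signatures, and toy components below are simplified assumptions:

```python
# Schematic of the FaceAnalyzer pipeline; names are illustrative stand-ins.
from dataclasses import dataclass, field

@dataclass
class ImageData:
    image: object                              # stand-in for the image tensor
    faces: list = field(default_factory=list)  # detected face boxes / crops
    preds: dict = field(default_factory=dict)  # predictor outputs by name

class Analyzer:
    """Orchestrator mirroring the component order of FaceAnalyzer."""

    def __init__(self, reader, detector, unifier, predictors, utilizers):
        self.reader = reader
        self.detector = detector
        self.unifier = unifier
        self.predictors = predictors   # dict: name -> callable
        self.utilizers = utilizers     # dict: name -> callable

    def run(self, path):
        data = self.reader(path)                       # read image -> ImageData
        data = self.detector(data)                     # detect faces
        data = self.unifier(data)                      # unify and normalize crops
        for name, predict in self.predictors.items():  # analyze facial features
            data.preds[name] = predict(data)
        for utilize in self.utilizers.values():        # draw, align, save, ...
            data = utilize(data)
        return data

# Toy components standing in for the real neural-network wrappers.
analyzer = Analyzer(
    reader=lambda path: ImageData(image=path),
    detector=lambda d: (d.faces.extend([(0, 0, 10, 10)]), d)[1],
    unifier=lambda d: d,
    predictors={"fer": lambda d: ["happiness"] * len(d.faces)},
    utilizers={"draw": lambda d: d},
)
result = analyzer.run("test.jpg")
print(result.preds["fer"])  # → ['happiness'] — one prediction per detected face
```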
| model | source | params | license | version |
| ------------- | --------- | --------- | ----------- | ------- |
| RetinaFace | biubug6 | 27.3M | MIT license | 1 |
- biubug6
| model | source | params | license | version |
| ----------------- | ---------- | ------- | ----------- | ------- |
| ResNet-50 VGG 1M | 1adrianb | 28.4M | MIT license | 1 |
- 1adrianb
- code: unsupervised-face-representation
- paper: Bulat et al. - Pre-training strategies and datasets for facial representation learning
- Note: `include_tensors` needs to be True in order to include the model prediction in `Prediction.logits`.
| model | source | params | license | version |
| ---------------- | ----------- | -------- | ------------------ | ------- |
| MagFace+UNPG | Jung-Jun-Uk | 65.2M | Apache License 2.0 | 1 |
| AdaFaceR100W12M | mk-minchul | - | MIT License | 2 |
- Jung-Jun-Uk
- code: UNPG
- paper: Jung et al. - Unified Negative Pair Generation toward Well-discriminative Feature Space for Face Recognition
- accuracy reported at FAR=0.01
- Note: `include_tensors` needs to be True in order to include the model prediction in `Prediction.logits`.
- mk-minchul
- code: AdaFace
- paper: Kim et al. - AdaFace: Quality Adaptive Margin for Face Recognition
- badges represent models trained on the smaller WebFace 4M dataset
- Note: `include_tensors` needs to be True in order to include the model prediction in `Prediction.logits`.
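Verification models output embedding vectors rather than class labels; whether two faces belong to the same identity is typically decided by thresholding the cosine similarity of their embeddings. A generic sketch of that comparison (not facetorch's API; the embeddings and threshold below are toy values):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy embeddings standing in for model outputs (e.g. Prediction.logits).
emb_same_1 = [0.9, 0.1, 0.0]
emb_same_2 = [0.8, 0.2, 0.0]
emb_other  = [0.0, 0.1, 0.9]

threshold = 0.5  # assumed decision threshold; tuned per model in practice
print(cosine_similarity(emb_same_1, emb_same_2) > threshold)  # True
print(cosine_similarity(emb_same_1, emb_other) > threshold)   # False
```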
| model | source | params | license | version |
| ----------------- | -------------- | -------- | ------------------ | ------- |
| EfficientNet B0 7 | HSE-asavchenko | 4M | Apache License 2.0 | 1 |
| EfficientNet B2 8 | HSE-asavchenko | 7.7M | Apache License 2.0 | 2 |
- HSE-asavchenko
| model | source | params | license | version |
| -------------------- | ---------------- | -------- | ----------- | ------- |
| EfficientNet B7 | selimsef | 66.4M | MIT license | 1 |
- selimsef
| model | source | params | license | version |
| ----------------- | ---------------- | -------- | ----------- | ------- |
| MobileNet v2 | choyingw | 4.1M | MIT license | 1 |
- choyingw
- code: SynergyNet
- challenge: Wu et al. - Synergy between 3DMM and 3D Landmarks for Accurate 3D Facial Geometry
- Note: `include_tensors` needs to be True in order to include the model prediction in `Prediction.logits`.
Models are downloaded automatically at runtime to the models directory. You can also download the models manually from a public Google Drive folder.
The image test.jpg (4 faces) is analyzed (including drawing boxes and landmarks, but not saving) in about 465 ms, and test3.jpg (25 faces) in about 1480 ms (batch_size=8) on an NVIDIA Tesla T4 GPU, once the default configuration of models (conf/config.yaml) has been initialized and warmed up to the initial image size of 1080x1080 by the first run. One can monitor the execution times in the logs using the DEBUG level.
Detailed test.jpg execution times:
```
analyzer
├── reader: 27 ms
├── detector: 230 ms
├── unifier: 1 ms
├── predictor
│   ├── embed: 8 ms
│   ├── verify: 58 ms
│   ├── fer: 28 ms
│   ├── deepfake: 117 ms
│   └── align: 5 ms
└── utilizer
    ├── align: 8 ms
    ├── draw_boxes: 22 ms
    ├── draw_landmarks: 7 ms
    └── save: 298 ms
```
Run the Docker container:
- CPU:

  ```
  docker compose -f docker-compose.dev.yml run facetorch-dev
  ```

- GPU:

  ```
  docker compose -f docker-compose.dev.yml run facetorch-dev-gpu
  ```
Adding a new model requires:
- a file of the TorchScript model
- the ID of the Google Drive model file
- a facetorch fork
Facetorch works with models that were exported from PyTorch to TorchScript. You can apply the `torch.jit.trace` function to compile a PyTorch model into a TorchScript module. Please verify that the output of the traced model equals the output of the original model.
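A minimal tracing sketch, assuming PyTorch is installed; `TinyNet` is a toy stand-in for a real face-analysis network, and the output file name is arbitrary:

```python
# Export a PyTorch model to TorchScript by tracing with an example input.
import torch

class TinyNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 8, kernel_size=3, padding=1)

    def forward(self, x):
        return torch.relu(self.conv(x))

model = TinyNet().eval()
example_input = torch.rand(1, 3, 32, 32)

# Compile to a TorchScript module.
traced = torch.jit.trace(model, example_input)

# Verify that the traced model matches the original before shipping it.
with torch.no_grad():
    assert torch.allclose(model(example_input), traced(example_input))

traced.save("tiny_net.pt")  # TorchScript file a downloader could then fetch
```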
The first models are hosted in my public Google Drive folder. You can either send me the new model for upload, host the model on your own Google Drive, or host it somewhere else and add your own downloader object to the codebase.
- Create a new folder with a short name of the task in the predictor configuration directory `/conf/analyzer/predictor/`, following the FER example in `/conf/analyzer/predictor/fer/`.
- Copy the yaml file `/conf/analyzer/predictor/fer/efficientnet_b2_8.yaml` to the new folder `/conf/analyzer/predictor/<predictor_name>/`.
- Rename the yaml file after the model you want to use: `/conf/analyzer/predictor/<predictor_name>/<model_name>.yaml`.
- Change the Google Drive file ID to the ID of the model.
- Select the preprocessor (or implement a new one based on BasePredPreProcessor) and specify its parameters, e.g. image size and normalization, in the yaml file to match the requirements of the new model.
- Select the postprocessor (or implement a new one based on BasePredPostProcessor) and specify its parameters, e.g. labels, in the yaml file to match the requirements of the new model.
- (Optional) Add a BaseUtilizer derivative that uses the output of your model to perform additional actions.
- Add the new predictor to the main config.yaml and to all tests.config.n.yaml files. Alternatively, create a new config file, e.g. tests.config.n.yaml, and add it to the `/tests/conftest.py` file.
- Write a test for the new predictor in `/tests/test_<predictor_name>.py`.
- Run linting: `black facetorch`
- Add the new predictor to the README model table.
- Update the CHANGELOG and the version.
- Submit a pull request to the repository.
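The resulting predictor yaml might look roughly like the sketch below. Every key in it is a hypothetical stand-in; copy the real field names from `/conf/analyzer/predictor/fer/efficientnet_b2_8.yaml` rather than from this fragment:

```yaml
# Hypothetical sketch — the real schema lives in
# /conf/analyzer/predictor/fer/efficientnet_b2_8.yaml.
downloader:
  file_id: <google-drive-file-id>   # ID of the uploaded TorchScript model
preprocessor:
  image_size: 224                   # must match the new model's input size
postprocessor:
  labels: [label_a, label_b]        # must match the new model's outputs
```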
CPU:
- Add packages with corresponding versions to the `environment.yml` file.
- Lock the environment:

  ```
  conda lock -p linux-64 -f environment.yml --lockfile conda-lock.yml
  ```

- Install the locked environment:

  ```
  conda-lock install --name env conda-lock.yml
  ```

GPU:
- Add packages with corresponding versions to the `gpu.environment.yml` file.
- Lock the environment:

  ```
  conda lock -p linux-64 -f gpu.environment.yml --lockfile gpu.conda-lock.yml
  ```

- Install the locked environment:

  ```
  conda-lock install --name env gpu.conda-lock.yml
  ```
- Run tests and generate coverage:

  ```
  pytest tests --verbose --cov-report html:coverage --cov facetorch
  ```

- Generate documentation from docstrings using pdoc3:

  ```
  pdoc --html facetorch --output-dir docs --force --template-dir pdoc/templates/
  ```

- Run profiling of the example script:

  ```
  python -m cProfile -o profiling/example.prof scripts/example.py
  ```

- Open the profiling file in the browser:

  ```
  snakeviz profiling/example.prof
  ```
I want to thank the open source code community and the researchers who have published the models. This project would not be possible without their work.
The logo was generated using the DeepAI Text To Image API.