Towards Real-world Video Face Restoration: A New Benchmark

🏰 HomePage | 📃 Paper

User study page: 🚀 Playground | 🧩 Source Code

The FOS (Full, Occluded, Side)-face test datasets are proposed to cover faces in more complex real-world scenarios for a comprehensive real-world evaluation of blind face restoration (BFR). With hard cases of side & occluded faces and both image & video media involved, the FOS dataset contains 3 sub-datasets: FOS-real, FOS-syn and FOS-V.


🌟 If you find this work helpful, please star this repo~! 🤗

🧩 Dataset

Dataset     Samples                      #       Source                  Download
FOS-real    [preview: FOS-real (#158)]   4,253   FOS-V                   BaiduCloud | OneDrive
FOS-syn     -                            3,150   CelebA-HQ Test (5k)     BaiduCloud | OneDrive
FOS-V       [preview: FOS-v (#108)]      3,316   YTCeleb, YTFace, YTW    BaiduCloud | OneDrive

For more details about the datasets, please see this documentation.

✨ Library Overview

This repo follows the basic code architecture of BasicSR.

As summarized below, we support processing pipelines (face-clip extraction), models (6 BFR and 4 VSR methods) and metrics (8 general IQA & 5 FIQA metrics):

  • Preprocessing pipelines: face-clip extraction, with face detection, tracking & recognition involved; refer to FaceXLib & VFHQ. A rough detection sketch follows this list.
  • Models: BFR models (CodeFormer, RestoreFormer, VQFR, GCFSR, GFPGAN, GPEN) and VSR models (EDVR, EDVR-GAN, BasicVSR, BasicVSR-GAN); the VSR models are from VFHQ.
  • Metrics: general IQA (PSNR, SSIM, LPIPS, FID, NIQE, BRISQUE, HyperIQA, MUSIQ, MANIQA) and FIQA (SER-FIQ, FaceQnet, SDD-FIQA, IFQA); refer to pyiqa.
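For a rough idea of how such a pipeline can start, the sketch below runs facexlib's RetinaFace detector on a single frame. It is a minimal illustration, not the repo's exact preprocessing implementation; the model name, confidence threshold and input path are assumptions.

# Minimal face-detection sketch with facexlib (assumed settings, not the
# exact preprocessing used in this repo).
import cv2
import torch
from facexlib.detection import init_detection_model

device = 'cuda' if torch.cuda.is_available() else 'cpu'
det_net = init_detection_model('retinaface_resnet50', half=False, device=device)

img = cv2.imread('frame_0001.png')  # hypothetical input frame
with torch.no_grad():
    # each row: [x1, y1, x2, y2, score, 5 landmark points]
    bboxes = det_net.detect_faces(img, 0.97)
print(f'detected {len(bboxes)} face(s)')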

Please refer to their original licenses when using them for other purposes.

In our work, we studied the applicability of new IQA metrics by measuring their consistency with subjective evaluations.
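For example, consistency with subjective scores is commonly summarized with rank correlations. The sketch below computes SRCC/KRCC/PLCC between hypothetical metric scores and user-study mean opinion scores (MOS); the arrays are placeholders, not data from the paper.

# Hedged sketch: correlation between an IQA metric and subjective MOS.
from scipy import stats

metric_scores = [0.41, 0.55, 0.38, 0.72, 0.64]  # e.g. per-image MUSIQ scores (placeholder)
mos_scores = [2.1, 3.4, 1.9, 4.2, 3.8]          # user-study ratings (placeholder)

srcc, _ = stats.spearmanr(metric_scores, mos_scores)
krcc, _ = stats.kendalltau(metric_scores, mos_scores)
plcc, _ = stats.pearsonr(metric_scores, mos_scores)
print(f'SRCC={srcc:.3f}  KRCC={krcc:.3f}  PLCC={plcc:.3f}')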

🚩 What's new

  • Released the playground page & the source code of the user-study web GUI.

🔧 Dependencies & Installation

Since this repo integrates multiple methods, setting up the environment can be complex. Please read the guidance carefully to make your installation easier; a quick sanity check follows the steps below.

  1. Install the CUDA toolkit, PyTorch & Torchvision, referring to the PyTorch documentation. Requirements:

    • Python >= 3.7, < 3.10
    • PyTorch >= 1.7.1
    • CUDA >= 10.1
  2. Set up bfrxlib

    python setup.py develop

    This could take some time to complete.

  3. Other dependencies

    • Install basicsr >= 1.4.0

    Compile the PyTorch C++ extensions (requires gcc & g++ >= 5) if you need to use EDVR, GFPGAN, GCFSR or GPEN.

    # 1. off-line: specify BASICSR_EXT=True during INSTALLATION
    BASICSR_EXT=True pip install basicsr
    
    # 2. on-line: specify BASICSR_JIT (Just-In-Time) during RUNNING
    pip install basicsr
    # and run the following command before every inference
    export BASICSR_JIT='True'

    Please see BasicSR Installation for more details of the installation guidance.

    If you don't need to use any of the abovementioned models, simply install basicsr:

    pip install basicsr
    • If you need to use the FIQA metric SER-FIQ, the following should also be installed correctly:

      • nccl
      • mxnet

    # check the compatible nccl version before installation
    # https://developer.nvidia.com/nccl/nccl-legacy-downloads
    conda install nccl
    
    # replace mxnet-cuxxx with the build matching your CUDA toolkit version
    pip install mxnet-cu110 # e.g. mxnet-cu113
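After finishing the steps above, a quick sanity check (a minimal sketch; adjust it to the packages you actually installed) can confirm that Python, PyTorch/CUDA and basicsr are importable:

# Installation sanity check (sketch only).
import sys
import torch
import basicsr

print('python :', sys.version.split()[0])
print('torch  :', torch.__version__, '| CUDA available:', torch.cuda.is_available())
print('basicsr:', basicsr.__version__)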

⚡ Run

Quick Inference

python scripts/inference.py \
-opt options/test/codeformer/test_codeformer_out512_fos_real.yaml

We also provide a script to run inference and calculate metrics at the same time.

python scripts/inference_with_metrics.py \
-opt options/test/codeformer/test_codeformer_out512_wild.yaml

--force_yml allows you to make quick config modifications on top of an existing config .yaml; try:

python scripts/inference_with_metrics.py \
-opt options/test/codeformer/test_codeformer_out512_wild.yaml \
--force_yml dataset:dataroot_lq=data/fos_real path:results_root=results/ dataset:metrics=[niqe, fid_folder]

See more tutorial details about the config file in README.md. A rough sketch of the override mechanism follows.
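As an illustration only, the sketch below applies --force_yml-style section:key=value pairs to a config loaded with PyYAML. The inlined config reflects just the fields visible in the command above (dataset.dataroot_lq, dataset.metrics, path.results_root) and is an assumption, not the full config schema.

# Sketch of a --force_yml-style override on a simplified, assumed config.
import yaml

cfg_text = """
dataset:
  dataroot_lq: data/fos_real
  metrics: [niqe, fid_folder]
path:
  results_root: results/
"""
cfg = yaml.safe_load(cfg_text)

def force_yml(cfg, overrides):
    # each override looks like 'section:key=value', e.g. 'path:results_root=results/demo'
    for item in overrides:
        keys, value = item.split('=', 1)
        section, key = keys.split(':', 1)
        cfg[section][key] = yaml.safe_load(value)  # parses lists and numbers too
    return cfg

cfg = force_yml(cfg, ['dataset:metrics=[niqe, fid_folder]', 'path:results_root=results/demo'])
print(cfg)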

Metrics

To calculate target metrics directly on the restored image data:

python scripts/calculate_metrics.py \
-i results/codeformer/fos_real \
-m musiq ifqa
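Since the metric implementations refer to pyiqa, a no-reference metric can also be computed with pyiqa directly as a quick cross-check. A minimal sketch, assuming a folder of restored .png frames (the path is hypothetical):

# Hedged sketch: mean MUSIQ over a results folder, computed with pyiqa.
from pathlib import Path
import torch
import pyiqa

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
musiq = pyiqa.create_metric('musiq', device=device)  # no-reference IQA metric

folder = Path('results/codeformer/fos_real')  # hypothetical results folder
scores = [musiq(str(p)).item() for p in sorted(folder.glob('*.png'))]
print(f'{len(scores)} images, mean MUSIQ = {sum(scores) / len(scores):.3f}')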

📄 Agreement

  • The FOS dataset is available for download only for non-commercial research purposes. The copyright remains with the original owners of the images/videos. A complete version of the license can be found here. Any download and use of the FOS dataset implies agreement with the released FOS License.
  • All videos of the FOS dataset were obtained from the Internet and are not the property of our institutions. Our institutions are not responsible for the content or the meaning of these videos.
  • You agree not to reproduce, duplicate, copy, sell, trade, resell or exploit for any commercial purpose any portion of the videos or any portion of derived data. You agree not to further copy, publish or distribute any portion of the FOS dataset.
  • The distribution of identities in the FOS datasets may not be representative of the global human population. Please be mindful of unintended societal, gender, racial and other biases when training or deploying models on this data.

🌏 Citation

@InProceedings{Chen_2024_CVPR,
    author    = {Chen, Ziyan and He, Jingwen and Lin, Xinqi and Qiao, Yu and Dong, Chao},
    title     = {Towards Real-world Video Face Restoration: A New Benchmark},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2024},
    pages     = {5929-5939}
}

📧 Contact

Feel free to reach out at [email protected] if you have any questions.
