Fork

This project is a fork of https://github.com/SamsungLabs/rome made to work with a more recent version of PyTorch and with some additional features such as mesh export.

Installation

Use requirements_new.txt. We only tested infer.py on Ubuntu 22.04 with CUDA 11.8 and Python 3.10.12. Alternatively, you can try pip installing the following packages in order; most versions are the most recent compatible releases as of August 2023.

  • numpy==1.23.1 (During development, this package was downgraded via pip install -U numpy==1.23.1 immediately before installing chumpy to resolve an incompatibility.)
  • torch==2.0.1 (torchvision 0.15.2 pairs with torch 2.0.1; see requirements_new.txt for the exact pin)
  • git+https://github.com/facebookresearch/pytorch3d (tested 27 Aug 2023, later than version 0.7.3; see requirements_new.txt for the exact version)
  • face-alignment==1.4.1
  • torchvision==0.15.2
  • kornia==0.7.0
  • chumpy==0.70

For train:

  • tensorboardX==2.6.2.2
  • lpips==0.1.4
  • pytorch-msssim==1.0.0
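After installing, the pins from the two lists above can be sanity-checked with a short script. This is a sketch, not part of the repo: the package names and versions are copied from the lists above (torch is omitted here; check requirements_new.txt for its exact pin), and the helper name `check_pins` is my own.

```python
from importlib import metadata

# Version pins copied from the README lists above.
PINNED = {
    "numpy": "1.23.1",
    "face-alignment": "1.4.1",
    "torchvision": "0.15.2",
    "kornia": "0.7.0",
    "chumpy": "0.70",
    "tensorboardX": "2.6.2.2",
    "lpips": "0.1.4",
    "pytorch-msssim": "1.0.0",
}

def check_pins(pins):
    """Return {package: (pinned, installed-or-None)} for every mismatch."""
    problems = {}
    for pkg, want in pins.items():
        try:
            have = metadata.version(pkg)
        except metadata.PackageNotFoundError:
            have = None  # not installed at all
        if have != want:
            problems[pkg] = (want, have)
    return problems

if __name__ == "__main__":
    for pkg, (want, have) in check_pins(PINNED).items():
        print(f"{pkg}: expected {want}, found {have}")
```

Any package listed in the output either is missing or has a version differing from the pin.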

NVIDIA apex

If importing apex fails with an UnencryptedCookieSessionFactoryConfig error, see https://stackoverflow.com/questions/66610378/unencryptedcookiesessionfactoryconfig-error-when-importing-apex (tested 2023-09-01).

git clone https://github.com/NVIDIA/apex
cd apex
python setup.py install

Training

Realistic one-shot mesh-based avatars


Paper | Project Page

This repository contains official inference code for ROME.

This code helps you create a personal avatar from just a single image. The resulting meshes can be animated and rendered with photorealistic quality.

Important disclaimer

To render a ROME avatar with the pretrained weights, you need to download the FLAME model and the DECA weights. DECA reconstructs a 3D head model with detailed facial geometry from a single input image for the FLAME template; it can also be replaced by another parametric model.


Getting started

Initialise the submodules and download the DECA & MODNet weights. For DECA, put deca_model.tar and generic_model.pkl inside DECA/data; for MODNet, put the three .ckpt files and the one .onnx file downloaded from their repo inside the MODNet/pretrained directory (you may need to create it).

git submodule update --init --recursive

Install requirements and download ROME model: gDrive, y-disk.

Put model into data folder.

To verify the code with images, run:

python3 infer.py -i data/imgs/taras1.jpg --deca DECA --rome data --save_mesh --save_albedo

# Different driver image
python3 infer.py -i data/imgs/taras1.jpg -d data/imgs/taras2.jpg --deca DECA --rome data --save_mesh

For the linear basis, download the ROME model: gDrive (or the camera model for VoxCeleb: gDrive), yDrive.

python3 infer.py --deca DECA --rome data --use_distill True

License

This code and model are available for scientific research purposes as defined in the LICENSE file. By downloading and using the project you agree to the terms of the LICENSE and the DECA LICENSE. Please note that distributing this code for non-scientific purposes is restricted.

Links

This work is based on the great DECA project. We also acknowledge additional projects that were essential and sped up the development.

Citation

If you found this code helpful, please consider citing:

@inproceedings{Khakhulin2022ROME,
  author    = {Khakhulin, Taras and Sklyarova,  Vanessa and Lempitsky, Victor and Zakharov, Egor},
  title     = {Realistic One-shot Mesh-based Head Avatars},
  booktitle = {European Conference on Computer Vision (ECCV)},
  year      = {2022}
}
