forked from alew3/faceit_live3
Commit
Showing 12 changed files with 4,579 additions and 1 deletion.
.gitmodules
@@ -0,0 +1,3 @@
[submodule "first-order-model"]
	path = first-order-model
	url = https://github.com/AliaksandrSiarohin/first-order-model
README.md
@@ -1,2 +1,112 @@
# faceit_live3
This is an update to [faceit_live](https://github.com/alew3/faceit_live) using [first order model](https://github.com/AliaksandrSiarohin/first-order-model) by Aliaksandr Siarohin to generate the images. This model only requires a single source image, so no training is needed and things are much easier.

# Setup

## Requirements
This has been tested on **Ubuntu 18.04 with a Titan RTX/X GPU**.
You will need the following to make it work:

- Linux host OS
- A fast NVIDIA GPU (GTX 1080, GTX 1080 Ti, Titan, etc.)
- A fast desktop CPU (quad core or more)
- NVIDIA CUDA 10 and cuDNN 7 libraries installed
- Webcam

## Setup Host System
To use the fake webcam feature to join conferences with our stream, we need to load the **v4l2loopback** kernel module in order to create */dev/video1*. Follow the install instructions at https://github.com/umlaeute/v4l2loopback, then set up our fake webcam:

```
$ git clone https://github.com/umlaeute/v4l2loopback.git
$ cd v4l2loopback
$ make && sudo make install
$ sudo depmod -a
$ sudo modprobe v4l2loopback devices=1 exclusive_caps=1 card_label="faceit_live" video_nr=1
$ v4l2-ctl -d /dev/video1 -c timeout=1000
# optional: static image to show while nothing is streaming
# v4l2loopback-ctl set-timeout-image caio.png /dev/video1
```

Change the `video_nr` above in case you already have a webcam running on */dev/video1*.

To check that things are working, try streaming an mp4 to */dev/video1* (replace ale.mp4 with your own video):
```
$ ffmpeg -re -stream_loop 10 -i media/ale.mp4 -f v4l2 /dev/video1
```
And view it:
```
$ ffplay -f v4l2 /dev/video1
```
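
The main script writes frames to this device through the `pyfakewebcam` library, so you can also verify the loopback device directly from Python. A minimal sketch, assuming */dev/video1* exists and pyfakewebcam is installed; the green test pattern is just for illustration:

```
# Sketch: push a solid green test pattern to the v4l2loopback device created
# above, then watch it with `ffplay -f v4l2 /dev/video1`.
import time
import numpy as np
import pyfakewebcam

camera = pyfakewebcam.FakeWebcam('/dev/video1', 640, 480)
frame = np.zeros((480, 640, 3), dtype=np.uint8)
frame[:, :, 1] = 255  # pyfakewebcam takes RGB frames, so this is pure green

while True:
    camera.schedule_frame(frame)
    time.sleep(1 / 30.0)  # roughly 30 fps
```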

On Ubuntu 18, I had to make a minor change to the source code of v4l2loopback.c to get loopback working. In case the above doesn't work, you can try this change before running *make*:

```
# v4l2loopback.c
from
#if LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 29)
to
#if LINUX_VERSION_CODE >= KERNEL_VERSION(3, 7, 0)
```

You can also inspect your /dev/video* devices:

```
$ v4l2-ctl --list-devices
$ v4l2-ctl --list-formats -d /dev/video1
```

If you have more than one GPU, you might need to set some environment variables:
```
# specify which display to use for rendering
$ export DISPLAY=:1
# which CUDA device to use (run nvidia-smi to discover the ID)
$ export CUDA_VISIBLE_DEVICES=0
```
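
Once the Python environment below is installed, you can confirm what PyTorch actually sees after the export. A quick sketch, for illustration only:

```
# Confirm which GPU the process will use after setting CUDA_VISIBLE_DEVICES.
import torch

print(torch.cuda.device_count())      # 1 if CUDA_VISIBLE_DEVICES=0
print(torch.cuda.get_device_name(0))  # name of the selected card
```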

## Clone this repository
Don't forget to use the *--recurse-submodules* parameter to check out all dependencies.

```
$ git clone --recurse-submodules https://github.com/alew3/faceit_live3.git /local_path/
```

## Create an Anaconda environment and install requirements
```
$ conda create -n faceit_live3 python=3.8
$ source activate faceit_live3
$ conda install pytorch=1.4 torchvision=0.5 cudatoolkit=10.1 -c pytorch
$ pip install -r requirements.txt
```
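
To sanity-check the environment before moving on, a short sketch; the expected versions follow from the `conda install` line above:

```
# Verify the pinned PyTorch/torchvision versions and that CUDA is usable.
import torch
import torchvision

print(torch.__version__)          # expect 1.4.x
print(torchvision.__version__)    # expect 0.5.x
print(torch.cuda.is_available())  # should be True on a working CUDA setup
```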

## Download 'vox-adv-cpk.pth.tar' to the model/ folder

You can find it at [google-drive](https://drive.google.com/open?id=1PyQJmkdCsAkOYwUyaj_l-l0as-iLDgeH) or [yandex-disk](https://yadi.sk/d/lEw8uRm140L_eQ).
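
Despite the `.tar` extension, the checkpoint is loaded with `torch.load` by the first-order-model code, so you can confirm the download is intact with a quick check (the exact key names depend on how the checkpoint was saved):

```
# Make sure the downloaded checkpoint is readable before running the app.
import torch

checkpoint = torch.load('model/vox-adv-cpk.pth.tar', map_location='cpu')
print(list(checkpoint.keys()))  # generator/kp_detector weights should be listed
```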

# Usage

Put the jpg/png images you want to play with in the `./media/` directory.

# Run the program

```
$ python faceit_live.py
```

## Parameters
```
--webcam    # the video id of the webcam, e.g. 0 for /dev/video0 (default is 0)
--image     # the face image to use for the transformation; put the files inside media/ (by default the first image in the folder is loaded)
--streamto  # the /dev/video number to stream to (default is 1)
```

## Example
```
$ python faceit_live.py --webcam 0 --streamto 1 --image oliver.jpg
```
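
Note that the script in this commit hardcodes `webcam_id` and `stream_id` in its setup block rather than parsing these flags. A hypothetical argparse sketch of how the documented parameters could be wired in (not part of this commit):

```
# Hypothetical wiring for the flags documented above.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--webcam', type=int, default=0, help='video id of the webcam, e.g. 0 for /dev/video0')
parser.add_argument('--image', default=None, help='face image inside media/ (defaults to the first image found)')
parser.add_argument('--streamto', type=int, default=1, help='the /dev/video number to stream to')
args = parser.parse_args()

webcam_id = args.webcam
stream_id = args.streamto
```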
faceit_live.py
@@ -0,0 +1,212 @@
import imageio
import numpy as np
import pandas as pd
from skimage.transform import resize
import warnings
import sys
import cv2
import time
import PIL.Image as Image
import PIL.ImageFilter
import io
from io import BytesIO
import pyfakewebcam
import pyautogui
import os
import glob
warnings.filterwarnings("ignore")

############## setup ####
stream = True
media_path = './media/'
model_path = 'model/'
webcam_id = 2
webcam_height = 480
webcam_width = 640
screen_width, screen_height = pyautogui.size()

stream_id = 1
first_order_path = 'first-order-model/'
sys.path.insert(0, first_order_path)
reset = True

# import methods from first-order-model
import demo
from demo import load_checkpoints, make_animation, tqdm

# prevent tqdm from outputting to console
demo.tqdm = lambda *i, **kwargs: i[0]

# collect the jpg/jpeg/png images in ./media/ that we can rotate between
img_list = []
for filename in os.listdir(media_path):
    if filename.endswith(".jpg") or filename.endswith(".jpeg") or filename.endswith(".png"):
        img_list.append(os.path.join(media_path, filename))
        print(os.path.join(media_path, filename))

print(img_list, len(img_list))

############## end setup ####

def main():
    # reset is toggled by the keyboard handler below, so declare it global
    # together with the currently selected source image
    global source_image, reset
    source_image = readnextimage(0)

    # start streaming
    camera = pyfakewebcam.FakeWebcam(f'/dev/video{stream_id}', webcam_width, webcam_height)
    camera.print_capabilities()
    print(f"Fake webcam created on /dev/video{stream_id}. Use Firefox and join a Google Meeting to test.")

    # capture webcam
    video_capture = cv2.VideoCapture(webcam_id)
    time.sleep(1)
    width = video_capture.get(cv2.CAP_PROP_FRAME_WIDTH)
    height = video_capture.get(cv2.CAP_PROP_FRAME_HEIGHT)
    print("webcam dimensions = {} x {}".format(width, height))

    # load models
    previous = None
    net = load_face_model()
    generator, kp_detector = demo.load_checkpoints(config_path=f'{first_order_path}config/vox-adv-256.yaml', checkpoint_path=f'{model_path}/vox-adv-cpk.pth.tar')

    # create windows
    cv2.namedWindow('Face', cv2.WINDOW_GUI_NORMAL)  # extracted face
    cv2.moveWindow('Face', int(screen_width / 2) - 150, 100)
    cv2.resizeWindow('Face', 256, 256)

    cv2.namedWindow('DeepFake', cv2.WINDOW_GUI_NORMAL)  # face transformation
    cv2.moveWindow('DeepFake', int(screen_width / 2) + 150, 100)
    cv2.resizeWindow('DeepFake', 256, 256)

    cv2.namedWindow('Stream', cv2.WINDOW_GUI_NORMAL)  # rendered to fake webcam
    cv2.moveWindow('Stream', int(screen_width / 2) - int(webcam_width / 2), 400)
    cv2.resizeWindow('Stream', webcam_width, webcam_height)

    print("Press C to center Webcam, Press N for next image in media directory")

    while True:
        ret, frame = video_capture.read()
        frame = cv2.resize(frame, (640, 480))
        frame = cv2.flip(frame, 1)

        # (re)detect the face and cut the same window out of the source image
        if previous is None or reset is True:
            x1, y1, x2, y2 = find_face_cut(net, frame)
            previous = cut_face_window(x1, y1, x2, y2, source_image)
            reset = False

        deep_fake = process_image(previous, cut_face_window(x1, y1, x2, y2, frame), net, generator, kp_detector)
        deep_fake = cv2.cvtColor(deep_fake, cv2.COLOR_RGB2BGR)

        # cv2.imshow('Webcam', frame) - get face
        cv2.imshow('Face', cut_face_window(x1, y1, x2, y2, frame))
        cv2.imshow('DeepFake', deep_fake)

        rgb = cv2.resize(deep_fake, (480, 480))
        # pad the 480x480 result out to the 640x480 webcam frame
        stream_v = cv2.copyMakeBorder(rgb, 0, 0, 80, 80, cv2.BORDER_CONSTANT)
        cv2.imshow('Stream', stream_v)

        # pyfakewebcam expects RGB uint8 frames
        # time.sleep(1/30.0)
        stream_v = cv2.flip(stream_v, 1)
        stream_v = cv2.cvtColor(stream_v, cv2.COLOR_BGR2RGB)
        stream_v = (stream_v * 255).astype(np.uint8)

        # stream to fakewebcam
        camera.schedule_frame(stream_v)

        k = cv2.waitKey(1)
        # Hit 'q' on the keyboard to quit!
        if k & 0xFF == ord('q'):
            video_capture.release()
            break
        elif k == ord('c'):
            # re-center the face window
            reset = True
        elif k == ord('n'):
            # rotate to the next image in the media directory
            source_image = readnextimage()
            reset = True

    cv2.destroyAllWindows()
    exit()

# transform the face with first-order-model
def process_image(base, current, net, generator, kp_detector):
    predictions = make_animation(source_image, [base, current], generator, kp_detector, relative=False, adapt_movement_scale=False)
    # predictions[0] is the base frame, predictions[1] the current one
    return predictions[1]

def load_face_model():
    modelFile = f"{model_path}/res10_300x300_ssd_iter_140000.caffemodel"
    configFile = f"{model_path}/deploy.prototxt.txt"
    net = cv2.dnn.readNetFromCaffe(configFile, modelFile)
    return net

def cut_face_window(x1, y1, x2, y2, face):
    # crop the detected window and scale it to the 256x256 input the model expects
    face = face[y1:y2, x1:x2]
    face = resize(face, (256, 256))[..., :3]
    return face

# find the face in the webcam stream and center a 256x256 window around it
def find_face_cut(net, face, previous=False):
    # 300x300 is the input size of the res10 SSD face detector; (104, 117, 123)
    # are the BGR channel means used when that model was trained
    blob = cv2.dnn.blobFromImage(face, 1.0, (300, 300), [104, 117, 123], False, False)
    frameWidth = 640
    frameHeight = 480
    net.setInput(blob)
    detections = net.forward()
    found = False
    for i in range(detections.shape[2]):
        confidence = detections[0, 0, i, 2]
        if confidence > 0.8:
            found = True
            x1 = int(detections[0, 0, i, 3] * frameWidth)
            y1 = int(detections[0, 0, i, 4] * frameHeight)
            x2 = int(detections[0, 0, i, 5] * frameWidth)
            y2 = int(detections[0, 0, i, 6] * frameHeight)

            # pad the detection out to roughly 256x256
            face_margin_w = int(256 - (abs(x1 - x2) - .5))
            face_margin_h = int(256 - (abs(y1 - y2) - .5))

            cut_x1 = max(x1 - int(face_margin_w / 2), 0)
            cut_y1 = max(y1 - int(2 * face_margin_h / 3), 0)
            cut_x2 = x2 + int(face_margin_w / 2)
            cut_y2 = y2 + int(face_margin_h / 3)
            break  # use the first confident detection

    if not found:
        print("face not found in video")
        exit()

    print(f'Found face at: ({x1},{y1}) ({x2},{y2}) width: {abs(x2-x1)} height: {abs(y2-y1)}')
    print(f'Cutting at: ({cut_x1},{cut_y1}) ({cut_x2},{cut_y2}) width: {abs(cut_x2-cut_x1)} height: {abs(cut_y2-cut_y1)}')

    return cut_x1, cut_y1, cut_x2, cut_y2

def readnextimage(position=-1):
    global img_list, pos
    if position != -1:
        pos = position
    else:
        if pos < len(img_list) - 1:
            pos = pos + 1
        else:
            pos = 0
    source_image = imageio.imread(img_list[pos])
    source_image = resize(source_image, (256, 256))[..., :3]
    return source_image

main()
Submodule first-order-model added at 83d8d7