
Video support possibility? #47

Open
raisingdibar opened this issue Jun 12, 2020 · 2 comments

Comments

@raisingdibar

Considering that video has become the best way for individuals to easily document dangerous situations (you don't have to keep a subject framed, and audio helps accountability) and that video makes up the majority of non-professional content emerging from these public spaces, has anyone considered trying to apply the existing processes to video?

I know images are the basis for video, so getting the image-processing tools ironed out is definitely a must.

I have a very basic understanding of image processing myself, so please chime in if I'm missing something that makes video infeasible; but something tells me there must be a way to track faces (using ML, or gesture interactions) across the timeline of a video and maintain the blur by blurring each frame according to where each face is in that frame.
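To make that concrete, here's roughly what I have in my head (very rough, and `detectFaces` is just a placeholder for whatever ML detector or gesture input would actually supply face rectangles per frame):

```typescript
// Per-frame blur loop: find faces in each frame, blur only those regions.
interface Rect { x: number; y: number; w: number; h: number; }

// Hypothetical stand-in for an ML detector or manually tracked positions.
declare function detectFaces(frame: ImageData): Rect[];

function blurFaces(ctx: CanvasRenderingContext2D): void {
  const frame = ctx.getImageData(0, 0, ctx.canvas.width, ctx.canvas.height);
  for (const f of detectFaces(frame)) {
    ctx.save();
    // Clip to the face region so the blur touches only that area.
    ctx.beginPath();
    ctx.rect(f.x, f.y, f.w, f.h);
    ctx.clip();
    ctx.filter = 'blur(12px)';
    // Redraw the canvas onto itself; only the clipped region comes out blurred.
    ctx.drawImage(ctx.canvas, 0, 0);
    ctx.restore();
  }
}
```

Run that once per frame and the blur follows the faces through the video.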

Thoughts?

@everestpipkin
Owner

Hi! Glad to have you hop on. I've done a few tests for video but haven't had time to implement it, as it is likely almost as large a task as the whole tool up to this point. But if you want to take on video, I'd fully encourage you to make a PR.

IMO face tracking is both unreliable and a little too heavy for mobile architecture, but the 'tap mode' I added a few days ago is an attempt at a quicker, tap-based blur for crowds. My sketch for video processing would be to record the tap-blur location on each frame and composite those blurs into the video loop. My guess is this should cover most bases; the trouble is that if there are hundreds of faces in a long video, it'll take a lot of long loops to cover everyone. That could be partly addressed with multi-touch handling, so you can blur ~5 points at once. Still not ideal, but likely faster and more reliable than face tracking on an old phone.
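For what it's worth, the shape of it in my head is something like this (all names illustrative, nothing in the tool yet; tap coordinates would also need mapping into canvas space):

```typescript
// Pass 1: record taps against the frame that was showing when they landed.
interface Tap { frame: number; x: number; y: number; }
const taps: Tap[] = [];

function onTouch(ev: TouchEvent, currentFrame: number): void {
  // Multi-touch: every finger down is its own blur point, so a crowd
  // can be covered several faces at a time instead of one per loop.
  for (const t of Array.from(ev.touches)) {
    taps.push({ frame: currentFrame, x: t.clientX, y: t.clientY });
  }
}

// Pass 2: replay the video and composite a blur circle around every tap
// recorded for each frame, writing the result into the output.
function compositeFrame(ctx: CanvasRenderingContext2D, frame: number, radius = 40): void {
  for (const tap of taps.filter(t => t.frame === frame)) {
    ctx.save();
    ctx.beginPath();
    ctx.arc(tap.x, tap.y, radius, 0, 2 * Math.PI);
    ctx.clip();
    ctx.filter = 'blur(16px)';
    ctx.drawImage(ctx.canvas, 0, 0); // self-draw; only the clipped circle blurs
    ctx.restore();
  }
}
```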

@raisingdibar
Author

I was thinking about trying to engineer something akin to a vector path (like the shape-creating curves you'd use in Photoshop or Illustrator, for example) that could be mapped via touch against time. Then you'd just have another vector for each face. But the UX of this idea sounds wack, so I need to think more on it. 😄
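Roughly: each face gets its own track of sparse keyframes, and the blur position between taps gets interpolated, so you'd only tap a face a few times instead of on every frame. Just a sketch (linear here, but a Bezier would give the Illustrator-style curves):

```typescript
// One track per face: a sparse list of keyframes, sorted by frame number.
interface Keyframe { frame: number; x: number; y: number; }

// Linearly interpolate the blur position between the two keyframes
// bracketing `frame`; clamp to the ends of the track.
function positionAt(track: Keyframe[], frame: number): { x: number; y: number } {
  if (frame <= track[0].frame) return track[0];
  const last = track[track.length - 1];
  if (frame >= last.frame) return last;
  const i = track.findIndex(k => k.frame > frame);
  const a = track[i - 1];
  const b = track[i];
  const t = (frame - a.frame) / (b.frame - a.frame);
  return { x: a.x + t * (b.x - a.x), y: a.y + t * (b.y - a.y) };
}
```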
