
How to get audio & video frames to save a video locally #44

Open
theiosdevguy opened this issue Jun 22, 2016 · 7 comments


@theiosdevguy

Is there a way to get audio and video frames? I need to save the video being streamed locally also.

@coolwr
Contributor

coolwr commented Jun 22, 2016

In the RTCAVFoundationVideoSource.h you'll find a reference to the AVCaptureSession. Using the captureSession property you'll be able to call addOutput to an AVCaptureVideoDataOutput object that would allow you to write to a file to record video. You can do the same with audio.

There are a number of tutorials online related to camera video recording that you should be able to integrate with this WebRTC implementation that uses the above AVFoundation references. I hope that helps.
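A minimal sketch of the approach described above (FrameWriter here is a hypothetical delegate class that would forward sample buffers to an AVAssetWriter; later comments in this thread report that canAddOutput: can return NO, so treat this as a starting point rather than a working recipe):

#import <AVFoundation/AVFoundation.h>

// Sketch only: attach a video data output to the capture session
// that the WebRTC video source manages internally.
AVCaptureSession *session = videoSource.captureSession;

AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
[videoOutput setSampleBufferDelegate:frameWriter
                               queue:dispatch_get_main_queue()];

if ([session canAddOutput:videoOutput]) {
    // Each captured frame now also reaches frameWriter's
    // captureOutput:didOutputSampleBuffer:fromConnection: callback.
    [session addOutput:videoOutput];
}

The same pattern applies to audio with an AVCaptureAudioDataOutput.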

@theiosdevguy
Author

Thanks @coolwr. I tried fetching the captureSession from RTCAVFoundationVideoSource.h. But strangely, the code below in createLocalVideoTrack of the ARDAppClient class ends up with two different videoSource objects.

RTCAVFoundationVideoSource *videoSource = [[RTCAVFoundationVideoSource alloc] initWithFactory:_factory constraints:mediaConstraints];

localVideoTrack = [_factory videoTrackWithID:@"ARDAMSv0" source:videoSource];

In the code above, you will notice that localVideoTrack.source is a different object than videoSource. This ideally should not be the case; please let me know if I am missing something here.

@theiosdevguy
Author

I somehow worked around the above problem using KVC. But now the main issue is that I am unable to add an output to the AVCaptureSession, because [_session canAddOutput:self.videoDataOutput] always returns NO. And also, if I change the video output, how will the video stream?

@wumbo

wumbo commented Aug 10, 2016

Have you had any luck with this?

@saifdj

saifdj commented Oct 25, 2018

Any updates? Did anyone find a way to store the session locally? @wumbo @theiosdevguy @coolwr

@wumbo

wumbo commented Oct 25, 2018

Yes, have a look at my fork here. In ARTCVideoChatViewController.m you can see that I call [self.localVideoTrack addRenderer:self.videoProcessor];

VideoProcessor is a custom class that implements the RTCVideoRenderer protocol. Its -(void) renderFrame:(RTCI420Frame *)frame method will get called every time there's a new frame.

Here you'll get the frame in RTCI420Frame format which uses the YUV color space. I used OpenCV to convert the frame to a cv::Mat in RGB color space, because I was using it to do some image processing. I also used OpenCV to convert it to a UIImage afterwards.
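For reference, the per-pixel conversion from video-range BT.601 YUV (the encoding used by I420) to RGB looks roughly like the plain-C sketch below; this is approximately what OpenCV's cvtColor with COLOR_YUV2RGB_I420 computes internally. Note that in I420 the U and V planes are subsampled, so each chroma pair covers a 2x2 block of luma samples.

```c
#include <stdint.h>

static uint8_t clamp255(int v) {
    return v < 0 ? 0 : (v > 255 ? 255 : (uint8_t)v);
}

// Video-range BT.601 YUV -> RGB for a single pixel,
// using the common fixed-point (8-bit fractional) coefficients.
void yuv_to_rgb(uint8_t y, uint8_t u, uint8_t v,
                uint8_t *r, uint8_t *g, uint8_t *b) {
    int c = (int)y - 16;    // luma, shifted out of video range
    int d = (int)u - 128;   // blue-difference chroma
    int e = (int)v - 128;   // red-difference chroma
    *r = clamp255((298 * c + 409 * e + 128) >> 8);
    *g = clamp255((298 * c - 100 * d - 208 * e + 128) >> 8);
    *b = clamp255((298 * c + 516 * d + 128) >> 8);
}
```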

Obviously this just gives you all the frames as images, not as a video, but I don't imagine it would be too difficult to convert them to a video.
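For anyone following along, a skeleton of such a renderer class might look like this (the class name is illustrative; the two methods are the ones the old Objective-C RTCVideoRenderer protocol from that era of the SDK declares):

// VideoProcessor: illustrative skeleton of an RTCVideoRenderer.
// Frame conversion and file writing are left out.
#import "RTCVideoRenderer.h"
#import "RTCI420Frame.h"

@interface VideoProcessor : NSObject <RTCVideoRenderer>
@end

@implementation VideoProcessor

- (void)setSize:(CGSize)size {
    // Called when the frame size becomes known or changes.
}

- (void)renderFrame:(RTCI420Frame *)frame {
    // Called once per frame. The frame exposes its Y/U/V planes in
    // I420 layout; convert to RGB here and hand the result to
    // OpenCV, an AVAssetWriter, or whatever sink you need.
}

@end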

@saifdj

saifdj commented Oct 26, 2018

Thanks for your quick response @wumbo.

As you said, we can get frames from VideoProcessor, which are a series of images, I guess.

But I need to save only the audio of the conversation (not video). Please let me know if you have done this before.
