This post is about a recent project of mine and describes the approach I took to work around an apparent limitation.
I have been building a web-based audio and video capture tool that uses HTML5 WebRTC APIs to record from the user's webcam and microphone. For this task, the most robust library I found is RecordRTC (https://github.com/muaz-khan/WebRTC-Experiment) by Muaz Khan, which ships with numerous examples and is actively maintained and supported by its creator.
For my requirements, I needed to record both the audio and the video at the same time, but that is not what the user-media object hands back: the following code provided only a single stream of information:
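The original snippet is missing from the post, but the call in question looks roughly like this. It is a sketch: `captureBoth` is my own illustrative wrapper, and `getMedia` stands in for the browser's getUserMedia so the logic can be shown outside a browser. The key point is that requesting both kinds of media still yields one MediaStream object carrying both track types, not two separate streams.

```javascript
// Sketch (illustrative names): ask for audio AND video, then inspect
// the result. The browser returns a SINGLE MediaStream that carries
// both kinds of tracks, rather than one stream per kind.
function captureBoth(getMedia) {
  return getMedia({ audio: true, video: true }).then(function (stream) {
    // One stream object; the tracks are merely two kinds within it.
    return {
      audioTracks: stream.getAudioTracks().length,
      videoTracks: stream.getVideoTracks().length
    };
  });
}

// In a browser you would pass the real API, e.g.:
// captureBoth(navigator.mediaDevices.getUserMedia.bind(navigator.mediaDevices));
```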
I had assumed that setting both the audio and video parameters to true would produce multiple streams, or at least one stream whose audio and video the library could record directly. I was wrong, and after a detailed search I ended up setting up both recordings in the following function:
So, in RecordRTC we reuse the same stream object but pass the appropriate type ('audio', 'video', 'gif', or 'canvas') to the library's constructor, which dispatches to the matching recorder, such as WhammyRecorder for video.
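The missing function can be sketched as follows, assuming the classic RecordRTC constructor API described above. `createRecorders` is my own illustrative name, and the RecordRTC factory is injected as a parameter here only so the wiring can be shown without a browser; in the real page you would use the global `RecordRTC` directly.

```javascript
// Sketch: pass the SAME stream object twice, once per recorder type.
// The `type` option tells RecordRTC which internal recorder to use
// (e.g. WhammyRecorder for the video type).
function createRecorders(RecordRTC, stream) {
  var audioRecorder = RecordRTC(stream, { type: 'audio' });
  var videoRecorder = RecordRTC(stream, { type: 'video' });

  // Start both recordings from the single shared stream.
  audioRecorder.startRecording();
  videoRecorder.startRecording();

  return { audioRecorder: audioRecorder, videoRecorder: videoRecorder };
}
```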
Stopping the recorders takes similar steps to retrieve the captured data, and the calls have to be nested in callbacks: the video recorder is stopped from within the audio recorder's stop callback, so that both results become available together.
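That nested-callback stop sequence can be sketched like this. The callback-style `stopRecording` and `getBlob` match the classic RecordRTC API; `stopBoth` and the `onDone` completion handler are my own illustrative names, not from the original post.

```javascript
// Sketch: stop the audio recorder first, and only inside its stop
// callback stop the video recorder, so both blobs are in hand when
// the completion handler fires.
function stopBoth(audioRecorder, videoRecorder, onDone) {
  audioRecorder.stopRecording(function () {
    var audioBlob = audioRecorder.getBlob();
    videoRecorder.stopRecording(function () {
      var videoBlob = videoRecorder.getBlob();
      onDone(audioBlob, videoBlob);
    });
  });
}
```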
Currently this is supported in Chrome and Firefox Aurora and works correctly out of the box on both desktop and mobile devices. At times, though, the recorded audio is out of sync with the video (at least for the first recording after each page refresh). Hopefully this will get sorted out and the spec will be implemented in all browsers, sparing us from installing plugins like Flash just to capture data from the user's hardware.