
Realtime Audio/Video Streaming FROM iPhone to another device (Browser or iPhone)

I'd like to get real-time video from the iPhone to another device (either a desktop browser or another iPhone, i.e. point-to-point).

NOTE: It's not one-to-many, just one-to-one at the moment. Audio can be part of the stream or carried over a telephone call on the iPhone.

There are four ways I can think of...

  1. Capture frames on the iPhone, send them to a media server, and have the media server publish real-time video via its web server.

  2. Capture frames on the iPhone, convert them to images, send the images to an HTTP server, and have JavaScript/AJAX in the browser reload the images from the server as fast as possible.

  3. Run an HTTP server on the iPhone, capture 1-second movies on the iPhone, create M3U8 files on the iPhone, and have the other user connect directly to the iPhone's HTTP server for live streaming.

  4. Capture 1-second movies on the iPhone, create M3U8 files on the iPhone, send them to an HTTP server, and have the other user connect to that HTTP server for live streaming (a sample playlist is shown after this list). This seems like a good approach; has anyone gotten it to work?
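
For reference, the M3U8 file in options 3 and 4 is just a plain-text HTTP Live Streaming playlist listing the short segments; a live playlist omits the #EXT-X-ENDLIST tag so the player keeps re-fetching it. A minimal example (segment names are hypothetical) looks something like this:

    #EXTM3U
    #EXT-X-VERSION:3
    #EXT-X-TARGETDURATION:1
    #EXT-X-MEDIA-SEQUENCE:120
    #EXTINF:1.0,
    segment120.ts
    #EXTINF:1.0,
    segment121.ts
    #EXTINF:1.0,
    segment122.ts

The server rewrites this file as new segments are produced, dropping the oldest entry and incrementing #EXT-X-MEDIA-SEQUENCE. Note that HLS expects MPEG-TS segments, so the 1-second movies would have to be remuxed accordingly.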

Is there a better, more efficient option? What's the fastest way to get data off the iPhone? Is it ASIHTTPRequest?

Thanks, everyone.


Sending raw frames or individual images will never work well enough for you (because of the amount of data and number of frames). Nor can you reasonably serve anything from the phone (WWAN networks have all sorts of firewalls). You'll need to encode the video and stream it to a server, most likely over a standard streaming protocol (RTSP, RTMP). There is a hardware H.264 encoder on the iPhone 3GS and later. The problem is that it is not stream-oriented: it writes the metadata required to parse the video (the MP4 moov atom) at the end of the file. This leaves you with a few options.

  1. Get the raw frames and use FFmpeg to encode on the phone (will use a ton of CPU and battery).
  2. Write your own parser for the H.264/AAC output (very hard).
  3. Record and process in chunks (will add latency equal to the length of the chunks, and drop around 1/4 second of video between each chunk as you start and stop the sessions); a sketch of this approach follows.
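
To make option 3 concrete, here is a minimal sketch, assuming an AVCaptureSession already configured with camera and microphone inputs; uploadChunk(_:) is a hypothetical placeholder for your transport code. maxRecordedDuration auto-stops each recording, and the restart in the delegate callback is exactly where the dropped fraction of a second comes from:

    import AVFoundation

    // Minimal sketch of option 3: record 1-second chunks and hand each
    // finished file to an upload routine. Assumes the AVCaptureSession
    // already has camera/microphone inputs attached.
    final class ChunkedRecorder: NSObject, AVCaptureFileOutputRecordingDelegate {
        let movieOutput = AVCaptureMovieFileOutput()

        func attach(to session: AVCaptureSession) {
            // Auto-stop each recording after 1 second.
            movieOutput.maxRecordedDuration = CMTime(value: 1, timescale: 1)
            if session.canAddOutput(movieOutput) {
                session.addOutput(movieOutput)
            }
        }

        func startNextChunk() {
            let url = FileManager.default.temporaryDirectory
                .appendingPathComponent(UUID().uuidString + ".mov")
            movieOutput.startRecording(to: url, recordingDelegate: self)
        }

        // Fires when a chunk finishes. Restarting here is where the ~1/4
        // second gap between chunks comes from. (Hitting maxRecordedDuration
        // surfaces as a non-nil `error` even though the file is usable.)
        func fileOutput(_ output: AVCaptureFileOutput,
                        didFinishRecordingTo outputFileURL: URL,
                        from connections: [AVCaptureConnection],
                        error: Error?) {
            startNextChunk()
            uploadChunk(outputFileURL)
        }

        // Placeholder: send the finished .mov to the server/segmenter.
        func uploadChunk(_ url: URL) {}
    }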


"Record and process in chunks (will add latency equal to the length of the chunks, and drop around 1/4 second of video between each chunk as you start and stop the sessions)."

I have just written such code, and it is quite possible to eliminate that gap by overlapping two AVAssetWriters (sketched below). Since it uses the hardware encoder, I strongly recommend this approach.
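
For anyone wondering what overlapping two AVAssetWriters can look like, here is a rough sketch (my own illustration, not the poster's code), assuming video sample buffers delivered by an AVCaptureVideoDataOutput; audio and error handling are omitted, and segmentReady(_:) is a placeholder. The key move is starting the next writer before finishing the current one, so no frame arrives with no writer to receive it:

    import AVFoundation

    final class OverlappingSegmentWriter {
        private var current: (writer: AVAssetWriter, input: AVAssetWriterInput)?
        private var segmentStart = CMTime.invalid
        private let segmentDuration = CMTime(value: 1, timescale: 1)

        private func makeWriter() throws -> (AVAssetWriter, AVAssetWriterInput) {
            let url = FileManager.default.temporaryDirectory
                .appendingPathComponent(UUID().uuidString + ".mp4")
            let writer = try AVAssetWriter(outputURL: url, fileType: .mp4)
            let input = AVAssetWriterInput(mediaType: .video, outputSettings: [
                AVVideoCodecKey: AVVideoCodecType.h264,  // uses the hardware encoder
                AVVideoWidthKey: 640,
                AVVideoHeightKey: 480,
            ])
            input.expectsMediaDataInRealTime = true
            writer.add(input)
            return (writer, input)
        }

        // Call this from captureOutput(_:didOutput:from:) on the video data queue.
        func append(_ sampleBuffer: CMSampleBuffer) {
            let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)

            if current == nil || pts - segmentStart >= segmentDuration {
                let previous = current
                // Start the next writer BEFORE finishing the old one, so every
                // incoming frame has a live writer to land in (the "overlap").
                if let (writer, input) = try? makeWriter() {
                    writer.startWriting()
                    writer.startSession(atSourceTime: pts)
                    current = (writer, input)
                    segmentStart = pts
                }
                if let previous = previous {
                    previous.input.markAsFinished()
                    previous.writer.finishWriting {
                        self.segmentReady(previous.writer.outputURL)
                    }
                }
            }

            if let current = current, current.input.isReadyForMoreMediaData {
                current.input.append(sampleBuffer)
            }
        }

        // Placeholder: hand the finished 1-second .mp4 to your upload code.
        func segmentReady(_ url: URL) {}
    }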


We have similar needs; to be more specific, we want to implement streaming video & audio between an iOS device and a web UI. The goal is to enable high-quality video discussions between participants using these platforms. We did some research on how to implement this:

  • We decided to use OpenTok and managed to pretty quickly implement a proof-of-concept video chat between an iPad and a website using the OpenTok getting started guide (a connection sketch follows this list). There's also a PhoneGap plugin for OpenTok, which is handy for us as we are not doing native iOS.

  • Liblinphone also seemed to be a potential solution, but we didn't investigate further.

  • iDoubs also came up, but again, we felt OpenTok was the most promising one for our needs and thus didn't look at iDoubs in more detail.
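
For context, the OpenTok side of a one-to-one connection is quite compact. Below is a minimal sketch based on the OpenTok iOS SDK's OTSession/OTPublisher/OTSubscriber classes; API_KEY, SESSION_ID, and TOKEN are placeholders that your own server obtains from OpenTok, and method names may differ between SDK versions:

    import OpenTok

    final class VideoChat: NSObject, OTSessionDelegate {
        var session: OTSession?
        var publisher: OTPublisher?
        var subscriber: OTSubscriber?

        func connect() {
            session = OTSession(apiKey: "API_KEY", sessionId: "SESSION_ID", delegate: self)
            var error: OTError?
            session?.connect(withToken: "TOKEN", error: &error)
        }

        // Once connected, publish the local camera and microphone.
        func sessionDidConnect(_ session: OTSession) {
            guard let publisher = OTPublisher(delegate: nil) else { return }
            self.publisher = publisher
            var error: OTError?
            session.publish(publisher, error: &error)
        }

        // When the other participant's stream appears, subscribe to it.
        func session(_ session: OTSession, streamCreated stream: OTStream) {
            guard let subscriber = OTSubscriber(stream: stream, delegate: nil) else { return }
            self.subscriber = subscriber
            var error: OTError?
            session.subscribe(subscriber, error: &error)
        }

        // Remaining required delegate methods, left empty for brevity.
        func session(_ session: OTSession, streamDestroyed stream: OTStream) {}
        func sessionDidDisconnect(_ session: OTSession) {}
        func session(_ session: OTSession, didFailWithError error: OTError) {}
    }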
