Real-time audio/video streaming from an iPhone to another device (browser or iPhone)

Date: 2021-09-11 19:00:40

I'd like to get real-time video from the iPhone to another device (either desktop browser or another iPhone, e.g. point-to-point).

NOTE: It's not one-to-many, just one-to-one at the moment. Audio can be part of the stream or go over a telephone call on the iPhone.

There are four ways I can think of...

  1. Capture frames on iPhone, send frames to mediaserver, have mediaserver publish realtime video using host webserver.

  2. Capture frames on iPhone, convert to images, send to httpserver, have javascript/AJAX in browser reload images from server as fast as possible.

  3. Run httpServer on iPhone, Capture 1 second duration movies on iPhone, create M3U8 files on iPhone, have the other user connect directly to httpServer on iPhone for liveStreaming.

  4. Capture 1 second duration movies on iPhone, create M3U8 files on iPhone, send to httpServer, have the other user connect to the httpServer for liveStreaming. This seems like a good approach; has anyone gotten it to work?

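As a rough sketch of the playlist side of options 3 and 4 (Python, with hypothetical names; the actual 1-second segment files would come from the capture code): a live HLS playlist lists only a sliding window of the most recent segments, advances `EXT-X-MEDIA-SEQUENCE` as old segments expire, and omits `EXT-X-ENDLIST` to mark the stream as still live.

```python
# Hypothetical sketch: regenerate a live M3U8 playlist over a sliding
# window of short segments. Function and variable names are illustrative.
def live_playlist(segment_names, target_duration=1, window=3):
    """Build a live HLS playlist for the most recent `window` segments."""
    recent = segment_names[-window:]
    # EXT-X-MEDIA-SEQUENCE must advance as old segments fall off the window,
    # so players know which segments they have already played.
    media_sequence = max(0, len(segment_names) - window)
    lines = [
        "#EXTM3U",
        "#EXT-X-VERSION:3",
        f"#EXT-X-TARGETDURATION:{target_duration}",
        f"#EXT-X-MEDIA-SEQUENCE:{media_sequence}",
    ]
    for name in recent:
        lines.append(f"#EXTINF:{target_duration:.1f},")
        lines.append(name)
    # No #EXT-X-ENDLIST: its absence is what marks the stream as live.
    return "\n".join(lines) + "\n"

print(live_playlist([f"seg{i}.ts" for i in range(5)]))
```

The server (or the phone, in option 3) would rewrite this small text file every second as new segments land, while clients poll it.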
Is there a better, more efficient option? What's the fastest way to get data off the iPhone? Is it ASIHTTPRequest?

Thanks, everyone.

3 Answers

#1 (14 votes)

Sending raw frames or individual images will never work well enough for you (because of the amount of data and the number of frames). Nor can you reasonably serve anything from the phone (WWAN networks have all sorts of firewalls). You'll need to encode the video and stream it to a server, most likely over a standard streaming protocol (RTSP, RTMP). There is an H.264 encoder chip on the iPhone 3GS and later. The problem is that it is not stream-oriented: it writes the metadata required to parse the video last. This leaves you with a few options.

  1. Get the raw data and use FFmpeg to encode on the phone (will use a ton of CPU and battery).
  2. Write your own parser for the H.264/AAC output (very hard).
  3. Record and process in chunks (will add latency equal to the length of the chunks, and drop around 1/4 second of video between each chunk as you start and stop the sessions).
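To see why the "metadata written last" layout blocks live relaying, here is a toy walker over top-level MP4 boxes (the file bytes below are synthetic, not a real recording): the `moov` box, which a player needs before it can decode anything, only appears after all of the media data, so a byte-by-byte relay of the file is unparseable until the recording finishes.

```python
import struct

def mp4_boxes(data):
    """Yield (box_type, offset, size) for each top-level MP4 box.

    Each box starts with a big-endian 32-bit size followed by a 4-byte type.
    """
    offset = 0
    while offset + 8 <= len(data):
        size, box_type = struct.unpack(">I4s", data[offset:offset + 8])
        yield box_type.decode("ascii"), offset, size
        offset += size

# Minimal synthetic file mimicking the hardware encoder's layout:
# file-type box, then media data, then the moov metadata last.
fake = (
    struct.pack(">I4s", 16, b"ftyp") + b"isom" + b"\x00" * 4 +
    struct.pack(">I4s", 12, b"mdat") + b"\x00" * 4 +
    struct.pack(">I4s", 8, b"moov")
)
order = [box_type for box_type, _, _ in mp4_boxes(fake)]
print(order)  # moov comes last, after the media data
```

A stream-friendly encoder would instead emit self-contained units (or put `moov` first), which is exactly what the parsing and chunking workarounds above are trying to recover.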

#2 (5 votes)

"Record and process in chunks (will add latency equal to the length of the chunks, and drop around 1/4 second of video between each chunk as you start and stop the sessions)."

I have just written such code, and it is quite possible to eliminate that gap by overlapping two AVAssetWriters. Since this approach uses the hardware encoder, I strongly recommend it.

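The overlap trick can be sketched in Python (scheduling only; `chunk_schedule` and `fully_covered` are hypothetical helper names, not AVFoundation API): start each new writer slightly before stopping the previous one, so the union of the recorded intervals covers the capture with no gap.

```python
def chunk_schedule(total, chunk_len, overlap):
    """Return (start, stop) intervals for successive writers.

    Each writer starts `overlap` seconds before the previous one stops,
    so the brief start/stop cost of a session never creates a hole.
    """
    intervals, start = [], 0.0
    while start < total:
        stop = min(start + chunk_len, total)
        intervals.append((start, stop))
        if stop >= total:
            break
        start = stop - overlap  # next writer starts before this one stops
    return intervals

def fully_covered(intervals, total):
    """True if the union of intervals covers [0, total] with no gap."""
    end = 0.0
    for start, stop in sorted(intervals):
        if start > end:
            return False  # a moment of the capture fell between writers
        end = max(end, stop)
    return end >= total

print(chunk_schedule(3.0, 1.0, 0.25))
```

With back-to-back (non-overlapping) writers, any per-chunk startup delay shows up as dropped frames; the overlap buys a margin equal to that delay.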

#3 (2 votes)

We have similar needs; to be more specific, we want to implement streaming video & audio between an iOS device and a web UI. The goal is to enable high-quality video discussions between participants using these platforms. We did some research on how to implement this:

  • We decided to use OpenTok and managed to pretty quickly implement a proof-of-concept style video chat between an iPad and a website using the OpenTok getting started guide. There's also a PhoneGap plugin for OpenTok, which is handy for us as we are not doing native iOS.

  • Liblinphone also seemed to be a potential solution, but we didn't investigate further.

  • iDoubs also came up, but again, we felt OpenTok was the most promising one for our needs and thus didn't look at iDoubs in more detail.
