How to receive RTP packets streamed from an RTP URL on an iOS device? (e.g. rtp://@225.0.0.0)

Time: 2020-12-13 16:42:45

I am trying to stream RTP packets (carrying audio) from an RTP URL, e.g. rtp://@225.0.0.0. After a lot of research I have managed to stream the URL on my device and play it with https://github.com/maknapp/vlckitSwiftSample. This only plays the streamed data; it has no facility to store it.

From my research and other sources, I did not find much content or any simple guide on how to receive packets over RTP and store them on an iOS device.

I have tried the following links:

  1. https://github.com/kewlbear/FFmpeg-iOS-build-script

  2. https://github.com/chrisballinger/FFmpeg-iOS

Neither of these even compiles, due to CocoaPods issues, and other projects or guides only cover RTSP streams rather than plain RTP streams.

If anyone can offer guidance or any idea on how to implement this, it would be appreciated.

1 Solution

#1

First and foremost, you need to understand how this works.

The sender, i.e. the creator of the RTP stream, is probably doing the following:

  1. Uses a source for the data: in the case of audio, this could be the microphone, audio samples, or a file.
  2. Encodes the audio using an audio codec such as AAC or Opus.
  3. Uses an RTP packetizer to create RTP packets from the encoded audio frames.
  4. Uses a transport layer such as UDP to send these packets.
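
The packetizer in step 3 prepends a fixed 12-byte header (RFC 3550) to each encoded frame. A minimal sketch of that header in Swift; the payload type, sequence number, and SSRC values are stream-specific and any concrete values used with this type are placeholders:

```swift
import Foundation

// Minimal sketch of the fixed 12-byte RTP header (RFC 3550) that the
// sender's packetizer prepends to each encoded audio frame.
struct RTPHeader {
    var version: UInt8 = 2
    var payloadType: UInt8      // e.g. a dynamic PT agreed out of band
    var sequenceNumber: UInt16  // increments by 1 per packet
    var timestamp: UInt32       // increments by samples per packet
    var ssrc: UInt32            // random stream identifier

    // Serialize in network byte order (big-endian), as RTP requires.
    func serialized() -> Data {
        var data = Data(capacity: 12)
        data.append(version << 6)        // V=2, P=0, X=0, CC=0
        data.append(payloadType & 0x7F)  // M=0, 7-bit payload type
        data.append(UInt8(sequenceNumber >> 8))
        data.append(UInt8(sequenceNumber & 0xFF))
        for shift in stride(from: 24, through: 0, by: -8) {
            data.append(UInt8((timestamp >> shift) & 0xFF))
        }
        for shift in stride(from: 24, through: 0, by: -8) {
            data.append(UInt8((ssrc >> shift) & 0xFF))
        }
        return data
    }
}
```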

Protocols such as RTSP provide the necessary signaling and richer stream information. RTP by itself is usually not enough, since things such as congestion control, feedback, and dynamic bit rate are handled with the help of RTCP.

Anyway, in order to store the incoming stream, you need to do the following:

  1. Use an RTP depacketizer to extract the encoded audio frames from the packets. You can write your own or use a third-party implementation. ffmpeg is a big framework that has the necessary code for most codecs and protocols, but for your case, find a simple RTP depacketizer. There may be payload headers specific to a particular codec, so make sure you refer to the correct RFC.

  2. Once you have access to the encoded frames, you can write them into a media container such as m4a or ogg, depending on the audio codec used in the stream.

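
Step 1 above can be sketched as follows. This is a minimal depacketizer that handles only the fixed header and the CSRC list; a production parser must also handle the extension and padding bits and reorder packets by sequence number:

```swift
import Foundation

// Minimal RTP depacketizer sketch: strips the 12-byte fixed header
// (plus any CSRC entries) and returns the encoded audio payload.
// Ignores the extension (X) and padding (P) bits for brevity.
func rtpPayload(from packet: Data) -> Data? {
    guard packet.count > 12 else { return nil }
    let bytes = [UInt8](packet)
    guard bytes[0] >> 6 == 2 else { return nil }  // require RTP version 2
    let csrcCount = Int(bytes[0] & 0x0F)          // CC field, 4 bytes each
    let headerLength = 12 + csrcCount * 4
    guard packet.count > headerLength else { return nil }
    return packet.subdata(in: headerLength..<packet.count)
}
```

Appending each returned payload in sequence gives you the raw encoded frames; wrapping them so a player can read them back (e.g. ADTS headers for AAC) is codec-specific.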
In order to play the stream, you need to do the following:

要播放流,您需要执行以下操作:

  1. Use an RTP depacketizer to extract the encoded audio frames from the packets. You can write your own or use a third-party implementation. ffmpeg is a big framework that has the necessary code for most codecs and protocols, but for your case, find a simple RTP depacketizer.

  2. Once you have access to the encoded frames, use an audio decoder (available as a library) to decode them, or check whether your platform supports playing that codec directly.

  3. Once you have access to the decoded frames, you can use AVFoundation on iOS to play them.

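
For step 3, one way to play decoded PCM with AVFoundation is to schedule buffers on an `AVAudioPlayerNode`. A sketch, assuming the decoder outputs 48 kHz stereo (match the format to what your decoder actually produces):

```swift
import AVFoundation

// Sketch of playback once the decoder hands you PCM buffers.
// The 48 kHz / 2-channel format is an assumption for illustration.
final class PCMPlayer {
    private let engine = AVAudioEngine()
    private let player = AVAudioPlayerNode()
    private let format = AVAudioFormat(standardFormatWithSampleRate: 48_000,
                                       channels: 2)!

    func start() throws {
        engine.attach(player)
        engine.connect(player, to: engine.mainMixerNode, format: format)
        try engine.start()
        player.play()
    }

    // Call this for every decoded chunk of PCM.
    func enqueue(_ buffer: AVAudioPCMBuffer) {
        player.scheduleBuffer(buffer, completionHandler: nil)
    }
}
```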
If you are looking for an easy way to do this, consider a third-party implementation such as http://audiokit.io/
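
All of the steps above assume you are already receiving the UDP datagrams. On iOS 14+, the Network framework's `NWConnectionGroup` can join a multicast group such as the one in the question; the port (5004) is an assumption, and multicast on iOS additionally requires the `com.apple.developer.networking.multicast` entitlement:

```swift
import Network

// Sketch: join the multicast group from the question and receive
// datagrams; each datagram is one RTP packet to hand to a depacketizer.
// Port 5004 is illustrative; use whatever your sender actually targets.
let group = try NWMulticastGroup(for:
    [.hostPort(host: "225.0.0.0", port: 5004)])
let connectionGroup = NWConnectionGroup(with: group, using: .udp)

connectionGroup.setReceiveHandler(maximumMessageSize: 1500,
                                  rejectOversizedMessages: true) { _, data, _ in
    if let data = data {
        print("received RTP packet of \(data.count) bytes")
    }
}
connectionGroup.start(queue: .main)
```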
