Using Apple's VideoToolbox for "realtime" playback/preview of h.264 data packets

Date: 2022-09-18 23:40:57

According to the Apple documentation, the QuickTime framework is deprecated as of OS X 10.9 in favor of AVFoundation and AVKit. For reasons I am not sure of, most of the documentation neglects to mention that some of the QuickTime framework's replacement functionality lives in a framework called VideoToolbox. That replacement functionality includes decoding and decompressing, among others.


I would like to decode and decompress h.264-encoded video data packets (NAL packets, TS packets, etc.), put them in a pixel buffer, and then use Core Video and OpenGL to display the video as it comes in.

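(For context: in a raw h.264 elementary stream, NAL units are delimited by start codes, and the low 5 bits of the first payload byte identify the type, e.g. 7 = SPS, 8 = PPS. A minimal, hypothetical Swift helper for splitting such a stream, assuming 4-byte start codes only, might look like this:)

```swift
import Foundation

// Minimal sketch of splitting an Annex B h.264 elementary stream into NAL
// units on 4-byte 00 00 00 01 start codes. (Real streams may also use
// 3-byte 00 00 01 start codes; those are ignored here for brevity.)
func splitNALUnits(_ stream: Data) -> [Data] {
    let bytes = [UInt8](stream)
    var nals: [Data] = []
    var payloadStart: Int?
    var i = 0
    while i + 4 <= bytes.count {
        if bytes[i] == 0, bytes[i + 1] == 0, bytes[i + 2] == 0, bytes[i + 3] == 1 {
            if let start = payloadStart {
                nals.append(Data(bytes[start..<i]))   // close the previous NAL
            }
            payloadStart = i + 4                      // payload begins after the start code
            i += 4
        } else {
            i += 1
        }
    }
    if let start = payloadStart {
        nals.append(Data(bytes[start...]))            // the final NAL runs to end of stream
    }
    return nals
}

// NAL unit type: 7 = SPS, 8 = PPS, 5 = IDR slice, 1 = non-IDR slice.
func nalType(_ nal: Data) -> UInt8 { nal.first.map { $0 & 0x1F } ?? 0 }
```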

I am getting the video data packets from an encoding box via USB. This box does not show up when I run [AVCaptureDevice devices], so I cannot use most of AVFoundation (to my knowledge) to interface directly with the box. However, there is an API that comes with the box that gives me access to the video data packets. I can write them to disk and create a video that can be played by QuickTime, but realtime playback is the issue. Hence the question of decoding, decompressing, and creating a pixel buffer so I can use Core Video and OpenGL.


I think if I can create a pixel buffer, I may be able to use AVAssetWriterInputPixelBufferAdaptor and figure out some way to get that into an AVCaptureSession. If I can do that, I should be able to forgo OpenGL and use the tools afforded me in AVFoundation and AVKit.

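For what it's worth, the writer/adaptor half of that idea would look roughly like the sketch below. Note that AVAssetWriterInputPixelBufferAdaptor feeds an AVAssetWriterInput, i.e. it writes a movie file via AVAssetWriter; it does not plug into an AVCaptureSession. The function names, dimensions, and settings here are illustrative assumptions:

```swift
import AVFoundation

// A rough sketch: wire up an AVAssetWriter whose input accepts raw pixel
// buffers through an AVAssetWriterInputPixelBufferAdaptor. Frame size and
// output settings are placeholder assumptions.
func makeWriter(outputURL: URL) throws -> (AVAssetWriter, AVAssetWriterInputPixelBufferAdaptor) {
    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: 1280,
        AVVideoHeightKey: 720,
    ])
    input.expectsMediaDataInRealTime = true   // we are feeding live frames
    let adaptor = AVAssetWriterInputPixelBufferAdaptor(
        assetWriterInput: input,
        sourcePixelBufferAttributes: [
            kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA,
        ])
    writer.add(input)
    guard writer.startWriting() else { throw writer.error ?? CocoaError(.fileWriteUnknown) }
    writer.startSession(atSourceTime: .zero)
    return (writer, adaptor)
}

// Called once per decoded frame with its presentation timestamp.
func append(_ pixelBuffer: CVPixelBuffer, at time: CMTime,
            to adaptor: AVAssetWriterInputPixelBufferAdaptor) {
    guard adaptor.assetWriterInput.isReadyForMoreMediaData else { return } // drop if busy
    _ = adaptor.append(pixelBuffer, withPresentationTime: time)
}
```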

Also, from my reading of the AVFoundation documentation, every time it talks about streams of video/audio data, it is talking about one of two things: either a stream coming from an AVCaptureDevice, or processing a stream from HTTP Live Streaming. Like I said before, the box that produces the video data packets does not show up as an AVCaptureDevice. And I would rather not build/implement an HTTP Live Streaming server if I do not need to. (Hopefully I do not need to, although I saw online that some people did.)


Any help would be greatly appreciated.


Thanks!


1 Answer

#1



Ok, it has been a while, but I finally figured out how to use VideoToolbox correctly with a raw encoded data stream.


Basically, I had to familiarize myself with the H.264 specification, and I got a lot of help from this great post.


Here are the steps (a rough Swift sketch tying them together follows the list):


  1. Make sure you get your Sequence Parameter Set (SPS) and Picture Parameter Set (PPS) before you start processing any data.
  2. Use the SPS and PPS to get the necessary data to create an avcC atom header. See the post I linked to above.
  3. Save the avcC atom header in an NSData.
  4. Create a CMVideoFormatDescription with the avcC atom and configured extensions. See the CMVideoFormatDescriptionCreate documentation.
  5. Set up a VTDecompressionOutputCallbackRecord.
  6. Set the pixelBufferAttributes that will be used in VTDecompressionSessionCreate.
  7. Create a CMBlockBuffer from the data that was not used in creating the CMVideoFormatDescription. See CMBlockBufferCreateWithMemoryBlock. Basically, you want to make sure you are adding the raw NAL packets that are not SPS or PPS. You may need to add the size of the current NAL packet + 4 for everything to work right. Again, refer to the link above.
  8. Create the CMBlockBuffer.
  9. Create the CMSampleBuffer.
  10. Use the CMSampleBuffer in VTDecompressionSessionDecodeFrame to do the decoding.
  11. Run VTDecompressionSessionWaitForAsynchronousFrames after VTDecompressionSessionDecodeFrame. I noticed that if I did not run VTDecompressionSessionWaitForAsynchronousFrames, my display output was jittery.
  12. Whatever functionality you defined for the function used in VTDecompressionOutputCallbackRecord will get called. Presently, I am passing a CVPixelBufferRef to OpenGL to write the video to the screen. Maybe at some point I will try to use AVFoundation to write to the screen.
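
To make the steps concrete, below is a rough Swift sketch of the pipeline. It is not the exact code behind this answer: for brevity it uses CMVideoFormatDescriptionCreateFromH264ParameterSets, which builds the avcC extension from the SPS/PPS for you, rather than hand-assembling the atom and calling CMVideoFormatDescriptionCreate (steps 2-4). It assumes sps, pps, and nal are raw NAL payloads with start codes already stripped, supplies no timing info, and reduces error handling to asserts:

```swift
import Foundation
import CoreMedia
import VideoToolbox

// Step 12: the output callback receives each decoded frame as a CVImageBuffer
// (i.e. a CVPixelBufferRef) that can be handed to Core Video / OpenGL.
let outputCallback: VTDecompressionOutputCallback = { _, _, status, _, imageBuffer, _, _ in
    guard status == noErr, let pixelBuffer = imageBuffer else { return }
    _ = pixelBuffer   // hand off to the rendering layer here
}

// Steps 1-6: format description from the SPS/PPS, then a decompression session.
func makeSession(sps: [UInt8], pps: [UInt8]) -> (VTDecompressionSession, CMVideoFormatDescription) {
    var formatDesc: CMVideoFormatDescription?
    sps.withUnsafeBufferPointer { spsBuf in
        pps.withUnsafeBufferPointer { ppsBuf in
            let pointers = [spsBuf.baseAddress!, ppsBuf.baseAddress!]
            let sizes = [sps.count, pps.count]
            let status = CMVideoFormatDescriptionCreateFromH264ParameterSets(
                allocator: kCFAllocatorDefault,
                parameterSetCount: 2,
                parameterSetPointers: pointers,
                parameterSetSizes: sizes,
                nalUnitHeaderLength: 4,        // NALs will carry a 4-byte length prefix
                formatDescriptionOut: &formatDesc)
            assert(status == noErr)
        }
    }

    var callbackRecord = VTDecompressionOutputCallbackRecord(
        decompressionOutputCallback: outputCallback,
        decompressionOutputRefCon: nil)
    let pixelBufferAttributes: [String: Any] = [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA,
    ]
    var session: VTDecompressionSession?
    let status = VTDecompressionSessionCreate(
        allocator: kCFAllocatorDefault,
        formatDescription: formatDesc!,
        decoderSpecification: nil,
        imageBufferAttributes: pixelBufferAttributes as CFDictionary,
        outputCallback: &callbackRecord,
        decompressionSessionOut: &session)
    assert(status == noErr)
    return (session!, formatDesc!)
}

// Steps 7-11: one non-SPS/PPS NAL unit -> CMBlockBuffer -> CMSampleBuffer -> decode.
func decode(nal: [UInt8], session: VTDecompressionSession, formatDesc: CMVideoFormatDescription) {
    // Replace the Annex B start code with the NAL size as a 4-byte big-endian
    // integer; this length header is the "+ 4" mentioned in step 7.
    var framed = withUnsafeBytes(of: UInt32(nal.count).bigEndian) { [UInt8]($0) }
    framed += nal

    let mem = malloc(framed.count)!
    framed.withUnsafeBytes { mem.copyMemory(from: $0.baseAddress!, byteCount: framed.count) }
    var blockBuffer: CMBlockBuffer?
    var status = CMBlockBufferCreateWithMemoryBlock(
        allocator: kCFAllocatorDefault,
        memoryBlock: mem,
        blockLength: framed.count,
        blockAllocator: kCFAllocatorDefault,   // the block buffer frees `mem` when done
        customBlockSource: nil,
        offsetToData: 0,
        dataLength: framed.count,
        flags: 0,
        blockBufferOut: &blockBuffer)
    assert(status == noErr)

    // No timing info is supplied here; real code would pass CMSampleTimingInfo.
    var sampleSize = framed.count
    var sampleBuffer: CMSampleBuffer?
    status = CMSampleBufferCreateReady(
        allocator: kCFAllocatorDefault,
        dataBuffer: blockBuffer,
        formatDescription: formatDesc,
        sampleCount: 1,
        sampleTimingEntryCount: 0,
        sampleTimingArray: nil,
        sampleSizeEntryCount: 1,
        sampleSizeArray: &sampleSize,
        sampleBufferOut: &sampleBuffer)
    assert(status == noErr)

    status = VTDecompressionSessionDecodeFrame(
        session,
        sampleBuffer: sampleBuffer!,
        flags: [._EnableAsynchronousDecompression],
        frameRefcon: nil,
        infoFlagsOut: nil)
    assert(status == noErr)
    // Without this wait, the display output can look jittery (step 11).
    _ = VTDecompressionSessionWaitForAsynchronousFrames(session)
}
```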

I hope this helps someone.

