Creating an IOSurface-backed CVPixelBuffer from YUV

Date: 2022-09-07 09:29:52

So I am getting raw YUV data in 3 separate arrays from a network callback (VoIP app). From what I understand, you cannot create IOSurface-backed pixel buffers with CVPixelBufferCreateWithPlanarBytes, according to the documentation quoted here:

Important: You cannot use CVPixelBufferCreateWithBytes() or CVPixelBufferCreateWithPlanarBytes() with kCVPixelBufferIOSurfacePropertiesKey. Calling CVPixelBufferCreateWithBytes() or CVPixelBufferCreateWithPlanarBytes() will result in CVPixelBuffers that are not IOSurface-backed.

Thus you have to create it with CVPixelBufferCreate, but how do you transfer the data from the callback into the CVPixelBufferRef that you create?

- (void)videoCallBackWithYPlane:(uint8_t *)yPlane
                         uPlane:(uint8_t *)uPlane
                         vPlane:(uint8_t *)vPlane
                          width:(size_t)width
                         height:(size_t)height
                        yStride:(size_t)yStride
                        uStride:(size_t)uStride
                        vStride:(size_t)vStride
{
    NSDictionary *pixelAttributes = @{(id)kCVPixelBufferIOSurfacePropertiesKey : @{}};
    CVPixelBufferRef pixelBuffer = NULL;
    CVReturn result = CVPixelBufferCreate(kCFAllocatorDefault,
                                          width,
                                          height,
                                          kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
                                          (__bridge CFDictionaryRef)(pixelAttributes),
                                          &pixelBuffer);
    // ... what goes here?
}

I am unsure what to do afterwards. Eventually I want to turn this into a CIImage, which I can then render with my GLKView. How do people "put" the data into the buffer once it is created?

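For the rendering half of the question, here is a minimal sketch of driving a GLKView with the resulting CIImage. This is not from the original post: the view controller shape, the property names, and the renderImage: method are illustrative, and it assumes a single EAGLContext shared between the GLKView and the CIContext so frames stay on the GPU.

@interface VideoViewController : UIViewController
@property (nonatomic, strong) GLKView *glkView;
@property (nonatomic, strong) EAGLContext *eaglContext;
@property (nonatomic, strong) CIContext *ciContext;
@end

@implementation VideoViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    self.eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    self.glkView = [[GLKView alloc] initWithFrame:self.view.bounds
                                          context:self.eaglContext];
    // Building the CIContext on the same EAGLContext lets Core Image render
    // the IOSurface-backed pixel buffer without a round trip through the CPU.
    self.ciContext = [CIContext contextWithEAGLContext:self.eaglContext];
    [self.view addSubview:self.glkView];
}

// Call this with the CIImage created from each decoded frame.
- (void)renderImage:(CIImage *)image {
    [self.glkView bindDrawable];
    CGRect destRect = CGRectMake(0, 0,
                                 self.glkView.drawableWidth,
                                 self.glkView.drawableHeight);
    [self.ciContext drawImage:image inRect:destRect fromRect:image.extent];
    [self.glkView display];
}

@end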

2 Answers

#1

I figured it out, and it was fairly trivial. Here is the full code below. The only issue is that I get a "BSXPCMessage received error for message: Connection interrupted", and it takes a while for the video to show.

NSDictionary *pixelAttributes = @{(id)kCVPixelBufferIOSurfacePropertiesKey : @{}};
CVPixelBufferRef pixelBuffer = NULL;
CVReturn result = CVPixelBufferCreate(kCFAllocatorDefault,
                                      width,
                                      height,
                                      kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
                                      (__bridge CFDictionaryRef)(pixelAttributes),
                                      &pixelBuffer);
if (result != kCVReturnSuccess) {
    // Check the result before touching the buffer; locking a NULL buffer would crash.
    DDLogWarn(@"Unable to create cvpixelbuffer %d", result);
    return;
}

CVPixelBufferLockBaseAddress(pixelBuffer, 0);
uint8_t *yDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
// Assumes the destination Y plane has no row padding (bytes-per-row == width);
// see the stride-aware sketch below for the general case.
memcpy(yDestPlane, yPlane, width * height);
uint8_t *uvDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
// uvPlane holds the U and V samples already interleaved (UVUVUV...);
// for 4:2:0, numberOfElementsForChroma is width * height / 2.
memcpy(uvDestPlane, uvPlane, numberOfElementsForChroma);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

CIImage *coreImage = [CIImage imageWithCVPixelBuffer:pixelBuffer]; // success!
CVPixelBufferRelease(pixelBuffer);

I forgot to add the code that interleaves the two U and V planes, but that shouldn't be too bad; there is a sketch of that step below.

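For reference, a stride-aware sketch of that missing interleaving step might look like the following. It assumes the yStride/uStride/vStride parameters from the callback signature in the question, and it writes straight into the locked destination planes instead of staging an intermediate uvPlane buffer:

CVPixelBufferLockBaseAddress(pixelBuffer, 0);

// Copy the Y plane row by row, honoring both the source stride and the
// destination bytes-per-row (the buffer may pad rows for alignment).
uint8_t *yDest = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
size_t yDestStride = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
for (size_t row = 0; row < height; row++) {
    memcpy(yDest + row * yDestStride, yPlane + row * yStride, width);
}

// Interleave U and V into the biplanar chroma plane (UVUVUV...).
// For 4:2:0, the chroma planes are half width and half height.
uint8_t *uvDest = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
size_t uvDestStride = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1);
for (size_t row = 0; row < height / 2; row++) {
    uint8_t *dst = uvDest + row * uvDestStride;
    const uint8_t *uSrc = uPlane + row * uStride;
    const uint8_t *vSrc = vPlane + row * vStride;
    for (size_t col = 0; col < width / 2; col++) {
        dst[col * 2]     = uSrc[col];
        dst[col * 2 + 1] = vSrc[col];
    }
}

CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);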

#2

I had a similar question, and here is what I have in Swift 2.0, assembled from information in answers to other questions and links.

func generatePixelBufferFromYUV2(inout yuvFrame: YUVFrame) -> CVPixelBufferRef?
{
    var uIndex: Int
    var vIndex: Int
    var uvDataIndex: Int
    var pixelBuffer: CVPixelBufferRef? = nil
    var err: CVReturn

    if (pixelBuffer == nil)
    {
        err = CVPixelBufferCreate(kCFAllocatorDefault, yuvFrame.width, yuvFrame.height, kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange, nil, &pixelBuffer)
        if (err != 0) {
            NSLog("Error at CVPixelBufferCreate %d", err)
            return nil
        }
    }

    if (pixelBuffer != nil)
    {
        CVPixelBufferLockBaseAddress(pixelBuffer!, 0)
        let yBaseAddress = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer!, 0)
        if (yBaseAddress != nil)
        {
            let yData = UnsafeMutablePointer<UInt8>(yBaseAddress)
            let yDataPtr = UnsafePointer<UInt8>(yuvFrame.luma.bytes)

            // Y-plane data
            memcpy(yData, yDataPtr, yuvFrame.luma.length)
        }

        let uvBaseAddress = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer!, 1)
        if (uvBaseAddress != nil)
        {
            let uvData = UnsafeMutablePointer<UInt8>(uvBaseAddress)
            let pUPointer = UnsafePointer<UInt8>(yuvFrame.chromaB.bytes)
            let pVPointer = UnsafePointer<UInt8>(yuvFrame.chromaR.bytes)

            // For the uv data, we need to interleave them as uvuvuvuv....
            let iuvRow = (yuvFrame.chromaB.length*2/yuvFrame.width)
            let iHalfWidth = yuvFrame.width/2

            for i in 0..<iuvRow
            {
                for j in 0..<(iHalfWidth)
                {
                    // UV data for original frame.  Just interleave them.
                    uvDataIndex = i*iHalfWidth+j
                    uIndex = (i*yuvFrame.width) + (j*2)
                    vIndex = uIndex + 1
                    uvData[uIndex] = pUPointer[uvDataIndex]
                    uvData[vIndex] = pVPointer[uvDataIndex]
                }
            }
        }
        CVPixelBufferUnlockBaseAddress(pixelBuffer!, 0)
    }

    return pixelBuffer
}

Note: yuvFrame is a structure with y, u, and v plane buffers, plus width and height. Also, I have the CFDictionary? parameter of CVPixelBufferCreate(...) set to nil. If I give it the IOSurface attribute, it fails and complains that it's not IOSurface-backed, or fails with error -6683.

Visit these links for more information. On UV interleaving: How to convert from YUV to CIImage for iOS

and a related question: CVOpenGLESTextureCacheCreateTextureFromImage returns error 6683
