Converting CMSampleBufferRef to UIImage is a common job we have to do when working with AVCaptureSession and AVCaptureVideoDataOutput. The snippets below act as a reference for me and anyone else who is interested :)

Here I convert to CGImageRef first. Converting a CGImageRef to UIImage takes just one line

[code language=”objc”]
UIImage *image = [UIImage imageWithCGImage:cgImageRef];
[/code]

Remember that imageWithCGImage: returns an autoreleased object, so we should wrap the call in an @autoreleasepool block, even though we’re using ARC; this matters especially when it runs once per captured frame

[code language=”objc”]
@autoreleasepool
{
    // Code that returns an autoreleased object
}
[/code]

[code language=”objc”]
- (CGImageRef)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    // Get a CMSampleBuffer’s Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    // Get the base address (BGRA is a single-plane format)
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);

    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);

    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // Create a bitmap graphics context with the sample buffer data
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
        bytesPerRow, colorSpace,
        kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);

    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);

    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    // Free up the context and color space
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    // We return CGImageRef rather than UIImage because QR scanning needs CGImageRef.
    // Note: the caller owns the returned image and must CGImageRelease() it.
    return quartzImage;
}
[/code]
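Putting the pieces together, here is a sketch of how I call this from the capture delegate callback (the delegate method is the standard AVCaptureVideoDataOutputSampleBufferDelegate one; the surrounding class is assumed):

[code language=”objc”]
// Sketch: per-frame conversion inside the capture callback.
// imageFromSampleBuffer: returns a CGImageRef that we own,
// so we must release it after UIImage has taken its own reference.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    @autoreleasepool {
        CGImageRef cgImageRef = [self imageFromSampleBuffer:sampleBuffer];
        UIImage *image = [UIImage imageWithCGImage:cgImageRef];
        CGImageRelease(cgImageRef); // UIImage keeps its own reference

        // ... hand image off for display or processing ...
    }
}
[/code]

The @autoreleasepool here ensures the autoreleased UIImage is drained after every frame rather than piling up.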

A variant that both converts to CGImageRef and performs cropping can be found in ZXingObjC (ZXingObjC/client/ZXCGImageLuminanceSource.m).

P.S.: All of this only works if we set the video data output’s pixel format type to BGRA

[code language=”objc”]


NSString *key = (__bridge NSString *)kCVPixelBufferPixelFormatTypeKey;
NSNumber *value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
NSDictionary *videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
[dataOutput setVideoSettings:videoSettings];
[/code]
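For completeness, here is a minimal sketch of wiring up the whole data output (the names session, dataOutput, and the queue label are my own assumptions, not fixed API names):

[code language=”objc”]
AVCaptureVideoDataOutput *dataOutput = [[AVCaptureVideoDataOutput alloc] init];

// Request BGRA frames so the CGBitmapContextCreate call above matches the data
NSDictionary *videoSettings = @{ (__bridge NSString *)kCVPixelBufferPixelFormatTypeKey :
                                     @(kCVPixelFormatType_32BGRA) };
[dataOutput setVideoSettings:videoSettings];

// Deliver frames on a background serial queue, not the main thread
dispatch_queue_t queue = dispatch_queue_create("com.example.videoqueue", DISPATCH_QUEUE_SERIAL);
[dataOutput setSampleBufferDelegate:self queue:queue];

if ([session canAddOutput:dataOutput]) {
    [session addOutput:dataOutput];
}
[/code]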