Make an UIImage from a CMSampleBuffer

This is not the same as the countless questions about converting a CMSampleBuffer to a UIImage. I’m simply wondering why I can’t convert it like this:

CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)CMSampleBufferGetImageBuffer(sampleBuffer);
CIImage * imageFromCoreImageLibrary = [CIImage imageWithCVPixelBuffer: pixelBuffer];
UIImage * imageForUI = [UIImage imageWithCIImage: imageFromCoreImageLibrary];

It seems a lot simpler because it works for YCbCr color spaces, as well as RGBA and others. Is there something wrong with that code?

5 Solutions Collected From the Internet About “Make an UIImage from a CMSampleBuffer”

    Use the following code to convert an image from the pixel buffer.
    Option 1:

    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    
    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef myImage = [context
                             createCGImage:ciImage
                             fromRect:CGRectMake(0, 0,
                                                 CVPixelBufferGetWidth(pixelBuffer),
                                                 CVPixelBufferGetHeight(pixelBuffer))];
    
    UIImage *uiImage = [UIImage imageWithCGImage:myImage];
    CGImageRelease(myImage); // createCGImage: follows the Create rule, so release the CGImage to avoid a leak
    

    Option 2:

    size_t w = CVPixelBufferGetWidth(pixelBuffer);
    size_t h = CVPixelBufferGetHeight(pixelBuffer);
    size_t srcBytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
    size_t bytesPerPixel = 4; // assumes a 32-bit pixel format; don't derive this from bytesPerRow/width, rows may be padded
    
    // The base address is only valid while the pixel buffer is locked
    CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
    unsigned char *buffer = (unsigned char *)CVPixelBufferGetBaseAddress(pixelBuffer);
    
    UIGraphicsBeginImageContext(CGSizeMake(w, h));
    
    CGContextRef c = UIGraphicsGetCurrentContext();
    
    unsigned char *data = CGBitmapContextGetData(c);
    if (data != NULL) {
        size_t dstBytesPerRow = CGBitmapContextGetBytesPerRow(c);
        for (size_t y = 0; y < h; y++) {
            for (size_t x = 0; x < w; x++) {
                // Respect each buffer's own row stride; the two are not necessarily equal
                size_t srcOffset = y * srcBytesPerRow + x * bytesPerPixel;
                size_t dstOffset = y * dstBytesPerRow + x * bytesPerPixel;
                data[dstOffset]     = buffer[srcOffset];     // copy the four 8-bit channels as-is
                data[dstOffset + 1] = buffer[srcOffset + 1];
                data[dstOffset + 2] = buffer[srcOffset + 2];
                data[dstOffset + 3] = buffer[srcOffset + 3];
            }
        }
    }
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    
    UIGraphicsEndImageContext();
    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
    

    Swift:

    let buff: CMSampleBuffer = ...          // the sample buffer you already have (must contain JPEG data)
    if let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(buff),
        let image = UIImage(data: imageData) {
        // here you have a UIImage
    }
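
    Note that jpegStillImageNSDataRepresentation only works for sample buffers that already contain JPEG data, i.e. buffers delivered by AVCaptureStillImageOutput. As a rough sketch (assuming stillImageOutput is an AVCaptureStillImageOutput with JPEG output settings, already attached to a running session), such a buffer is typically obtained like this:

    if let connection = stillImageOutput.connection(withMediaType: AVMediaTypeVideo) {
        stillImageOutput.captureStillImageAsynchronously(from: connection) { sampleBuffer, error in
            guard let buff = sampleBuffer else { return }
            if let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(buff),
                let image = UIImage(data: imageData) {
                // image is the captured photo as a UIImage
            }
        }
    }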
    

    With Swift 3 and iOS 10 (AVCapturePhotoOutput):
    Imports:

    import UIKit
    import CoreData
    import CoreMotion
    import AVFoundation
    

    Create a UIView for the preview and link it to the main class:

      @IBOutlet var preview: UIView!
    

    Create this to set up the camera session (kCVPixelFormatType_32BGRA is important!):

      lazy var cameraSession: AVCaptureSession = {
        let s = AVCaptureSession()
        s.sessionPreset = AVCaptureSessionPresetHigh
        return s
      }()
    
      lazy var previewLayer: AVCaptureVideoPreviewLayer = {
        let previewl:AVCaptureVideoPreviewLayer =  AVCaptureVideoPreviewLayer(session: self.cameraSession)
        previewl.frame = self.preview.bounds
        return previewl
      }()
    
      func setupCameraSession() {
        let captureDevice = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo) as AVCaptureDevice
    
        do {
          let deviceInput = try AVCaptureDeviceInput(device: captureDevice)
    
          cameraSession.beginConfiguration()
    
          if (cameraSession.canAddInput(deviceInput) == true) {
            cameraSession.addInput(deviceInput)
          }
    
          let dataOutput = AVCaptureVideoDataOutput()
          dataOutput.videoSettings = [(kCVPixelBufferPixelFormatTypeKey as NSString) : NSNumber(value: kCVPixelFormatType_32BGRA as UInt32)]
          dataOutput.alwaysDiscardsLateVideoFrames = true
    
          if (cameraSession.canAddOutput(dataOutput) == true) {
            cameraSession.addOutput(dataOutput)
          }
    
          cameraSession.commitConfiguration()
    
          let queue = DispatchQueue(label: "fr.popigny.videoQueue", attributes: [])
          dataOutput.setSampleBufferDelegate(self, queue: queue)
    
        }
        catch let error as NSError {
          NSLog("\(error), \(error.localizedDescription)")
        }
      }
    

    In viewWillAppear:

      override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        setupCameraSession()
      }
    

    In viewDidAppear:

      override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        preview.layer.addSublayer(previewLayer)
        cameraSession.startRunning()
      }
    

    Create a function to capture the output:

      func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
    
        // Here you collect each frame and process it
        let ts:CMTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
        self.mycapturedimage = imageFromSampleBuffer(sampleBuffer: sampleBuffer)
    }
    

    Here is the code that converts a kCVPixelFormatType_32BGRA CMSampleBuffer to a UIImage. The key thing is the bitmapInfo, which must match 32BGRA: 32-bit little-endian byte order with premultiplied-first alpha info:

      func imageFromSampleBuffer(sampleBuffer : CMSampleBuffer) -> UIImage
      {
        // Get a CMSampleBuffer's Core Video image buffer for the media data
        let  imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        // Lock the base address of the pixel buffer
        CVPixelBufferLockBaseAddress(imageBuffer!, CVPixelBufferLockFlags.readOnly);
    
    
        // Get the base address of the pixel buffer
        let baseAddress = CVPixelBufferGetBaseAddress(imageBuffer!);
    
        // Get the number of bytes per row for the pixel buffer
        let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer!);
        // Get the pixel buffer width and height
        let width = CVPixelBufferGetWidth(imageBuffer!);
        let height = CVPixelBufferGetHeight(imageBuffer!);
    
        // Create a device-dependent RGB color space
        let colorSpace = CGColorSpaceCreateDeviceRGB();
    
        // Create a bitmap graphics context with the sample buffer data
        var bitmapInfo: UInt32 = CGBitmapInfo.byteOrder32Little.rawValue
        bitmapInfo |= CGImageAlphaInfo.premultipliedFirst.rawValue & CGBitmapInfo.alphaInfoMask.rawValue
        //let bitmapInfo: UInt32 = CGBitmapInfo.alphaInfoMask.rawValue
        let context = CGContext.init(data: baseAddress, width: width, height: height, bitsPerComponent: 8, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo)
        // Create a Quartz image from the pixel data in the bitmap graphics context
        let quartzImage = context?.makeImage();
        // Unlock the pixel buffer
        CVPixelBufferUnlockBaseAddress(imageBuffer!, CVPixelBufferLockFlags.readOnly);
    
        // Create an image object from the Quartz image
        let image = UIImage.init(cgImage: quartzImage!);
    
        return (image);
      }
    

    I wrote a simple extension for use with Swift 3.0 to produce a UIImage from a CMSampleBuffer.

    This also handles scaling and orientation, though you can just accept default values if they work for you.

    extension CMSampleBuffer {
        func image(orientation: UIImageOrientation = .up, scale: CGFloat = 1.0) -> UIImage? {
            guard let buffer = CMSampleBufferGetImageBuffer(self) else { return nil }
    
            let ciImage = CIImage(cvPixelBuffer: buffer)
    
            let image = UIImage(ciImage: ciImage, scale: scale, orientation: orientation)
    
            return image
        }
    }
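
    A hypothetical usage, e.g. from inside a didOutputSampleBuffer delegate callback where you hold a CMSampleBuffer named sampleBuffer (the orientation and scale values are just examples):

    if let image = sampleBuffer.image(orientation: .right, scale: UIScreen.main.scale) {
        // use the UIImage here (hop back to the main queue before touching UIKit views)
    }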
    

    This is going to come up a lot in connection with the iOS 10 AVCapturePhotoOutput class. Suppose the user wants to snap a photo and you call capturePhoto(with:delegate:) and your settings include a request for a preview image. This is a splendidly efficient way to get a preview image, but how are you going to display it in your interface? The preview image arrives as a CMSampleBuffer in your implementation of the delegate method:

    func capture(_ output: AVCapturePhotoOutput, 
        didFinishProcessingPhotoSampleBuffer buff: CMSampleBuffer?, 
        previewPhotoSampleBuffer: CMSampleBuffer?, 
        resolvedSettings: AVCaptureResolvedPhotoSettings, 
        bracketSettings: AVCaptureBracketedStillImageSettings?, 
        error: Error?) {
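
    For context, the capture call that requests such a preview image might look roughly like this (photoOutput here stands for your already-configured AVCapturePhotoOutput; a sketch, not a complete setup):

    let settings = AVCapturePhotoSettings()
    // Ask for an uncompressed preview image alongside the full-size photo
    if let previewFormat = settings.availablePreviewPhotoPixelFormatTypes.first {
        settings.previewPhotoFormat = [kCVPixelBufferPixelFormatTypeKey as String: previewFormat]
    }
    photoOutput.capturePhoto(with: settings, delegate: self)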
    

    You need to transform a CMSampleBuffer, previewPhotoSampleBuffer, into a UIImage. How are you going to do that? Like this:

    if let prev = previewPhotoSampleBuffer {
        if let buff = CMSampleBufferGetImageBuffer(prev) {
            let cim = CIImage(cvPixelBuffer: buff)
            let im = UIImage(ciImage: cim)
            // and now you have a UIImage! do something with it ...
        }
    }
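
    One caveat, echoing the question at the top: a UIImage created directly from a CIImage is not backed by a CGImage, which is fine for putting it on screen but can trip up code that expects one (for example when encoding the image to PNG/JPEG data). If that bites you, render through a CIContext first, just as Option 1 above does in Objective-C. A Swift sketch:

    if let prev = previewPhotoSampleBuffer,
        let buff = CMSampleBufferGetImageBuffer(prev) {
        let ciImage = CIImage(cvPixelBuffer: buff)
        let context = CIContext()
        if let cgImage = context.createCGImage(ciImage, from: ciImage.extent) {
            let image = UIImage(cgImage: cgImage)   // CGImage-backed UIImage
            // and now you can display or encode it safely
        }
    }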