Converting raw data to displayable video for iOS

I have an interesting problem I need to research related to very low level video streaming.

Has anyone had any experience converting a raw stream of bytes (separated into per-pixel information, but not in a standard video format) into a low-resolution video stream? I believe I can map the data into RGB values per pixel, since the color value that corresponds to each value in the raw data will be determined by us. I’m not sure where to go from there, or what the per-pixel RGB format needs to be.

I’ve looked at FFmpeg, but its documentation is massive and I don’t know where to start.

Specific questions I have: is it possible to create a CVPixelBuffer from that pixel data? If I were to do that, what format would the per-pixel data need to be in?
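
For what it’s worth, here is an untested sketch of one way I imagine wrapping the raw bytes, assuming I have already expanded them to 32-bit BGRA (rawBGRA, width, and height are placeholders):

    #import <CoreVideo/CoreVideo.h>

    // rawBGRA is assumed to hold width * height * 4 bytes, one B,G,R,A quad per pixel.
    CVPixelBufferRef pixelBuffer = NULL;
    CVReturn status = CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                                                   width, height,
                                                   kCVPixelFormatType_32BGRA,
                                                   rawBGRA,
                                                   width * 4,   // bytes per row
                                                   NULL, NULL,  // no release callback/context
                                                   NULL,        // no buffer attributes
                                                   &pixelBuffer);
    if (status != kCVReturnSuccess) {
        NSLog(@"CVPixelBufferCreateWithBytes failed: %d", (int)status);
    }
    // ... use pixelBuffer, then CVPixelBufferRelease(pixelBuffer) when finished ...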

Also, should I be looking deeper into OpenGL, and if so, where would be the best place to look for information on this topic?

What about CGBitmapContextCreate? For example, if I went with something like this, what would a typical pixel byte need to look like? Would this be fast enough to keep the frame rate above 20 fps?
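
To make that concrete, here is a rough, untested sketch of what I imagine the CGBitmapContextCreate route looking like with 8-bit-per-channel RGBA pixels (rawRGBA, width, and height are placeholders):

    #import <CoreGraphics/CoreGraphics.h>

    // rawRGBA is assumed to hold width * height * 4 bytes: R, G, B, A per pixel.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(rawRGBA,
                                                 width, height,
                                                 8,          // bits per component
                                                 width * 4,  // bytes per row
                                                 colorSpace,
                                                 (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
    CGImageRef frameImage = CGBitmapContextCreateImage(context);

    // ... hand frameImage to a layer or an encoder ...

    CGImageRelease(frameImage);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);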

EDIT:

I think, with the excellent help of you two and some more research on my own, I’ve put together a plan: construct the raw RGBA data, build a CGImage from that data, and then create a CVPixelBuffer from that CGImage, following CVPixelBuffer from CGImage.

However, to then play that live as the data comes in, I’m not sure what kind of FPS I would be looking at. Do I paint the frames to a CALayer, or is there some class similar to AVAssetWriter that I could use to play them back as I append CVPixelBuffers? My experience is with using AVAssetWriter to export constructed Core Animation hierarchies to video, so the videos are always fully constructed before they begin playing, rather than being displayed as live video.
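
One option I am considering for simple on-screen playback, as an untested sketch (videoLayer and frameImage are placeholders, and this assumes ARC): push each new CGImage into a plain CALayer’s contents on the main thread.

    // frameImage is the CGImageRef built from the latest frame's pixel bytes.
    CGImageRetain(frameImage);  // keep it alive until the main thread has consumed it
    dispatch_async(dispatch_get_main_queue(), ^{
        videoLayer.contents = (__bridge id)frameImage;  // videoLayer is a CALayer already in the view hierarchy
        CGImageRelease(frameImage);
    });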

2 Answers

I’ve done this before, and I know that you found my GPUImage project a little while ago. As I replied on the issues there, the GPUImageRawDataInput is what you want for this, because it does a fast upload of RGBA, BGRA, or RGB data directly into an OpenGL ES texture. From there, the frame data can be filtered, displayed to the screen, or recorded into a movie file.
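
Roughly, the wiring looks like this (a sketch; double-check the exact method names against GPUImageRawDataInput.h, and rawBytes, width, and height stand in for your frame data):

    #import "GPUImage.h"

    // One-time setup: rawBytes is assumed to point at width * height * 4 bytes of RGBA data.
    GPUImageRawDataInput *rawInput =
        [[GPUImageRawDataInput alloc] initWithBytes:rawBytes
                                               size:CGSizeMake(width, height)];

    GPUImageView *previewView = [[GPUImageView alloc] initWithFrame:self.view.bounds];
    [self.view addSubview:previewView];
    [rawInput addTarget:previewView];

    // For every incoming frame: overwrite the bytes, then push them through the pipeline.
    [rawInput updateDataFromBytes:rawBytes size:CGSizeMake(width, height)];
    [rawInput processData];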

Your proposed path of going through a CGImage to a CVPixelBuffer is not going to yield very good performance, based on my personal experience. There’s too much overhead when passing through Core Graphics for realtime video. You want to go directly to OpenGL ES for the fastest display speed here.

I might even be able to improve my code to make it faster than it is right now. I currently use glTexImage2D() to update texture data from local bytes, but it would probably be even faster to use the texture caches introduced in iOS 5.0 to speed up refreshing data within a texture that maintains its size. There’s some overhead in setting up the caches that makes them a little slower for one-off uploads, but rapidly updating data should be faster with them.
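
For reference, the texture cache path looks roughly like this on iOS 5.0+ (a sketch with error checks omitted; eaglContext and pixelBuffer are placeholders, and the pixel buffer needs to be IOSurface-backed):

    #import <CoreVideo/CoreVideo.h>
    #import <OpenGLES/ES2/gl.h>
    #import <OpenGLES/ES2/glext.h>

    // One-time setup: a texture cache tied to your GL context.
    CVOpenGLESTextureCacheRef textureCache = NULL;
    CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, eaglContext, NULL, &textureCache);

    // Per frame: expose a BGRA pixel buffer as a GL texture without an explicit glTexImage2D() copy.
    CVOpenGLESTextureRef texture = NULL;
    CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache,
                                                 pixelBuffer, NULL,
                                                 GL_TEXTURE_2D, GL_RGBA,
                                                 (GLsizei)CVPixelBufferGetWidth(pixelBuffer),
                                                 (GLsizei)CVPixelBufferGetHeight(pixelBuffer),
                                                 GL_BGRA, GL_UNSIGNED_BYTE, 0,
                                                 &texture);
    glBindTexture(CVOpenGLESTextureGetTarget(texture), CVOpenGLESTextureGetName(texture));

    // ... draw with the texture ...

    CFRelease(texture);
    CVOpenGLESTextureCacheFlush(textureCache, 0);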

My 2 cents:

I made an OpenGL game which lets the user record a 3D scene. Playback was done by replaying the scene (instead of playing a video, because realtime encoding did not yield a comfortable FPS).

There is a technique which could help out; unfortunately, I didn’t have time to implement it:
http://allmybrain.com/2011/12/08/rendering-to-a-texture-with-ios-5-texture-cache-api/

This technique should cut down the time spent getting pixels back from OpenGL. You might get an acceptable video encoding rate.
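
The gist of that article, as a sketch I never got around to implementing (textureCache, width, and height are placeholders from earlier setup): render into a texture whose backing store is a CVPixelBuffer shared through the texture cache, then read the pixels from the buffer instead of calling glReadPixels().

    #import <Foundation/Foundation.h>
    #import <CoreVideo/CoreVideo.h>
    #import <OpenGLES/ES2/gl.h>
    #import <OpenGLES/ES2/glext.h>

    // Create an IOSurface-backed pixel buffer that OpenGL ES can render into.
    NSDictionary *attributes = @{ (__bridge id)kCVPixelBufferIOSurfacePropertiesKey : @{} };
    CVPixelBufferRef renderTarget = NULL;
    CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                        kCVPixelFormatType_32BGRA,
                        (__bridge CFDictionaryRef)attributes, &renderTarget);

    // Wrap the buffer as a texture and attach it to an FBO as the color attachment.
    CVOpenGLESTextureRef renderTexture = NULL;
    CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache,
                                                 renderTarget, NULL,
                                                 GL_TEXTURE_2D, GL_RGBA,
                                                 (GLsizei)width, (GLsizei)height,
                                                 GL_BGRA, GL_UNSIGNED_BYTE, 0,
                                                 &renderTexture);

    GLuint framebuffer;
    glGenFramebuffers(1, &framebuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D,
                           CVOpenGLESTextureGetName(renderTexture), 0);

    // ... render the scene into this FBO, then glFinish() ...

    // The rendered pixels can now be read straight out of the pixel buffer,
    // e.g. to hand renderTarget to an AVAssetWriterInputPixelBufferAdaptor.
    CVPixelBufferLockBaseAddress(renderTarget, kCVPixelBufferLock_ReadOnly);
    uint8_t *pixels = (uint8_t *)CVPixelBufferGetBaseAddress(renderTarget);
    // ... encode or copy from pixels here ...
    CVPixelBufferUnlockBaseAddress(renderTarget, kCVPixelBufferLock_ReadOnly);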