Detect touches only on non-transparent pixels of UIImageView, efficiently

How would you detect touches only on non-transparent pixels of a UIImageView, efficiently?

Consider an image like the one below, displayed with UIImageView. The goal is to make the gesture recognisers respond only when the touch happens in the non-transparent (black, in this case) area of the image.

[Image: a black shape on a transparent background; the example includes a zero whose interior is also transparent.]


Some approaches that come to mind:

    • Override hitTest:withEvent: or pointInside:withEvent:. This approach might be terribly inefficient, as these methods get called many times during a touch event.
    • Checking whether a single pixel is transparent might produce unexpected results, since fingers are bigger than one pixel. Checking a circular area of pixels around the hit point, or trying to find a transparent path towards an edge, might work better.


Bonus questions:

    • It would be nice to differentiate between outer and inner transparent pixels of an image. In the example, the transparent pixels inside the zero should also be considered valid.
    • What happens if the image has a transform?
    • Can the image processing be hardware accelerated?

    3 Solutions collected for “Detect touches only on non-transparent pixels of UIImageView, efficiently”

    Here’s my quick implementation: (based on Retrieving a pixel alpha value for a UIImage)

    - (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event {
        // Using code from "Retrieving a pixel alpha value for a UIImage"
        // 'image' is the UIImage being displayed (e.g. self.image in a UIImageView subclass)
        unsigned char pixel[1] = {0};
        CGContextRef context = CGBitmapContextCreate(pixel,
                                                     1, 1, 8, 1, NULL,
                                                     (CGBitmapInfo)kCGImageAlphaOnly);
        UIGraphicsPushContext(context);
        [image drawAtPoint:CGPointMake(-point.x, -point.y)];
        UIGraphicsPopContext();
        CGContextRelease(context);
        CGFloat alpha = pixel[0] / 255.0f;
        BOOL transparent = alpha < 0.01f;
        return !transparent;
    }

    This assumes that the image is in the same coordinate space as the point. If scaling goes on, you may have to convert the point before checking the pixel data.
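
    For example, a minimal sketch of that conversion, assuming UIViewContentModeScaleToFill and a UIImageView subclass (imagePointForViewPoint: is a made-up helper name):

    - (CGPoint)imagePointForViewPoint:(CGPoint)point {
        // Map a point in the view's bounds into the image's coordinate space (in points).
        // Aspect-fit/fill content modes would need the usual letterboxing math on top of this.
        CGSize imageSize = self.image.size;
        CGSize viewSize  = self.bounds.size;
        return CGPointMake(point.x * imageSize.width  / viewSize.width,
                           point.y * imageSize.height / viewSize.height);
    }

    The result can then be passed to the alpha check above in place of the raw touch point.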

    Appears to work pretty quickly for me; I measured approx. 0.1–0.4 ms per call. It doesn't handle the interior transparent space (the inside of the zero), and is probably not optimal.

    On GitHub, you can find a project by Ole Begemann (OBShapedButton) which extends UIButton so that it only detects touches where the button’s image is not transparent.

    Since UIButton is a subclass of UIView, adapting it to UIImageView should be straightforward.
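
    A minimal sketch of what such an adaptation could look like (ShapedImageView and alphaAtPoint: are made-up names, reusing the 1×1 alpha-only context trick from the answer above; note that UIImageView ships with userInteractionEnabled set to NO, so it has to be enabled before any touches arrive):

    @interface ShapedImageView : UIImageView
    @end

    @implementation ShapedImageView

    // Render only the touched pixel into a 1x1 alpha-only context and read back its alpha.
    - (CGFloat)alphaAtPoint:(CGPoint)point {
        unsigned char pixel[1] = {0};
        CGContextRef context = CGBitmapContextCreate(pixel, 1, 1, 8, 1, NULL,
                                                     (CGBitmapInfo)kCGImageAlphaOnly);
        UIGraphicsPushContext(context);
        [self.image drawAtPoint:CGPointMake(-point.x, -point.y)];
        UIGraphicsPopContext();
        CGContextRelease(context);
        return pixel[0] / 255.0f;
    }

    // Touches over (nearly) transparent pixels fall through to whatever lies behind the view.
    - (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event {
        if (![super pointInside:point withEvent:event]) return NO;
        return [self alphaAtPoint:point] >= 0.01f;
    }

    @end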

    Hope this helps.

    Well, if you need to do it really fast, you need to precalculate the mask.

    Here’s how to extract it:

    UIImage *image = [UIImage imageNamed:@"some_image.png"];
    // Copy the raw pixel data out of the CGImage (assumed to be 4 bytes per pixel, alpha last)
    NSData *data = (NSData *)CFBridgingRelease(CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage)));
    unsigned char *pixels = (unsigned char *)[data bytes];
    // One BOOL per pixel: YES where the pixel is fully opaque
    BOOL *mask = (BOOL *)malloc(data.length);
    for (int i = 0; i < data.length; i += 4) {
        mask[i >> 2] = pixels[i + 3] == 0xFF; // alpha, I hope
    }
    // TODO: save mask somewhere

    Or you could use the 1×1 bitmap context solution to precalculate the mask.
    Having a mask means you can check any point with the cost of one indexed memory access.
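
    A sketch of that lookup (MaskContainsPoint, width, and height are made-up names; width and height are the pixel dimensions of the image the mask was built from, and the mask is stored row-major, one BOOL per pixel):

    static inline BOOL MaskContainsPoint(const BOOL *mask, size_t width, size_t height, CGPoint p) {
        if (p.x < 0 || p.y < 0 || p.x >= width || p.y >= height) return NO; // outside the image
        return mask[(size_t)p.y * width + (size_t)p.x];                     // one indexed memory access
    }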

    As for checking a bigger area than one pixel: I would check pixels on a circle centred on the touch point. About 16 points on the circle should be enough.
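
    A sketch of that idea, building on the MaskContainsPoint helper above (the radius, in image pixels, is a tuning parameter, roughly half a fingertip):

    static BOOL CircleHitsMask(const BOOL *mask, size_t width, size_t height,
                               CGPoint center, CGFloat radius) {
        if (MaskContainsPoint(mask, width, height, center)) return YES;   // the touch point itself
        for (int i = 0; i < 16; i++) {                                    // 16 samples on the circle
            CGFloat angle = 2.0f * (CGFloat)M_PI * i / 16.0f;
            CGPoint sample = CGPointMake(center.x + radius * cos(angle),
                                         center.y + radius * sin(angle));
            if (MaskContainsPoint(mask, width, height, sample)) return YES;
        }
        return NO;
    }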

    Detecting inner pixels as well takes another precalculation step: you need to find the convex hull of the mask. You can do that using the “Graham scan” algorithm. Then either fill that area in the mask, or save the polygon and use a point-in-polygon test instead.
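
    If you take the point-in-polygon route, a standard ray-casting (even-odd) test over the hull’s vertices is all that is needed at touch time; this is the generic algorithm, not tied to any particular hull implementation:

    // Cast a horizontal ray from p and count how many polygon edges it crosses.
    static BOOL PointInPolygon(CGPoint p, const CGPoint *poly, size_t count) {
        BOOL inside = NO;
        for (size_t i = 0, j = count - 1; i < count; j = i++) {
            if (((poly[i].y > p.y) != (poly[j].y > p.y)) &&
                (p.x < (poly[j].x - poly[i].x) * (p.y - poly[i].y) /
                       (poly[j].y - poly[i].y) + poly[i].x)) {
                inside = !inside;
            }
        }
        return inside;
    }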

    And finally, if the image has a transform, you need to convert the point coordinates from screen space to image space, and then you can just check the precalculated mask.
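
    In UIKit most of that can be done with convertPoint:fromView:, which applies the inverse of the view’s transform. A sketch (imageView and touchPoint are assumed names; the points-to-pixels step again assumes scale-to-fill):

    // Point in the superview's coordinates -> the image view's own (untransformed) bounds.
    CGPoint viewPoint = [imageView convertPoint:touchPoint fromView:imageView.superview];

    // View points -> image pixels, so the result can index the precalculated mask.
    CGFloat pixelWidth  = imageView.image.size.width  * imageView.image.scale;
    CGFloat pixelHeight = imageView.image.size.height * imageView.image.scale;
    CGPoint imagePoint  = CGPointMake(viewPoint.x * pixelWidth  / imageView.bounds.size.width,
                                      viewPoint.y * pixelHeight / imageView.bounds.size.height);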