Motion Blur effect on UIImage on iOS

As I commented on the repository, I just added motion and zoom blurs to GPUImage. These are the GPUImageMotionBlurFilter and GPUImageZoomBlurFilter classes. This is an example of the zoom blur:

(Image: GPUImage zoom blur example)
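
For a still UIImage, applying the motion blur looks roughly like this (a minimal Objective-C sketch with a placeholder image name; the blurAngle/blurSize property names follow the filter headers, but verify against the current GPUImage repository):

 #import "GPUImage.h"

 UIImage *inputImage = [UIImage imageNamed:@"sample.jpg"];

 // Configure the directional blur: angle in degrees, size as a texel multiple.
 GPUImageMotionBlurFilter *motionBlurFilter = [[GPUImageMotionBlurFilter alloc] init];
 motionBlurFilter.blurAngle = 45.0;
 motionBlurFilter.blurSize = 2.5;

 // Render the image through the filter and read the result back as a UIImage.
 UIImage *blurredImage = [motionBlurFilter imageByFilteringImage:inputImage];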

For the motion blur, I do a 9-hit Gaussian blur along a single direction. This is achieved using the following vertex and fragment shaders:

Vertex:

 attribute vec4 position;
 attribute vec4 inputTextureCoordinate;

 uniform highp vec2 directionalTexelStep;

 varying vec2 textureCoordinate;
 varying vec2 oneStepBackTextureCoordinate;
 varying vec2 twoStepsBackTextureCoordinate;
 varying vec2 threeStepsBackTextureCoordinate;
 varying vec2 fourStepsBackTextureCoordinate;
 varying vec2 oneStepForwardTextureCoordinate;
 varying vec2 twoStepsForwardTextureCoordinate;
 varying vec2 threeStepsForwardTextureCoordinate;
 varying vec2 fourStepsForwardTextureCoordinate;

 void main()
 {
     gl_Position = position;

     textureCoordinate = inputTextureCoordinate.xy;
     oneStepBackTextureCoordinate = inputTextureCoordinate.xy - directionalTexelStep;
     twoStepsBackTextureCoordinate = inputTextureCoordinate.xy - 2.0 * directionalTexelStep;
     threeStepsBackTextureCoordinate = inputTextureCoordinate.xy - 3.0 * directionalTexelStep;
     fourStepsBackTextureCoordinate = inputTextureCoordinate.xy - 4.0 * directionalTexelStep;
     oneStepForwardTextureCoordinate = inputTextureCoordinate.xy + directionalTexelStep;
     twoStepsForwardTextureCoordinate = inputTextureCoordinate.xy + 2.0 * directionalTexelStep;
     threeStepsForwardTextureCoordinate = inputTextureCoordinate.xy + 3.0 * directionalTexelStep;
     fourStepsForwardTextureCoordinate = inputTextureCoordinate.xy + 4.0 * directionalTexelStep;
 }

Fragment:

 precision highp float;

 uniform sampler2D inputImageTexture;

 varying vec2 textureCoordinate;
 varying vec2 oneStepBackTextureCoordinate;
 varying vec2 twoStepsBackTextureCoordinate;
 varying vec2 threeStepsBackTextureCoordinate;
 varying vec2 fourStepsBackTextureCoordinate;
 varying vec2 oneStepForwardTextureCoordinate;
 varying vec2 twoStepsForwardTextureCoordinate;
 varying vec2 threeStepsForwardTextureCoordinate;
 varying vec2 fourStepsForwardTextureCoordinate;

 void main()
 {
     lowp vec4 fragmentColor = texture2D(inputImageTexture, textureCoordinate) * 0.18;
     fragmentColor += texture2D(inputImageTexture, oneStepBackTextureCoordinate) * 0.15;
     fragmentColor += texture2D(inputImageTexture, twoStepsBackTextureCoordinate) * 0.12;
     fragmentColor += texture2D(inputImageTexture, threeStepsBackTextureCoordinate) * 0.09;
     fragmentColor += texture2D(inputImageTexture, fourStepsBackTextureCoordinate) * 0.05;
     fragmentColor += texture2D(inputImageTexture, oneStepForwardTextureCoordinate) * 0.15;
     fragmentColor += texture2D(inputImageTexture, twoStepsForwardTextureCoordinate) * 0.12;
     fragmentColor += texture2D(inputImageTexture, threeStepsForwardTextureCoordinate) * 0.09;
     fragmentColor += texture2D(inputImageTexture, fourStepsForwardTextureCoordinate) * 0.05;

     gl_FragColor = fragmentColor;
 }
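
The sample weights form a normalized Gaussian falloff: 0.18 + 2 × (0.15 + 0.12 + 0.09 + 0.05) = 1.0, so the blur preserves the overall brightness of the image.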

As an optimization, I calculate the step size between texture samples outside of the fragment shader, using the blur angle, blur size, and image dimensions. This step is then passed into the vertex shader, so that I can calculate the texture sampling positions there and have them interpolated across the fragment shader. This avoids dependent texture reads (texture coordinates computed within the fragment shader itself), which are particularly expensive on iOS GPUs because they defeat the hardware’s texture prefetching.
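
In rough terms, that CPU-side calculation boils down to something like this (a sketch, not the exact GPUImage code; blurAngle is assumed to be in degrees and blurSize a multiple of one texel):

 // Convert the blur angle and size into a per-sample step in normalized
 // texture coordinates, so the vertex shader can offset by whole texels.
 CGFloat radians = blurAngle * M_PI / 180.0;
 GLfloat directionalTexelStep[2];
 directionalTexelStep[0] = blurSize * cos(radians) / imageWidth;
 directionalTexelStep[1] = blurSize * sin(radians) / imageHeight;
 // This pair is then uploaded as the directionalTexelStep uniform.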

The zoom blur is much slower, because I still do these calculations within the fragment shader. No doubt there’s a way I can optimize this, but I haven’t tried yet. It uses a 9-hit Gaussian blur where the direction and per-sample offset distance vary as a function of each pixel’s position relative to the center of the blur.

It uses the following fragment shader (and a standard passthrough vertex shader):

 varying highp vec2 textureCoordinate;

 uniform sampler2D inputImageTexture;

 uniform highp vec2 blurCenter;
 uniform highp float blurSize;

 void main()
 {
     // TODO: Do a more intelligent scaling based on resolution here
     highp vec2 samplingOffset = 1.0/100.0 * (blurCenter - textureCoordinate) * blurSize;

     lowp vec4 fragmentColor = texture2D(inputImageTexture, textureCoordinate) * 0.18;
     fragmentColor += texture2D(inputImageTexture, textureCoordinate + samplingOffset) * 0.15;
     fragmentColor += texture2D(inputImageTexture, textureCoordinate + (2.0 * samplingOffset)) * 0.12;
     fragmentColor += texture2D(inputImageTexture, textureCoordinate + (3.0 * samplingOffset)) * 0.09;
     fragmentColor += texture2D(inputImageTexture, textureCoordinate + (4.0 * samplingOffset)) * 0.05;
     fragmentColor += texture2D(inputImageTexture, textureCoordinate - samplingOffset) * 0.15;
     fragmentColor += texture2D(inputImageTexture, textureCoordinate - (2.0 * samplingOffset)) * 0.12;
     fragmentColor += texture2D(inputImageTexture, textureCoordinate - (3.0 * samplingOffset)) * 0.09;
     fragmentColor += texture2D(inputImageTexture, textureCoordinate - (4.0 * samplingOffset)) * 0.05;

     gl_FragColor = fragmentColor;
 }
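
Driving the zoom blur from Objective-C is analogous (again a sketch; blurCenter is in normalized 0.0-1.0 texture coordinates, matching the shader above):

 // Center the zoom blur on the middle of the image and set its strength.
 GPUImageZoomBlurFilter *zoomBlurFilter = [[GPUImageZoomBlurFilter alloc] init];
 zoomBlurFilter.blurCenter = CGPointMake(0.5, 0.5);
 zoomBlurFilter.blurSize = 1.5;

 UIImage *zoomBlurredImage = [zoomBlurFilter imageByFilteringImage:inputImage];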

Note that both of these blurs are hardcoded at 9 samples for performance reasons. At larger blur sizes, you’ll start to see artifacts from the limited sample count. For larger blurs, you’ll need to run these filters multiple times or extend them to support more Gaussian samples, but note that more samples lead to much slower rendering because of the limited texture sampling bandwidth on iOS devices.
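
Chaining two passes of the same filter is one way to do that multi-pass approach (a sketch; the image-capture method names have varied across GPUImage versions):

 // Two consecutive blur passes approximate a wider blur without adding samples.
 GPUImagePicture *source = [[GPUImagePicture alloc] initWithImage:inputImage];
 GPUImageMotionBlurFilter *firstPass = [[GPUImageMotionBlurFilter alloc] init];
 GPUImageMotionBlurFilter *secondPass = [[GPUImageMotionBlurFilter alloc] init];

 [source addTarget:firstPass];
 [firstPass addTarget:secondPass];

 [secondPass useNextFrameForImageCapture];
 [source processImage];
 UIImage *twicePassedImage = [secondPass imageFromCurrentFramebuffer];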