Fixed the sizing issues I was having with the still photo capture, and fixed the preview input on iOS 4.0. The still camera is now functional on every device but the iPhone 4.
BradLarson committed Apr 1, 2012
1 parent b7e06b1 commit d1d3586
Showing 8 changed files with 70 additions and 27 deletions.
2 changes: 1 addition & 1 deletion License.txt
@@ -1,4 +1,4 @@
- Copyright (c) 2012, Brad Larson, Ben Cochran, Hugues Lismonde, Keitaroh Kobayashi.
+ Copyright (c) 2012, Brad Larson, Ben Cochran, Hugues Lismonde, Keitaroh Kobayashi, Alaric Cole, Matthew Clark, Jacob Gundersen, Chris Williams.
All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
43 changes: 43 additions & 0 deletions README.md
@@ -149,6 +149,10 @@ For example, an application that takes in live video from the camera, converts t
- **GPUImagePixellateFilter**: Applies a pixellation effect on an image or video
- *fractionalWidthOfAPixel*: How large the pixels are, as a fraction of the width and height of the image (0.0 - 1.0, default 0.05)

- **GPUImagePolarPixellateFilter**: Applies a pixellation effect on an image or video, based on polar coordinates instead of Cartesian ones
- *center*: The center about which to apply the pixellation, defaulting to (0.5, 0.5)
- *pixelSize*: The fractional pixel size, split into width and height components. The default is (0.05, 0.05)

- **GPUImageSobelEdgeDetectionFilter**: Sobel edge detection, with edges highlighted in white
- *intensity*: The degree to which the original image colors are replaced by the detected edges (0.0 - 1.0, with 1.0 as the default)
- *imageWidthFactor*:
@@ -181,6 +185,9 @@ For example, an application that takes in live video from the camera, converts t
- *center*: The center of the image (in normalized coordinates from 0 - 1.0) about which to distort, with a default of (0.5, 0.5)
- *scale*: The amount of distortion to apply, from -2.0 to 2.0, with a default of 1.0

- **GPUImageStretchDistortionFilter**: Creates a stretch distortion of the image
- *center*: The center of the image (in normalized coordinates from 0 - 1.0) about which to distort, with a default of (0.5, 0.5)

- **GPUImageVignetteFilter**: Performs a vignetting effect, fading out the image at the edges
  - *x*, *y*: The directional intensity of the vignetting, with defaults of x = 0.5 and y = 0.75
@@ -234,6 +241,41 @@ This sets up a video source coming from the iOS device's back-facing camera, usi

For blending filters and others that take in more than one image, you can create multiple outputs and add a single filter as a target for both of these outputs. The order with which the outputs are added as targets will affect the order in which the input images are blended or otherwise processed.
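
For illustration, here is a minimal sketch of such a two-input chain. The class names GPUImagePicture and GPUImageDissolveBlendFilter and the processImage method are assumptions here, not confirmed by this commit, and baseImage / overlayImage are UIImages you would supply:

// Hypothetical two-input blend: the order of addTarget: calls decides which source
// is treated as the first input of the blend
GPUImagePicture *baseImageSource = [[GPUImagePicture alloc] initWithImage:baseImage];
GPUImagePicture *overlayImageSource = [[GPUImagePicture alloc] initWithImage:overlayImage];
GPUImageDissolveBlendFilter *blendFilter = [[GPUImageDissolveBlendFilter alloc] init];

[baseImageSource addTarget:blendFilter];    // becomes the blend's first input
[overlayImageSource addTarget:blendFilter]; // becomes the blend's second input

[baseImageSource processImage];
[overlayImageSource processImage];
UIImage *blendedImage = [blendFilter imageFromCurrentlyProcessedOutput];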


### Capturing and filtering a still photo ###

To capture and filter still photos, you can use a process similar to the one for filtering video. Instead of a GPUImageVideoCamera, you use a GPUImageStillCamera:

stillCamera = [[GPUImageStillCamera alloc] init];
filter = [[GPUImageGammaFilter alloc] init];
GPUImageRotationFilter *rotationFilter = [[GPUImageRotationFilter alloc] initWithRotation:kGPUImageRotateRight];

[stillCamera addTarget:rotationFilter];
[rotationFilter addTarget:filter];
GPUImageView *filterView = (GPUImageView *)self.view;
[filter addTarget:filterView];

[stillCamera startCameraCapture];

This will give you a live, filtered feed of the still camera's preview video. When you want to capture a photo, you use a callback block like the following:

[stillCamera capturePhotoProcessedUpToFilter:filter withCompletionHandler:^(UIImage *processedImage, NSError *error){
    NSData *dataForJPEGFile = UIImageJPEGRepresentation(processedImage, 0.8);

    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *documentsDirectory = [paths objectAtIndex:0];

    NSError *error2 = nil;
    if (![dataForJPEGFile writeToFile:[documentsDirectory stringByAppendingPathComponent:@"FilteredPhoto.jpg"] options:NSAtomicWrite error:&error2])
    {
        return;
    }
}];

The above code captures a full-size photo processed by the same filter chain used in the preview view and saves that photo to disk as a JPEG in the application's documents directory.

Note that the framework currently can't handle images larger than 2048 pixels wide or high on older devices (those before the iPhone 4S, iPad 2, or Retina iPad) due to texture size limitations. This means that the iPhone 4, whose camera outputs still photos larger than this, won't be able to capture photos this way. A tiling mechanism is being implemented to work around this. All other devices should be able to capture and filter photos using this method.
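
If you need to check this ceiling at runtime, it can be read straight from OpenGL ES. A hedged sketch follows; this helper is not part of the framework's API, it assumes an OpenGL ES context is current on the calling thread, and photo is a UIImage you supply:

// Query the largest texture dimension the GPU supports
// (typically 2048 on pre-A5-class devices, 4096 on newer ones)
GLint maxTextureDimension = 0;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTextureDimension);

BOOL canProcessPhoto = (photo.size.width <= maxTextureDimension) && (photo.size.height <= maxTextureDimension);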

### Processing a still image ###

There are a couple of ways to process a still image and create a result. The first way you can do this is by creating a still image source object and manually creating a filter chain:
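
The example itself is collapsed in this diff view; a sketch of what such a chain looks like follows. GPUImagePicture as the still image source class and its processImage method are assumptions here, and the image name is hypothetical:

// Hypothetical still-image chain: source -> filter, then read back a UIImage
UIImage *inputImage = [UIImage imageNamed:@"SampleImage.jpg"]; // any UIImage you supply
GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:inputImage];
GPUImageSepiaFilter *stillImageFilter = [[GPUImageSepiaFilter alloc] init];

[stillImageSource addTarget:stillImageFilter];
[stillImageSource processImage];
UIImage *filteredImage = [stillImageFilter imageFromCurrentlyProcessedOutput];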
@@ -253,6 +295,7 @@ For single filters that you wish to apply to an image, you can simply do the following:
GPUImageSepiaFilter *stillImageFilter2 = [[GPUImageSepiaFilter alloc] init];
UIImage *quickFilteredImage = [stillImageFilter2 imageByFilteringImage:inputImage];


### Writing a custom filter ###

One significant advantage of this framework over Core Image on iOS (as of iOS 5.0) is the ability to write your own custom image and video processing filters. These filters are supplied as OpenGL ES 2.0 fragment shaders, written in the C-like OpenGL Shading Language.
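
As a hedged illustration of the shape such a filter takes (the SHADER_STRING macro, the textureCoordinate / inputImageTexture naming, and the initWithFragmentShaderFromString: initializer are assumptions based on the framework's filter classes, not confirmed by this commit):

// A color-inverting fragment shader, written in OpenGL ES 2.0 GLSL and
// wrapped in an NSString so it can be compiled at runtime
NSString *const kCustomInvertShaderString = SHADER_STRING
(
 varying highp vec2 textureCoordinate;  // interpolated coordinate from the vertex shader
 uniform sampler2D inputImageTexture;   // the incoming image or video frame

 void main()
 {
     lowp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
     gl_FragColor = vec4((1.0 - textureColor.rgb), textureColor.a);
 }
);

GPUImageFilter *customFilter = [[GPUImageFilter alloc] initWithFragmentShaderFromString:kCustomInvertShaderString];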
22 changes: 7 additions & 15 deletions examples/SimplePhotoFilter/SimplePhotoFilter/PhotoViewController.m
@@ -27,8 +27,8 @@ - (void)loadView
[filterSettingsSlider addTarget:self action:@selector(updateSliderValue:) forControlEvents:UIControlEventValueChanged];
filterSettingsSlider.autoresizingMask = UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleTopMargin;
filterSettingsSlider.minimumValue = 0.0;
- filterSettingsSlider.maximumValue = 0.3;
- filterSettingsSlider.value = 0.05;
+ filterSettingsSlider.maximumValue = 3.0;
+ filterSettingsSlider.value = 1.0;

[primaryView addSubview:filterSettingsSlider];

@@ -49,8 +49,7 @@ - (void)viewDidLoad
// Do any additional setup after loading the view.

stillCamera = [[GPUImageStillCamera alloc] init];
- filter = [[GPUImagePixellateFilter alloc] init];
- // filter = [[GPUImageSketchFilter alloc] init];
+ filter = [[GPUImageGammaFilter alloc] init];
GPUImageRotationFilter *rotationFilter = [[GPUImageRotationFilter alloc] initWithRotation:kGPUImageRotateRight];

[stillCamera addTarget:rotationFilter];
@@ -74,30 +73,23 @@ - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interface

- (IBAction)updateSliderValue:(id)sender
{
- [(GPUImagePixellateFilter *)filter setFractionalWidthOfAPixel:[(UISlider *)sender value]];
- // [(GPUImageSketchFilter *)filter setIntensity:1.0];
+ // [(GPUImagePixellateFilter *)filter setFractionalWidthOfAPixel:[(UISlider *)sender value]];
+ [(GPUImageGammaFilter *)filter setGamma:[(UISlider *)sender value]];
}

- (IBAction)takePhoto:(id)sender;
{
NSLog(@"Took photo");

[filter removeTarget:(GPUImageView *)self.view];

[stillCamera capturePhotoProcessedUpToFilter:filter withCompletionHandler:^(UIImage *processedImage, NSError *error){
- NSData *dataForPNGFile = UIImagePNGRepresentation(processedImage);
+ NSData *dataForPNGFile = UIImageJPEGRepresentation(processedImage, 0.8);

NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0];

NSError *error2 = nil;
- if (![dataForPNGFile writeToFile:[documentsDirectory stringByAppendingPathComponent:@"FilteredPhoto.png"] options:NSAtomicWrite error:&error2])
+ if (![dataForPNGFile writeToFile:[documentsDirectory stringByAppendingPathComponent:@"FilteredPhoto.jpg"] options:NSAtomicWrite error:&error2])
{
return;
}
-
-
- [filter addTarget:(GPUImageView *)self.view];
}];
}

6 changes: 4 additions & 2 deletions framework/Source/GPUImageFilter.m
@@ -207,7 +207,7 @@ - (void)createFilterFBOofSize:(CGSize)currentFBOSize;
glGenFramebuffers(1, &filterFramebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, filterFramebuffer);

NSLog(@"Filter size: %f, %f", currentFBOSize.width, currentFBOSize.height);
NSLog(@"Filter size: %f, %f for filter: %@", currentFBOSize.width, currentFBOSize.height, self);

glBindTexture(GL_TEXTURE_2D, outputTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (int)currentFBOSize.width, (int)currentFBOSize.height, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
@@ -418,6 +418,9 @@ - (void)recreateFilterFBO
{
cachedMaximumOutputSize = CGSizeZero;
[self destroyFilterFBO];
+ [self deleteOutputTexture];
+
+ [self initializeOutputTexture];
[self setFilterFBO];
}

@@ -431,7 +434,6 @@ - (void)setInputSize:(CGSize)newSize;
{
inputTextureSize = newSize;
[self recreateFilterFBO];
NSLog(@"Recreating filter FBO");
}
}

1 change: 0 additions & 1 deletion framework/Source/GPUImageOpenGLESContext.m
@@ -48,7 +48,6 @@ - (void)presentBufferForDisplay;
+ (BOOL)supportsFastTextureUpload;
{
return (CVOpenGLESTextureCacheCreate != NULL);
- // return NO;
}

#pragma mark -
12 changes: 6 additions & 6 deletions framework/Source/GPUImageRotationFilter.m
@@ -35,15 +35,15 @@ - (id)initWithRotation:(GPUImageRotationMode)newRotationMode;

- (void)setInputSize:(CGSize)newSize;
{
+ CGSize processedSize = newSize;
+
if ( (rotationMode == kGPUImageRotateLeft) || (rotationMode == kGPUImageRotateRight) )
{
- inputTextureSize.width = newSize.height;
- inputTextureSize.height = newSize.width;
- }
- else
- {
- inputTextureSize = newSize;
+ processedSize.width = newSize.height;
+ processedSize.height = newSize.width;
}
+
+ [super setInputSize:processedSize];
}

- (void)newFrameReady;
4 changes: 3 additions & 1 deletion framework/Source/GPUImageStillCamera.m
@@ -42,12 +42,14 @@ - (void)removeInputsAndOutputs;
- (void)capturePhotoProcessedUpToFilter:(GPUImageOutput<GPUImageInput> *)finalFilterInChain withCompletionHandler:(void (^)(UIImage *processedImage, NSError *error))block;
{
[photoOutput captureStillImageAsynchronouslyFromConnection:[[photoOutput connections] objectAtIndex:0] completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error) {

[self captureOutput:photoOutput didOutputSampleBuffer:imageSampleBuffer fromConnection:[[photoOutput connections] objectAtIndex:0]];
// Will need an alternate pathway for the iOS 4.0 support here

UIImage *filteredPhoto = [finalFilterInChain imageFromCurrentlyProcessedOutput];

block(filteredPhoto, error);

}];
return;
}
7 changes: 6 additions & 1 deletion framework/Source/GPUImageVideoCamera.m
@@ -255,8 +255,13 @@ - (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CM
CVPixelBufferLockBaseAddress(cameraFrame, 0);

glBindTexture(GL_TEXTURE_2D, outputTexture);
+
+ // glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bufferWidth, bufferHeight, 0, GL_BGRA, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress(cameraFrame));
+
// Using BGRA extension to pull in video frame data directly
- glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bufferWidth, bufferHeight, 0, GL_BGRA, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress(cameraFrame));
+ // The use of bytesPerRow / 4 accounts for a display glitch present in preview video frames when using the photo preset on the camera
+ size_t bytesPerRow = CVPixelBufferGetBytesPerRow(cameraFrame);
+ glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bytesPerRow / 4, bufferHeight, 0, GL_BGRA, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress(cameraFrame));

for (id<GPUImageInput> currentTarget in targets)
{
Expand Down
