iOS Camera Overlay Example Using AVCaptureSession

I made a post back in 2009 on how to overlay images, buttons, and labels on a live camera view using UIImagePicker. Well, since iOS 4 came out there has been a better way to do this, and it's high time I showed you how.

You can get the new project’s source code on GitHub.

This new example project is functionally the same as the other. It looks like this:


AR Overlay Example App

All the app does is let you press the “scan” button, which then shows the “scanning” label for two seconds. That's it. Not very exciting, I know, but it nicely demonstrates how to layer UI objects on top of a live camera view.

This time, instead of a UIImagePicker, we are using AVCaptureSession's preview layer to show the live camera feed, which makes things a little bit easier and much more powerful.

In the project you will see that the app is a standard view-based app created in Xcode 4. The app delegate creates an instance of the view controller, which is an AROverlayViewController. Here is the header and implementation code:

#import <UIKit/UIKit.h>
#import "CaptureSessionManager.h"

@interface AROverlayViewController : UIViewController {
    
}

@property (retain) CaptureSessionManager *captureManager;
@property (nonatomic, retain) UILabel *scanningLabel;

@end
#import "AROverlayViewController.h"

@implementation AROverlayViewController

@synthesize captureManager;
@synthesize scanningLabel;

- (void)viewDidLoad {
	[super viewDidLoad];

	// Set up the capture session manager and turn on the camera preview.
	[self setCaptureManager:[[[CaptureSessionManager alloc] init] autorelease]];
	[[self captureManager] addVideoInput];
	[[self captureManager] addVideoPreviewLayer];

	// Size and center the preview layer to fill the view, then add it as a
	// sublayer so the UI elements below can be layered on top of it.
	CGRect layerRect = [[[self view] layer] bounds];
	[[[self captureManager] previewLayer] setBounds:layerRect];
	[[[self captureManager] previewLayer] setPosition:CGPointMake(CGRectGetMidX(layerRect),
	                                                              CGRectGetMidY(layerRect))];
	[[[self view] layer] addSublayer:[[self captureManager] previewLayer]];

	// The overlay image.
	UIImageView *overlayImageView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"overlaygraphic.png"]];
	[overlayImageView setFrame:CGRectMake(30, 100, 260, 200)];
	[[self view] addSubview:overlayImageView];
	[overlayImageView release];

	// The scan button.
	UIButton *overlayButton = [UIButton buttonWithType:UIButtonTypeCustom];
	[overlayButton setImage:[UIImage imageNamed:@"scanbutton.png"] forState:UIControlStateNormal];
	[overlayButton setFrame:CGRectMake(130, 320, 60, 30)];
	[overlayButton addTarget:self action:@selector(scanButtonPressed) forControlEvents:UIControlEventTouchUpInside];
	[[self view] addSubview:overlayButton];

	// The "Scanning..." label, hidden until the button is pressed.
	UILabel *tempLabel = [[UILabel alloc] initWithFrame:CGRectMake(100, 50, 120, 30)];
	[self setScanningLabel:tempLabel];
	[tempLabel release];
	[scanningLabel setBackgroundColor:[UIColor clearColor]];
	[scanningLabel setFont:[UIFont fontWithName:@"Courier" size:18.0]];
	[scanningLabel setTextColor:[UIColor redColor]];
	[scanningLabel setText:@"Scanning..."];
	[scanningLabel setHidden:YES];
	[[self view] addSubview:scanningLabel];

	[[captureManager captureSession] startRunning];
}

- (void) scanButtonPressed {
	[[self scanningLabel] setHidden:NO];
	[self performSelector:@selector(hideLabel:) withObject:[self scanningLabel] afterDelay:2];
}

- (void)hideLabel:(UILabel *)label {
	[label setHidden:YES];
}

- (void)didReceiveMemoryWarning {
  [super didReceiveMemoryWarning];
}

- (void)dealloc {
  [captureManager release], captureManager = nil;
  [scanningLabel release], scanningLabel = nil;
  [super dealloc];
}

@end

Mostly basic stuff here. The one interesting note is that we create an instance of a CaptureSessionManager class, which is a custom-made class to handle the AVCaptureSession. While it is not required to do this work in a separate class, it is a clean way to do it, and it provides a good place to expand the capture session to do other things, like analyzing the live video output data stream (more on that at the end of this post).

Using the methods of the CaptureSessionManager class, we turn on the video input, then turn on the preview layer and add it to the view controller's view's layer (man, is that a mouthful). The rest of the code just adds the image, button, and label, plus a button method that shows the label for two seconds when pressed.

Here is the header and implementation for the CaptureSessionManager:

#import <CoreMedia/CoreMedia.h>
#import <AVFoundation/AVFoundation.h>

@interface CaptureSessionManager : NSObject {

}

@property (retain) AVCaptureVideoPreviewLayer *previewLayer;
@property (retain) AVCaptureSession *captureSession;

- (void)addVideoPreviewLayer;
- (void)addVideoInput;

@end
#import "CaptureSessionManager.h"

@implementation CaptureSessionManager

@synthesize captureSession;
@synthesize previewLayer;

#pragma mark Capture Session Configuration

- (id)init {
	if ((self = [super init])) {
		// Autorelease the new session; the retain property takes ownership.
		[self setCaptureSession:[[[AVCaptureSession alloc] init] autorelease]];
	}
	return self;
}

- (void)addVideoPreviewLayer {
	[self setPreviewLayer:[[[AVCaptureVideoPreviewLayer alloc] initWithSession:[self captureSession]] autorelease]];
	[[self previewLayer] setVideoGravity:AVLayerVideoGravityResizeAspectFill];
}

- (void)addVideoInput {
	AVCaptureDevice *videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
	if (videoDevice) {
		// Check the returned input rather than the NSError; the error is
		// only guaranteed to be set when the input comes back nil.
		NSError *error = nil;
		AVCaptureDeviceInput *videoIn = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];
		if (videoIn) {
			if ([[self captureSession] canAddInput:videoIn])
				[[self captureSession] addInput:videoIn];
			else
				NSLog(@"Couldn't add video input");
		}
		else
			NSLog(@"Couldn't create video input: %@", [error localizedDescription]);
	}
	else
		NSLog(@"Couldn't create video capture device");
}

- (void)dealloc {

	[[self captureSession] stopRunning];

	[previewLayer release], previewLayer = nil;
	[captureSession release], captureSession = nil;

	[super dealloc];
}

@end

This is a pretty basic class that you could build on to do a lot more, but the basics are all we need for this example. In the header we define a couple of properties, one for the session and one for the preview layer, and a couple of methods, one to start video input and one to turn on the preview layer.

In the init method, all we do is initialize the session (note the autorelease, since the retain property takes ownership):

[self setCaptureSession:[[[AVCaptureSession alloc] init] autorelease]];

The addVideoPreviewLayer and addVideoInput methods are similarly simple. The addVideoPreviewLayer method just sets up the preview layer, and the addVideoInput method gets the default video device (with some error checking, which is always a good idea, right?) and then adds an input for that device to the session.

There is really not much to it and I think the best advice if you are trying to do something like this is to just download the source code and take a look at how it works for yourself.

I mentioned that this method gives you more power. That power comes from the ability to get and process the video output data stream using AVCaptureVideoDataOutput, rather than having to take screenshots using an undocumented API and then process the resulting image as you had to do in the past. A great example of this is shown in the WWDC 2010 Session 409 video, where they show you how to detect a certain color in the video output. The sample project used in that video, called FindMyiCone, is available in the source code for WWDC 2010. Everyone in the iOS developer program should have access to both of those in the dev center.
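To give you a rough idea of what that looks like, here is a minimal sketch of how you might extend CaptureSessionManager to tap the video data stream. This is not part of the sample project: the addVideoDataOutput method name and the BGRA format choice are my own, and the class would also need to declare that it conforms to the AVCaptureVideoDataOutputSampleBufferDelegate protocol.

// Hypothetical addition to CaptureSessionManager (not in the sample project)
// showing one way to capture the raw frames from the session.
- (void)addVideoDataOutput {
	AVCaptureVideoDataOutput *videoOut = [[[AVCaptureVideoDataOutput alloc] init] autorelease];

	// Ask for BGRA frames, a convenient pixel format for CPU-side processing.
	[videoOut setVideoSettings:[NSDictionary dictionaryWithObject:
		[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
		forKey:(id)kCVPixelBufferPixelFormatTypeKey]];

	// Deliver frames on a background queue so the main thread stays responsive.
	dispatch_queue_t queue = dispatch_queue_create("videoQueue", NULL);
	[videoOut setSampleBufferDelegate:self queue:queue];
	dispatch_release(queue);

	if ([[self captureSession] canAddOutput:videoOut])
		[[self captureSession] addOutput:videoOut];
	else
		NSLog(@"Couldn't add video data output");
}

// Delegate callback, called once per frame; analyze the pixels here.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
		didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
		fromConnection:(AVCaptureConnection *)connection {
	CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
	// ... inspect pixelBuffer, e.g. look for a particular color ...
}

The delegate callback fires for every frame the camera produces, which is exactly the hook a project like FindMyiCone uses to do its color detection.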
