iOS Video Recording

Date: 2022-06-11

This article summarizes my earlier camera work: building a custom camera for video recording, then processing and saving the footage. I hope it serves as a useful reference.

Framework introduction: AVFoundation


AVFoundation is commonly used for media capture, editing, and playback, audio recording and playback, and audio/video encoding and decoding.

Common classes: AVCaptureDevice, AVCaptureDeviceInput, AVCapturePhotoOutput, AVCaptureVideoPreviewLayer,
AVAsset, AVAssetReader, AVAssetWriter, CMSampleBuffer, AVPlayer, CMTime, AVCaptureMovieFileOutput, AVCaptureMetadataOutput, etc.

  • AVAsset is an abstract class that defines the interface for asset files. Its concrete subclass AVURLAsset is created from a URL, which can point to a local or network resource.

  • AVAssetReader reads the media data of an AVAsset and can decode encoded media data into usable raw samples.

  • AVAssetWriter writes media data (CMSampleBuffers) to a specified file.

  • CMSampleBuffer is a Core Foundation object representing a compressed or uncompressed sample of audio or video data.

  • CMTime is a structure that represents time as a fraction (a value over a timescale); see the sketch after this list.

  • AVCaptureMovieFileOutput writes captured audio and video data to a file.

  • AVCaptureMetadataOutput is the metadata capture output. It is very powerful and can scan barcodes (including UPC-E product codes), QR codes, faces, and other metadata.
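
As a quick illustration of CMTime's fractional representation, here is a minimal sketch (requires the CoreMedia framework):

    CMTime frameDuration = CMTimeMake(1, 30); // value/timescale = 1/30 of a second, i.e. one frame at 30 fps
    CMTime twoSeconds = CMTimeMakeWithSeconds(2.0, 600); // 2 seconds at the common timescale of 600
    CMTime total = CMTimeAdd(frameDuration, twoSeconds); // CoreMedia reconciles the timescales
    NSLog(@"total = %f seconds", CMTimeGetSeconds(total));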

Preparation

1. Check whether the app already has camera permission:

AVAuthorizationStatus authStatus = [AVCaptureDevice authorizationStatusForMediaType:AVMediaTypeVideo];
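
A minimal sketch of branching on the returned status (the comments mark where your own handling goes):

    switch (authStatus) {
        case AVAuthorizationStatusAuthorized:
            // Already authorized: configure the session directly
            break;
        case AVAuthorizationStatusNotDetermined:
            // Not asked yet: request access as shown below
            break;
        case AVAuthorizationStatusDenied:
        case AVAuthorizationStatusRestricted:
            // No access: guide the user to Settings
            break;
    }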

If permission has not been determined yet, request it (remember to add NSCameraUsageDescription and NSMicrophoneUsageDescription to Info.plist, or the app will crash when it accesses the devices):

[AVCaptureDevice requestAccessForMediaType:AVMediaTypeVideo completionHandler:^(BOOL granted) {
    // The handler may run on an arbitrary queue; hop back to the main queue asynchronously
    dispatch_async(dispatch_get_main_queue(), ^{
        if (granted) {
            // Permission granted
        } else {
            // Permission denied
        }
    });
}];
  2. If the interface defaults to landscape, the video orientation needs to be rotated to match the screen orientation (see the sketch below).
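
A hypothetical helper (not part of the original code) that maps the interface orientation to a capture video orientation might look like this:

    // Hypothetical helper: map the UI orientation to the capture orientation
    - (AVCaptureVideoOrientation)videoOrientationForInterfaceOrientation:(UIInterfaceOrientation)orientation {
        switch (orientation) {
            case UIInterfaceOrientationLandscapeLeft:
                return AVCaptureVideoOrientationLandscapeLeft;
            case UIInterfaceOrientationLandscapeRight:
                return AVCaptureVideoOrientationLandscapeRight;
            case UIInterfaceOrientationPortraitUpsideDown:
                return AVCaptureVideoOrientationPortraitUpsideDown;
            default:
                return AVCaptureVideoOrientationPortrait;
        }
    }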

Custom camera configuration


The main parts of the capture system architecture are the session, its inputs, and its outputs.

A capture session connects one or more inputs to one or more outputs. Inputs are the sources of media: capture devices such as cameras and microphones. Outputs take media data from the inputs, for example writing a movie file to disk.

  1. Create the following properties:
@property (nonatomic, strong) AVCaptureSession *session; // Combines the inputs and outputs and drives the capture devices
@property (nonatomic, strong) AVCaptureDevice *device; // Video input device (camera)
@property (nonatomic, strong) AVCaptureDevice *audioDevice; // Audio input device (microphone)
@property (nonatomic, strong) AVCaptureDeviceInput *deviceInput; // Video input
@property (nonatomic, strong) AVCaptureDeviceInput *audioInput; // Audio input
@property (nonatomic, strong) AVCaptureAudioDataOutput *audioPutData; // Audio data output
@property (nonatomic, strong) AVCaptureVideoDataOutput *videoPutData; // Video data output
@property (nonatomic, strong) AVCaptureVideoPreviewLayer *previewLayer; // Preview layer
@property (nonatomic, strong) AVCaptureConnection *connection;
@property (nonatomic, strong) AVAssetWriter *writer; // Writes media data to file
@property (nonatomic, strong) AVAssetWriterInput *writerAudioInput; // Audio writer input
@property (nonatomic, strong) AVAssetWriterInput *writerVideoInput; // Video writer input
@property (nonatomic, assign) BOOL canWriting; // Flag set once the writer session has started (used in step 3 of method 1)
  2. Initialize the session. AVCaptureSession manages and coordinates the input and output devices.
    self.session = [[AVCaptureSession alloc] init];
    if ([self.session canSetSessionPreset:AVCaptureSessionPresetHigh]){
        self.session.sessionPreset = AVCaptureSessionPresetHigh;
    }else if ([self.session canSetSessionPreset:AVCaptureSessionPresetiFrame960x540]) {
        self.session.sessionPreset = AVCaptureSessionPresetiFrame960x540;
    }
  3. Get the video input device (camera):
    self.device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    // Note: this call only CHECKS support; to apply the focus mode, set device.focusMode
    // inside a lockForConfiguration:/unlockForConfiguration: pair (see the focus section below)
    [_device isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus];
  4. Create the video input and add it to the session:
    NSError *error = nil;
    self.deviceInput = [[AVCaptureDeviceInput alloc] initWithDevice:self.device error:&error];
    if (!error) {
        if ([self.session canAddInput:self.deviceInput]) {
            [self.session addInput:self.deviceInput];
        }
    }
  5. Create the video data output and add it to the session:
    NSDictionary *videoSetting = @{(id)kCVPixelBufferPixelFormatTypeKey:@(kCVPixelFormatType_32BGRA)};
    self.videoPutData = [[AVCaptureVideoDataOutput alloc] init];
    self.videoPutData.videoSettings = videoSetting;
    self.videoPutData.alwaysDiscardsLateVideoFrames = YES; // Discard late frames immediately to save memory (the default is YES)
    // The sample buffer delegate queue must be a serial queue so frames arrive in order
    dispatch_queue_t videoQueue = dispatch_queue_create("video", DISPATCH_QUEUE_SERIAL);
    [self.videoPutData setSampleBufferDelegate:self queue:videoQueue];
    if ([self.session canAddOutput:self.videoPutData]) {
        [self.session addOutput:self.videoPutData];
    }
    // Configure the connection to control the orientation of the captured video
    AVCaptureConnection *imageConnection = [self.videoPutData connectionWithMediaType:AVMediaTypeVideo];
    if (imageConnection.supportsVideoOrientation) {
        imageConnection.videoOrientation = AVCaptureVideoOrientationLandscapeRight;
    }
  6. Get the audio input device (microphone):
    self.audioDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
  7. Create the audio input and add it to the session:
    NSError *audioError = nil;
    self.audioInput = [[AVCaptureDeviceInput alloc] initWithDevice:self.audioDevice error:&audioError];
    if (!audioError) {
        if ([self.session canAddInput:self.audioInput]) {
            [self.session addInput:self.audioInput];
        }
    }
  8. Create the audio data output and add it to the session:
    self.audioPutData = [[AVCaptureAudioDataOutput alloc] init];
    if ([self.session canAddOutput:self.audioPutData]) {
        [self.session addOutput:self.audioPutData];
    }
    // As with video, the delegate queue must be serial
    dispatch_queue_t audioQueue = dispatch_queue_create("audio", DISPATCH_QUEUE_SERIAL);
    [self.audioPutData setSampleBufferDelegate:self queue:audioQueue]; // Set the sample buffer delegate
  9. Initialize the preview layer. The session drives the inputs to collect data; the preview layer renders the captured image on screen.
    self.previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.session];
    self.previewLayer.frame = CGRectMake(0, 0, width, height);
    self.previewLayer.connection.videoOrientation = AVCaptureVideoOrientationLandscapeRight; // Match the capture orientation
    self.previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    [self.view.layer addSublayer:self.previewLayer];
  10. Start the session:
    [self.session startRunning];
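
Note that startRunning is a blocking call that can take noticeable time, so Apple recommends invoking it off the main thread; a minimal sketch:

    // -startRunning blocks until the session starts; keep it off the main thread
    dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^{
        [self.session startRunning];
    });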

Video capture attribute settings (optional)

  1. Switch cameras (the getCameraDeviceWithPosition: helper is sketched after this code block):
[self.session stopRunning];
    // 1.  Get current camera
    AVCaptureDevicePosition position = self.deviceInput.device.position;
    
    //2.  Get the camera to be displayed
    if (position == AVCaptureDevicePositionBack) {
        position = AVCaptureDevicePositionFront;
    } else {
        position = AVCaptureDevicePositionBack;
    }
    
    // 3.  Create a new device based on the current camera
    AVCaptureDevice *device = [self getCameraDeviceWithPosition:position];
    
    // 4.  Create input according to the new device
    AVCaptureDeviceInput *newInput = [AVCaptureDeviceInput deviceInputWithDevice:device error:nil];
    
    //5.  Switch input in session
    [self.session beginConfiguration];
    [self.session removeInput:self.deviceInput];
    [self.session addInput:newInput];
    [self.session commitConfiguration];
    self.deviceInput = newInput;
    
    [self.session startRunning];
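
The getCameraDeviceWithPosition: helper used above is not shown in the original; a minimal sketch using the iOS 10+ discovery-session API might be:

    // Sketch of the helper referenced above (assumes iOS 10+)
    - (AVCaptureDevice *)getCameraDeviceWithPosition:(AVCaptureDevicePosition)position {
        AVCaptureDeviceDiscoverySession *discovery =
            [AVCaptureDeviceDiscoverySession discoverySessionWithDeviceTypes:@[AVCaptureDeviceTypeBuiltInWideAngleCamera]
                                                                   mediaType:AVMediaTypeVideo
                                                                    position:position];
        return discovery.devices.firstObject;
    }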
  2. Flash (a torch sketch for lighting video recording appears at the end of this section):
if ([self.device lockForConfiguration:nil]) {

        if ([self.device hasFlash]) {

            if (self.device.flashMode == AVCaptureFlashModeAuto) {
                self.device.flashMode = AVCaptureFlashModeOn;
                [self.flashBtn setImage:[UIImage imageNamed:@"shanguangdeng_kai"] forState:UIControlStateNormal];

            }else if (self.device.flashMode == AVCaptureFlashModeOn){
                self.device.flashMode = AVCaptureFlashModeOff;
                [self.flashBtn setImage:[UIImage imageNamed:@"shanguangdeng_guan"] forState:UIControlStateNormal];

            }else{

                self.device.flashMode = AVCaptureFlashModeAuto;
                [self.flashBtn setImage:[UIImage imageNamed:@"shanguangdeng_zidong"] forState:normal];
            }
        }
        [self.device unlockForConfiguration];
    }
  3. Focus
//Add focus gesture
- (void)addTap {
    UITapGestureRecognizer *tap = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(focusGesture:)];
    [self.view addGestureRecognizer:tap];
}
- (void)focusGesture:(UITapGestureRecognizer*)gesture{
    CGPoint point = [gesture locationInView:gesture.view];
    CGSize size = self.view.bounds.size;
    // focusPointOfInterest uses a normalized coordinate space from (0, 0) at the top-left
    // to (1, 1) at the bottom-right of the frame. Depending on the video orientation the
    // simple mapping below can land in the wrong place; adapt it to your actual layout
    CGPoint focusPoint = CGPointMake(point.x / size.width, point.y / size.height);
    if ([self.device lockForConfiguration:nil]) {
        [self.session beginConfiguration];
        // Note: the point of interest must be set BEFORE the focus mode
        // Focus point
        if ([self.device isFocusPointOfInterestSupported]) {
            [self.device setFocusPointOfInterest:focusPoint];
        }
        //Focus mode
        if ([self.device isFocusModeSupported:AVCaptureFocusModeAutoFocus]) {
            [self.device setFocusMode:AVCaptureFocusModeAutoFocus];
        }else{
            Nslog (@ "focus mode modification failed");
        }
        //Position of exposure point
        if ([self.device isExposurePointOfInterestSupported]) {
            [self.device setExposurePointOfInterest:focusPoint];
        }
        //Exposure mode
        if ([self.device isExposureModeSupported:AVCaptureExposureModeContinuousAutoExposure]) {
            [self.device setExposureMode:AVCaptureExposureModeContinuousAutoExposure];
        } else {
            Nslog (@ "exposure mode modification failed");
        }
        [self.device unlockForConfiguration];
        [self.session commitConfiguration];
    }
}
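
One more note on the flash section above: AVCaptureFlashMode mainly affects still photo capture. To light the scene continuously while recording video, the torch is usually what you want; a minimal sketch:

    // Torch (continuous light) is what illuminates video recording
    if ([self.device lockForConfiguration:nil]) {
        if (self.device.hasTorch && [self.device isTorchModeSupported:AVCaptureTorchModeOn]) {
            self.device.torchMode = AVCaptureTorchModeOn; // or AVCaptureTorchModeOff / AVCaptureTorchModeAuto
        }
        [self.device unlockForConfiguration];
    }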

Video recording method 1: writing with AVAssetWriter

Video recording needs a path in the sandbox where the data is written during recording. Once all of the video data has been written, the complete video is available at that path.

  1. Build the file path:
- (NSURL *)createVideoFilePathUrl
{
    NSString *documentPath = [NSHomeDirectory() stringByAppendingString:@"/Documents/shortVideo"];

    NSDateFormatter *dateFormatter = [[NSDateFormatter alloc] init];
    [dateFormatter setDateFormat:@"yyyyMMddHHmmss"];

    NSString *destDateString = [dateFormatter stringFromDate:[NSDate date]];
    NSString *videoName = [destDateString stringByAppendingString:@".mp4"];

    NSString *filePath = [documentPath stringByAppendingFormat:@"/%@",videoName];

    NSFileManager *manager = [NSFileManager defaultManager];
    BOOL isDir;
    if (![manager fileExistsAtPath:documentPath isDirectory:&isDir]) {
        [manager createDirectoryAtPath:documentPath withIntermediateDirectories:YES attributes:nil error:nil];

    }
    
    return [NSURL fileURLWithPath:filePath];
}
  2. Start recording: complete the recording configuration.

2.1 Get the storage path. The path lives in the sandbox and must be unique (here a timestamp serves as the file name):

self.preVideoURL = [self createVideoFilePathUrl];

2.2 Dispatch the writer configuration onto a background queue:

dispatch_queue_t writeQueueCreate = dispatch_queue_create("writeQueueCreate", DISPATCH_QUEUE_CONCURRENT);
dispatch_async(writeQueueCreate, ^{
    // Steps 2.3 to 2.5 below run inside this block
});

2.3 Create the AVAssetWriter:

NSError *error = nil;
self.writer = [AVAssetWriter assetWriterWithURL:self.preVideoURL fileType:AVFileTypeMPEG4 error:&error];

2.4 Create the video writer input and add it to the writer. Here you can configure the format, dimensions, bit rate, frame rate, and so on:

NSInteger numPixels = width * height;
// Bits per pixel
CGFloat bitsPerPixel = 12.0;
NSInteger bitsPerSecond = numPixels * bitsPerPixel;
// Bit rate and frame rate settings
NSDictionary *compressionProperties = @{ AVVideoAverageBitRateKey : @(bitsPerSecond),
                                         AVVideoExpectedSourceFrameRateKey : @(30),
                                         AVVideoMaxKeyFrameIntervalKey : @(30),
                                         AVVideoProfileLevelKey : AVVideoProfileLevelH264BaselineAutoLevel };
// Video properties
NSDictionary *videoSetting = @{ AVVideoCodecKey : AVVideoCodecTypeH264,
                                AVVideoWidthKey : @(width),
                                AVVideoHeightKey : @(height),
                                AVVideoScalingModeKey : AVVideoScalingModeResizeAspectFill,
                                AVVideoCompressionPropertiesKey : compressionProperties };
self.writerVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoSetting];
self.writerVideoInput.expectsMediaDataInRealTime = YES; // Must be YES: the data arrives in real time from the capture session

if ([self.writer canAddInput:self.writerVideoInput]) {
     [self.writer addInput:self.writerVideoInput];
}

2.5 Create the audio writer input and add it to the writer. Channel count, sample rate, and bit rate are configurable:

NSDictionary *audioSetting = @{ AVEncoderBitRatePerChannelKey : @(28000),
                                AVFormatIDKey : @(kAudioFormatMPEG4AAC),
                                AVNumberOfChannelsKey : @(1),
                                AVSampleRateKey : @(22050) };
self.writerAudioInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:audioSetting];

self.writerAudioInput.expectsMediaDataInRealTime = YES; // Must be YES: the data arrives in real time from the capture session

if ([self.writer canAddInput:self.writerAudioInput]) {
    [self.writer addInput:self.writerAudioInput];
}

Starting the writer only when the first video frame arrives (see step 3) avoids writing audio before any video, which would otherwise produce a clip that begins with sound but no picture (in practice the problem is subtle; add this according to your needs).
The startSessionAtSourceTime: method sets the source time at which the written session begins.

  3. File writing. Starting the session at the first sample's timestamp (startSessionAtSourceTime:) avoids a blank stretch at the beginning of the video.
    In the delegate callback captureOutput:didOutputSampleBuffer:fromConnection:, start the writer when the first video sample arrives, then append each subsequent sample to the matching writer input:
    CMFormatDescriptionRef desMedia = CMSampleBufferGetFormatDescription(sampleBuffer);
    CMMediaType mediaType = CMFormatDescriptionGetMediaType(desMedia);
    if (mediaType == kCMMediaType_Video) {
        if (!self.canWriting) {
            [self.writer startWriting];
            CMTime timestamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
            self.canWriting = YES;
            [self.writer startSessionAtSourceTime:timestamp];
        }
    }
    
    if (self.canWriting) {
        if (mediaType == kCMMediaType_Video) {
            if (self.writerVideoInput.readyForMoreMediaData) {
                BOOL success = [self.writerVideoInput appendSampleBuffer:sampleBuffer];
                if (!success) {
                    NSLog(@"video write failed");
                }
            }
        }else if (mediaType == kCMMediaType_Audio){
            if (self.writerAudioInput.readyForMoreMediaData) {
                BOOL success = [self.writerAudioInput appendSampleBuffer:sampleBuffer];
                if (!success) {
                    NSLog(@"audio write failed");
                }
            }
        }
    }
  4. Stop recording.
    Create a background queue and perform the finish operation there:
__weak typeof(self) weakSelf = self;
dispatch_queue_t writeQueue = dispatch_queue_create("writeQueue", DISPATCH_QUEUE_CONCURRENT);
dispatch_async(writeQueue, ^{
    if (weakSelf.writer.status == AVAssetWriterStatusWriting) {
        // Mark the inputs as finished before finalizing the file
        [weakSelf.writerVideoInput markAsFinished];
        [weakSelf.writerAudioInput markAsFinished];
        [weakSelf.writer finishWritingWithCompletionHandler:^{
            // Writing finished; the file at preVideoURL is now complete
        }];
    }
});

Video recording method 2: writing with AVCaptureMovieFileOutput

  • 1. Create the movie file output. Only a single movie output is needed; no separate audio output is required.
@property (nonatomic, strong) AVCaptureMovieFileOutput *movieFileOutPut; // Movie file output
  …………
  …………
    // Create the movie output and add it to the session
    self.movieFileOutPut = [[AVCaptureMovieFileOutput alloc] init];
    // Configure properties on the output's connection, e.g. video stabilization
    AVCaptureConnection *captureConnection = [self.movieFileOutPut connectionWithMediaType:AVMediaTypeVideo];
    // Video stabilization was introduced with iOS 6 and the iPhone 4S. The iPhone 6 added a
    // smoother, more powerful mode called cinematic video stabilization, with a changed API
    // (at the time documented only in the headers). Stabilization is configured on the
    // AVCaptureConnection, not on the capture device, and since not every device format
    // supports every stabilization mode, check for support first:
    if ([captureConnection isVideoStabilizationSupported]) {
        captureConnection.preferredVideoStabilizationMode = AVCaptureVideoStabilizationModeAuto;
    }
    //Preview layer and video orientation are consistent
    captureConnection.videoOrientation = AVCaptureVideoOrientationLandscapeRight;
    //Add device output to session
    if ([_session canAddOutput:self.movieFileOutPut]) {
        [_session addOutput:self.movieFileOutPut];
    }
  • 2. Generate the storage path (same as in method 1).
  • 3. Call the recording method with the path; the file is written automatically. There is no file-writer object to configure:
  [self.movieFileOutPut startRecordingToOutputFileURL:self.preVideoURL recordingDelegate:self];  
  • 4. Finish recording:
  [self.movieFileOutPut stopRecording];
  • 5. In the delegate method, observe the recording-finished state and obtain the file:
  -(void)captureOutput:(AVCaptureFileOutput *)output didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL fromConnections:(NSArray<AVCaptureConnection *> *)connections error:(NSError *)error {
    …………
}

AVCaptureMovieFileOutput provides pause and resume recording methods, but they are only available on macOS.

AVAssetWriter does not support pausing recording either. In my test, an attempt to pause file writing produced a blank segment and scrambled the audio timing; the writer's status enumeration has no paused state, so pausing is simply not supported.

Comparison of the two recording methods

Similarities: both collect data through an AVCaptureSession, the video and audio inputs are identical, and the preview works the same way.
Differences:

  • 1. AVCaptureMovieFileOutput is relatively simple: only one output is required.
    AVAssetWriter needs two separate outputs, AVCaptureVideoDataOutput and AVCaptureAudioDataOutput, and you process their data yourself.
  • 2. AVAssetWriter exposes more configuration parameters and is therefore more flexible.
  • 3. File handling differs: AVAssetWriter hands you the real-time data stream.
    With AVCaptureMovieFileOutput, if you want to clip the video, the system has already written the data to a file, so you must read the complete video back from the file and then process it;
    with AVAssetWriter you receive the data stream before it has been composed into a video, so you can process it directly.

Video processing

After recording, you can load the video file from the path above for playback, saving, and other operations.
Saving

PHPhotoLibrary *photoLibrary = [PHPhotoLibrary sharedPhotoLibrary];
[photoLibrary performChanges:^{
    [PHAssetChangeRequest creationRequestForAssetFromVideoAtFileURL:self.preVideoURL];
} completionHandler:^(BOOL success, NSError * _Nullable error) {
    if (success) {
        NSLog(@"Saved video to album");
    } else {
        NSLog(@"Failed to save video to album");
    }
}];
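
Saving also requires photo library permission (NSPhotoLibraryAddUsageDescription in Info.plist plus a runtime request). A minimal sketch using the iOS 14+ add-only access level:

    // Request add-only photo library access before saving (iOS 14+;
    // use +requestAuthorization: on earlier systems)
    [PHPhotoLibrary requestAuthorizationForAccessLevel:PHAccessLevelAddOnly
                                               handler:^(PHAuthorizationStatus status) {
        if (status == PHAuthorizationStatusAuthorized || status == PHAuthorizationStatusLimited) {
            // Safe to call performChanges: as above
        }
    }];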

Photo attribute settings (optional)

For camera photo attribute settings, see: https://www.jianshu.com/p/e2de8a85b8aa