
How do I export a UIImage array as a movie?

powerit 2023. 6. 27. 22:35


I have a serious problem: I have an NSArray with several UIImage objects. What I now want to do is create a movie from those UIImages, but I don't have any idea how to do so.

I hope somebody can help me or send me a code snippet that does what I want.

Edit: For future reference — if the video comes out distorted after applying one of these solutions, make sure the width of the images/area you are capturing is a multiple of 16. Found after many hours of struggling here:
Why does my movie from UIImages get distorted?

Solution found here (make sure to follow it exactly):
http://codethink.no-ip.org/wordpress/archives/673
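
As a quick illustration of that 16-pixel constraint, a capture width can be rounded down before rendering; this helper is mine, not from the original question:

// Hypothetical helper: round a capture width down to a multiple of 16 to
// avoid the row-alignment distortion described in the edit above.
func alignedWidth(_ width: Int) -> Int {
    return (width / 16) * 16
}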

Take a look at AVAssetWriter and the rest of the AVFoundation framework. The writer has an input of type AVAssetWriterInput, which in turn has a method called appendSampleBuffer: that lets you add individual frames to a video stream. Essentially, you will have to:

Wire the writer:

NSError *error = nil;
AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:
    [NSURL fileURLWithPath:somePath] fileType:AVFileTypeQuickTimeMovie
    error:&error];
NSParameterAssert(videoWriter);

NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
    AVVideoCodecH264, AVVideoCodecKey,
    [NSNumber numberWithInt:640], AVVideoWidthKey,
    [NSNumber numberWithInt:480], AVVideoHeightKey,
    nil];
AVAssetWriterInput* writerInput = [[AVAssetWriterInput
    assetWriterInputWithMediaType:AVMediaTypeVideo
    outputSettings:videoSettings] retain]; //retain should be removed if ARC

NSParameterAssert(writerInput);
NSParameterAssert([videoWriter canAddInput:writerInput]);
[videoWriter addInput:writerInput];

Start a session:

[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:…]; // use kCMTimeZero if unsure

Write some samples:

// Or you can use AVAssetWriterInputPixelBufferAdaptor.
// That lets you feed the writer input data from a CVPixelBuffer
// that’s quite easy to create from a CGImage.
[writerInput appendSampleBuffer:sampleBuffer];

Finish the session:

[writerInput markAsFinished];
[videoWriter endSessionAtSourceTime:…]; // optional, can call finishWriting without specifying endTime
[videoWriter finishWriting]; // deprecated in iOS 6
/*
[videoWriter finishWritingWithCompletionHandler:...]; // iOS 6.0+
*/

You will still have to fill in a lot of blanks, but I think the only remaining really hard part is getting a pixel buffer from a CGImage:

- (CVPixelBufferRef) newPixelBufferFromCGImage: (CGImageRef) image
{
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
        [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
        [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
        nil];
    CVPixelBufferRef pxbuffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, frameSize.width,
        frameSize.height, kCVPixelFormatType_32ARGB, (CFDictionaryRef) options, 
        &pxbuffer);
    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    NSParameterAssert(pxdata != NULL);

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata, frameSize.width,
        frameSize.height, 8, 4*frameSize.width, rgbColorSpace, 
        kCGImageAlphaNoneSkipFirst);
    NSParameterAssert(context);
    CGContextConcatCTM(context, frameTransform);
    CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image), 
        CGImageGetHeight(image)), image);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);

    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);

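    // Per the Core Foundation "Create Rule", the caller owns the returned buffer
    // and must call CVPixelBufferRelease() when finished with it (CoreVideo
    // objects are not managed by ARC).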
    return pxbuffer;
}

frameSize is a CGSize describing your target frame size, and frameTransform is a CGAffineTransform that lets you transform the images when you draw them into frames.
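
For illustration, in Swift those two values could be as simple as the following (the size is a placeholder — use your own target dimensions):

import CoreGraphics

let frameSize = CGSize(width: 640, height: 480)   // target frame size
let frameTransform = CGAffineTransform.identity   // draw the images unchanged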

Updated for Swift 5

Last week I set out to write iOS code to generate a video from images. I had a little AVFoundation experience, but had never even heard of a CVPixelBuffer. I came across the answers on this page and worked through them. It took several days to dissect everything and put it all back together in Swift in a way that made sense to my brain. Below is what I came up with.

NOTE: If you copy/paste all the code below into a single Swift file, it should compile. You will just need to tweak loadImages() and the RenderSettings values.

Part 1: Setting things up

Here, all the export-related settings are gathered into a single RenderSettings struct.

import AVFoundation
import UIKit
import Photos

struct RenderSettings {

    var size: CGSize = .zero
    var fps: Int32 = 6   // frames per second
    var avCodecKey = AVVideoCodecType.h264
    var videoFilename = "render"
    var videoFilenameExt = "mp4"

    var outputURL: URL {
        // Use the CachesDirectory so the rendered video file sticks around as long as we need it to.
        // Using the CachesDirectory ensures the file won't be included in a backup of the app.
        let fileManager = FileManager.default
        if let tmpDirURL = try? fileManager.url(for: .cachesDirectory, in: .userDomainMask, appropriateFor: nil, create: true) {
            return tmpDirURL.appendingPathComponent(videoFilename).appendingPathExtension(videoFilenameExt)
        }
        fatalError("URLForDirectory() failed")
    }
}

Part 2: The ImageAnimator

The ImageAnimator class knows about your images and uses the VideoWriter class to perform the rendering. The idea is to keep the video-content code separate from the low-level AVFoundation code. I also added saveToLibrary() here as a class function, called at the end of the chain to save the video to the Photo Library.

class ImageAnimator {

// Apple suggests a timescale of 600 because it's a multiple of standard video rates 24, 25, 30, 60 fps etc.
static let kTimescale: Int32 = 600

let settings: RenderSettings
let videoWriter: VideoWriter
var images: [UIImage]!

var frameNum = 0

class func saveToLibrary(videoURL: URL) {
    PHPhotoLibrary.requestAuthorization { status in
        guard status == .authorized else { return }

        PHPhotoLibrary.shared().performChanges({
            PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: videoURL)
        }) { success, error in
            if !success {
                print("Could not save video to photo library:", error)
            }
        }
    }
}

class func removeFileAtURL(fileURL: URL) {
    do {
        try FileManager.default.removeItem(atPath: fileURL.path)
    }
    catch _ as NSError {
        // Assume file doesn't exist.
    }
}

init(renderSettings: RenderSettings) {
    settings = renderSettings
    videoWriter = VideoWriter(renderSettings: settings)
    //images = loadImages()
}

func render(completion: (()->Void)?) {

    // The VideoWriter will fail if a file exists at the URL, so clear it out first.
    ImageAnimator.removeFileAtURL(fileURL: settings.outputURL)

    videoWriter.start()
    videoWriter.render(appendPixelBuffers: appendPixelBuffers) {
        ImageAnimator.saveToLibrary(videoURL: self.settings.outputURL)
        completion?()
    }

}

// This is the callback function for VideoWriter.render()
func appendPixelBuffers(writer: VideoWriter) -> Bool {

    let frameDuration = CMTimeMake(value: Int64(ImageAnimator.kTimescale / settings.fps), timescale: ImageAnimator.kTimescale)

    while !images.isEmpty {

        if writer.isReadyForData == false {
            // Inform writer we have more buffers to write.
            return false
        }

        let image = images.removeFirst()
        let presentationTime = CMTimeMultiply(frameDuration, multiplier: Int32(frameNum))
        let success = videoWriter.addImage(image: image, withPresentationTime: presentationTime)
        if success == false {
            fatalError("addImage() failed")
        }

        frameNum += 1
    }

    // Inform writer all buffers have been written.
    return true
}

}
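
For reference, the loadImages() that init(renderSettings:) references (commented out) can be as simple as the version shown further down in the Swift 3 translation — the file names here are placeholders:

// Replace this logic with your own image source.
func loadImages() -> [UIImage] {
    var images = [UIImage]()
    for index in 1...10 {
        let filename = "\(index).jpg"   // assumes these images exist in the bundle
        images.append(UIImage(named: filename)!)
    }
    return images
}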

Part 3: The VideoWriter

The VideoWriter class does all the AVFoundation heavy lifting. It is mostly a wrapper around AVAssetWriter and AVAssetWriterInput. It also contains fancy code, written by someone other than me, that knows how to translate an image into a CVPixelBuffer.

class VideoWriter {

let renderSettings: RenderSettings

var videoWriter: AVAssetWriter!
var videoWriterInput: AVAssetWriterInput!
var pixelBufferAdaptor: AVAssetWriterInputPixelBufferAdaptor!

var isReadyForData: Bool {
    return videoWriterInput?.isReadyForMoreMediaData ?? false
}

class func pixelBufferFromImage(image: UIImage, pixelBufferPool: CVPixelBufferPool, size: CGSize) -> CVPixelBuffer {

    var pixelBufferOut: CVPixelBuffer?

    let status = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pixelBufferPool, &pixelBufferOut)
    if status != kCVReturnSuccess {
        fatalError("CVPixelBufferPoolCreatePixelBuffer() failed")
    }

    let pixelBuffer = pixelBufferOut!

    CVPixelBufferLockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))

    let data = CVPixelBufferGetBaseAddress(pixelBuffer)
    let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
    let context = CGContext(data: data, width: Int(size.width), height: Int(size.height),
                            bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer), space: rgbColorSpace, bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue)

    context!.clear(CGRect(x:0,y: 0,width: size.width,height: size.height))

    let horizontalRatio = size.width / image.size.width
    let verticalRatio = size.height / image.size.height
    //aspectRatio = max(horizontalRatio, verticalRatio) // ScaleAspectFill
    let aspectRatio = min(horizontalRatio, verticalRatio) // ScaleAspectFit

    let newSize = CGSize(width: image.size.width * aspectRatio, height: image.size.height * aspectRatio)

    let x = newSize.width < size.width ? (size.width - newSize.width) / 2 : 0
    let y = newSize.height < size.height ? (size.height - newSize.height) / 2 : 0

    context?.draw(image.cgImage!, in: CGRect(x:x,y: y, width: newSize.width, height: newSize.height))
    CVPixelBufferUnlockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))

    return pixelBuffer
}

init(renderSettings: RenderSettings) {
    self.renderSettings = renderSettings
}

func start() {

    let avOutputSettings: [String: Any] = [
        AVVideoCodecKey: renderSettings.avCodecKey,
        AVVideoWidthKey: NSNumber(value: Float(renderSettings.size.width)),
        AVVideoHeightKey: NSNumber(value: Float(renderSettings.size.height))
    ]

    func createPixelBufferAdaptor() {
        let sourcePixelBufferAttributesDictionary = [
            kCVPixelBufferPixelFormatTypeKey as String: NSNumber(value: kCVPixelFormatType_32ARGB),
            kCVPixelBufferWidthKey as String: NSNumber(value: Float(renderSettings.size.width)),
            kCVPixelBufferHeightKey as String: NSNumber(value: Float(renderSettings.size.height))
        ]
        pixelBufferAdaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: videoWriterInput,
                                                                  sourcePixelBufferAttributes: sourcePixelBufferAttributesDictionary)
    }

    func createAssetWriter(outputURL: URL) -> AVAssetWriter {
        guard let assetWriter = try? AVAssetWriter(outputURL: outputURL, fileType: AVFileType.mp4) else {
            fatalError("AVAssetWriter() failed")
        }

        guard assetWriter.canApply(outputSettings: avOutputSettings, forMediaType: AVMediaType.video) else {
            fatalError("canApplyOutputSettings() failed")
        }

        return assetWriter
    }

    videoWriter = createAssetWriter(outputURL: renderSettings.outputURL)
    videoWriterInput = AVAssetWriterInput(mediaType: AVMediaType.video, outputSettings: avOutputSettings)

    if videoWriter.canAdd(videoWriterInput) {
        videoWriter.add(videoWriterInput)
    }
    else {
        fatalError("canAddInput() returned false")
    }

    // The pixel buffer adaptor must be created before we start writing.
    createPixelBufferAdaptor()

    if videoWriter.startWriting() == false {
        fatalError("startWriting() failed")
    }

    videoWriter.startSession(atSourceTime: CMTime.zero)

    precondition(pixelBufferAdaptor.pixelBufferPool != nil, "nil pixelBufferPool")
}

func render(appendPixelBuffers: ((VideoWriter)->Bool)?, completion: (()->Void)?) {

    precondition(videoWriter != nil, "Call start() to initialze the writer")

    let queue = DispatchQueue(label: "mediaInputQueue")
    videoWriterInput.requestMediaDataWhenReady(on: queue) {
        let isFinished = appendPixelBuffers?(self) ?? false
        if isFinished {
            self.videoWriterInput.markAsFinished()
            self.videoWriter.finishWriting() {
                DispatchQueue.main.async {
                    completion?()
                }
            }
        }
        else {
            // Fall through. The closure will be called again when the writer is ready.
        }
    }
}

func addImage(image: UIImage, withPresentationTime presentationTime: CMTime) -> Bool {

    precondition(pixelBufferAdaptor != nil, "Call start() to initialze the writer")

    let pixelBuffer = VideoWriter.pixelBufferFromImage(image: image, pixelBufferPool: pixelBufferAdaptor.pixelBufferPool!, size: renderSettings.size)
    return pixelBufferAdaptor.append(pixelBuffer, withPresentationTime: presentationTime)
}

}

Part 4: Make it happen

Once everything is in place, these are your three magic lines:

let settings = RenderSettings()
let imageAnimator = ImageAnimator(renderSettings: settings)
imageAnimator.render() {
    print("yes")
}
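
Note that render() assumes images and settings.size have already been populated; a minimal sketch under that assumption (the size and myImages are illustrative):

var settings = RenderSettings()
settings.size = CGSize(width: 640, height: 480)   // must be nonzero
settings.fps = 30

let imageAnimator = ImageAnimator(renderSettings: settings)
imageAnimator.images = myImages                   // your [UIImage] array
imageAnimator.render {
    print("yes")
}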

Here is the latest working code, for iOS 8 in Objective-C.

We had to make a variety of tweaks to @Zoul's answer above to get it working with the latest versions of Xcode and iOS 8. Here is the complete working code that takes an array of UIImages, turns them into a .mov file, saves it to a temp directory, and then moves it to the camera roll. We assembled code from several different posts to get this working, and in the comments we have highlighted the traps we had to solve.

Create your collection of UIImages

[self saveMovieToLibrary]


- (IBAction)saveMovieToLibrary
{
    // You just need the height and width of the video here
    // For us, our input and output video was 640 height x 480 width
    // which is what we get from the iOS front camera
    ATHSingleton *singleton = [ATHSingleton singletons];
    int height = singleton.screenHeight;
    int width = singleton.screenWidth;

    // You can save a .mov or a .mp4 file        
    //NSString *fileNameOut = @"temp.mp4";
    NSString *fileNameOut = @"temp.mov";

    // We chose to save in the tmp/ directory on the device initially
    NSString *directoryOut = @"tmp/";
    NSString *outFile = [NSString stringWithFormat:@"%@%@",directoryOut,fileNameOut];
    NSString *path = [NSHomeDirectory() stringByAppendingPathComponent:[NSString stringWithFormat:outFile]];
    NSURL *videoTempURL = [NSURL fileURLWithPath:[NSString stringWithFormat:@"%@%@", NSTemporaryDirectory(), fileNameOut]];

    // WARNING: AVAssetWriter does not overwrite files for us, so remove the destination file if it already exists
    NSFileManager *fileManager = [NSFileManager defaultManager];
    [fileManager removeItemAtPath:[videoTempURL path]  error:NULL];


    // Create your own array of UIImages        
    NSMutableArray *images = [NSMutableArray array];
    for (int i=0; i<singleton.numberOfScreenshots; i++)
    {
        // This was our routine that returned a UIImage. Just use your own.
        UIImage *image =[self uiimageFromCopyOfPixelBuffersUsingIndex:i];
        // We used a routine to write text onto every image 
        // so we could validate the images were actually being written when testing. This was it below. 
        image = [self writeToImage:image Text:[NSString stringWithFormat:@"%i",i ]];
        [images addObject:image];     
    }

// If you just want to manually add a few images - here is code you can uncomment
// NSString *path = [NSHomeDirectory() stringByAppendingPathComponent:[NSString stringWithFormat:@"Documents/movie.mp4"]];
//    NSArray *images = [[NSArray alloc] initWithObjects:
//                      [UIImage imageNamed:@"add_ar.png"],
//                      [UIImage imageNamed:@"add_ja.png"],
//                      [UIImage imageNamed:@"add_ru.png"],
//                      [UIImage imageNamed:@"add_ru.png"],
//                      [UIImage imageNamed:@"add_ar.png"],
//                      [UIImage imageNamed:@"add_ja.png"],
//                      [UIImage imageNamed:@"add_ru.png"],
//                      [UIImage imageNamed:@"add_ar.png"],
//                      [UIImage imageNamed:@"add_en.png"], nil];



    [self writeImageAsMovie:images toPath:path size:CGSizeMake(height, width)];
}

This is the main method that creates your AssetWriter and adds images to it for writing.

Wire up an AVAssetWriter

-(void)writeImageAsMovie:(NSArray *)array toPath:(NSString*)path size:(CGSize)size
{

    NSError *error = nil;

    // FIRST, start up an AVAssetWriter instance to write your video
    // Give it a destination path (for us: tmp/temp.mov)
    AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:path]
                                                           fileType:AVFileTypeQuickTimeMovie
                                                              error:&error];


    NSParameterAssert(videoWriter);

    NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                   AVVideoCodecH264, AVVideoCodecKey,
                                   [NSNumber numberWithInt:size.width], AVVideoWidthKey,
                                   [NSNumber numberWithInt:size.height], AVVideoHeightKey,
                                   nil];

    AVAssetWriterInput* writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                                         outputSettings:videoSettings];

    AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                                                                                                                     sourcePixelBufferAttributes:nil];
    NSParameterAssert(writerInput);
    NSParameterAssert([videoWriter canAddInput:writerInput]);
    [videoWriter addInput:writerInput];

Start a writing session (NOTE: the method continues from above):

    //Start a SESSION of writing. 
    // After you start a session, you will keep adding image frames 
    // until you are complete - then you will tell it you are done.
    [videoWriter startWriting];
    // This starts your video at time = 0
    [videoWriter startSessionAtSourceTime:kCMTimeZero];

    CVPixelBufferRef buffer = NULL;

    // This was just our utility class to get screen sizes etc.    
    ATHSingleton *singleton = [ATHSingleton singletons];

    int i = 0;
    while (1)
    {
        // Check if the writer is ready for more data, if not, just wait
        if(writerInput.readyForMoreMediaData){

            CMTime frameTime = CMTimeMake(150, 600);
            // CMTime = Value and Timescale.
            // Timescale = the number of tics per second you want
            // Value is the number of tics
            // For us - each frame we add will be 1/4th of a second
            // Apple recommend 600 tics per second for video because it is a 
            // multiple of the standard video rates 24, 30, 60 fps etc.
            CMTime lastTime=CMTimeMake(i*150, 600);
            CMTime presentTime=CMTimeAdd(lastTime, frameTime);

            if (i == 0) {presentTime = CMTimeMake(0, 600);} 
            // This ensures the first frame starts at 0.


            if (i >= [array count])
            {
                buffer = NULL;
            }
            else
            {
                // This command grabs the next UIImage and converts it to a CGImage
                buffer = [self pixelBufferFromCGImage:[[array objectAtIndex:i] CGImage]];
            }


            if (buffer)
            {
                // Give the CGImage to the AVAssetWriter to add to your video
                [adaptor appendPixelBuffer:buffer withPresentationTime:presentTime];
                i++;
            }
            else
            {

Finish the session (NOTE: the method continues from above):

                //Finish the session:
                // This is important to be done exactly in this order
                [writerInput markAsFinished];
                // WARNING: finishWriting in the solution above is deprecated. 
                // You now need to give a completion handler.
                [videoWriter finishWritingWithCompletionHandler:^{
                    NSLog(@"Finished writing...checking completion status...");
                    if (videoWriter.status != AVAssetWriterStatusFailed && videoWriter.status == AVAssetWriterStatusCompleted)
                    {
                        NSLog(@"Video writing succeeded.");

                        // Move video to camera roll
                        // NOTE: You cannot write directly to the camera roll. 
                        // You must first write to an iOS directory then move it!
                        NSURL *videoTempURL = [NSURL fileURLWithPath:[NSString stringWithFormat:@"%@", path]];
                        [self saveToCameraRoll:videoTempURL];

                    } else
                    {
                        NSLog(@"Video writing failed: %@", videoWriter.error);
                    }

                }]; // end videoWriter finishWriting Block

                CVPixelBufferPoolRelease(adaptor.pixelBufferPool);

                NSLog (@"Done");
                break;
            }
        }
    }    
}

Convert your UIImages to a CVPixelBufferRef
This method will give you the CV pixel buffer reference the AssetWriter needs. It is obtained from a CGImageRef, which you get from your UIImage (above).

- (CVPixelBufferRef) pixelBufferFromCGImage: (CGImageRef) image
{
    // This again was just our utility class for the height & width of the
    // incoming video (640 height x 480 width)
    ATHSingleton *singleton = [ATHSingleton singletons];
    int height = singleton.screenHeight;
    int width = singleton.screenWidth;

    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                             nil];
    CVPixelBufferRef pxbuffer = NULL;

    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, width,
                                          height, kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef) options,
                                          &pxbuffer);

    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    NSParameterAssert(pxdata != NULL);

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();

    CGContextRef context = CGBitmapContextCreate(pxdata, width,
                                                 height, 8, 4*width, rgbColorSpace,
                                                 kCGImageAlphaNoneSkipFirst);
    NSParameterAssert(context);
    CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));
    CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
                                           CGImageGetHeight(image)), image);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);

    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);

    return pxbuffer;
}

Move your video to the camera roll. Because AVAssetWriter cannot write directly to the camera roll, this moves the video from "tmp/temp.mov" (or whatever filename you chose above) to the camera roll.

- (void) saveToCameraRoll:(NSURL *)srcURL
{
    NSLog(@"srcURL: %@", srcURL);

    ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
    ALAssetsLibraryWriteVideoCompletionBlock videoWriteCompletionBlock =
    ^(NSURL *newURL, NSError *error) {
        if (error) {
            NSLog( @"Error writing image with metadata to Photo Library: %@", error );
        } else {
            NSLog( @"Wrote image with metadata to Photo Library %@", newURL.absoluteString);
        }
    };

    if ([library videoAtPathIsCompatibleWithSavedPhotosAlbum:srcURL])
    {
        [library writeVideoAtPathToSavedPhotosAlbum:srcURL
                                    completionBlock:videoWriteCompletionBlock];
    }
}

Zoul's answer above gives a nice outline of what you will be doing. We commented this code extensively so you can see how it is done using working code.

I took Zoul's main ideas, incorporated the AVAssetWriterInputPixelBufferAdaptor method, and made the beginnings of a little framework out of it.

Feel free to check it out and improve upon it: CEMovieMaker

Here is a Swift 2.x version tested on iOS 8. It combines answers from @Scott Raposa and @Praxiteles with code from @acj contributed for another question. The code from @acj is here: https://gist.github.com/acj/6ae90aa1ebb8cad6b47b. @TimBull also provided code.

Like @Scott Raposa, I had never even heard of CVPixelBufferPoolCreatePixelBuffer and several other functions, let alone understood how to use them.

What you see below was cobbled together mostly by trial and error and from reading Apple docs. Please use with caution, and provide suggestions if there are mistakes.

Usage:

import UIKit
import AVFoundation
import Photos

writeImagesAsMovie(yourImages, videoPath: yourPath, videoSize: yourSize, videoFPS: 30)

Code:

func writeImagesAsMovie(allImages: [UIImage], videoPath: String, videoSize: CGSize, videoFPS: Int32) {
    // Create AVAssetWriter to write video
    guard let assetWriter = createAssetWriter(videoPath, size: videoSize) else {
        print("Error converting images to video: AVAssetWriter not created")
        return
    }

    // If here, AVAssetWriter exists so create AVAssetWriterInputPixelBufferAdaptor
    let writerInput = assetWriter.inputs.filter{ $0.mediaType == AVMediaTypeVideo }.first!
    let sourceBufferAttributes : [String : AnyObject] = [
        kCVPixelBufferPixelFormatTypeKey as String : Int(kCVPixelFormatType_32ARGB),
        kCVPixelBufferWidthKey as String : videoSize.width,
        kCVPixelBufferHeightKey as String : videoSize.height,
        ]
    let pixelBufferAdaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: writerInput, sourcePixelBufferAttributes: sourceBufferAttributes)

    // Start writing session
    assetWriter.startWriting()
    assetWriter.startSessionAtSourceTime(kCMTimeZero)
    if (pixelBufferAdaptor.pixelBufferPool == nil) {
        print("Error converting images to video: pixelBufferPool nil after starting session")
        return
    }

    // -- Create queue for <requestMediaDataWhenReadyOnQueue>
    let mediaQueue = dispatch_queue_create("mediaInputQueue", nil)

    // -- Set video parameters
    let frameDuration = CMTimeMake(1, videoFPS)
    var frameCount = 0

    // -- Add images to video
    let numImages = allImages.count
    writerInput.requestMediaDataWhenReadyOnQueue(mediaQueue, usingBlock: { () -> Void in
        // Append unadded images to video but only while input ready
        while (writerInput.readyForMoreMediaData && frameCount < numImages) {
            let lastFrameTime = CMTimeMake(Int64(frameCount), videoFPS)
            let presentationTime = frameCount == 0 ? lastFrameTime : CMTimeAdd(lastFrameTime, frameDuration)

            if !self.appendPixelBufferForImageAtURL(allImages[frameCount], pixelBufferAdaptor: pixelBufferAdaptor, presentationTime: presentationTime) {
                print("Error converting images to video: AVAssetWriterInputPixelBufferAdapter failed to append pixel buffer")
                return
            }

            frameCount += 1
        }

        // No more images to add? End video.
        if (frameCount >= numImages) {
            writerInput.markAsFinished()
            assetWriter.finishWritingWithCompletionHandler {
                if (assetWriter.error != nil) {
                    print("Error converting images to video: \(assetWriter.error)")
                } else {
                    self.saveVideoToLibrary(NSURL(fileURLWithPath: videoPath))
                    print("Converted images to movie @ \(videoPath)")
                }
            }
        }
    })
}


func createAssetWriter(path: String, size: CGSize) -> AVAssetWriter? {
    // Convert <path> to NSURL object
    let pathURL = NSURL(fileURLWithPath: path)

    // Return new asset writer or nil
    do {
        // Create asset writer
        let newWriter = try AVAssetWriter(URL: pathURL, fileType: AVFileTypeMPEG4)

        // Define settings for video input
        let videoSettings: [String : AnyObject] = [
            AVVideoCodecKey  : AVVideoCodecH264,
            AVVideoWidthKey  : size.width,
            AVVideoHeightKey : size.height,
            ]

        // Add video input to writer
        let assetWriterVideoInput = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: videoSettings)
        newWriter.addInput(assetWriterVideoInput)

        // Return writer
        print("Created asset writer for \(size.width)x\(size.height) video")
        return newWriter
    } catch {
        print("Error creating asset writer: \(error)")
        return nil
    }
}


func appendPixelBufferForImageAtURL(image: UIImage, pixelBufferAdaptor: AVAssetWriterInputPixelBufferAdaptor, presentationTime: CMTime) -> Bool {
    var appendSucceeded = false

    autoreleasepool {
        if  let pixelBufferPool = pixelBufferAdaptor.pixelBufferPool {
            let pixelBufferPointer = UnsafeMutablePointer<CVPixelBuffer?>.alloc(1)
            let status: CVReturn = CVPixelBufferPoolCreatePixelBuffer(
                kCFAllocatorDefault,
                pixelBufferPool,
                pixelBufferPointer
            )

            if let pixelBuffer = pixelBufferPointer.memory where status == 0 {
                fillPixelBufferFromImage(image, pixelBuffer: pixelBuffer)
                appendSucceeded = pixelBufferAdaptor.appendPixelBuffer(pixelBuffer, withPresentationTime: presentationTime)
                pixelBufferPointer.destroy()
            } else {
                NSLog("Error: Failed to allocate pixel buffer from pool")
            }

            pixelBufferPointer.dealloc(1)
        }
    }

    return appendSucceeded
}


func fillPixelBufferFromImage(image: UIImage, pixelBuffer: CVPixelBufferRef) {
    CVPixelBufferLockBaseAddress(pixelBuffer, 0)

    let pixelData = CVPixelBufferGetBaseAddress(pixelBuffer)
    let rgbColorSpace = CGColorSpaceCreateDeviceRGB()

    // Create CGBitmapContext
    let context = CGBitmapContextCreate(
        pixelData,
        Int(image.size.width),
        Int(image.size.height),
        8,
        CVPixelBufferGetBytesPerRow(pixelBuffer),
        rgbColorSpace,
        CGImageAlphaInfo.PremultipliedFirst.rawValue
    )

    // Draw image into context
    CGContextDrawImage(context, CGRectMake(0, 0, image.size.width, image.size.height), image.CGImage)

    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0)
}


func saveVideoToLibrary(videoURL: NSURL) {
    PHPhotoLibrary.requestAuthorization { status in
        // Return if unauthorized
        guard status == .Authorized else {
            print("Error saving video: unauthorized access")
            return
        }

        // If here, save video to library
        PHPhotoLibrary.sharedPhotoLibrary().performChanges({
            PHAssetChangeRequest.creationRequestForAssetFromVideoAtFileURL(videoURL)
        }) { success, error in
            if !success {
                print("Error saving video: \(error)")
            }
        }
    }
}

I just translated @Scott Raposa's answer to Swift 3 (with a few very small changes):

import AVFoundation
import UIKit
import Photos

struct RenderSettings {

    var size : CGSize = .zero
    var fps: Int32 = 6   // frames per second
    var avCodecKey = AVVideoCodecH264
    var videoFilename = "render"
    var videoFilenameExt = "mp4"


    var outputURL: URL {
        // Use the CachesDirectory so the rendered video file sticks around as long as we need it to.
        // Using the CachesDirectory ensures the file won't be included in a backup of the app.
        let fileManager = FileManager.default
        if let tmpDirURL = try? fileManager.url(for: .cachesDirectory, in: .userDomainMask, appropriateFor: nil, create: true) {
            return tmpDirURL.appendingPathComponent(videoFilename).appendingPathExtension(videoFilenameExt)
        }
        fatalError("URLForDirectory() failed")
    }
}


class ImageAnimator {

    // Apple suggests a timescale of 600 because it's a multiple of standard video rates 24, 25, 30, 60 fps etc.
    static let kTimescale: Int32 = 600

    let settings: RenderSettings
    let videoWriter: VideoWriter
    var images: [UIImage]!

    var frameNum = 0

    class func saveToLibrary(videoURL: URL) {
        PHPhotoLibrary.requestAuthorization { status in
            guard status == .authorized else { return }

            PHPhotoLibrary.shared().performChanges({
                PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: videoURL)
            }) { success, error in
                if !success {
                    print("Could not save video to photo library:", error)
                }
            }
        }
    }

    class func removeFileAtURL(fileURL: URL) {
        do {
            try FileManager.default.removeItem(atPath: fileURL.path)
        }
        catch _ as NSError {
            // Assume file doesn't exist.
        }
    }

    init(renderSettings: RenderSettings) {
        settings = renderSettings
        videoWriter = VideoWriter(renderSettings: settings)
//        images = loadImages()
    }

    func render(completion: (()->Void)?) {

        // The VideoWriter will fail if a file exists at the URL, so clear it out first.
        ImageAnimator.removeFileAtURL(fileURL: settings.outputURL)

        videoWriter.start()
        videoWriter.render(appendPixelBuffers: appendPixelBuffers) {
            ImageAnimator.saveToLibrary(videoURL: self.settings.outputURL)
            completion?()
        }

    }

//    // Replace this logic with your own.
//    func loadImages() -> [UIImage] {
//        var images = [UIImage]()
//        for index in 1...10 {
//            let filename = "\(index).jpg"
//            images.append(UIImage(named: filename)!)
//        }
//        return images
//    }

    // This is the callback function for VideoWriter.render()
    func appendPixelBuffers(writer: VideoWriter) -> Bool {

        let frameDuration = CMTimeMake(Int64(ImageAnimator.kTimescale / settings.fps), ImageAnimator.kTimescale)

        while !images.isEmpty {

            if writer.isReadyForData == false {
                // Inform writer we have more buffers to write.
                return false
            }

            let image = images.removeFirst()
            let presentationTime = CMTimeMultiply(frameDuration, Int32(frameNum))
            let success = videoWriter.addImage(image: image, withPresentationTime: presentationTime)
            if success == false {
                fatalError("addImage() failed")
            }

            frameNum += 1
        }

        // Inform writer all buffers have been written.
        return true
    }

}


class VideoWriter {

    let renderSettings: RenderSettings

    var videoWriter: AVAssetWriter!
    var videoWriterInput: AVAssetWriterInput!
    var pixelBufferAdaptor: AVAssetWriterInputPixelBufferAdaptor!

    var isReadyForData: Bool {
        return videoWriterInput?.isReadyForMoreMediaData ?? false
    }

    class func pixelBufferFromImage(image: UIImage, pixelBufferPool: CVPixelBufferPool, size: CGSize) -> CVPixelBuffer {

        var pixelBufferOut: CVPixelBuffer?

        let status = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pixelBufferPool, &pixelBufferOut)
        if status != kCVReturnSuccess {
            fatalError("CVPixelBufferPoolCreatePixelBuffer() failed")
        }

        let pixelBuffer = pixelBufferOut!

        CVPixelBufferLockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))

        let data = CVPixelBufferGetBaseAddress(pixelBuffer)
        let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
        let context = CGContext(data: data, width: Int(size.width), height: Int(size.height),
                                bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer), space: rgbColorSpace, bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue)

        context!.clear(CGRect(x:0,y: 0,width: size.width,height: size.height))

        let horizontalRatio = size.width / image.size.width
        let verticalRatio = size.height / image.size.height
        //aspectRatio = max(horizontalRatio, verticalRatio) // ScaleAspectFill
        let aspectRatio = min(horizontalRatio, verticalRatio) // ScaleAspectFit

        let newSize = CGSize(width: image.size.width * aspectRatio, height: image.size.height * aspectRatio)

        let x = newSize.width < size.width ? (size.width - newSize.width) / 2 : 0
        let y = newSize.height < size.height ? (size.height - newSize.height) / 2 : 0

        context?.draw(image.cgImage!, in: CGRect(x:x,y: y, width: newSize.width, height: newSize.height))
        CVPixelBufferUnlockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))

        return pixelBuffer
    }

    init(renderSettings: RenderSettings) {
        self.renderSettings = renderSettings
    }

    func start() {

        let avOutputSettings: [String: Any] = [
            AVVideoCodecKey: renderSettings.avCodecKey,
            AVVideoWidthKey: NSNumber(value: Float(renderSettings.size.width)),
            AVVideoHeightKey: NSNumber(value: Float(renderSettings.size.height))
        ]

        func createPixelBufferAdaptor() {
            let sourcePixelBufferAttributesDictionary = [
                kCVPixelBufferPixelFormatTypeKey as String: NSNumber(value: kCVPixelFormatType_32ARGB),
                kCVPixelBufferWidthKey as String: NSNumber(value: Float(renderSettings.size.width)),
                kCVPixelBufferHeightKey as String: NSNumber(value: Float(renderSettings.size.height))
            ]
            pixelBufferAdaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: videoWriterInput,
                                                                      sourcePixelBufferAttributes: sourcePixelBufferAttributesDictionary)
        }

        func createAssetWriter(outputURL: URL) -> AVAssetWriter {
            guard let assetWriter = try? AVAssetWriter(outputURL: outputURL, fileType: AVFileTypeMPEG4) else {
                fatalError("AVAssetWriter() failed")
            }

            guard assetWriter.canApply(outputSettings: avOutputSettings, forMediaType: AVMediaTypeVideo) else {
                fatalError("canApplyOutputSettings() failed")
            }

            return assetWriter
        }

        videoWriter = createAssetWriter(outputURL: renderSettings.outputURL)
        videoWriterInput = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: avOutputSettings)

        if videoWriter.canAdd(videoWriterInput) {
            videoWriter.add(videoWriterInput)
        }
        else {
            fatalError("canAddInput() returned false")
        }

        // The pixel buffer adaptor must be created before we start writing.
        createPixelBufferAdaptor()

        if videoWriter.startWriting() == false {
            fatalError("startWriting() failed")
        }

        videoWriter.startSession(atSourceTime: kCMTimeZero)

        precondition(pixelBufferAdaptor.pixelBufferPool != nil, "nil pixelBufferPool")
    }

    func render(appendPixelBuffers: ((VideoWriter)->Bool)?, completion: (()->Void)?) {

        precondition(videoWriter != nil, "Call start() to initialze the writer")

        let queue = DispatchQueue(label: "mediaInputQueue")
        videoWriterInput.requestMediaDataWhenReady(on: queue) {
            let isFinished = appendPixelBuffers?(self) ?? false
            if isFinished {
                self.videoWriterInput.markAsFinished()
                self.videoWriter.finishWriting() {
                    DispatchQueue.main.async {
                        completion?()
                    }
                }
            }
            else {
                // Fall through. The closure will be called again when the writer is ready.
            }
        }
    }

    func addImage(image: UIImage, withPresentationTime presentationTime: CMTime) -> Bool {

        precondition(pixelBufferAdaptor != nil, "Call start() to initialze the writer")

        let pixelBuffer = VideoWriter.pixelBufferFromImage(image: image, pixelBufferPool: pixelBufferAdaptor.pixelBufferPool!, size: renderSettings.size)
        return pixelBufferAdaptor.append(pixelBuffer, withPresentationTime: presentationTime)
    }

}

Here is a Swift 3 version that converts an image array into a video:

import Foundation
import AVFoundation
import UIKit

typealias CXEMovieMakerCompletion = (URL) -> Void
typealias CXEMovieMakerUIImageExtractor = (AnyObject) -> UIImage?


public class ImagesToVideoUtils: NSObject {

    static let paths = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)
    static let tempPath = paths[0] + "/exprotvideo.mp4"
    static let fileURL = URL(fileURLWithPath: tempPath)
//    static let tempPath = NSTemporaryDirectory() + "/exprotvideo.mp4"
//    static let fileURL = URL(fileURLWithPath: tempPath)


    var assetWriter:AVAssetWriter!
    var writeInput:AVAssetWriterInput!
    var bufferAdapter:AVAssetWriterInputPixelBufferAdaptor!
    var videoSettings:[String : Any]!
    var frameTime:CMTime!
    //var fileURL:URL!

    var completionBlock: CXEMovieMakerCompletion?
    var movieMakerUIImageExtractor:CXEMovieMakerUIImageExtractor?


    public class func videoSettings(codec:String, width:Int, height:Int) -> [String: Any]{
        if(Int(width) % 16 != 0){
            print("warning: video settings width must be divisible by 16")
        }

        let videoSettings:[String: Any] = [AVVideoCodecKey: AVVideoCodecJPEG, //AVVideoCodecH264,
                                           AVVideoWidthKey: width,
                                           AVVideoHeightKey: height]

        return videoSettings
    }

    public init(videoSettings: [String: Any]) {
        super.init()


        if(FileManager.default.fileExists(atPath: ImagesToVideoUtils.tempPath)){
            guard (try? FileManager.default.removeItem(atPath: ImagesToVideoUtils.tempPath)) != nil else {
                print("remove path failed")
                return
            }
        }


        self.assetWriter = try! AVAssetWriter(url: ImagesToVideoUtils.fileURL, fileType: AVFileTypeQuickTimeMovie)

        self.videoSettings = videoSettings
        self.writeInput = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: videoSettings)
        assert(self.assetWriter.canAdd(self.writeInput), "add failed")

        self.assetWriter.add(self.writeInput)
        let bufferAttributes:[String: Any] = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32ARGB)]
        self.bufferAdapter = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: self.writeInput, sourcePixelBufferAttributes: bufferAttributes)
        self.frameTime = CMTimeMake(1, 5)
    }

    func createMovieFrom(urls: [URL], withCompletion: @escaping CXEMovieMakerCompletion){
        self.createMovieFromSource(images: urls as [AnyObject], extractor:{(inputObject:AnyObject) ->UIImage? in
            return UIImage(data: try! Data(contentsOf: inputObject as! URL))}, withCompletion: withCompletion)
    }

    func createMovieFrom(images: [UIImage], withCompletion: @escaping CXEMovieMakerCompletion){
        self.createMovieFromSource(images: images, extractor: {(inputObject:AnyObject) -> UIImage? in
            return inputObject as? UIImage}, withCompletion: withCompletion)
    }

    func createMovieFromSource(images: [AnyObject], extractor: @escaping CXEMovieMakerUIImageExtractor, withCompletion: @escaping CXEMovieMakerCompletion){
        self.completionBlock = withCompletion

        self.assetWriter.startWriting()
        self.assetWriter.startSession(atSourceTime: kCMTimeZero)

        let mediaInputQueue = DispatchQueue(label: "mediaInputQueue")
        var i = 0
        let frameNumber = images.count

        self.writeInput.requestMediaDataWhenReady(on: mediaInputQueue){
            while(true){
                if(i >= frameNumber){
                    break
                }

                if (self.writeInput.isReadyForMoreMediaData){
                    var sampleBuffer:CVPixelBuffer?
                    autoreleasepool{
                        // Skip any frame that cannot be extracted instead of force-unwrapping nil.
                        if let img = extractor(images[i]) {
                            sampleBuffer = self.newPixelBufferFrom(cgImage: img.cgImage!)
                        } else {
                            i += 1
                            print("Warning: could not extract one of the frames")
                        }
                    }
                    if (sampleBuffer != nil){
                        if(i == 0){
                            self.bufferAdapter.append(sampleBuffer!, withPresentationTime: kCMTimeZero)
                        }else{
                            let value = i - 1
                            let lastTime = CMTimeMake(Int64(value), self.frameTime.timescale)
                            let presentTime = CMTimeAdd(lastTime, self.frameTime)
                            self.bufferAdapter.append(sampleBuffer!, withPresentationTime: presentTime)
                        }
                        i = i + 1
                    }
                }
            }
            self.writeInput.markAsFinished()
            self.assetWriter.finishWriting {
                DispatchQueue.main.sync {
                    self.completionBlock!(ImagesToVideoUtils.fileURL)
                }
            }
        }
    }

    func newPixelBufferFrom(cgImage:CGImage) -> CVPixelBuffer?{
        let options:[String: Any] = [kCVPixelBufferCGImageCompatibilityKey as String: true, kCVPixelBufferCGBitmapContextCompatibilityKey as String: true]
        var pxbuffer:CVPixelBuffer?
        let frameWidth = self.videoSettings[AVVideoWidthKey] as! Int
        let frameHeight = self.videoSettings[AVVideoHeightKey] as! Int

        let status = CVPixelBufferCreate(kCFAllocatorDefault, frameWidth, frameHeight, kCVPixelFormatType_32ARGB, options as CFDictionary?, &pxbuffer)
        assert(status == kCVReturnSuccess && pxbuffer != nil, "newPixelBuffer failed")

        CVPixelBufferLockBaseAddress(pxbuffer!, CVPixelBufferLockFlags(rawValue: 0))
        let pxdata = CVPixelBufferGetBaseAddress(pxbuffer!)
        let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
        let context = CGContext(data: pxdata, width: frameWidth, height: frameHeight, bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(pxbuffer!), space: rgbColorSpace, bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
        assert(context != nil, "context is nil")

        context!.concatenate(CGAffineTransform.identity)
        context!.draw(cgImage, in: CGRect(x: 0, y: 0, width: cgImage.width, height: cgImage.height))
        CVPixelBufferUnlockBaseAddress(pxbuffer!, CVPixelBufferLockFlags(rawValue: 0))
        return pxbuffer
    }
}

I use this together with screen capture to essentially create a screen-capture video; that is the full story / full example.
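
A minimal usage sketch for the class above (myImages stands in for your own [UIImage] array):

let settings = ImagesToVideoUtils.videoSettings(codec: AVVideoCodecJPEG, width: 640, height: 480)
let maker = ImagesToVideoUtils(videoSettings: settings)
maker.createMovieFrom(images: myImages) { fileURL in
    print("movie written to \(fileURL)")
}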

For anyone still on this journey in 2020 who is getting distortion in their movies because the width is not a multiple of 16 px:

Change

CGContextRef context = CGBitmapContextCreate(pxdata,
                                             width, height,
                                             8, 4 * width,
                                             rgbColorSpace,
                                             kCGImageAlphaNoneSkipFirst);

to:

CGContextRef context = CGBitmapContextCreate(pxdata,
                                             width, height,
                                             8, CVPixelBufferGetBytesPerRow(pxbuffer),
                                             rgbColorSpace,
                                             kCGImageAlphaNoneSkipFirst);

Credit to @bluedays: Output from AVAssetWriter (UIImage written to video) distorted

This is kind of hard to implement in pure Objective-C. If you are developing for jailbroken devices, a good option is to use the command-line tool ffmpeg from inside your app. It is quite easy to create a movie from images with a command like:

ffmpeg -r 10 -b 1800 -i %03d.jpg test1800.mp4

Note that the images have to be named sequentially and placed in the same directory. For more information, take a look at: http://electron.mit.edu/~gstele/ffmpeg/
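
The -b option in that command is the old bitrate flag; on current ffmpeg builds, a roughly equivalent invocation (using standard ffmpeg options) would be:

ffmpeg -framerate 10 -i %03d.jpg -c:v libx264 -pix_fmt yuv420p -b:v 1800k test1800.mp4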

Source: https://stackoverflow.com/questions/3741323/how-do-i-export-uiimage-array-as-a-movie
