When using the new 4K resolution in ARKit for a post-production (film/television) workflow, what is the suggested way to take the AR experience and output to a video file?

To capture and replay an ARKit session, see an example here:

Recording and Replaying AR Session Data

If you want to capture video in your app in order to do post-processing later, you can configure an AVAssetWriter to record the video.
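For instance, a minimal recorder built around AVAssetWriter could look like the sketch below. The class name, the 4K dimensions, and the HEVC/BGRA output settings are illustrative assumptions, not values given in the answer:

```swift
import AVFoundation
import CoreVideo

// Minimal sketch of an AVAssetWriter-based recorder. The 4K dimensions and
// the HEVC/BGRA settings are illustrative choices, not a required setup.
final class VideoRecorder {
    private let writer: AVAssetWriter
    private let input: AVAssetWriterInput
    private let adaptor: AVAssetWriterInputPixelBufferAdaptor
    private var sessionStarted = false

    init(outputURL: URL, width: Int = 3840, height: Int = 2160) throws {
        writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)

        let settings: [String: Any] = [
            AVVideoCodecKey: AVVideoCodecType.hevc,
            AVVideoWidthKey: width,
            AVVideoHeightKey: height
        ]
        input = AVAssetWriterInput(mediaType: .video, outputSettings: settings)
        input.expectsMediaDataInRealTime = true

        adaptor = AVAssetWriterInputPixelBufferAdaptor(
            assetWriterInput: input,
            sourcePixelBufferAttributes: [
                kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
            ])

        writer.add(input)
        guard writer.startWriting() else {
            throw writer.error ?? CocoaError(.fileWriteUnknown)
        }
    }

    // Append one frame; `time` would typically be derived from ARFrame.timestamp.
    func append(_ pixelBuffer: CVPixelBuffer, at time: CMTime) {
        if !sessionStarted {
            // Start the timeline at the first frame's presentation time.
            writer.startSession(atSourceTime: time)
            sessionStarted = true
        }
        guard input.isReadyForMoreMediaData else { return }
        _ = adaptor.append(pixelBuffer, withPresentationTime: time)
    }

    func finish(completion: @escaping () -> Void) {
        input.markAsFinished()
        writer.finishWriting(completionHandler: completion)
    }
}
```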

We also provide a camera image with every ARFrame; see:

ARFrame.capturedImage

ARFrame.capturedImage is just the ‘clean slate’: it doesn’t contain any virtual content rendered on top of it. If you are doing your own rendering and your Metal textures are backed by IOSurfaces, then you can easily create CVPixelBuffers from those IOSurfaces and pass them to AVFoundation for recording.
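As a sketch of that last step, assuming your composited render target is an IOSurface-backed MTLTexture and you are feeding the recorder sketched above (the helper name and the usage snippet are hypothetical):

```swift
import AVFoundation
import CoreVideo
import Metal

// Hypothetical helper: wraps the IOSurface backing a Metal render target in a
// CVPixelBuffer so the composited frame (camera image plus virtual content)
// can be handed to AVFoundation. Assumes the texture is IOSurface-backed.
func makePixelBuffer(from texture: MTLTexture) -> CVPixelBuffer? {
    guard let surface = texture.iosurface else { return nil }

    var pixelBuffer: CVPixelBuffer?
    let status = CVPixelBufferCreateWithIOSurface(
        kCFAllocatorDefault,
        surface,
        nil,                 // no additional pixel buffer attributes
        &pixelBuffer)

    return status == kCVReturnSuccess ? pixelBuffer : nil
}

// Example use from a render loop, where `recorder` is the VideoRecorder sketch
// above and `renderTarget` is the texture the AR content was drawn into:
//
//     if let buffer = makePixelBuffer(from: renderTarget) {
//         let time = CMTime(seconds: frame.timestamp, preferredTimescale: 600)
//         recorder.append(buffer, at: time)
//     }
```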
