Do you have an example of how the ML image style transfer was created from an earlier session?
You can learn how Create ML can help you build style transfer models, with an example integration, in this session:
"Build Image and Video Style Transfer models in Create ML"
There is also a wide variety of style transfer models available online that can be converted to Core ML format with coremltools.
The model takes in an image and outputs an image. I believe the app is streaming frames from an AVCaptureSession, running each frame through the Core ML model, and scaling the result back to the original image size.
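As a rough sketch of that pipeline (not the session's actual code, and the class and model names here are placeholders), you can wrap the Core ML model in a Vision request and run it from the capture output's sample buffer delegate. Vision handles scaling the frame to the model's input size, and the stylized output comes back as a pixel buffer you can render at the original display size:

```swift
import AVFoundation
import Vision
import CoreML

// Hypothetical sketch: StyleTransferProcessor is a placeholder name,
// and the MLModel is assumed to be an image-to-image style transfer model.
final class StyleTransferProcessor: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let request: VNCoreMLRequest

    init(model: MLModel) throws {
        let vnModel = try VNCoreMLModel(for: model)
        request = VNCoreMLRequest(model: vnModel)
        // Let Vision scale each frame to the model's expected input size.
        request.imageCropAndScaleOption = .scaleFill
        super.init()
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
        try? handler.perform([request])
        if let stylized = request.results?.first as? VNPixelBufferObservation {
            // stylized.pixelBuffer holds the model's output frame;
            // render it scaled back up to the original image size.
            _ = stylized.pixelBuffer
        }
    }
}
```

You would attach an instance of this class as the delegate of an `AVCaptureVideoDataOutput` on your capture session; the exact rendering step (Metal, Core Image, etc.) depends on your app.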