Is there a way to use one IOSurface for both ANE and GPU work? Or to access the ANE IOSurface directly and map it to an MTLTexture by hand?

An IOSurface-backed CVPixelBuffer with the OneComponent16Half pixel format type can be shared with the Neural Engine without a copy. Likewise, an MLMultiArray backed by such a pixel buffer can be shared without a copy (see MLMultiArray.init(pixelBuffer:shape:)).
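A minimal sketch of that setup, assuming an arbitrary 512x512 single-channel tensor (the size is illustrative, not from the original answer):

```swift
import CoreML
import CoreVideo

var pixelBuffer: CVPixelBuffer?
let attributes: [CFString: Any] = [
    // An empty IOSurface properties dictionary forces IOSurface backing.
    kCVPixelBufferIOSurfacePropertiesKey: [:] as CFDictionary
]
let status = CVPixelBufferCreate(
    kCFAllocatorDefault,
    512, 512,
    kCVPixelFormatType_OneComponent16Half,
    attributes as CFDictionary,
    &pixelBuffer
)
guard status == kCVReturnSuccess, let buffer = pixelBuffer else {
    fatalError("CVPixelBufferCreate failed: \(status)")
}

// Wrap the pixel buffer as a Float16 MLMultiArray. The storage is shared
// with the pixel buffer, not copied.
let array = MLMultiArray(pixelBuffer: buffer, shape: [512, 512])
```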

For input features, using these objects in an MLFeatureValue is enough to take advantage of the efficient data path. When the output feature type matches the types above, Core ML automatically uses these objects in the output feature values as well.
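For example, a sketch of feeding the shared array to a model; "input" is a placeholder feature name, so substitute your model's actual input name:

```swift
import CoreML

// Hypothetical helper: run a prediction with the pixel-buffer-backed
// MLMultiArray as an input feature. No copy of the tensor data is made.
func predict(model: MLModel, sharedArray: MLMultiArray) throws -> MLFeatureProvider {
    let value = MLFeatureValue(multiArray: sharedArray)
    let provider = try MLDictionaryFeatureProvider(dictionary: ["input": value])
    return try model.prediction(from: provider)
}
```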

For output features, you can even request that the Neural Engine write into your own buffer directly. See MLPredictionOptions.outputBackings.
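A sketch of that, assuming an output feature named "output" (again a placeholder) and a caller-supplied, pixel-buffer-backed MLMultiArray:

```swift
import CoreML

// Hypothetical helper: ask Core ML to write the "output" feature directly
// into the caller's buffer via output backings.
func predictIntoBacking(model: MLModel,
                        provider: MLFeatureProvider,
                        outputArray: MLMultiArray) throws -> MLFeatureProvider {
    let options = MLPredictionOptions()
    options.outputBackings = ["output": outputArray]
    // When the backing is compatible, the returned "output" feature shares
    // outputArray's storage, so no copy occurs.
    return try model.prediction(from: provider, options: options)
}
```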

You can create an MTLTexture view into the pixel buffer with CVMetalTextureCache.
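A sketch of that mapping; .r16Float is the Metal pixel format corresponding to OneComponent16Half:

```swift
import CoreVideo
import Metal

// Sketch: wrap the shared pixel buffer in an MTLTexture via a texture cache.
func makeTexture(from pixelBuffer: CVPixelBuffer,
                 device: MTLDevice) -> (CVMetalTexture, MTLTexture)? {
    var cache: CVMetalTextureCache?
    guard CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, device, nil, &cache) == kCVReturnSuccess,
          let textureCache = cache else { return nil }

    var cvTexture: CVMetalTexture?
    let status = CVMetalTextureCacheCreateTextureFromImage(
        kCFAllocatorDefault,
        textureCache,
        pixelBuffer,
        nil,
        .r16Float,               // matches kCVPixelFormatType_OneComponent16Half
        CVPixelBufferGetWidth(pixelBuffer),
        CVPixelBufferGetHeight(pixelBuffer),
        0,                       // plane index; the buffer is single-plane
        &cvTexture
    )
    guard status == kCVReturnSuccess, let cv = cvTexture,
          let texture = CVMetalTextureGetTexture(cv) else { return nil }
    // Return the CVMetalTexture too: it must outlive any GPU work that
    // reads or writes the MTLTexture.
    return (cv, texture)
}
```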
