Is it possible to dispatch a Core ML inference evaluation as part of a display or compute shader pipeline? Or do I need to wait for the CPU to be informed that the frame has been rendered before dispatching from the CPU? Best of all would be if the inference could run on the ANE, so that the GPU is free to work on the next frame.

If the output of the GPU is in an IOSurface (in Float16 format), you can feed that to Core ML and let the ANE work on it directly without any copies. However, the CPU does get triggered today to synchronize these two computations.
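A minimal sketch of that zero-copy handoff, assuming a hypothetical model class `MyModel` with an image input named "image": the GPU renders into an IOSurface-backed, Metal-compatible CVPixelBuffer in a Float16 pixel format, and the same buffer is then handed to Core ML without copying.

```swift
import CoreML
import CoreVideo

// Create an IOSurface-backed pixel buffer that the GPU can render into
// and Core ML can read without a copy. OneComponent16Half is Float16.
var pixelBuffer: CVPixelBuffer?
let attrs: [CFString: Any] = [
    kCVPixelBufferIOSurfacePropertiesKey: [:],   // force IOSurface backing
    kCVPixelBufferMetalCompatibilityKey: true    // let Metal write into it
]
CVPixelBufferCreate(kCFAllocatorDefault,
                    512, 512,
                    kCVPixelFormatType_OneComponent16Half,
                    attrs as CFDictionary,
                    &pixelBuffer)

// ... GPU renders the frame into `pixelBuffer` via a Metal texture ...

// Feed the same buffer to Core ML. `MyModel` and the "image" input name
// are assumptions for illustration; substitute your generated model class.
let model = try MyModel(configuration: MLModelConfiguration())
let input = try MLDictionaryFeatureProvider(
    dictionary: ["image": MLFeatureValue(pixelBuffer: pixelBuffer!)])
let result = try model.model.prediction(from: input)
```

Whether the prediction actually lands on the ANE still depends on the model's ops and the `MLModelConfiguration.computeUnits` setting.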

Core ML doesn’t support MTLSharedEvent, if that’s what’s implied here.
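Given that, the synchronization has to bounce through the CPU today. One common pattern, sketched here under the assumption that the GPU writes its output into a buffer the model reads, is to kick off the prediction from the command buffer's completion handler:

```swift
import Metal
import CoreML

// Because Core ML cannot wait on a MTLSharedEvent, synchronize on the CPU:
// run the prediction only after the GPU signals that the frame is done.
func renderThenPredict(commandBuffer: MTLCommandBuffer,
                       model: MLModel,
                       input: MLFeatureProvider) {
    commandBuffer.addCompletedHandler { _ in
        // GPU work for this frame has finished; its output is now safe to read.
        if let result = try? model.prediction(from: input) {
            // Consume `result` (e.g. hand it to the next pipeline stage).
            _ = result
        }
    }
    commandBuffer.commit()
}
```

The completion handler runs on a CPU thread, so this adds one CPU round trip per frame, but the GPU can start on the next frame while the ANE evaluates the previous one.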

Would you be able to file a feature request with a few more details about your use case, and maybe some sample code showing how you want to accomplish this? That would really help us push the API in the right direction.
