Core ML Running on Multiple Neural Engines
Question on benefits of running Core ML on multiple Neural Engines
Question about optimizing recent models for Core ML
Question on calculating FPS based on model latency
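A rough sketch of the latency-to-FPS arithmetic this question concerns: if inference runs serially, the achievable frame rate is bounded by the reciprocal of the per-frame latency. The 30 ms figure below is an assumed example, not a measured value.

```python
def max_fps(latency_ms: float) -> float:
    """Upper-bound frames per second when each frame's
    inference must finish before the next one starts."""
    return 1000.0 / latency_ms

# Assumed example: a model with 30 ms per-frame latency
# caps out near 33 FPS if inference is strictly serial.
print(round(max_fps(30.0), 1))
```

In practice, pipelining (overlapping capture, inference, and display) can push effective throughput above this serial bound, which is why measured FPS and raw latency need not match.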
Question on immediately dispatching Core ML inference as part of a display or compute shader pipeline
Question on a performance issue while training a PyTorch model using Metal
Question on which image size should be used to optimize the performance and accuracy of certain Vision APIs
Question on what needs to be taken into account when models may be run on older devices
Question on compatibility of enumerated shape inputs with IOSurface-backed MLMultiArray
Question on limitations on the number of IOSurface-backed buffers
Question on adding support for 4-channel no-copy buffers
Question on whether direct writing is supported for IOSurface-backed buffers with flexible shapes
Question on whether float16 MLMultiArrays are no-copy
Question on how to use a single IOSurface for both the neural engine and the GPU
Question on getting per-layer performance analysis for Core ML models
Question on the ability to run Core ML performance analysis on older iOS versions
Question about profiling the execution time of individual Core ML model layers
Question on how to analyze the performance of models
Question on improving inference speed of action classification predictions on video frames
Question on the performance of large ARWorldMaps
Question about a warning on retaining ARFrames
Question on limits on frequency of 4k captures of AR scenes
Question on performance implications of running ARKit at 4k