How do I handle situations where older Apple Neural Engine (ANE) versions might not support certain layers, causing the cpuAndNeuralEngine configuration to run extremely slowly on some devices?

MLComputeUnits.all is the default option, and we recommend using it in most cases.

Core ML tries to optimize for latency while utilizing all the available compute units. MLComputeUnits.cpuAndNeuralEngine is helpful when your app uses the GPU for pre- or post-processing and you would like Core ML not to dispatch the model to the GPU. Otherwise, MLComputeUnits.cpuAndNeuralEngine behaves very similarly to MLComputeUnits.all.
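As a minimal sketch, the compute units are selected through MLModelConfiguration at model load time; `MyModel` below is a placeholder for your own generated model class, and note that `.cpuAndNeuralEngine` requires iOS 16 / macOS 13 or later:

```swift
import CoreML

// Restrict Core ML to the CPU and Neural Engine so the GPU stays
// free for the app's own pre-/post-processing work.
// "MyModel" is a placeholder for your generated model class.
let config = MLModelConfiguration()
config.computeUnits = .cpuAndNeuralEngine  // default is .all

do {
    let model = try MyModel(configuration: config)
    // Use `model` for predictions as usual.
} catch {
    print("Failed to load model: \(error)")
}
```

On devices where the Neural Engine cannot run a given layer, Core ML falls back to the CPU for that segment, which is why this option can be slower than `.all` on some hardware.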

If you have a model that runs much slower on certain devices, we recommend filing feedback with the model and the specific device(s) included.
