We're facing some challenges converting our AI models to Core ML: some 'state-of-the-art' operations aren't fully supported, and we're considering running them with a different approach. Would it be feasible to have a model in C++ and leverage the GPU power of the devices? If so, how? Is there any workaround for torch.stft and torch.istft?

A composite operator may help you convert these operations:

https://coremltools.readme.io/docs/composite-operators

In some cases, you can also supply a custom operator, but to leverage the full Core ML stack it's best to see if you can represent the functionality as a composite op first.
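For torch.stft specifically, a composite op can decompose the transform into MIL primitives that Core ML does support, for example a strided 1-D convolution against a windowed DFT basis. The following is a minimal sketch, not a drop-in implementation: it assumes the model calls torch.stft with center=False, onesided=True, return_complex=False, and an explicit window, and the node-input ordering should be verified against your torch version:

```python
import numpy as np
from coremltools.converters.mil import Builder as mb
from coremltools.converters.mil.frontend.torch.torch_op_registry import register_torch_op

@register_torch_op
def stft(context, node):
    # Assumed input ordering: (input, n_fft, hop_length, win_length,
    # window, ...). Verify against the traced graph for your torch version.
    x = context[node.inputs[0]]                     # (batch, time)
    n_fft = int(context[node.inputs[1]].val)
    hop = int(context[node.inputs[2]].val)
    window = context[node.inputs[4]].val            # (n_fft,), assumed present

    # Build conv kernels from the windowed DFT basis:
    # real-part rows first, then imaginary-part rows.
    freqs = n_fft // 2 + 1
    k = np.arange(freqs)[:, None]
    n = np.arange(n_fft)[None, :]
    angle = 2.0 * np.pi * k * n / n_fft
    real_k = (np.cos(angle) * window).astype(np.float32)
    imag_k = (-np.sin(angle) * window).astype(np.float32)
    weight = np.concatenate([real_k, imag_k], axis=0)[:, None, :]  # (2F, 1, n_fft)

    # Frame + window + DFT in one strided convolution.
    x = mb.expand_dims(x=x, axes=[1])               # (batch, 1, time)
    y = mb.conv(x=x, weight=weight, strides=[hop],
                pad_type="valid")                   # (batch, 2F, frames)

    # Rearrange to torch.stft's (batch, freqs, frames, 2) layout so
    # downstream ops line up; 0 in the shape copies the batch dim.
    y = mb.reshape(x=y, shape=[0, 2, freqs, -1])    # (batch, 2, F, frames)
    y = mb.transpose(x=y, perm=[0, 2, 3, 1], name=node.name)
    context.add(y)
```

torch.istft could in principle be handled the same way, using a transposed convolution against the inverse basis plus overlap-add normalization, though that is more involved.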

Are the FFT operations in the beginning (pre-processing) stage of the model? If so, you can accelerate the rest of the model by converting it to Core ML, and implement the FFT operation yourself using BNNS or Metal.
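For example, here is a minimal sketch of that split (the PostSpectrogram module, shapes, and input name are placeholders for your own network): trace and convert only the part of the model that runs after the FFT, with the spectrogram itself as the Core ML input:

```python
import torch
import coremltools as ct

class PostSpectrogram(torch.nn.Module):
    """Hypothetical stand-in for everything after the FFT stage."""
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Conv2d(1, 16, kernel_size=3),
            torch.nn.ReLU(),
        )

    def forward(self, spectrogram):        # (batch, 1, freqs, frames)
        return self.net(spectrogram)

model = PostSpectrogram().eval()
example = torch.rand(1, 1, 257, 100)       # assumed spectrogram shape
traced = torch.jit.trace(model, example)

mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(name="spectrogram", shape=example.shape)],
    convert_to="mlprogram",
)
mlmodel.save("post_spectrogram.mlpackage")

# At runtime, compute the spectrogram in-app (e.g. with vDSP/BNNS or a
# Metal kernel) and feed it to this model as the "spectrogram" input.
```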

In any case, submitting a feedback request with your use case would be great.
