GPU acceleration and federated learning are two very appealing approaches for large-scale training (or even training at the edge across multiple mobile devices). Is there any special provision in the MPSGraph framework to enable or enhance such functionality?

MPSGraph should run just fine on iOS and iPadOS. There are no special pre-built functions for techniques like private federated learning (PFL), but using, for example, the random-number generators that MPSGraph provides, you should be able to assemble these operations from basic building blocks.
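To make "assemble from basic building blocks" concrete, here is a minimal, framework-agnostic sketch (in Python, for readability) of the kind of private-update step this refers to: clip a per-device gradient to a norm bound, then add noise drawn from a random generator. The function name, parameters, and constants are illustrative, not part of MPSGraph; in MPSGraph you would express the same clipping, elementwise math, and random sampling as graph ops.

```python
import random

def dp_noised_update(gradients, clip_norm=1.0, noise_scale=0.1):
    """Clip a gradient vector to at most clip_norm, then add Gaussian noise.

    Hypothetical sketch of a privacy-style local update; the equivalent
    MPSGraph version would use its tensor ops and random-number generators.
    """
    # L2 norm of the raw gradient
    norm = sum(g * g for g in gradients) ** 0.5
    # Scale down only if the norm exceeds the clip bound
    scale = min(1.0, clip_norm / max(norm, 1e-12))
    clipped = [g * scale for g in gradients]
    # Add independent Gaussian noise to each component
    return [g + random.gauss(0.0, noise_scale) for g in clipped]
```

With `noise_scale=0.0` the function reduces to plain gradient clipping, which makes the clipping behavior easy to check in isolation.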

Then, as long as you can aggregate the gradients or other weight updates across the network (something outside the scope of MPSGraph), you should be able to do this.
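The aggregation step itself is just math once the updates have been transported; a FedAvg-style weighted average is the usual choice. Below is a short Python sketch of that server-side computation, with hypothetical names and an optional per-client weighting (e.g. by local sample count). The networking that delivers the updates is, as noted, out of scope.

```python
def federated_average(client_updates, client_weights=None):
    """Weighted average of per-client weight updates (FedAvg-style sketch).

    client_updates: list of equal-length update vectors, one per device.
    client_weights: optional per-client weights, e.g. local sample counts;
                    defaults to a plain unweighted mean.
    """
    n = len(client_updates)
    if client_weights is None:
        client_weights = [1.0] * n
    total = sum(client_weights)
    dim = len(client_updates[0])
    avg = [0.0] * dim
    # Accumulate each client's contribution, scaled by its weight share
    for update, w in zip(client_updates, client_weights):
        for i, v in enumerate(update):
            avg[i] += (w / total) * v
    return avg
```

The averaged vector would then be applied to the shared model and redistributed to the devices for the next round.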

But again, quite a bit of manual work is needed.
