I'm part of a team building an app that aims to identify and recognise faces in a collection of photos. So far I've had success using the Photos and Vision frameworks to find and isolate faces in photos, but we're currently sending those faces to Amazon's Rekognition service to compare each face against a set of others and either associate it with an existing face or create a new face model. If I wanted to move this kind of modeling onto the device itself (rather than going through a network request to a third-party service), could you guide me on where to start? I'm assuming I could do the same thing locally on device using Apple frameworks?
We do not offer on-device face recognition solutions.
Generally speaking, you would need to either find (or train, if you have the data and the know-how) a face recognition model, which could then be run on-device through Core ML once converted into that format. Such models often return a descriptor (an embedding vector), which can be compared against other descriptors to produce a distance. How best to measure that distance is usually tied to how the face recognition model was trained.
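For illustration, here's a minimal sketch of how that pipeline could look, assuming you've already obtained and converted a model whose single output is a fixed-length MLMultiArray descriptor. The cosine metric and the 0.4 threshold are illustrative assumptions, not recommendations; you'd pick both to suit whichever model you end up with.

```swift
import Foundation
import CoreGraphics
import CoreML
import Vision

// Run a face crop through a converted Core ML embedding model and
// return its descriptor as a plain [Float] for distance math.
// Assumes the model's single output is a fixed-length MLMultiArray.
func embedding(for faceCrop: CGImage, model: VNCoreMLModel) throws -> [Float] {
    let request = VNCoreMLRequest(model: model)
    request.imageCropAndScaleOption = .scaleFill
    try VNImageRequestHandler(cgImage: faceCrop, options: [:]).perform([request])
    guard let observation = request.results?.first as? VNCoreMLFeatureValueObservation,
          let multiArray = observation.featureValue.multiArrayValue else {
        throw NSError(domain: "FaceEmbedding", code: -1)
    }
    return (0..<multiArray.count).map { Float(truncating: multiArray[$0]) }
}

// Cosine distance is one common choice of metric; some models are
// trained for Euclidean distance instead, so check before committing.
func cosineDistance(_ a: [Float], _ b: [Float]) -> Float {
    let dot = zip(a, b).map(*).reduce(0, +)
    let magA = a.map { $0 * $0 }.reduce(0, +).squareRoot()
    let magB = b.map { $0 * $0 }.reduce(0, +).squareRoot()
    return 1 - dot / (magA * magB)
}

// Nearest-neighbour match against already-enrolled descriptors.
// Returns the matching identity, or nil if the face looks new;
// the threshold is a placeholder you'd tune for your chosen model.
func match(_ descriptor: [Float],
           against enrolled: [(id: UUID, descriptor: [Float])],
           threshold: Float = 0.4) -> UUID? {
    guard let best = enrolled.min(by: {
        cosineDistance(descriptor, $0.descriptor) < cosineDistance(descriptor, $1.descriptor)
    }), cosineDistance(descriptor, best.descriptor) < threshold else {
        return nil
    }
    return best.id
}
```

From there, replicating what Rekognition does for you is a nearest-neighbour search over your stored descriptors: if the closest one falls within your chosen threshold, associate the face with that identity; otherwise enroll the new descriptor as a new face.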
You may file a Feedback Assistant request if you'd like Apple to offer on-device face recognition in the future.