In the "Optimize your Core ML usage" session, the presenter, Ben, explains that he measured a latency of 22ms using the new performance metrics, which gives him a running frame rate of 45 frames per second. How did he arrive at that number, and how can I use my own performance metrics to determine frames per second as well?

The number is just an upper-bound estimate based on:

1000 ms / 22 ms ≈ 45 predictions per second

Such estimates help us understand how much headroom is left for other operations while still meeting a real-time requirement (e.g., 30 fps).
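The arithmetic above can be sketched as a small helper. This is just an illustration of the calculation, not part of any Core ML API; the function names are made up for this example:

```python
# Hypothetical helpers illustrating the latency-to-frame-rate math.

def max_predictions_per_second(latency_ms: float) -> float:
    """Upper-bound rate, assuming predictions run back to back."""
    return 1000.0 / latency_ms

def headroom_ms(latency_ms: float, target_fps: float) -> float:
    """Time left per frame for other work at the target frame rate."""
    return 1000.0 / target_fps - latency_ms

rate = max_predictions_per_second(22.0)  # 1000 / 22 ≈ 45.45
room = headroom_ms(22.0, 30.0)           # 33.33 - 22 ≈ 11.33 ms per frame
print(f"~{rate:.0f} predictions/s, {room:.2f} ms headroom per frame at 30 fps")
```

Note that this is a best-case figure: it assumes predictions run one after another with no gaps, so real throughput will typically be somewhat lower once other per-frame work is accounted for.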
