We currently use a CoreML model with a C++ framework to handle initialization parameters in our processing queue (how long to hold an object, how long an object should be in frame, etc.) and then run the ML model on the image captured with those parameters. Is Vision a better alternative than running our own initializers like that? Can we specify with Vision the retention time of images for processing images asynchronously? What is best practice there? Thank you!

I'm not sure about C++ in terms of its retention behavior, but as long as you hold a VNImageRequestHandler, the image will be held.
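A minimal Swift sketch of that retention behavior follows; the `FrameProcessor` type and its `pendingHandlers` property are hypothetical scaffolding, not part of the Vision API. The idea is simply that retaining the `VNImageRequestHandler` keeps the wrapped image buffer alive until your asynchronous request completes:

```swift
import Vision
import CoreVideo

// Hypothetical wrapper illustrating how holding a VNImageRequestHandler
// keeps its image alive during asynchronous processing.
final class FrameProcessor {
    // Retaining the handler retains the underlying image buffer.
    // (Assumes process(pixelBuffer:) is called on the main thread,
    // so mutations of this array stay on one queue.)
    private var pendingHandlers: [VNImageRequestHandler] = []

    func process(pixelBuffer: CVPixelBuffer) {
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
        pendingHandlers.append(handler) // keep the image alive until we're done

        // Any Vision request works here; rectangle detection is just an example.
        let request = VNDetectRectanglesRequest { [weak self] request, error in
            // Release the handler (and thus the image) once processing finishes.
            DispatchQueue.main.async {
                self?.pendingHandlers.removeAll { $0 === handler }
            }
            guard let results = request.results as? [VNRectangleObservation] else { return }
            print("Detected \(results.count) rectangle(s)")
        }

        DispatchQueue.global(qos: .userInitiated).async {
            do {
                try handler.perform([request])
            } catch {
                print("Vision request failed: \(error)")
            }
        }
    }
}
```

In other words, you control the retention time yourself: the image lives exactly as long as something in your code holds a reference to the handler.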
