How Not to Develop an “Umm” Detector Using Create ML
Everyone loves to be constantly reminded of their faults! That’s why they need an “umm” detector in their lives. During this talk, I will go through a few attempts at creating one before landing on an approach that works: training a model with Create ML and importing the resulting Core ML model into an app.
Technical difficulties and hilarity ensue.
360|iDev 2022
How Not to Develop an “Umm” Detector Using Create ML
Speaking is difficult, especially when performed within earshot of others. Many of us use filler sounds like "umm" or "uhhh" while our brains process the next thing we want to say. But most of us wish we wouldn’t – especially the podcasters among us.
The phone in your pocket has a microphone and a neural engine. How hard could it possibly be to program it to give you a mild electric shock every time you say "umm"?
This talk will be a journey through a series of experimental attempts to create an Umm Detector. It will cover a number of Machine Learning-related topics, ranging from Core ML to Apple's easy-to-use speech recognition API, along with the thought process behind using each method to detect "umm"s. The talk will also cover, step by step, how to train a Machine Learning model using Create ML. By the end of the talk, you should have some idea of how NOT to create an Umm Detector and at least one way that works... at least... On My Machine™.
Also, machine learning is awesome... when used responsibly.
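To give a flavor of the Create ML step described above, here is a minimal training sketch for a sound classifier built from labeled audio clips. The folder layout, class labels, and file paths are placeholders I've assumed for illustration, not the exact ones from the talk.

```swift
// Minimal Create ML sketch (macOS playground), assuming a folder of labeled
// audio clips such as Sounds/umm/*.wav and Sounds/speech/*.wav.
import CreateML
import Foundation

let trainingDir = URL(fileURLWithPath: "/path/to/Sounds")  // hypothetical path

// Train a sound classifier from the labeled directories.
let classifier = try MLSoundClassifier(
    trainingData: .labeledDirectories(at: trainingDir)
)

// Check how it performs on the automatically held-out validation data.
print(classifier.validationMetrics)

// Export the Core ML model so it can be dropped into an iOS app.
try classifier.write(to: URL(fileURLWithPath: "/path/to/UmmDetector.mlmodel"))
```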
In this talk, I go through the process of converting a PyTorch model I found on GitHub to Core ML and cleaning it up. I then integrate it into a simple app, which uses the model to convert live video to anime!
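As a rough idea of what integrating the converted model into a simple app might look like, here is a sketch that pushes one camera frame through a Core ML model via Vision. The `AnimeModel` class name is a stand-in for whatever class Xcode generates from the converted model; the app shown in the talk may wire this up differently.

```swift
import CoreML
import Vision

// "AnimeModel" is a placeholder for the Xcode-generated model class;
// build the Vision wrapper once and reuse it for every frame.
let animeModel = try VNCoreMLModel(
    for: AnimeModel(configuration: MLModelConfiguration()).model
)

// Run one camera frame through the model and hand back the stylized frame.
func stylize(_ frame: CVPixelBuffer, completion: @escaping (CVPixelBuffer?) -> Void) {
    let request = VNCoreMLRequest(model: animeModel) { request, _ in
        // Image-to-image models come back as pixel buffer observations.
        completion((request.results?.first as? VNPixelBufferObservation)?.pixelBuffer)
    }
    request.imageCropAndScaleOption = .scaleFill

    let handler = VNImageRequestHandler(cvPixelBuffer: frame, options: [:])
    try? handler.perform([request])
}
```

For live video, something like `stylize(_:completion:)` would be called from the `AVCaptureVideoDataOutput` delegate for each incoming frame.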
This is an updated version of my 360|iDev talk of the same name. The talk was prerecorded because NSSpain 2021 was completely remote. In it, I go through several computer vision concepts and show how to easily implement them in an app using Apple's Vision framework. My goal is to show how accessible computer vision can be and, hopefully, to inspire ideas for new apps or features that use it.
Also, computer vision is still awesome.
360|iDev 2021
Computer Vision: Not Just For Breakfast Anymore
Computer Vision may seem like a daunting subject, but Apple’s Vision framework makes a lot of complicated algorithms accessible to every iOS developer. Apple continues to push Augmented Reality (AR), and with their rumored, sure-to-come-out-any-day AR glasses, competence in computer vision will only become more important going forward. In this talk, we will journey through a silly computer vision-related game idea, but along the way you will learn some serious concepts. By the end, you should have some idea of what’s easily possible using the Vision framework and, hopefully, be able to find some ways to integrate it into future projects.
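As a taste of how little code the Vision framework asks for, here is a small face-detection sketch. The specific request is illustrative only; the game in the talk may rely on other Vision requests entirely.

```swift
import UIKit
import Vision

// Find face bounding boxes in an image; results arrive as VNFaceObservations
// with normalized (0–1) coordinates relative to the image.
func detectFaces(in image: UIImage, completion: @escaping ([VNFaceObservation]) -> Void) {
    guard let cgImage = image.cgImage else { return completion([]) }

    let request = VNDetectFaceRectanglesRequest { request, _ in
        completion(request.results as? [VNFaceObservation] ?? [])
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    DispatchQueue.global(qos: .userInitiated).async {
        try? handler.perform([request])
    }
}
```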