WWDC 2021 is only a week away from today (May 31, 2021). The SwiftUI framework has been out for about two years and has gained tremendous momentum in the iOS developer community, but it is still at a relatively early stage. One of the most significant issues for AI and machine learning practitioners like myself is that there are no native SwiftUI views for camera-related applications and use cases.

In this article, I will show you how to use SwiftUI to wrap UIImagePickerController from the UIKit framework and build an app that tells you whether something is a hot dog or not. With it, we can either pick an existing image from the photo album or take a new picture with the built-in camera on iOS. The complete code repo can be downloaded on GitHub:

The original idea came from an episode of Silicon Valley. If you have not watched the show, here is a quick snippet of what Jing Yang's app is about. As you can see, Jing Yang's app has a fundamental flaw: it can only tell whether an object is a hot dog or not, so a pizza is simply classified as "not hot dog." We will extend our app's prediction categories to a few thousand everyday items, such as different fruits, and significantly broaden SeeFood's use cases.

In ContentView.swift, let's add two system images that bring up a sheet with the image picker interface when we tap either of them. Create a new file, ImagePicker.swift, and import both SwiftUI and UIKit. Next, let's add a placeholder for the image we will feed into the ML model. Rough sketches of these steps follow below.
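As an illustration of the ContentView step, here is a minimal sketch that combines the two system images, the sheet presentation, and the image placeholder. The state names (`inputImage`, `showPicker`, `sourceType`), the SF Symbol choices, and the layout are my assumptions rather than the author's exact code, and it relies on the `ImagePicker` wrapper sketched in the next block.

```swift
import SwiftUI
import UIKit

struct ContentView: View {
    // Placeholder state for the image we will feed into the ML model.
    @State private var inputImage: UIImage?
    @State private var showPicker = false
    @State private var sourceType: UIImagePickerController.SourceType = .photoLibrary

    var body: some View {
        VStack(spacing: 24) {
            // Show the selected photo, or a placeholder until one is chosen.
            if let inputImage = inputImage {
                Image(uiImage: inputImage)
                    .resizable()
                    .scaledToFit()
            } else {
                Image(systemName: "photo")
                    .resizable()
                    .scaledToFit()
                    .foregroundColor(.secondary)
                    .padding(60)
            }

            HStack(spacing: 48) {
                // Tap the camera icon to take a new picture.
                Image(systemName: "camera")
                    .font(.largeTitle)
                    .onTapGesture {
                        sourceType = .camera
                        showPicker = true
                    }
                // Tap the photo icon to pick an existing image from the album.
                Image(systemName: "photo.on.rectangle")
                    .font(.largeTitle)
                    .onTapGesture {
                        sourceType = .photoLibrary
                        showPicker = true
                    }
            }
        }
        .padding()
        .sheet(isPresented: $showPicker) {
            ImagePicker(sourceType: sourceType, selectedImage: $inputImage)
        }
    }
}
```

Plain Buttons with Image labels would work just as well; tap gestures on the system images are simply one way to keep the layout minimal.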
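For ImagePicker.swift, a minimal sketch of wrapping UIImagePickerController with UIViewControllerRepresentable might look like the following. The coordinator pattern and the `selectedImage` binding name are my own choices, not necessarily the author's.

```swift
import SwiftUI
import UIKit

// SwiftUI wrapper around UIKit's UIImagePickerController.
struct ImagePicker: UIViewControllerRepresentable {
    @Environment(\.presentationMode) var presentationMode
    var sourceType: UIImagePickerController.SourceType = .photoLibrary
    @Binding var selectedImage: UIImage?

    func makeUIViewController(context: Context) -> UIImagePickerController {
        let picker = UIImagePickerController()
        picker.sourceType = sourceType
        picker.delegate = context.coordinator
        return picker
    }

    func updateUIViewController(_ uiViewController: UIImagePickerController, context: Context) {
        // Nothing to update after creation.
    }

    func makeCoordinator() -> Coordinator {
        Coordinator(self)
    }

    // The coordinator is the UIKit delegate; it hands the picked image back to SwiftUI.
    final class Coordinator: NSObject, UIImagePickerControllerDelegate, UINavigationControllerDelegate {
        var parent: ImagePicker

        init(_ parent: ImagePicker) {
            self.parent = parent
        }

        func imagePickerController(_ picker: UIImagePickerController,
                                   didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
            if let image = info[.originalImage] as? UIImage {
                parent.selectedImage = image
            }
            parent.presentationMode.wrappedValue.dismiss()
        }

        func imagePickerControllerDidCancel(_ picker: UIImagePickerController) {
            parent.presentationMode.wrappedValue.dismiss()
        }
    }
}
```

Note that the camera source only works on a real device (the simulator has no camera), and using it requires an NSCameraUsageDescription entry in the app's Info.plist.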
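Finally, for the prediction side mentioned earlier (hot dog or not, extended to a few thousand everyday categories), here is a hedged sketch of how a Core ML image classifier could be called through the Vision framework. It assumes a bundled model named MobileNetV2 (from Apple's Core ML model gallery); the author's actual model, labels, and decision logic may differ.

```swift
import UIKit
import CoreML
import Vision

// Hedged sketch: assumes a MobileNetV2.mlmodel file has been added to the
// Xcode project, so Xcode generates the MobileNetV2 class used below.
func classify(_ uiImage: UIImage, completion: @escaping (String) -> Void) {
    guard let cgImage = uiImage.cgImage,
          let model = try? VNCoreMLModel(for: MobileNetV2(configuration: MLModelConfiguration()).model) else {
        completion("Could not load the image or the model")
        return
    }

    let request = VNCoreMLRequest(model: model) { request, _ in
        // Take the highest-confidence label, e.g. "hotdog, hot dog, red hot".
        let top = (request.results as? [VNClassificationObservation])?.first
        let label = top?.identifier ?? "unknown"
        let isHotdog = label.lowercased().contains("hotdog") || label.lowercased().contains("hot dog")
        // Hop back to the main queue before updating any UI state.
        DispatchQueue.main.async {
            completion(isHotdog ? "Hotdog!" : "Not hotdog (\(label))")
        }
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    DispatchQueue.global(qos: .userInitiated).async {
        try? handler.perform([request])
    }
}
```

One way to hook this up would be to call `classify` from ContentView whenever `inputImage` changes and display the returned string in a Text view, but that wiring is left to the reader.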