iOS SwiftUI Integrates AI to Build a Photo Recognition App (2020 Tutorial)

Time: 2021-4-11

Humanity is entering the era of artificial intelligence. As ordinary programmers, we should not limit ourselves to CRUD work; we should also keep in touch with new technologies. In this article, I will walk you through a small demo that uses SwiftUI and Core ML to build an intelligent app that recognizes objects in photos.

Everyone can learn artificial intelligence

Originally I wanted to write about machine learning itself, but I was afraid of scaring everyone away. Rest assured: there are no formulas or deep theory in this article. Instead, a small, real example will show you how to build an intelligent application.

The goal is to build the entire recognition app in under 300 lines of code.

First look, then learn

Let's look at the finished result before we start:

[Screenshot: the finished app]

As you can see, the app shows a scrolling view that lists the photos to be recognized. I gathered a few pictures of cattle, cats, dogs, and mountains to test how well the app recognizes them.

To forge iron, one must be strong oneself

Next, let’s implement the app step by step!

Step 1: create a scrolling view that lets the user choose the photo to recognize.

[Screenshot]

1. Define an array that stores the names of the photos to be recognized. The names correspond to images added to the project's asset catalog.

// Names of the photos (from the asset catalog) to be recognized
 let images = ["niu", "cat1", "dog", "tree", "mountains"]

2. Build a horizontal ScrollView that lists the images and draws an orange border around the one the user taps:


// Horizontal scroll view listing the candidate photos;
// the orange border highlights the currently selected one
VStack {
    ScrollView([.horizontal]) {
        HStack {
            ForEach(self.images, id: \.self) { name in
                Image(name)
                    .resizable()
                    .frame(width: 300, height: 300)
                    .padding()
                    .onTapGesture {
                        // Remember which photo the user tapped
                        self.selectedImage = name
                    }
                    .border(Color.orange, width: self.selectedImage == name ? 10 : 0)
            }
        }
    }
}

Step 2: complete the whole page.

import SwiftUI

struct ContentView: View {
    
    let images = ["niu","cat1","dog","tree","mountains"]
    @State private var selectedImage = ""
    
    @ObservedObject private var imageDetectionVM: ImageDetectionViewModel
    private var imageDetectionManager: ImageDetectionManager
    
    init() {
        self.imageDetectionManager = ImageDetectionManager()
        self.imageDetectionVM = ImageDetectionViewModel(manager: self.imageDetectionManager)
    }
    
    var body: some View {
        NavigationView {
            VStack {
                HStack {
                    Text("Recognition result:")
                        .font(.system(size: 26))
                        .padding()
                    
                    Text(self.imageDetectionVM.predictionLabel)
                        .font(.system(size: 26))
                }
                
                VStack {
                    ScrollView([.horizontal]) {
                        HStack {
                            ForEach(self.images,id: \.self) { name in
                                Image(name)
                                    .resizable()
                                    .frame(width: 300, height: 300)
                                    .padding()
                                    .onTapGesture {
                                        self.selectedImage = name
                                }.border(Color.orange, width: self.selectedImage == name ? 10 : 0)
                            }
                        }
                    }
                    
                    Button("Intelligent recognition") {
                        self.imageDetectionVM.detect(self.selectedImage)
                    }.padding()
                        .background(Color.orange)
                        .foregroundColor(Color.white)
                        .cornerRadius(10)
                        .padding()
                    
                    Text(self.imageDetectionVM.predictionLabel)
                        .font(.system(size: 26))
                    
                    
                }
            }
                
            .navigationBarTitle("Core ML")
            
        }
    }
}
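
If you want to check this page in Xcode's preview canvas, you can append the standard SwiftUI preview boilerplate at the end of the file (it is not part of the original listing):

#if DEBUG
struct ContentView_Previews: PreviewProvider {
    static var previews: some View {
        ContentView()
    }
}
#endif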

Step 3: build the view model for photo recognition.

import Foundation
import SwiftUI
import Combine

class ImageDetectionViewModel: ObservableObject {
    
    var name: String = ""
    var manager: ImageDetectionManager
    
    @Published var predictionLabel: String = ""
    
    init(manager: ImageDetectionManager) {
        self.manager = manager
    }
    
    func detect(_ name: String) {
        
        let sourceImage = UIImage(named: name)
        
        // Resnet50 expects a 224 x 224 input, so resize the photo first
        guard let resizedImage = sourceImage?.resizeImage(targetSize: CGSize(width: 224, height: 224)) else {
            fatalError("Unable to resize the image!")
        }
        
        // Run the model and publish the predicted label to the view
        if let label = self.manager.detect(resizedImage) {
            self.predictionLabel = label
        }
        
    }
    
}
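
The view model calls a resizeImage(targetSize:) helper on UIImage that the article never lists. Here is a minimal sketch of what such an extension could look like, assuming a plain UIGraphicsImageRenderer redraw; the method name comes from the call site above.

import UIKit

extension UIImage {
    
    // Redraw the image at the requested size (224 x 224 is what Resnet50 expects)
    func resizeImage(targetSize: CGSize) -> UIImage? {
        let renderer = UIGraphicsImageRenderer(size: targetSize)
        return renderer.image { _ in
            self.draw(in: CGRect(origin: .zero, size: targetSize))
        }
    }
}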

Step 4: implement the recognition business logic.

import Foundation
import CoreML
import UIKit

class ImageDetectionManager {
    
    // Resnet50 is the class Xcode generates from the Resnet50.mlmodel file added to the project
    let model = Resnet50()
    
    func detect(_ img: UIImage) -> String? {
        
        // Convert the UIImage to a CVPixelBuffer and run the Core ML prediction
        guard let pixelBuffer = img.toCVPixelBuffer(),
            let prediction = try? model.prediction(image: pixelBuffer) else {
                return nil
        }
        
        // classLabel is the human-readable name of the recognized object
        return prediction.classLabel
        
    }
    
}
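
Two pieces are assumed above and not shown in the article. First, Resnet50 is the Swift class Xcode generates automatically when you drag Apple's Resnet50.mlmodel file into the project, so there is nothing to write by hand for it. Second, the manager calls a toCVPixelBuffer() helper on UIImage; the following is a minimal sketch of such a conversion using Core Graphics, with the method name taken from the call site.

import UIKit
import CoreVideo

extension UIImage {
    
    // Copy the image's pixels into a CVPixelBuffer so Core ML can read them
    func toCVPixelBuffer() -> CVPixelBuffer? {
        let width = Int(self.size.width)
        let height = Int(self.size.height)
        let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
                     kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue] as CFDictionary
        
        var pixelBuffer: CVPixelBuffer?
        let status = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                         kCVPixelFormatType_32ARGB, attrs, &pixelBuffer)
        guard status == kCVReturnSuccess, let buffer = pixelBuffer else { return nil }
        
        CVPixelBufferLockBaseAddress(buffer, [])
        defer { CVPixelBufferUnlockBaseAddress(buffer, []) }
        
        guard let context = CGContext(data: CVPixelBufferGetBaseAddress(buffer),
                                      width: width, height: height,
                                      bitsPerComponent: 8,
                                      bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
                                      space: CGColorSpaceCreateDeviceRGB(),
                                      bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue) else { return nil }
        
        // Flip the coordinate system so the UIImage is drawn upright into the buffer
        UIGraphicsPushContext(context)
        context.translateBy(x: 0, y: CGFloat(height))
        context.scaleBy(x: 1, y: -1)
        self.draw(in: CGRect(x: 0, y: 0, width: width, height: height))
        UIGraphicsPopContext()
        
        return buffer
    }
}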

iOS AI project complete code

Download address:
https://www.jianshu.com/p/f7c…

For more SwiftUI tutorials and code, follow my column.

QQ: 3365059189
SwiftUI technology exchange QQ group: 518696470

  • Please follow my column icloudend for SwiftUI tutorials and source code

https://www.jianshu.com/c/7b3…