This iOS machine learning tutorial introduces Core ML and Vision, two new frameworks in iOS 11. You will need Xcode 9 or later and a device running iOS 11 or later.
Getting Started:
First, create a new project: open Xcode and choose File -> New -> Project -> Single View Application, then click Next. Enter 'Object Detector' as the product name, click Next, and choose a folder to save the project.
To detect objects we need access to the device camera. Open ViewController.swift and add the following import just below 'import UIKit'.
import AVKit
Now it's time to write some code. Inside the viewDidLoad() method, add the following code to access the camera.
// Create a capture session and use the default video device as input
let captureSession = AVCaptureSession()
captureSession.sessionPreset = .photo

guard let captureDevice = AVCaptureDevice.default(for: .video) else { return }
guard let input = try? AVCaptureDeviceInput(device: captureDevice) else { return }

captureSession.addInput(input)
captureSession.startRunning()
Build and run. Ouch, the app crashes. No worries: we need to add a camera usage description to Info.plist. Open Info.plist, add the key 'Privacy - Camera Usage Description' (NSCameraUsageDescription) with the value 'We need to access camera for detecting objects'. Now build and run again; this time the camera permission alert appears, so tap OK.
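iOS shows this permission alert automatically the first time the capture session starts. If you prefer to request camera access explicitly before starting the session, a minimal sketch (not part of the original tutorial) could look like this:
// Optional sketch: ask for camera permission up front instead of relying on the automatic prompt.
// You would call this before captureSession.startRunning().
AVCaptureDevice.requestAccess(for: .video) { granted in
    if granted {
        print("Camera access granted")
    } else {
        print("Camera access denied")
    }
}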
Next, show the camera feed in the view by creating a preview layer. Add these lines after the 'captureSession.startRunning()' line in viewDidLoad().
// Display the live camera feed full screen
let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
view.layer.addSublayer(previewLayer)
previewLayer.frame = view.frame
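Note that the frame is set only once here, so the preview will not resize if the view's bounds change (for example, on rotation). One option is to store the layer in a property and update it in viewDidLayoutSubviews(); the sketch below assumes a 'previewLayer' property, which is not part of the original code:
// Sketch: keep the preview layer sized to the view on layout changes.
// Assumes previewLayer is stored as a property instead of a local constant.
override func viewDidLayoutSubviews() {
    super.viewDidLayoutSubviews()
    previewLayer.frame = view.bounds
}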
Build and run; you can now see the camera running on your device. Great. To detect an object we need an image containing it, which means we need to grab frames from the camera. Add the following code at the end of viewDidLoad().
// Deliver camera frames to this view controller on a background queue
let dataOutput = AVCaptureVideoDataOutput()
dataOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "videoQueue"))
captureSession.addOutput(dataOutput)
Make the ViewController class conform to 'AVCaptureVideoDataOutputSampleBufferDelegate'.
class ViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {
    .....
}
Then add the delegate method; it is called every time the camera captures a frame.
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
}
Now it's time to start using machine learning. Open ViewController.swift and add the following import just below 'import AVKit'.
import Vision
Next, go to 'https://developer.apple.com/machine-learning/' and download the Resnet50 model file, then drag that .mlmodel file into the project.
Add the following code inside the delegate method.
// Get the pixel buffer for the current camera frame
guard let pixelBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

// Wrap the Core ML Resnet50 model so Vision can work with it
guard let model = try? VNCoreMLModel(for: Resnet50().model) else { return }

// Create a classification request and print its results
let request = VNCoreMLRequest(model: model) { (finishedReq, error) in
    print("results ==",finishedReq.results)
}

// Run the request on the camera frame
try? VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:]).perform([request])
VNImageRequestHandler performs the request on the image, and VNCoreMLRequest accepts a VNCoreMLModel; here our model is the Resnet50 model.
Build and run; you will see output in the console containing VNClassificationObservation objects.
Great, we are getting some data. Let's parse it with the following code.
Replace 'print("results ==",finishedReq.results)' with the code below.
// Cast the results to classification observations
guard let results = finishedReq.results as? [VNClassificationObservation] else { return }

// The first observation is the classification with the highest confidence
guard let firstObservation = results.first else { return }

print(firstObservation.identifier, firstObservation.confidence)
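Printing to the console is enough for this tutorial, but if you want to show the result on screen, a minimal sketch could dispatch back to the main queue and update a label. The 'resultLabel' outlet below is hypothetical and not part of the original project:
// Sketch: display the top classification in a UILabel.
// Assumes a resultLabel outlet exists on the view controller; Vision calls this
// completion handler on a background thread, so hop to the main queue for UI work.
DispatchQueue.main.async {
    self.resultLabel.text = "\(firstObservation.identifier) (\(firstObservation.confidence))"
}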
Build and run the project; the detected objects and their confidence values will appear in the console.
Download the sample project with examples:
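For reference, here is the whole ViewController.swift assembled from the snippets above (a sketch of the finished tutorial code; the downloadable sample project may differ slightly):
import UIKit
import AVKit
import Vision

class ViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {

    override func viewDidLoad() {
        super.viewDidLoad()

        // Set up the capture session with the default video device
        let captureSession = AVCaptureSession()
        captureSession.sessionPreset = .photo

        guard let captureDevice = AVCaptureDevice.default(for: .video) else { return }
        guard let input = try? AVCaptureDeviceInput(device: captureDevice) else { return }
        captureSession.addInput(input)
        captureSession.startRunning()

        // Show the live camera feed
        let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        view.layer.addSublayer(previewLayer)
        previewLayer.frame = view.frame

        // Deliver frames to captureOutput(_:didOutput:from:)
        let dataOutput = AVCaptureVideoDataOutput()
        dataOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "videoQueue"))
        captureSession.addOutput(dataOutput)
    }

    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        guard let pixelBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        guard let model = try? VNCoreMLModel(for: Resnet50().model) else { return }

        // Classify the frame and print the top result
        let request = VNCoreMLRequest(model: model) { (finishedReq, error) in
            guard let results = finishedReq.results as? [VNClassificationObservation] else { return }
            guard let firstObservation = results.first else { return }
            print(firstObservation.identifier, firstObservation.confidence)
        }

        try? VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:]).perform([request])
    }
}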