This iOS machine learning tutorial introduces Core ML and Vision, two frameworks introduced in iOS 11. You will need Xcode 9 or later and a device running iOS 11 or later.
Getting Started:
First, create a new project: open Xcode -> File -> New -> Project -> Single View Application, then click Next. Enter 'Object Detector' as the product name, click Next, and choose a folder to save the project.
To detect objects we need access to the device camera. Open ViewController.swift and add this framework import just below 'import UIKit'.
import AVFoundation
Then add the following code to viewDidLoad():
override func viewDidLoad() {
    super.viewDidLoad()
    // Create a capture session and use the photo-quality preset.
    let captureSession = AVCaptureSession()
    captureSession.sessionPreset = .photo
    // Get the default video capture device (the back camera).
    guard let captureDevice = AVCaptureDevice.default(for: .video) else {
        return
    }
    // Wrap the camera in an input and attach it to the session.
    guard let input = try? AVCaptureDeviceInput(device: captureDevice) else {
        return
    }
    captureSession.addInput(input)
    // Start the flow of frames from the camera.
    captureSession.startRunning()
}
Before running, open Info.plist and add the 'Privacy - Camera Usage Description' (NSCameraUsageDescription) key with a short message; without it, the app will crash when it tries to access the camera. Now build and run. You should see the camera permission alert; tap OK.
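For reference, this is how the entry looks in the Info.plist source; the key name is fixed, but the description string below is only an example:
<key>NSCameraUsageDescription</key>
<string>This app uses the camera to detect objects in real time.</string>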
We need to add the camera preview to the view. Create a preview layer as follows, adding these lines after the 'captureSession.startRunning()' line in viewDidLoad().
// Attach a preview layer so the camera feed is visible on screen.
let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
view.layer.addSublayer(previewLayer)
previewLayer.frame = view.frame
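One caveat: the frame assigned above is fixed, so the preview will not resize when the device rotates. A minimal sketch of keeping it in sync, assuming you promote previewLayer to a stored property on ViewController (a change this tutorial does not otherwise make):
// Assumes: var previewLayer: AVCaptureVideoPreviewLayer! is declared on
// ViewController and assigned in viewDidLoad() instead of the local constant.
override func viewDidLayoutSubviews() {
    super.viewDidLayoutSubviews()
    previewLayer.frame = view.bounds
}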
Great. To detect an object we need an image containing it, which means we have to get frames from the camera. Add the following code at the end of viewDidLoad().
// Emit a sample buffer for every camera frame on a background queue.
let dataOutput = AVCaptureVideoDataOutput()
dataOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "videoQueue"))
captureSession.addOutput(dataOutput)
For 'setSampleBufferDelegate(self, ...)' to compile, ViewController has to conform to AVCaptureVideoDataOutputSampleBufferDelegate, so update the class declaration:
class ViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {
.....
}
Then implement the delegate method; it is called once for every frame the camera captures:
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
}
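To verify that frames are arriving, you can temporarily log from inside this method (remove the line afterwards, since it fires many times per second):
// Temporary sanity check: prints once per captured frame.
print("captured a frame:", Date())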
Next, add 'import Vision' at the top of the file, below the other imports.
import Vision
We also need the model itself: download the Resnet50 Core ML model (Resnet50.mlmodel) from Apple's machine learning website and drag it into the project. Xcode automatically generates the Resnet50 class used below. Now add the following code inside the delegate method.
// Pull the pixel buffer out of the sample buffer.
guard let pixelBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
    return
}
// Wrap the Core ML model for use with Vision.
guard let model = try? VNCoreMLModel(for: Resnet50().model) else {
    return
}
// Build a request whose completion handler receives the classification results.
let request = VNCoreMLRequest(model: model) { (finishedReq, error) in
    print("results ==", finishedReq.results)
}
// Run the request against the current frame.
try? VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:]).perform([request])
VNCoreMLRequest accepts a VNCoreMLModel; here that is the Resnet50 model wrapped for Vision.
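Note that the delegate method above recreates the VNCoreMLModel on every frame, which is wasteful. A minimal sketch of creating it once instead, using a lazy property on ViewController (the name 'visionModel' is ours, not part of the original code):
// Created once on first access instead of once per frame.
lazy var visionModel: VNCoreMLModel? = try? VNCoreMLModel(for: Resnet50().model)
The delegate method can then guard on self.visionModel instead of constructing a new wrapper each time.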
Build and run; you will see output in the console containing VNClassificationObservation objects.
Great, we are getting some data. Let's parse it. Replace 'print("results ==", finishedReq.results)' with the code below.
// Vision returns the classifications sorted by confidence.
guard let results = finishedReq.results as? [VNClassificationObservation] else {
    return
}
// Take the top prediction.
guard let firstObservation = results.first else {
    return
}
print(firstObservation.identifier, firstObservation.confidence)
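To show the prediction on screen instead of in the console, remember that the delegate runs on 'videoQueue', so UI updates must be dispatched to the main queue. A minimal sketch, assuming a hypothetical UILabel outlet named resultLabel that you would add to the storyboard yourself:
// 'resultLabel' is a hypothetical label; this tutorial does not create one.
DispatchQueue.main.async {
    self.resultLabel.text = "\(firstObservation.identifier) \(Int(firstObservation.confidence * 100))%"
}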
Download the sample project with examples: