
iOS8 Core Image In Swift: Face Detection and Pixellation


iOS8 Core Image In Swift: Automatic Image Enhancement and Built-in Filters

iOS8 Core Image In Swift: More Complex Filters

iOS8 Core Image In Swift: Face Detection and Pixellation


Core Image not only ships with many built-in filters, it can also detect faces in an image. Note that Core Image only *detects*, it does not *recognize*: detection means finding regions in an image that match facial features (any face at all), while recognition means finding a specific face (say, so-and-so's face). Once Core Image finds a region matching facial features, it returns information about that feature, such as the face's bounds and the positions of the eyes and mouth.
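Besides the face's bounds, CIFaceFeature also exposes the eye and mouth positions just mentioned. As a minimal sketch of reading them (iOS 8-era Swift, matching the rest of this post, and assuming a detector and inputImage like the ones we build below):

let features = detector.featuresInImage(inputImage) as [CIFaceFeature]
for face in features {
    println("face bounds: \(face.bounds)")
    // Each position is only meaningful when its corresponding has... flag is true.
    if face.hasLeftEyePosition  { println("left eye: \(face.leftEyePosition)") }
    if face.hasRightEyePosition { println("right eye: \(face.rightEyePosition)") }
    if face.hasMouthPosition    { println("mouth: \(face.mouthPosition)") }
}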



Detecting faces and marking the detected regions

First, get the following in place:

- Create a new Single View Application project.
- Drop a UIImageView into the Storyboard, set its ContentMode to Aspect Fit, and connect it to the VC.
- Add a UIButton titled "Face Detection" and connect it to the VC's faceDetecting method.
- Turn off Auto Layout and Size Classes.

The UIImageView's frame and the VC's UI look like this:

[Screenshot: storyboard layout]

Here is the image the project uses, a neat row of faces:

[Image: the source photo, a still from The Big Bang Theory]

Then add the basic properties to the VC: a lazily loaded originalImage, and a context (the object you cannot get around in the Core Image framework):

class ViewController: UIViewController {

    @IBOutlet var imageView: UIImageView!

    lazy var originalImage: UIImage = {
        return UIImage(named: "Image")!
    }()

    lazy var context: CIContext = {
        return CIContext(options: nil)
    }()

    ......

Display originalImage in viewDidLoad:

override func viewDidLoad() {
    super.viewDidLoad()
    // Do any additional setup after loading the view, typically from a nib.
    self.imageView.image = originalImage
}

Now we are ready to implement the faceDetecting method. In the Core Image framework, the CIDetector object provides image detection; just a few APIs are needed to initialize a CIDetector and get the detection results:


@IBAction func faceDetecting() {
    let inputImage = CIImage(image: originalImage)
    let detector = CIDetector(ofType: CIDetectorTypeFace,
        context: context,
        options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
    var faceFeatures: [CIFaceFeature]!
    if let orientation: AnyObject = inputImage.properties()?[kCGImagePropertyOrientation] {
        faceFeatures = detector.featuresInImage(inputImage,
            options: [CIDetectorImageOrientation: orientation]) as [CIFaceFeature]
    } else {
        faceFeatures = detector.featuresInImage(inputImage) as [CIFaceFeature]
    }

    println(faceFeatures)
    ......

When using kCGImagePropertyOrientation you may need to import the ImageIO framework. originalImage and context both come from the lazy properties above. When creating the CIDetector you must tell it what to detect; here that is of course CIDetectorTypeFace (besides faces, CIDetector can also detect QR codes; a minimal sketch of that appears at the end of this post). Then you pass a context; multiple CIDetectors can share a single context object. The third parameter is a dictionary in which we can specify the detection accuracy: besides CIDetectorAccuracyHigh there is also CIDetectorAccuracyLow; higher accuracy detects more reliably but runs more slowly.

After creating the CIDetector, hand it the CIImage to examine. Here I check whether the CIImage carries orientation metadata: if it does, I call featuresInImage:options:, because orientation matters enormously to CIDetector and directly determines whether detection succeeds. Some images carry no orientation metadata, and for those I call featuresInImage:. Since this still from The Big Bang Theory has no orientation metadata, the featuresInImage: branch runs here, but in most cases you will probably hit the former.

featuresInImage returns an array of CIFaceFeature; a CIFaceFeature contains the face's bounds and the positions of the left eye, right eye, and mouth, so with bounds we can mark out the facial region. It is easy to write code that:

- gets all the face features;
- instantiates a UIView from each feature's bounds;
- displays those views.

Implemented, it looks like this:

@IBAction func faceDetecting() {
    let inputImage = CIImage(image: originalImage)
    let detector = CIDetector(ofType: CIDetectorTypeFace,
        context: context,
        options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
    var faceFeatures: [CIFaceFeature]!
    if let orientation: AnyObject = inputImage.properties()?[kCGImagePropertyOrientation] {
        faceFeatures = detector.featuresInImage(inputImage, options: [CIDetectorImageOrientation: orientation]) as [CIFaceFeature]
    } else {
        faceFeatures = detector.featuresInImage(inputImage) as [CIFaceFeature]
    }

    println(faceFeatures)

    for faceFeature in faceFeatures {
        let faceView = UIView(frame: faceFeature.bounds)
        faceView.layer.borderColor = UIColor.orangeColor().CGColor
        faceView.layer.borderWidth = 2
        imageView.addSubview(faceView)
    }
}

Will this work? If you run it, you get this:

[Screenshot: marker views in the wrong places]

That is because our inputImage was actually initialized from originalImage, and my originalImage is really much larger than what you see:

[Screenshot: the image at its full size]

It is actually 600 pixels wide. I named it with the @2x suffix, so it displays at 300 points, and the imageView (300 points wide) shows it in Aspect Fit mode: the image is scaled down for display but kept at full size in memory. Beyond that, CIImage's coordinate system is not the same as UIView's: CIImage uses a mathematical coordinate system, with the origin at the bottom, which from UIView's point of view is upside-down. This is because Core Image, Core Graphics, and the other frameworks came from Mac OS X, where this coordinate system has existed for years; iOS imported the frameworks directly, which solved low-level compatibility between Cocoa apps and iOS apps, but at the upper layers we have to sort it out ourselves. So what is actually happening is this:

[Screenshot: the detected rects drawn in Core Image's flipped, unscaled coordinates]
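Before touching the detection code, it may help to see the flip in isolation. A minimal sketch (the helper name and the sample numbers here are mine):

// Convert a rect from Core Image's bottom-left-origin coordinates
// to UIKit's top-left-origin coordinates, given the image height.
func flipRectToUIKit(rect: CGRect, imageHeight: CGFloat) -> CGRect {
    var flipped = rect
    flipped.origin.y = imageHeight - rect.origin.y - rect.size.height
    return flipped
}

// A 100pt-tall face whose bottom edge sits 50 up from the bottom of a
// 337-pt-tall image ends up 337 - 50 - 100 = 187 down from the top.
let face = CGRect(x: 100, y: 50, width: 100, height: 100)
println(flipRectToUIKit(face, 337))  // (100.0, 187.0, 100.0, 100.0)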
We need to do two pieces of work:

- adjust the transform so the coordinates are turned right-side up;
- scale the bounds so they fit the imageView.

Then, once again quite naturally, we write code like this:

@IBAction func faceDetecting() {
    let inputImage = CIImage(image: originalImage)
    let detector = CIDetector(ofType: CIDetectorTypeFace,
        context: context,
        options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
    var faceFeatures: [CIFaceFeature]!
    if let orientation: AnyObject = inputImage.properties()?[kCGImagePropertyOrientation] {
        faceFeatures = detector.featuresInImage(inputImage, options: [CIDetectorImageOrientation: orientation]) as [CIFaceFeature]
    } else {
        faceFeatures = detector.featuresInImage(inputImage) as [CIFaceFeature]
    }

    println(faceFeatures)

    // 1.
    let inputImageSize = inputImage.extent().size
    var transform = CGAffineTransformIdentity
    transform = CGAffineTransformScale(transform, 1, -1)
    transform = CGAffineTransformTranslate(transform, 0, -inputImageSize.height)

    for faceFeature in faceFeatures {
        var faceViewBounds = CGRectApplyAffineTransform(faceFeature.bounds, transform)
        // 2.
        let scaleTransform = CGAffineTransformMakeScale(0.5, 0.5)
        faceViewBounds = CGRectApplyAffineTransform(faceViewBounds, scaleTransform)
        let faceView = UIView(frame: faceViewBounds)
        faceView.layer.borderColor = UIColor.orangeColor().CGColor
        faceView.layer.borderWidth = 2
        imageView.addSubview(faceView)
    }
}

Now it looks fine. In step 1 we put in a transform that adjusts the coordinate system, and in step 2 we scaled the bounds (equivalent to multiplying x, y, width, and height by 0.5). Since we know the actual scale is 0.5 (the original is 600 pixels wide, the imageView 300 points), we simply hard-coded 0.5. But running it shows a slight offset:

[Screenshot: the markers offset vertically]

This is the result of having set the imageView's ContentMode to Aspect Fit:

[Screenshot: the ContentMode setting]

Generally speaking we do not stretch photos; we fit them by width and height, so we still need to handle Aspect Fit. The code above, revised:

@IBAction func faceDetecting() {
    let inputImage = CIImage(image: originalImage)
    let detector = CIDetector(ofType: CIDetectorTypeFace,
        context: context,
        options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
    var faceFeatures: [CIFaceFeature]!
    if let orientation: AnyObject = inputImage.properties()?[kCGImagePropertyOrientation] {
        faceFeatures = detector.featuresInImage(inputImage, options: [CIDetectorImageOrientation: orientation]) as [CIFaceFeature]
    } else {
        faceFeatures = detector.featuresInImage(inputImage) as [CIFaceFeature]
    }

    println(faceFeatures)

    // 1.
    let inputImageSize = inputImage.extent().size
    var transform = CGAffineTransformIdentity
    transform = CGAffineTransformScale(transform, 1, -1)
    transform = CGAffineTransformTranslate(transform, 0, -inputImageSize.height)

    for faceFeature in faceFeatures {
        var faceViewBounds = CGRectApplyAffineTransform(faceFeature.bounds, transform)
        // 2.
        var scale = min(imageView.bounds.size.width / inputImageSize.width,
            imageView.bounds.size.height / inputImageSize.height)
        var offsetX = (imageView.bounds.size.width - inputImageSize.width * scale) / 2
        var offsetY = (imageView.bounds.size.height - inputImageSize.height * scale) / 2
        faceViewBounds = CGRectApplyAffineTransform(faceViewBounds, CGAffineTransformMakeScale(scale, scale))
        faceViewBounds.origin.x += offsetX
        faceViewBounds.origin.y += offsetY
        let faceView = UIView(frame: faceViewBounds)
        faceView.layer.borderColor = UIColor.orangeColor().CGColor
        faceView.layer.borderWidth = 2
        imageView.addSubview(faceView)
    }
}

In step 2, besides computing the scale from the width and height ratios, we also compute offsets along the x and y axes so the code works whether the image is constrained by width or by height (the final division by 2 is because the scaled image is centered, so the leftover space is split evenly between top and bottom, or left and right). Build and run; here is the effect at different imageView heights:

[Screenshots: correct markers at two different heights]
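To make the arithmetic concrete, here is the same scale-and-offset computation pulled out into a standalone sketch (the helper name and the 600×337 / 300×300 sample sizes are my own assumptions):

func aspectFitTransform(imageSize: CGSize, viewSize: CGSize) -> (scale: CGFloat, offsetX: CGFloat, offsetY: CGFloat) {
    // Aspect Fit picks the smaller of the two ratios...
    let scale = min(viewSize.width / imageSize.width,
        viewSize.height / imageSize.height)
    // ...and centers the scaled image, so half the leftover space goes on each side.
    let offsetX = (viewSize.width - imageSize.width * scale) / 2
    let offsetY = (viewSize.height - imageSize.height * scale) / 2
    return (scale, offsetX, offsetY)
}

// A 600x337 image in a 300x300 view: scale = min(0.5, 0.89) = 0.5,
// offsetX = 0, offsetY = (300 - 337 * 0.5) / 2 = 65.75.
let (s, dx, dy) = aspectFitTransform(CGSize(width: 600, height: 337), CGSize(width: 300, height: 300))
println("scale=\(s), offset=(\(dx), \(dy))")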

Pixellating faces

Once faces are detected, we can do some fun things with them, such as applying a mosaic:

[Image: Apple's example of pixellated faces]

This image is from Apple's official example and shows the way to pixellate every face in a photo:

- based on the original, create a fully pixellated version of the image;
- create a mask image for the detected faces;
- use the mask to blend the fully pixellated image with the original.

Add a button titled "Mosaic" to the VC, connect its event to the VC's pixellated method, and start implementing the effect. The steps in detail:

Create the fully pixellated image

Use the CIPixellate filter, with these parameters: set inputImage to the original image, and optionally set inputScale to taste. inputScale ranges from 1 to 100; the larger the value, the bigger the mosaic blocks. The effect after this step:

[Screenshot: the fully pixellated image]
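As a standalone sketch of just this first step (the inputScale value of 20 here is only an example; in the complete listing below the scale line is left commented out, so the filter falls back to its default of 8):

// Fully pixellate the original image with CIPixellate.
let inputImage = CIImage(image: originalImage)
let pixellateFilter = CIFilter(name: "CIPixellate")
pixellateFilter.setValue(inputImage, forKey: kCIInputImageKey)
// Bigger inputScale, bigger mosaic blocks (the default is 8).
pixellateFilter.setValue(20, forKey: kCIInputScaleKey)
let fullPixellatedImage = pixellateFilter.outputImage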

Create a mask image for the detected faces

As before, use a CIDetector to find the faces, then for each face: use the CIRadialGradient filter to create a circle that surrounds the face, and use the CISourceOverCompositing filter to composite the individual masks together (there are really as many masks as there are faces). The effect after this step:

[Screenshot: the combined mask, one circle per face]

Blend the pixellated image, the mask, and the original

Use the CIBlendWithMask filter to blend the three, with these parameters: set inputImage to the pixellated image, set inputBackgroundImage to the original, and set inputMaskImage to the mask. The complete implementation:

@IBAction func pixellated() {
    // 1.
    var filter = CIFilter(name: "CIPixellate")
    println(filter.attributes())
    let inputImage = CIImage(image: originalImage)
    filter.setValue(inputImage, forKey: kCIInputImageKey)
    // filter.setValue(max(inputImage.extent().size.width, inputImage.extent().size.height) / 60, forKey: kCIInputScaleKey)
    let fullPixellatedImage = filter.outputImage
    // let cgImage = context.createCGImage(fullPixellatedImage, fromRect: fullPixellatedImage.extent())
    // imageView.image = UIImage(CGImage: cgImage)
    // 2.
    let detector = CIDetector(ofType: CIDetectorTypeFace,
        context: context,
        options: nil)
    let faceFeatures = detector.featuresInImage(inputImage) as [CIFaceFeature]
    // 3.
    var maskImage: CIImage!
    for faceFeature in faceFeatures {
        println(faceFeature.bounds)
        // 4.
        let centerX = faceFeature.bounds.origin.x + faceFeature.bounds.size.width / 2
        let centerY = faceFeature.bounds.origin.y + faceFeature.bounds.size.height / 2
        let radius = min(faceFeature.bounds.size.width, faceFeature.bounds.size.height)
        let radialGradient = CIFilter(name: "CIRadialGradient",
            withInputParameters: [
                "inputRadius0" : radius,
                "inputRadius1" : radius + 1,
                "inputColor0" : CIColor(red: 0, green: 1, blue: 0, alpha: 1),
                "inputColor1" : CIColor(red: 0, green: 0, blue: 0, alpha: 0),
                kCIInputCenterKey : CIVector(x: centerX, y: centerY)
            ])
        println(radialGradient.attributes())
        // 5.
        let radialGradientOutputImage = radialGradient.outputImage.imageByCroppingToRect(inputImage.extent())
        if maskImage == nil {
            maskImage = radialGradientOutputImage
        } else {
            println(radialGradientOutputImage)
            maskImage = CIFilter(name: "CISourceOverCompositing",
                withInputParameters: [
                    kCIInputImageKey : radialGradientOutputImage,
                    kCIInputBackgroundImageKey : maskImage
                ]).outputImage
        }
    }
    // 6.
    let blendFilter = CIFilter(name: "CIBlendWithMask")
    blendFilter.setValue(fullPixellatedImage, forKey: kCIInputImageKey)
    blendFilter.setValue(inputImage, forKey: kCIInputBackgroundImageKey)
    blendFilter.setValue(maskImage, forKey: kCIInputMaskImageKey)
    // 7.
    let blendOutputImage = blendFilter.outputImage
    let blendCGImage = context.createCGImage(blendOutputImage, fromRect: blendOutputImage.extent())
    imageView.image = UIImage(CGImage: blendCGImage)
}

I have broken it down into 7 parts:

1. Run the CIPixellate filter over the original image to get a fully pixellated version.
2. Detect the faces and keep them in faceFeatures.
3. Declare the mask image and start iterating over all the detected faces.
4. Because we want a separate mask per face, based on that face's position, first compute the face's center point as x and y coordinates, then derive a radius from the face's width or height, and use these results to initialize a CIRadialGradient filter. (I set inputColor1's alpha to 0, making everything outside the mask transparent, since I don't care about any color other than the mask's; this differs slightly from Apple's example, which sets it to 1.)
5. Because the CIRadialGradient filter produces an image of infinite extent, crop it before use (Apple's example does not crop it), then composite each face's mask into a single mask image.
6. Use the CIBlendWithMask filter to blend the pixellated image, the original, and the mask.
7. Output the result and display it on screen.

Running it:

[Screenshot: the faces pixellated]
And with that, a simple example of pixellating faces in a photo is complete.
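As noted earlier, faces are not the only thing CIDetector can find: on iOS 8 it also supports CIDetectorTypeQRCode (and CIDetectorTypeRectangle). A minimal sketch, reusing this post's context (qrImage is a hypothetical UIImage containing a QR code):

let qrDetector = CIDetector(ofType: CIDetectorTypeQRCode,
    context: context,
    options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
// qrImage is a hypothetical image containing one or more QR codes.
let qrFeatures = qrDetector.featuresInImage(CIImage(image: qrImage)) as [CIQRCodeFeature]
for qr in qrFeatures {
    // CIQRCodeFeature carries the decoded payload as well as its bounds.
    println("QR bounds: \(qr.bounds), message: \(qr.messageString)")
}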

GitHub download link

I will keep the project updated on GitHub.

UPDATED:

Careful readers will notice that the pixellated area is larger than the detected area:

[Screenshot: mosaic circle noticeably larger than the detection rect]

This is because the mosaic radius was computed without taking scaling into account. Just compute the scale first, then multiply the current radius by it to get the precise extent. Computing the scale:

var scale = min(imageView.bounds.size.width / inputImage.extent().size.width,
    imageView.bounds.size.height / inputImage.extent().size.height)

Correcting the radius:

let radius = min(faceFeature.bounds.size.width, faceFeature.bounds.size.height) * scale

The corrected mosaic effect alongside the face detection effect:

[Screenshots: corrected mosaic and face detection results]



References:

https://developer.apple.com/library/mac/documentation/graphicsimaging/conceptual/CoreImaging/ci_intro/ci_intro.html
