Face Detection in Objective-C with CIDetector

Date: 2022-04-16 15:20:35

Face recognition generally involves three steps:

1. Build a faceprint database. Face images are captured with a camera, converted into faceprint encodings, and stored in the database.

2. Capture the current face image, i.e., take a face image with a camera and generate its faceprint encoding.

3. Compare the current faceprint encoding against the encodings stored in the database (a rough sketch of this comparison step follows below).
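
As a rough illustration of step 3, the comparison stage can be thought of as measuring the distance between two encoding vectors and accepting a match below some threshold. The sketch below is conceptual only: the encoding format, the distance metric, and the threshold are all assumptions that depend on whatever model (for example, one trained with OpenCV) produces the faceprints; Core Image itself provides none of this.

#import <Foundation/Foundation.h>
#include <math.h>

// Illustrative only: assumes a faceprint is a fixed-length float vector
// produced by an external model (CIDetector does not provide encodings).
static float FaceprintDistance(const float *a, const float *b, NSUInteger length) {
    // Euclidean distance between two encoding vectors (one possible metric)
    float sum = 0.0f;
    for (NSUInteger i = 0; i < length; i++) {
        float d = a[i] - b[i];
        sum += d * d;
    }
    return sqrtf(sum);
}

static BOOL FaceprintMatches(const float *current, const float *stored, NSUInteger length) {
    // Placeholder threshold; a real value has to be tuned for the model in use
    const float kMatchThreshold = 0.6f;
    return FaceprintDistance(current, stored, length) < kMatchThreshold;
}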

Core Image in Objective-C already provides the CIDetector class. CIDetector is a feature-detection filter in the Core Image framework and is mainly used for detecting facial features; it can also report the positions of the eyes and mouth. It does not, however, extract faceprint encodings, which requires considerably more sophisticated algorithms. In other words, CIDetector can find the faces in an image, but it cannot tell you whose faces they are. Identification needs a faceprint database: extract the encoding of the current face and compare it against the stored ones. OpenCV is a good choice for that; with neural networks and other training methods it can achieve very good recognition results. For plain face detection, though, CIDetector works well, is already optimized, and is easy to use:

CIDetector *faceDetector = [CIDetector detectorOfType:CIDetectorTypeFace context:context options:@{CIDetectorAccuracy: CIDetectorAccuracyHigh}];

NSArray *faces = [faceDetector featuresInImage:image];
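
If you also want smile or eye-blink information, featuresInImage:options: accepts additional keys. A minimal sketch, reusing faceDetector and image from the snippet above (the orientation value here is an assumption and should match the actual orientation of the image):

// Request smile and eye-blink detection in addition to the face positions
NSDictionary *featureOptions = @{CIDetectorSmile: @YES,
                                 CIDetectorEyeBlink: @YES,
                                 CIDetectorImageOrientation: @1}; // 1 = "up"; adjust for the real image
NSArray *moreFaces = [faceDetector featuresInImage:image options:featureOptions];
for (CIFaceFeature *face in moreFaces) {
    // hasSmile / leftEyeClosed are only meaningful when the options above were passed
    NSLog(@"bounds: %@, smiling: %d, left eye closed: %d",
          NSStringFromCGRect(face.bounds), face.hasSmile, face.leftEyeClosed);
}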

Every face detected in the image is represented by a CIFaceFeature instance in the faces array. Each instance holds the position and size of the face; the positions of the eyes and mouth are also available when they were detected.
The detection code is short, roughly 50 lines:

- (void)faceTextByImage:(UIImage *)image {

    // The image to analyze
    CIImage *ciimage = [CIImage imageWithCGImage:image.CGImage];
    // Detection options
    NSDictionary *opts = @{CIDetectorAccuracy: CIDetectorAccuracyHigh};
    // Create a CIDetector configured for face detection
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                              context:nil
                                              options:opts];
    // Run the detection
    NSArray *features = [detector featuresInImage:ciimage];
    UIView *resultView = [[UIView alloc] initWithFrame:_imgView.frame];
    [self.view addSubview:resultView];

    // Mark the face, eyes, and mouth
    for (CIFaceFeature *faceFeature in features) {
        // Mark the face
        CGFloat faceWidth = faceFeature.bounds.size.width;
        UIView *faceView = [[UIView alloc] initWithFrame:faceFeature.bounds];
        faceView.layer.borderWidth = 1;
        faceView.layer.borderColor = [[UIColor redColor] CGColor];
        [resultView addSubview:faceView];

        // Mark the left eye
        if (faceFeature.hasLeftEyePosition) {
            UIView *leftEyeView = [[UIView alloc] initWithFrame:
                CGRectMake(faceFeature.leftEyePosition.x - faceWidth * 0.15,
                           faceFeature.leftEyePosition.y - faceWidth * 0.15,
                           faceWidth * 0.3, faceWidth * 0.3)];
            [leftEyeView setBackgroundColor:[[UIColor blueColor] colorWithAlphaComponent:0.3]];
            [leftEyeView setCenter:faceFeature.leftEyePosition];
            leftEyeView.layer.cornerRadius = faceWidth * 0.15;
            [resultView addSubview:leftEyeView];
        }

        // Mark the right eye
        if (faceFeature.hasRightEyePosition) {
            UIView *rightEyeView = [[UIView alloc] initWithFrame:
                CGRectMake(faceFeature.rightEyePosition.x - faceWidth * 0.15,
                           faceFeature.rightEyePosition.y - faceWidth * 0.15,
                           faceWidth * 0.3, faceWidth * 0.3)];
            [rightEyeView setBackgroundColor:[[UIColor blueColor] colorWithAlphaComponent:0.3]];
            [rightEyeView setCenter:faceFeature.rightEyePosition];
            rightEyeView.layer.cornerRadius = faceWidth * 0.15;
            [resultView addSubview:rightEyeView];
        }

        // Mark the mouth
        if (faceFeature.hasMouthPosition) {
            UIView *mouth = [[UIView alloc] initWithFrame:
                CGRectMake(faceFeature.mouthPosition.x - faceWidth * 0.2,
                           faceFeature.mouthPosition.y - faceWidth * 0.2,
                           faceWidth * 0.4, faceWidth * 0.4)];
            [mouth setBackgroundColor:[[UIColor greenColor] colorWithAlphaComponent:0.3]];
            [mouth setCenter:faceFeature.mouthPosition];
            mouth.layer.cornerRadius = faceWidth * 0.2;
            [resultView addSubview:mouth];
        }
    }

    // Note: the returned coordinates measure y from the bottom of the image.
    // If the image height is 300 and the left eye's y is 100, the eye is 100
    // points from the bottom, i.e. 200 points from the top in UIKit coordinates.
    // Flipping the overlay view compensates for this (see the conversion sketch
    // after this method).
    [resultView setTransform:CGAffineTransformMakeScale(1, -1)];

}
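
Instead of flipping the whole overlay view, each Core Image point can also be converted to UIKit coordinates directly. A minimal sketch, assuming the overlay has the same size as the analyzed image (if the image is displayed scaled, an extra scale factor is needed); the helper name is my own:

#import <UIKit/UIKit.h>

// Convert a point from Core Image coordinates (origin at the bottom-left)
// to UIKit coordinates (origin at the top-left).
static CGPoint CIPointToUIKit(CGPoint ciPoint, CGFloat imageHeight) {
    return CGPointMake(ciPoint.x, imageHeight - ciPoint.y);
}

// Example from the note above: with an image height of 300 and a Core Image
// y value of 100, the UIKit y value is 300 - 100 = 200. Inside the loop:
// CGPoint eyeInUIKit = CIPointToUIKit(faceFeature.leftEyePosition, image.size.height);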

Here is what the detection looks like:
(Screenshot: face detection result)
GitHub:https://github.com/FEverStar/FaceDemo