How to Implement Face Detection in an Android Project
This article explains how to implement face detection in an Android project. Many developers are unfamiliar with the topic, so the following summary should help you get something useful out of it.
(1) Background
In August 2006, Google acquired Neven Vision (a company holding more than ten image-recognition patents for mobile devices), obtaining image-recognition technology that was then integrated into Android. Android's face detection is built on the native library at android/external/neven/, exposed at the framework layer through frameworks/base/media/java/android/media/FaceDetector.java.
Limitations of the Java-layer API: A, it only accepts data in Bitmap format; B, it can only detect faces whose eyes are more than 20 pixels apart (this threshold can be changed at the framework layer); C, it only reports the position of a face (the midpoint between the eyes and the eye distance); it cannot match faces (i.e., look up a specific face).
Applications of face detection: A, adding face detection to the Camera app, so that the viewfinder can outline detected faces and, if the hardware supports it, focus on them; B, adding face-based indexing to a gallery app: browsing by face, grouping by face, and searching by face.
(2) The main methods the Neven library exposes to the upper layers:
A, android.media.FaceDetector.FaceDetector(int width, int height, int maxFaces): creates a FaceDetector configured with the size of the images to be analyzed and the maximum number of faces that can be detected. These parameters cannot be changed once the object is constructed.
B, int android.media.FaceDetector.findFaces(Bitmap bitmap, Face[] faces): finds all the faces in a given Bitmap. The supplied array is populated with a FaceDetector.Face for each face found. The bitmap must be in RGB_565 format (for now).
(3) Static image example:
This example processes a bitmap, detects the faces in it, and draws a green rectangle around each one (multiple faces get multiple rectangles). First create a new Activity; since the bitmap is drawn entirely in code, no widgets are needed in the layout.
package com.example.mydetect2;

import android.os.Bundle;
import android.app.Activity;
import android.util.Log;
import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.graphics.PointF;
import android.media.FaceDetector;        // the key face-detection class
import android.media.FaceDetector.Face;
import android.view.View;

public class MainActivity2 extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(new myView(this));  // display with a custom View
        Log.i("zhangcheng", "MainActivity2 run here");
    }

    private class myView extends View {
        private int imageWidth, imageHeight;
        private int numberOfFace = 5;            // maximum number of faces to detect
        private FaceDetector myFaceDetect;       // face-detector instance
        private FaceDetector.Face[] myFace;      // array holding the detected faces
        float myEyesDistance;                    // distance between the eyes
        int numberOfFaceDetected;                // number of faces actually detected
        Bitmap myBitmap;

        public myView(Context context) {         // View constructor, required
            super(context);
            BitmapFactory.Options bfo = new BitmapFactory.Options();
            bfo.inPreferredConfig = Bitmap.Config.RGB_565;  // decoding config: must be RGB_565
            myBitmap = BitmapFactory.decodeResource(getResources(), R.drawable.baby, bfo);
            imageWidth = myBitmap.getWidth();
            imageHeight = myBitmap.getHeight();
            myFace = new FaceDetector.Face[numberOfFace];   // allocate the face array
            myFaceDetect = new FaceDetector(imageWidth, imageHeight, numberOfFace);
            numberOfFaceDetected = myFaceDetect.findFaces(myBitmap, myFace);  // run detection
            Log.i("zhangcheng", "numberOfFaceDetected is " + numberOfFaceDetected);
        }

        @Override
        protected void onDraw(Canvas canvas) {   // override, required
            canvas.drawBitmap(myBitmap, 0, 0, null);  // draw the bitmap
            Paint myPaint = new Paint();
            myPaint.setColor(Color.GREEN);
            myPaint.setStyle(Paint.Style.STROKE);
            myPaint.setStrokeWidth(3);           // paint parameters for the overlay
            for (int i = 0; i < numberOfFaceDetected; i++) {
                Face face = myFace[i];
                PointF myMidPoint = new PointF();
                face.getMidPoint(myMidPoint);
                myEyesDistance = face.eyesDistance();  // midpoint and eye distance for each face
                canvas.drawRect(                 // bounding-box coordinates
                        (int) (myMidPoint.x - myEyesDistance),
                        (int) (myMidPoint.y - myEyesDistance),
                        (int) (myMidPoint.x + myEyesDistance),
                        (int) (myMidPoint.y + myEyesDistance),
                        myPaint);
            }
        }
    }
}
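The mapping from detector output to a bounding square in onDraw above is simple enough to isolate and check. A minimal pure-Java sketch (the FaceBox class and faceRect method are illustration names, not part of the Android API):

```java
public class FaceBox {
    // Compute {left, top, right, bottom} of a square centered on the eye midpoint,
    // with half-width equal to the eye distance, as onDraw does above.
    static int[] faceRect(float midX, float midY, float eyesDistance) {
        return new int[] {
            (int) (midX - eyesDistance),
            (int) (midY - eyesDistance),
            (int) (midX + eyesDistance),
            (int) (midY + eyesDistance)
        };
    }

    public static void main(String[] args) {
        int[] r = faceRect(100f, 80f, 20f);
        System.out.println(r[0] + "," + r[1] + "," + r[2] + "," + r[3]); // 80,60,120,100
    }
}
```

Note that the box is a square of side 2 × eyesDistance; a real face extends further below the eyes, so some apps stretch the bottom edge more than the top.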
(4) Live-preview face detection example
This flow runs in the background, with no UI and no on-screen preview, so it does not use the bitmap-resource approach above. Imports are omitted; the core code and flow are as follows:
A, open the camera and receive the raw preview frames via setPreviewCallback:
protected Camera mCameraDevice = null;  // camera instance
private long mScanBeginTime = 0;        // scan start time
private long mScanEndTime = 0;          // scan end time
private long mSpecPreviewTime = 0;      // scan duration
private int orientionOfCamera;          // mounting angle of the front camera
int numberOfFaceDetected;               // number of faces finally detected
public void startFaceDetection() {
    try {
        mCameraDevice = Camera.open(1);  // open the front camera
        if (mCameraDevice != null)
            Log.i(TAG, "open cameradevice success! ");
    } catch (Exception e) {              // Exception stands in for the specific exception types
        mCameraDevice = null;
        Log.w(TAG, "open cameraFail");
        mHandler.postDelayed(r, 5000);   // camera busy: retry every 5 s until the front camera is released (r is a Runnable defined elsewhere)
        return;
    }
    Log.i(TAG, "startFaceDetection");
    Camera.Parameters parameters = mCameraDevice.getParameters();
    setCameraDisplayOrientation(1, mCameraDevice);  // set the preview orientation
    mCameraDevice.setPreviewCallback(new PreviewCallback() {
        public void onPreviewFrame(byte[] data, Camera camera) {
            mScanEndTime = System.currentTimeMillis();        // time when the camera returned data
            mSpecPreviewTime = mScanEndTime - mScanBeginTime; // elapsed time until onPreviewFrame delivered data
            Log.i(TAG, "onPreviewFrame and mSpecPreviewTime = " + String.valueOf(mSpecPreviewTime));
            Camera.Size localSize = camera.getParameters().getPreviewSize();  // preview resolution
            YuvImage localYuvImage = new YuvImage(data, 17, localSize.width, localSize.height, null);  // 17 == ImageFormat.NV21
            ByteArrayOutputStream localByteArrayOutputStream = new ByteArrayOutputStream();
            localYuvImage.compressToJpeg(new Rect(0, 0, localSize.width, localSize.height), 80, localByteArrayOutputStream);
            // wrap the callback data as YUV, compress it to JPEG at the preview size, then pull the bytes out of the stream
            byte[] arrayOfByte = localByteArrayOutputStream.toByteArray();
            CameraRelease();  // release the camera early so other clients of the camera device are not blocked
            StoreByteImage(arrayOfByte);
        }
    });
    mCameraDevice.startPreview();  // may also be placed after the callback registration; reaching here triggers the setPreviewCallback above
    mScanBeginTime = System.currentTimeMillis();  // record when scanning started
}
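The byte[] delivered to onPreviewFrame is in NV21 format (the constant 17 above is ImageFormat.NV21): a full-resolution Y plane followed by interleaved, 2×2-subsampled VU data, so the buffer holds width × height × 3/2 bytes. A minimal sketch of that size calculation, with no Android dependencies (the Nv21Size name is an illustration, not an Android API):

```java
public class Nv21Size {
    // NV21 layout: width*height luma bytes, then width*height/2 interleaved chroma bytes.
    static int bufferLength(int width, int height) {
        return width * height * 3 / 2;
    }

    public static void main(String[] args) {
        System.out.println(bufferLength(640, 480)); // 460800
    }
}
```

Knowing this length is useful for sanity-checking the preview buffers before handing them to YuvImage.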
B, the function that sets the preview orientation. This function matters: the orientation directly determines the rotation applied when the bitmap is later reconstructed through a matrix, and therefore whether face detection ultimately succeeds:
public void setCameraDisplayOrientation(int paramInt, Camera paramCamera) {
    CameraInfo info = new CameraInfo();
    Camera.getCameraInfo(paramInt, info);
    int rotation = ((WindowManager) getSystemService("window")).getDefaultDisplay().getRotation();  // current display rotation
    int degrees = 0;
    Log.i(TAG, "getRotation's rotation is " + String.valueOf(rotation));
    switch (rotation) {
        case Surface.ROTATION_0:   degrees = 0;   break;
        case Surface.ROTATION_90:  degrees = 90;  break;
        case Surface.ROTATION_180: degrees = 180; break;
        case Surface.ROTATION_270: degrees = 270; break;
    }
    orientionOfCamera = info.orientation;  // mounting rotation of the camera
    int result;
    if (info.facing == Camera.CameraInfo.CAMERA_FACING_FRONT) {
        result = (info.orientation + degrees) % 360;
        result = (360 - result) % 360;     // compensate the mirror
    } else {                               // back-facing
        result = (info.orientation - degrees + 360) % 360;
    }
    paramCamera.setDisplayOrientation(result);
    // note the front/back difference: the front camera delivers a mirrored image;
    // this is the standard demo from the SDK documentation
}
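The compensation arithmetic above is plain modular math and can be verified in isolation. A minimal sketch (the Orientation class and displayOrientation method are illustration names):

```java
public class Orientation {
    // Mirrors the logic above: cameraOrientation is the sensor mounting angle,
    // displayDegrees the current screen rotation, front selects the mirrored formula.
    static int displayOrientation(int cameraOrientation, int displayDegrees, boolean front) {
        if (front) {
            int result = (cameraOrientation + displayDegrees) % 360;
            return (360 - result) % 360;  // compensate the mirror
        }
        return (cameraOrientation - displayDegrees + 360) % 360;  // back-facing
    }

    public static void main(String[] args) {
        // A front camera mounted at 270 degrees on an upright phone needs a 90-degree preview rotation.
        System.out.println(displayOrientation(270, 0, true));  // 90
        // A back camera mounted at 90 degrees on an upright phone also needs 90.
        System.out.println(displayOrientation(90, 0, false));  // 90
    }
}
```

Working through a couple of concrete cases like this is the quickest way to debug an upside-down or sideways preview.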
C, converting the camera callback data, decoding it into a Bitmap, and running face detection:
public void StoreByteImage(byte[] paramArrayOfByte) {
    mSpecStopTime = System.currentTimeMillis();
    mSpecCameraTime = mSpecStopTime - mScanBeginTime;
    Log.i(TAG, "StoreByteImage and mSpecCameraTime is " + String.valueOf(mSpecCameraTime));
    BitmapFactory.Options localOptions = new BitmapFactory.Options();
    Bitmap localBitmap1 = BitmapFactory.decodeByteArray(paramArrayOfByte, 0, paramArrayOfByte.length, localOptions);
    int i = localBitmap1.getWidth();
    int j = localBitmap1.getHeight();  // decode the JPEG bytes from the previous step: RAW -> JPEG -> BMP
    Matrix localMatrix = new Matrix();
    Bitmap localBitmap2 = null;
    FaceDetector localFaceDetector = null;
    switch (orientionOfCamera) {       // rebuild the bitmap according to the front camera's mounting rotation
        case 0:
            localFaceDetector = new FaceDetector(i, j, 1);
            localMatrix.postRotate(0.0F, i / 2, j / 2);
            localBitmap2 = Bitmap.createBitmap(i, j, Bitmap.Config.RGB_565);
            break;
        case 90:
            localFaceDetector = new FaceDetector(j, i, 1);   // width and height swapped
            localMatrix.postRotate(-270.0F, j / 2, i / 2);   // rotating -270 degrees is equivalent to +90
            localBitmap2 = Bitmap.createBitmap(j, i, Bitmap.Config.RGB_565);  // swapped to match the FaceDetector
            break;
        case 180:
            localFaceDetector = new FaceDetector(i, j, 1);
            localMatrix.postRotate(-180.0F, i / 2, j / 2);
            localBitmap2 = Bitmap.createBitmap(i, j, Bitmap.Config.RGB_565);
            break;
        case 270:
            localFaceDetector = new FaceDetector(j, i, 1);
            localMatrix.postRotate(-90.0F, j / 2, i / 2);
            localBitmap2 = Bitmap.createBitmap(j, i, Bitmap.Config.RGB_565);  // localBitmap2 holds no pixel data yet
            break;
    }
    FaceDetector.Face[] arrayOfFace = new FaceDetector.Face[1];
    Paint localPaint1 = new Paint();
    Paint localPaint2 = new Paint();
    localPaint1.setDither(true);
    localPaint2.setColor(-65536);  // 0xFFFF0000, opaque red
    localPaint2.setStyle(Paint.Style.STROKE);
    localPaint2.setStrokeWidth(2.0F);
    Canvas localCanvas = new Canvas();
    localCanvas.setBitmap(localBitmap2);
    localCanvas.setMatrix(localMatrix);
    localCanvas.drawBitmap(localBitmap1, 0.0F, 0.0F, localPaint1);  // draw localBitmap1 into localBitmap2 through the rotation matrix
    numberOfFaceDetected = localFaceDetector.findFaces(localBitmap2, arrayOfFace);  // run face detection
    localBitmap2.recycle();
    localBitmap1.recycle();  // release the bitmap resources
    FaceDetectDeal(numberOfFaceDetected);
}
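FaceDetectDeal is not shown in the source, and what it does with the count is application-specific. A minimal pure-Java sketch of one plausible policy (the FaceDetectPolicy class and facePresent method are assumptions for illustration, not part of the original code):

```java
public class FaceDetectPolicy {
    // Decide whether the frame counts as "face present": the detector above was
    // configured for at most 1 face, so any positive count means a face was found.
    static boolean facePresent(int numberOfFaceDetected) {
        return numberOfFaceDetected > 0;
    }

    public static void main(String[] args) {
        System.out.println(facePresent(1)); // true
        System.out.println(facePresent(0)); // false
    }
}
```

In a screen-wake or attention-detection scenario, the caller would typically require a few consecutive positive frames before acting, to filter out single-frame false positives.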
Having read the above, do you now have a better understanding of how to implement face detection in an Android project? Thanks for reading.