Source: Internet (repost), 2015-09-29 10:14


// Wrap the NV21 preview bytes in a YuvImage so they can be JPEG-compressed.
final YuvImage image = new YuvImage(mData, ImageFormat.NV21, w, h, null);

ByteArrayOutputStream os = new ByteArrayOutputStream(mData.length);

// Compress the full frame to JPEG; quality 100 keeps maximum detail.
if (!image.compressToJpeg(new Rect(0, 0, w, h), 100, os)) {
    return null;
}

// Decode the JPEG bytes back into a Bitmap.
byte[] tmp = os.toByteArray();
Bitmap bmp = BitmapFactory.decodeByteArray(tmp, 0, tmp.length);

Since mData is a byte[], the conversion chain is: byte[] → YuvImage → ByteArrayOutputStream → byte[] → Bitmap. At first glance the round trip looks redundant. Take a look at Google's API:


public abstract void onPreviewFrame (byte[] data, Camera camera)

Added in API level 1

Called as preview frames are displayed. This callback is invoked on the event thread open(int) was called from.

If using the YV12 format, refer to the equations in setPreviewFormat(int) for the arrangement of the pixel data in the preview callback buffers.


data — the contents of the preview frame in the format defined by ImageFormat, which can be queried with getPreviewFormat(). If setPreviewFormat(int) is never called, the default will be the YCbCr_420_SP (NV21) format.

camera — the Camera service object.

In short: the current preview frame format can be queried with getPreviewFormat(). If setPreviewFormat(int) is never called, the default is the YCbCr_420_SP (NV21) format.
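Since NV21 is the default callback format, it helps to know its memory layout: a full-resolution Y (luma) plane followed by a half-size plane of interleaved V/U bytes, one pair per 2x2 pixel block. A minimal plain-Java sketch of that layout (the class and method names here are mine, chosen for illustration):

```java
// NV21 (YCbCr_420_SP) layout: width*height Y bytes, then width*height/2
// bytes of interleaved chroma, stored V first then U.
public class Nv21Layout {

    // Total byte size of an NV21 buffer for a width x height frame.
    static int bufferSize(int width, int height) {
        return width * height * 3 / 2;
    }

    // Byte offset of the V sample covering pixel (x, y).
    static int vOffset(int width, int height, int x, int y) {
        int chromaStart = width * height;          // VU plane begins after Y
        return chromaStart + (y / 2) * width + (x / 2) * 2;
    }

    public static void main(String[] args) {
        // A 640x480 frame: 460800 bytes total; chroma plane starts at 307200.
        System.out.println(bufferSize(640, 480));
        System.out.println(vOffset(640, 480, 0, 0));
    }
}
```

This is why the onPreviewFrame byte[] for an NV21 preview is always width * height * 3/2 bytes long.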



public void setPreviewFormat (int pixel_format)

Added in API level 1

Sets the image format for preview pictures.

If this is never called, the default format will be NV21, which uses the NV21 encoding format.

Use getSupportedPreviewFormats() to get a list of the available preview formats.

It is strongly recommended that either NV21 or YV12 is used, since they are supported by all camera devices.

For YV12, the image buffer that is received is not necessarily tightly packed, as there may be padding at the end of each row of pixel data, as described in YV12. For camera callback data, it can be assumed that the stride of the Y and UV data is the smallest possible that meets the alignment requirements. That is, if the preview size is width x height, then the following equations describe the buffer index for the beginning of row y for the Y plane and row c for the U and V planes:

yStride= (int) ceil(width / 16.0) * 16;

uvStride= (int) ceil( (yStride / 2) / 16.0) * 16;

ySize= yStride * height;

uvSize= uvStride * height / 2;

yRowIndex = yStride * y;

uRowIndex = ySize + uvSize + uvStride * c;

vRowIndex = ySize + uvStride * c;

size= ySize + uvSize * 2;
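The stride equations above can be transcribed directly into plain Java and checked with concrete sizes (the class name Yv12Layout is mine; the numbers follow from the formulas):

```java
// Plain-Java transcription of the YV12 stride equations quoted above.
public class Yv12Layout {

    static int yStride(int width) {
        return (int) Math.ceil(width / 16.0) * 16;   // Y rows padded to 16 bytes
    }

    static int uvStride(int width) {
        return (int) Math.ceil((yStride(width) / 2) / 16.0) * 16;
    }

    static int totalSize(int width, int height) {
        int ySize = yStride(width) * height;
        int uvSize = uvStride(width) * height / 2;
        return ySize + uvSize * 2;                   // Y plane + V plane + U plane
    }

    public static void main(String[] args) {
        // 640 is a multiple of 16, so no padding: 640 * 360 * 3/2 = 345600.
        System.out.println(totalSize(640, 360));
        // 100 is not: rows are padded to 112 (Y) and 64 (U/V).
        System.out.println(yStride(100) + " " + uvStride(100));
    }
}
```

Note that when the width is already a multiple of 32, the buffer is tightly packed and the size equals width * height * 3/2, the same as NV21.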



public List<Integer> getSupportedPreviewFormats ()

Added in API level 5

Gets the supported preview formats. NV21 is always supported. YV12 is always supported since API level 12.

