[Head-unit xxx video requirement, part 5] - Opening, capturing, and hardware-encoding V4L2 /dev/video0 camera data in the Android native layer
Published: 2021-06-29 14:55:10 · Category: technical articles



For the head-unit xxx video requirement, we have previously implemented camera virtualization and H264 encoding in turn.


Today, building on the successful virtualization work, we implement opening the V4L2 /dev/video0 camera and capturing its data in the native layer.

1. Camera reporting (enumeration)

Previously we used a UVC camera as a stand-in for debugging; its node was /dev/video14.

Now that we have the camera that will actually be used, we switch to the /dev/video0 node for debugging:

char id_real[]        = "video0";
std::string CAMERA_ID = "video0";
std::string id_101    = "video101";
std::string id_102    = "video102";
std::string id_103    = "video103";

1.1 Filter for the /dev/video0 node

Since our project currently only needs the /dev/video0 node, when iterating over the video nodes we keep only video0 and skip all the others.

The benefit: during boot, iterating over every video node means opening each camera in turn, so our process could be holding a camera open while another application tries, and fails, to open it - a real bug. Following the principle of only touching what we need, we filter for just the camera we require.

std::vector<std::string> FindVideoFiles()
{
    std::vector<std::string> videoFiles;
    DIR *dir = opendir(VIDEO_FILE_DIR.c_str());
    if (dir == nullptr) {
        ALOGE("%s: can not open %s, check path or permission", __FUNCTION__, VIDEO_FILE_DIR.c_str());
        return videoFiles;
    }
    struct dirent *file;
    while ((file = readdir(dir)) != nullptr) {
        if (strcmp(file->d_name, ".") == 0 || strcmp(file->d_name, "..") == 0) {
            continue;
        }
        std::string fileName(file->d_name);
        auto pos = fileName.find(VIDEO_FILE_PREF);
        // ciellee make three virtual camera device FindVideoFiles +++
        /* old filter: keep every videoN node except video0/video1/video2
        if (pos != fileName.npos && pos == 0
            && !(strcmp(file->d_name, "video0") == 0)
            && !(strcmp(file->d_name, "video1") == 0)
            && !(strcmp(file->d_name, "video2") == 0)) {
            ALOGI("%s: find one video file: %s", __FUNCTION__, file->d_name);
            videoFiles.push_back(fileName);
        } */
        // new filter: keep only the node that exactly matches CAMERA_ID ("video0")
        if (pos != fileName.npos && pos == 0 && strcmp(file->d_name, CAMERA_ID.c_str()) == 0) {
            ALOGI("%s: find one video file: %s", __FUNCTION__, file->d_name);
            videoFiles.push_back(fileName);
        }
        // ciellee make three virtual camera device FindVideoFiles ---
    }
    closedir(dir);
    return videoFiles;
}
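The filter above boils down to a "video" prefix check plus an exact match against the target ID. As a minimal standalone sketch of that predicate (the function name `is_target_video_node` is ours, for illustration only):

```c
#include <stdbool.h>
#include <string.h>

/* Keep a /dev entry only if it starts with the "video" prefix
 * AND is exactly the camera we want (e.g. "video0"). */
static bool is_target_video_node(const char *name, const char *camera_id)
{
    const char *pref = "video";
    if (strncmp(name, pref, strlen(pref)) != 0)
        return false;                    /* not a videoN node at all */
    return strcmp(name, camera_id) == 0; /* exact match only */
}
```

With CAMERA_ID = "video0", this keeps video0 and rejects video14 or any virtual node.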

1.2 Open the /dev/video0 node and query its information

int CheckVideoFile(std::string id, std::vector<DMSDPCameraCapabilitiesInner>& infos)
{
    struct stat st;
    sprintf(dev_name, "/dev/%s", id.c_str());
    ALOGI("[native_camera] %s: %s +++ \n", __func__, dev_name);
    if (-1 == stat(dev_name, &st)) {
        ALOGI("[native_camera] CheckVideoFile: Cannot identify '%s': %d, %s", dev_name, errno, strerror(errno));
        return -1;
    }
    if (!S_ISCHR(st.st_mode)) {
        ALOGI("[native_camera] CheckVideoFile: %s is no device", dev_name);
        return -1;
    }
    struct vdIn *vd = (struct vdIn *) calloc(1, sizeof(struct vdIn));
    vd->fd = open(dev_name, O_RDWR);
    if (-1 == vd->fd) {
        ALOGI("[native_camera] CheckVideoFile: Cannot open '%s': %d, %s", dev_name, errno, strerror(errno));
        return -1;
    }
    struct v4l2_capability cap;
    if (-1 == xioctl(vd->fd, VIDIOC_QUERYCAP, &cap)) {
        close(vd->fd);
        if (EINVAL == errno) {
            ALOGI("[native_camera] CheckVideoFile: %s is no V4L2 device", dev_name);
            return -1;
        } else {
            ALOGI("[native_camera] CheckVideoFile: %s VIDIOC_QUERYCAP error", dev_name);
            return errnoexit("VIDIOC_QUERYCAP");
        }
    }
    ALOGI("[native_camera] CheckVideoFile: %s driver:%s, card:%s, bus_info:%s, version:%u.%u.%u, capabilities:0x%08x, device_caps:0x%08x",
          dev_name, cap.driver, cap.card, cap.bus_info,
          (cap.version >> 16) & 0xFF, (cap.version >> 8) & 0xFF, cap.version & 0xFF,
          cap.capabilities, cap.device_caps);
    if (!(cap.capabilities & V4L2_CAP_VIDEO_CAPTURE_MPLANE)) // ciellee: V4L2_CAP_VIDEO_CAPTURE
    {
        ALOGI("[native_camera] CheckVideoFile: %s is no video capture device", dev_name);
        close(vd->fd);
        return -1;
    }
    if (!(cap.capabilities & V4L2_CAP_STREAMING)) {
        ALOGI("[native_camera] CheckVideoFile: %s does not support streaming i/o", dev_name);
        close(vd->fd);
        return -1;
    }
    int ret = CollectV4L2DriverInfos(id, vd, infos);
    if (ret != 0) {
        ALOGE("[native_camera] %s: collect camera info failed, ret: %d", __FUNCTION__, ret);
        close(vd->fd);
        return -1;
    }
    ALOGI("[native_camera] %s: find one valid v4l2 driver: %s", __FUNCTION__, id.c_str());
    close(vd->fd);
    return 0;
}

1.3 Enumerate the supported formats and virtualize three nodes

This function mainly calls VIDIOC_ENUM_FMT to list every format the camera supports.

If NV12 is found, it is reported as H264 instead (the format our encoder will produce).
Then the camera is virtualized: three virtual nodes, /dev/video101, /dev/video102, and /dev/video103, are reported in its place.

int32_t CollectV4L2DriverInfos(std::string id, struct vdIn *vd, std::vector<DMSDPCameraCapabilitiesInner>& infos)
{
    int ret = -1;
    struct v4l2_fmtdesc fmt;
    memset(&fmt, 0, sizeof(fmt));
    fmt.index = 0;
#ifdef NV12_SUPPORT
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
#else
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
#endif
    bool isHasH264 = false;
    __u32 h264Fmt = 0;
    bool isHasNv12 = false;
    __u32 nv12Fmt = 0;
    while ((ret = ioctl(vd->fd, VIDIOC_ENUM_FMT, &fmt)) == 0) {
        fmt.index++;
        std::stringstream stream;
        stream << (char)(fmt.pixelformat & 0xff)
               << (char)((fmt.pixelformat >> 8) & 0xff)
               << (char)((fmt.pixelformat >> 16) & 0xff)
               << (char)((fmt.pixelformat >> 24) & 0xff);
        std::string pixFmt = stream.str();
        ALOGI("[native_camera] id=%s get pixel format: %s, pixel fmt: 0x%x", id.c_str(), pixFmt.c_str(), fmt.pixelformat);
        ALOGI("[native_camera] %2d: %s 0x%08x 0x%x", fmt.index, fmt.description, fmt.pixelformat, fmt.flags);
        if (pixFmt.compare(H264_FMT) == 0) {
            isHasH264 = true;
            h264Fmt = fmt.pixelformat;
            break;
        }
        if (pixFmt.compare(NV12_FMT) == 0) {
            isHasNv12 = true;
            nv12Fmt = fmt.pixelformat;
        }
    }
    if (!isHasH264 && !isHasNv12) {
        ALOGW("[native_camera] %s: do not support H264 or NV12 Format", __FUNCTION__);
        return -1; // ciellee
    }
    std::vector<DMSDPFpsRange> supportFpsRange;
    std::vector<DMSDPSize> supportSizes;
    /* 1. get resolution info */
    struct v4l2_frmsizeenum frmsize;
    memset(&frmsize, 0, sizeof(frmsize));
    frmsize.index = 0;
    if (isHasH264)
        frmsize.pixel_format = h264Fmt;
    else if (isHasNv12)
        frmsize.pixel_format = 0x34363248; // fourcc 'H264': report NV12 as H264
    while ((ret = ioctl(vd->fd, VIDIOC_ENUM_FRAMESIZES, &frmsize)) == 0) {
        frmsize.index++;
        DMSDPSize size;
        size.width = frmsize.discrete.width;
        size.height = frmsize.discrete.height;
        ALOGI("[native_camera] %s: support width: %d, height: %d", __FUNCTION__, size.width, size.height);
        supportSizes.push_back(size);
    }
    if (isHasNv12) {
        DMSDPSize size;
        size.width = 1280;
        size.height = 720;
        ALOGI("[native_camera] %s: support width: %d, height: %d", __FUNCTION__, size.width, size.height);
        supportSizes.push_back(size);
    }
    /* 2. get fps info */
    struct v4l2_streamparm streamparm;
    memset(&streamparm, 0, sizeof(streamparm));
    streamparm.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    ret = ioctl(vd->fd, VIDIOC_G_PARM, &streamparm);
    if (ret != 0) {
        ALOGE("[native_camera] %s: get camera fps info failed, ret: %d", __FUNCTION__, ret);
        return -1;
    }
    ALOGI("[native_camera] capture.capability=0x%x, capture.capturemode=0x%x \n",
          streamparm.parm.capture.capability, streamparm.parm.capture.capturemode);
    ALOGI("[native_camera] %s: support fps ctl: %s, is high quality: %d", __FUNCTION__,
          streamparm.parm.capture.capability == V4L2_CAP_TIMEPERFRAME ? "true" : "false",
          streamparm.parm.capture.capturemode);
    __u32 numerator = streamparm.parm.capture.timeperframe.numerator;
    __u32 denominator = streamparm.parm.capture.timeperframe.denominator;
    ALOGI("[native_camera] %s: fps info, numerator: %d, denominator: %d, fps:%d", __FUNCTION__,
          numerator, denominator, (int)(denominator / numerator));
    int fps = 30;
    if (numerator != 0) {
        fps = (int)(denominator / numerator);
    }
    if (fps < 15) {
        fps = 30;
    }
    DMSDPFpsRange fr;
    fr.min = fps * 1000;
    fr.max = fps * 1000;
    supportFpsRange.push_back(fr);
    if (fps >= 30) {
        DMSDPFpsRange fr15; fr15.min = 15000; fr15.max = 15000;
        DMSDPFpsRange fr25; fr25.min = 25000; fr25.max = 25000;
        supportFpsRange.push_back(fr15);
        supportFpsRange.push_back(fr25);
    }
    DMSDPCameraAbilityInner cii;
    cii.supportFpsRanges = supportFpsRange;
    cii.supportSizes = supportSizes;
    cii.dataType = DMSDPCameraDataType::H264;
    std::vector<DMSDPCameraAbilityInner> abilities;
    abilities.push_back(cii);
    DMSDPCameraCapabilitiesInner ciis;
    ciis.id = id;
    ciis.abilities = abilities;
    // ciellee make three virtual camera device CollectV4L2DriverInfos +++
    ALOGI("[native_camera]2 %s: real id: %s virtual node: %s, %s, %s", __FUNCTION__,
          id.c_str(), id_101.c_str(), id_102.c_str(), id_103.c_str());
    if (id.compare(CAMERA_ID) == 0) {
        DMSDPCameraCapabilitiesInner ciis_1; ciis_1.id = id_101; ciis_1.abilities = abilities; infos.push_back(ciis_1);
        DMSDPCameraCapabilitiesInner ciis_2; ciis_2.id = id_102; ciis_2.abilities = abilities; infos.push_back(ciis_2);
        DMSDPCameraCapabilitiesInner ciis_3; ciis_3.id = id_103; ciis_3.abilities = abilities; infos.push_back(ciis_3);
    } else {
        infos.push_back(ciis);
    }
    // ciellee make three virtual camera device CollectV4L2DriverInfos ---
    ALOGI("[native_camera] %s: collect info success, id: %s", __FUNCTION__, id.c_str());
    return 0;
}

1.4 Replace v4l2Videos with the three virtualized nodes

std::vector<std::string> ValidV4L2Drivers(std::vector<std::string> videos, std::vector<DMSDPCameraCapabilitiesInner>& infos)
{
    std::vector<std::string> v4l2Videos;
    for (size_t i = 0; i < videos.size(); i++) {
        std::string video = videos[i];
        int ret = CheckVideoFile(video, infos);
        if (ret != 0) {
            ALOGI("%s: video file not valid v4l2 driver: %s", __FUNCTION__, video.c_str());
            continue;
        }
        // ciellee make three virtual camera device ValidV4L2Drivers +++
        ALOGI("[native_camera]1 %s: %s, push_back(%s %s %s)", __FUNCTION__,
              video.c_str(), id_101.c_str(), id_102.c_str(), id_103.c_str());
        if (video.compare(CAMERA_ID) == 0) {
            v4l2Videos.push_back(id_101);
            v4l2Videos.push_back(id_102);
            v4l2Videos.push_back(id_103);
        } else {
            v4l2Videos.push_back(video);
        }
        // ciellee make three virtual camera device ValidV4L2Drivers ---
    }
    return v4l2Videos;
}

With that, the reporting-side changes are complete.

2. Opening the camera

Next come the changes for actually opening the camera. We mainly modify the code that opens the camera node; unrelated code is omitted.

  1. Register the camera callback used to report the encoded camera data
  2. Initialize the camera - this is where we change to opening the V4L2 camera

int32_t OpenCameraInner(const DMSDPCameraParam* param)
{
    ... // some code omitted
    // 1. Register the camera callback used to report the encoded camera data
    ALOGI("[native_camera] %s: Register sendback function to v4l2 lib, id = %s", __FUNCTION__, id.c_str());
    registerSendFrameFunc(currSendDataCB);
    pthread_t pthreadVideo;
    // 2. Initialize the camera: open the V4L2 camera here
    int32_t ret = Init_264camera(id, param->width, param->height, param->fps);
    if (ret != 0) {
        ALOGE("%s: Init camera failed", __FUNCTION__);
        return -1;
    }
    pthread_mutex_init(&mut, nullptr);
    OpenVideo();
    if (pthread_create(&pthreadVideo, nullptr, cap_video, nullptr)) // start video capture
    {
        ALOGI("%s: Create cap_video error!", __FUNCTION__);
    }
    if (pthreadVideo != 0) {
        pthread_detach(pthreadVideo);
        ALOGI("%s: capture h264 preview exit", __FUNCTION__);
    }
    return 0;
}

2.1 Initialize the camera and encoder, then open the camera: Init_264camera()

int32_t Init_264camera(std::string id, int mWidth, int mHeight, int fps)
{
    int width = mWidth;
    int height = mHeight;
    int format = V4L2_PIX_FMT_H264;
    int ret;
    ret = open_device(id);
    if (ret != 0) {
        ALOGE("%s: open device failed, id: %s, ret: %d", __FUNCTION__, id.c_str(), ret);
        return -1;
    }
    ALOGI("[native_camera]%s: start to init h264 encoder\n", __FUNCTION__);
    if (H264_encoder_init() != 0) {
        ALOGE("%s: init h264 encoder failed, id: %s", __FUNCTION__, id.c_str());
    }
    ALOGI("[native_camera]%s: init_device,width=%d, height=%d, format=0x%x, fps=%d !\n", __FUNCTION__, width, height, format, fps);
    ret = init_device(width, height, format, fps);
    if (ret != 0) {
        ALOGE("[native_camera]%s: init device failed, id: %s, ret: %d", __FUNCTION__, id.c_str(), ret);
        return -1;
    }
    ALOGI("%s: ------init_device---success------- !\n", __FUNCTION__);
    ret = start_previewing();
    if (ret != 0) {
        ALOGE("[native_camera]%s: start previewing failed", __FUNCTION__);
        return -1;
    }
    ALOGI("[native_camera]%s: ---start_previewing------success------- !\n", __FUNCTION__);
    time(&curdate); // refresh curdate before formatting it
    tdate = localtime(&curdate);
    XU_OSD_Set_CarcamCtrl(vd->fd, 0, 0, 0);
    if (XU_OSD_Set_RTC(vd->fd, tdate->tm_year + 1900, tdate->tm_mon + 1, tdate->tm_mday,
                       tdate->tm_hour, tdate->tm_min, tdate->tm_sec) < 0)
        ALOGI("XU_OSD_Set_RTC_fd = %d Failed\n", vd->fd);
    if (XU_OSD_Set_Enable(vd->fd, 1, 1) < 0)
        ALOGI(" XU_OSD_Set_Enable_fd = %d Failed\n", vd->fd);
    ret = XU_Init_Ctrl(vd->fd);
    if (ret < 0) {
        ALOGW("%s: XU_Init_Ctrl Failed\n", __FUNCTION__);
    } else {
        double m_BitRate = 0.0;
        if (XU_H264_Set_BitRate(vd->fd, 4096 * 1024) < 0) // set the bitrate
        {
            ALOGI("%s: XU_H264_Set_BitRate Failed\n", __FUNCTION__);
        }
        XU_H264_Get_BitRate(vd->fd, &m_BitRate);
        if (m_BitRate < 0) {
            ALOGI("%s: XU_H264_Get_BitRate Failed\n", __FUNCTION__);
        }
        ALOGI("%s: -----XU_H264_Set_BitRate %fbps----\n", __FUNCTION__, m_BitRate);
    }
    ret = XU_Multi_Set_SubStream_FrameRate(vd->fd, fps);
    return 0;
}

2.2 Configure the camera output format: init_device()

int init_device(int width, int height, int format, int fps)
{
    struct v4l2_capability cap;
#ifndef NV12_SUPPORT
    struct v4l2_cropcap cropcap;
    struct v4l2_crop crop;
#endif
    struct v4l2_format fmt;
    unsigned int min;
    if (-1 == xioctl(vd->fd, VIDIOC_QUERYCAP, &cap)) {
        if (EINVAL == errno) {
            ALOGI("%s is no V4L2 device", dev_name);
            return -1;
        } else {
            return errnoexit("VIDIOC_QUERYCAP");
        }
    }
#ifdef NV12_SUPPORT
    if (!(cap.capabilities & V4L2_CAP_VIDEO_CAPTURE_MPLANE))
#else
    if (!(cap.capabilities & V4L2_CAP_VIDEO_CAPTURE))
#endif
    {
        ALOGI("%s is no video capture device", dev_name);
        return -1;
    }
    if (!(cap.capabilities & V4L2_CAP_STREAMING)) {
        ALOGI("%s does not support streaming i/o", dev_name);
        return -1;
    }
#ifndef NV12_SUPPORT
    CLEAR(cropcap);
    cropcap.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    if (0 == xioctl(vd->fd, VIDIOC_CROPCAP, &cropcap)) {
        crop.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        crop.c = cropcap.defrect;
        if (-1 == xioctl(vd->fd, VIDIOC_S_CROP, &crop)) {
            switch (errno) {
            case EINVAL:
                break;
            default:
                break;
            }
        }
    }
#endif
    CLEAR(fmt);
    // Configure the camera output format
#ifdef NV12_SUPPORT
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
    fmt.fmt.pix_mp.width = width;
    fmt.fmt.pix_mp.height = height; // progressive scan: no doubling; interlaced: double the height
    fmt.fmt.pix_mp.pixelformat = V4L2_PIX_FMT_YUYV; // V4L2_PIX_FMT_NV12;
    fmt.fmt.pix_mp.field = V4L2_FIELD_NONE; // progressive scan
    //fmt.fmt.pix_mp.num_planes = 1;
    //fmt.fmt.pix_mp.plane_fmt[0].bytesperline = width * 2;
    //fmt.fmt.pix_mp.plane_fmt[0].sizeimage = height * width * 2;
    ALOGI("[native_camera]: (%dx%d)Requesting format %c%c%c%c (0x%08X)",
          fmt.fmt.pix_mp.width, fmt.fmt.pix_mp.height,
          ((char*)&fmt.fmt.pix_mp.pixelformat)[0], ((char*)&fmt.fmt.pix_mp.pixelformat)[1],
          ((char*)&fmt.fmt.pix_mp.pixelformat)[2], ((char*)&fmt.fmt.pix_mp.pixelformat)[3],
          fmt.fmt.pix_mp.pixelformat);
#else
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt.fmt.pix.pixelformat = format;
    fmt.fmt.pix.field = V4L2_FIELD_ANY;
    fmt.fmt.pix.width = width;
    fmt.fmt.pix.height = height;
    ALOGI("[native_camera]: (%dx%d)Requesting format %c%c%c%c (0x%08X)",
          fmt.fmt.pix.width, fmt.fmt.pix.height,
          ((char*)&fmt.fmt.pix.pixelformat)[0], ((char*)&fmt.fmt.pix.pixelformat)[1],
          ((char*)&fmt.fmt.pix.pixelformat)[2], ((char*)&fmt.fmt.pix.pixelformat)[3],
          fmt.fmt.pix.pixelformat);
#endif
    if (-1 == xioctl(vd->fd, VIDIOC_S_FMT, &fmt)) {
        return errnoexit("VIDIOC_S_FMT");
    }
#ifndef NV12_SUPPORT
    min = fmt.fmt.pix.width * 2;
    if (fmt.fmt.pix.bytesperline < min)
        fmt.fmt.pix.bytesperline = min;
    min = fmt.fmt.pix.bytesperline * fmt.fmt.pix.height;
    if (fmt.fmt.pix.sizeimage < min)
        fmt.fmt.pix.sizeimage = min;
#endif
    // Read back the negotiated output format
    CLEAR(fmt);
#ifdef NV12_SUPPORT
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
#else
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
#endif
    if (-1 == ioctl(vd->fd, VIDIOC_G_FMT, &fmt)) {
        return errnoexit("VIDIOC_G_FMT");
    }
    ALOGI("[native_camera][%s] Current output format: fmt=0x%X, %dx%d, num_planes=0x%x, sizeimage=%u, bytesperline=%u",
          __FUNCTION__, fmt.fmt.pix_mp.pixelformat, fmt.fmt.pix_mp.width, fmt.fmt.pix_mp.height,
          fmt.fmt.pix_mp.num_planes, fmt.fmt.pix_mp.plane_fmt[0].sizeimage, fmt.fmt.pix_mp.plane_fmt[0].bytesperline);
#ifndef NV12_SUPPORT
    struct v4l2_streamparm parm;
    memset(&parm, 0, sizeof parm);
    parm.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    ioctl(vd->fd, VIDIOC_G_PARM, &parm);
    if (fps > 0) {
        parm.parm.capture.timeperframe.numerator = 1;
        parm.parm.capture.timeperframe.denominator = fps;
    } else {
        parm.parm.capture.timeperframe.numerator = 1;
        parm.parm.capture.timeperframe.denominator = 30;
    }
    ALOGE("[native_camera]%s: in init before set fps, fps numerator: %d, denominator: %d", __FUNCTION__,
          parm.parm.capture.timeperframe.numerator, parm.parm.capture.timeperframe.denominator);
    ioctl(vd->fd, VIDIOC_S_PARM, &parm);
    ioctl(vd->fd, VIDIOC_G_PARM, &parm);
    ALOGE("[native_camera]%s: in init after set fps, fps numerator: %d, denominator: %d", __FUNCTION__,
          parm.parm.capture.timeperframe.numerator, parm.parm.capture.timeperframe.denominator);
#endif
    return init_mmap();
}
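V4L2 expresses frame rate as a time-per-frame fraction: init_device() programs 1/fps seconds per frame, and section 1.3 derived fps the other way, as denominator ÷ numerator, with a fallback when the driver reports nothing. A small sketch of that conversion (the helper name is ours):

```c
/* Effective fps from a V4L2 timeperframe fraction. Falls back to 30 fps
 * when the driver reports numerator == 0, and treats implausibly low
 * values (< 15) as 30 fps, mirroring the logic in CollectV4L2DriverInfos. */
static int fps_from_timeperframe(unsigned numerator, unsigned denominator)
{
    int fps = 30;                         /* default when nothing is reported */
    if (numerator != 0)
        fps = (int)(denominator / numerator);
    if (fps < 15)
        fps = 30;
    return fps;
}
```

So a driver reporting timeperframe = 1/30 yields 30 fps, and 1/25 yields 25 fps.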

2.3 Request stream buffers and map them into memory: init_mmap()

int init_mmap(void)
{
    struct v4l2_requestbuffers req;
    CLEAR(req);
#ifdef NV12_SUPPORT
    req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
#else
    req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
#endif
    req.count = 4;
    req.memory = V4L2_MEMORY_MMAP;
    if (-1 == xioctl(vd->fd, VIDIOC_REQBUFS, &req)) {
        if (EINVAL == errno) {
            ALOGI("[native_camera] %s does not support memory mapping\n", dev_name);
            return -1;
        } else {
            ALOGI("[native_camera] %s VIDIOC_REQBUFS Failed\n", dev_name);
            return errnoexit("VIDIOC_REQBUFS");
        }
    }
    if (req.count < 2) {
        ALOGI("[native_camera] Insufficient buffer memory on %s", dev_name);
        return -1;
    }
    buffers = (buffer*)calloc(req.count, sizeof(*buffers));
    if (!buffers) {
        ALOGI("[native_camera] Out of memory");
        return -1;
    }
    ALOGI("[native_camera] %s req.count=%d \n", dev_name, req.count);
    struct v4l2_buffer* buf = (struct v4l2_buffer*)malloc(sizeof(struct v4l2_buffer));
    struct v4l2_plane* planes = (struct v4l2_plane*)malloc(sizeof(struct v4l2_plane));
    for (n_buffers = 0; n_buffers < req.count; ++n_buffers) {
        memset(buf, 0, sizeof(struct v4l2_buffer));
        memset(planes, 0, sizeof(struct v4l2_plane));
#ifdef NV12_SUPPORT
        buf->type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
        buf->memory = V4L2_MEMORY_MMAP;
        buf->index = n_buffers;
        buf->length = 1; // 2;
        buf->m.planes = planes;
#else
        buf->type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        buf->memory = V4L2_MEMORY_MMAP;
        buf->index = n_buffers;
#endif
        if (-1 == xioctl(vd->fd, VIDIOC_QUERYBUF, buf)) {
            ALOGI("[native_camera] %s VIDIOC_QUERYBUF Failed\n", dev_name);
            return errnoexit("VIDIOC_QUERYBUF");
        }
#ifdef NV12_SUPPORT
        buffers[n_buffers].length = (size_t)buf->m.planes->length;
        buffers[n_buffers].start = mmap(NULL, buffers[n_buffers].length,
                                        PROT_READ | PROT_WRITE, MAP_SHARED,
                                        vd->fd, buf->m.planes->m.mem_offset);
#else
        buffers[n_buffers].length = buf->length;
        buffers[n_buffers].start = mmap(NULL, buf->length,
                                        PROT_READ | PROT_WRITE, MAP_SHARED,
                                        vd->fd, buf->m.offset);
#endif
        ALOGI("[native_camera] Buffer description: index : %d, length: %zu, map start at %p\n",
              n_buffers, buffers[n_buffers].length, buffers[n_buffers].start);
        if (MAP_FAILED == buffers[n_buffers].start) {
            ALOGI("[native_camera] %s mmap Failed\n", dev_name);
            return errnoexit("mmap");
        }
    }
    free(buf);
    free(planes);
    return 0;
}

2.4 Start streaming (STREAM_ON): start_previewing()

int start_previewing(void)
{
    unsigned int i;
    enum v4l2_buf_type type;
    struct v4l2_buffer* buf = (struct v4l2_buffer*)malloc(sizeof(struct v4l2_buffer));
    struct v4l2_plane* planes = (struct v4l2_plane*)malloc(sizeof(struct v4l2_plane));
    for (i = 0; i < n_buffers; ++i) {
        memset(buf, 0, sizeof(struct v4l2_buffer));
#ifdef NV12_SUPPORT
        memset(planes, 0, sizeof(struct v4l2_plane));
        buf->type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
        buf->memory = V4L2_MEMORY_MMAP;
        buf->m.planes = planes;
        buf->index = i;
        buf->length = 1; // 2;
        buf->m.planes->length = buffers[i].length;
        buf->m.planes->m.mem_offset = (unsigned long)buffers[i].start;
#else
        buf->type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        buf->memory = V4L2_MEMORY_MMAP;
        buf->index = i;
#endif
        if (-1 == xioctl(vd->fd, VIDIOC_QBUF, buf))
            return errnoexit("VIDIOC_QBUF");
    }
    ALOGI("[native_camera]%s: VIDIOC_STREAMON ^_^\n", __FUNCTION__);
#ifdef NV12_SUPPORT
    type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
#else
    type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
#endif
    if (-1 == xioctl(vd->fd, VIDIOC_STREAMON, &type))
        return errnoexit("VIDIOC_STREAMON");
    vd->isstreaming = 1;
    ALOGI("[native_camera]%s: VIDIOC_STREAMON success ^_^\n", __FUNCTION__);
    free(buf);
    free(planes);
    return 0;
}

2.5 Loop to dequeue camera frames (VIDIOC_DQBUF) and save local snapshots

void* cap_video(void* arg)
{
    int ret;
    struct v4l2_buffer buf;
    struct v4l2_plane planes;
    struct timeval tv;
    tv.tv_sec = 0;
    tv.tv_usec = 10000;
    fd_set rfds;
    int retval = 0;
#ifdef Save_RawData_Support
    int raw_index = 0;
    FILE* raw_fp;
    const char* raw_path = "/sdcard/video/test_raw_";
    const char* raw_yuv = "nv12";
    char raw_filename[50];
#endif
    int uv_size = 1280 * 720 / 2;
    uint8_t* uv_data = (uint8_t*)malloc(uv_size);
    memset(uv_data, 128, uv_size);
    while (capturing) {
        CLEAR(buf);
        CLEAR(planes);
#ifdef NV12_SUPPORT
        buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
        buf.memory = V4L2_MEMORY_MMAP;
        buf.m.planes = &planes;
        buf.length = 2;
#else
        buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        buf.memory = V4L2_MEMORY_MMAP;
#endif
        FD_ZERO(&rfds);
        FD_SET(vd->fd, &rfds);
        retval = select(vd->fd + 1, &rfds, NULL, NULL, &tv);
        if (retval < 0) {
            ALOGI("select error\n");
        } else { // data is ready
            ret = ioctl(vd->fd, VIDIOC_DQBUF, &buf);
            if (ret < 0) {
                ALOGI("Unable to dequeue buffer, stop capture!\n");
                //exit(1);
                {
                    std::lock_guard<std::mutex> lock(m_captureMtx);
                    capturing = 0;
                }
                continue; // buf is not valid here, skip this iteration
            }
            // Process the data here (encode it). Later this will be changed to push
            // the frame into a queue so a dedicated thread does the encoding.
            ALOGI("[native_camera] %s got frame data size =%zu, %p, real size=%d", __FUNCTION__,
                  buffers[buf.index].length, buffers[buf.index].start, buf.length);
            uint8_t* p_b = (uint8_t*)buffers[buf.index].start;
            ALOGI("[native_camera]1 %s got data: %x - %x - %x - %x - %x - %x - %x - %x - %x - %x - %x - %x", __FUNCTION__,
                  p_b[0], p_b[1], p_b[2], p_b[3], p_b[4], p_b[5], p_b[6], p_b[7], p_b[8], p_b[9], p_b[10], p_b[11]);
            ALOGI("[native_camera] %s: index:0x%x type:0x%x bytesused:0x%x flags:0x%x field:0x%x sequence:0x%x memory:0x%x", __FUNCTION__,
                  buf.index, buf.type, buf.bytesused, buf.flags, buf.field, buf.sequence, buf.memory);
            ALOGI("[native_camera] %s: %ld.%06ld\n", __FUNCTION__, buf.timestamp.tv_sec, buf.timestamp.tv_usec);
            if (buf.length > 0 && buffers[buf.index].start != NULL)
                if (!(p_b[0] == 0 && p_b[1] == 0 && p_b[2] == 0 && p_b[3] == 0 && p_b[4] == 0 && p_b[5] == 0 && p_b[6] == 0)) {
#ifdef Save_RawData_Support
                    if (g_debug_flag == 1 && raw_index <= 50) {
                        // Dump the raw frame to a file
                        if (access("/sdcard/video/", F_OK) == -1) { // directory does not exist
                            ALOGI("[native_camera] create /sdcard/video/");
                            mkdir("/sdcard/video/", 0644); // create it
                        }
                        memset(raw_filename, '\0', 50);
                        sprintf(raw_filename, "%s%d.%s", raw_path, raw_index++, raw_yuv);
                        raw_fp = fopen(raw_filename, "wb+");
                        if (raw_fp != NULL) {
                            ret = fwrite((const uint8_t*)buffers[buf.index].start, 1, buffers[buf.index].length, raw_fp);
                            fflush(raw_fp);
                            fclose(raw_fp);
                            ALOGI("[native_camera] write file %s ,size=%zu, real_write=%d\n",
                                  raw_filename, buffers[buf.index].length, ret);
                        } else {
                            ALOGI("[native_camera] open file %s failed !!!\n", raw_filename);
                        }
                    }
#endif
                    H264_EnCode((const uint8_t*)buffers[buf.index].start, buffers[buf.index].length);
                }
            ret = ioctl(vd->fd, VIDIOC_QBUF, &buf);
            if (ret < 0) {
                ALOGI("Unable to requeue buffer, stop capture");
                //exit(1);
                {
                    std::lock_guard<std::mutex> lock(m_captureMtx);
                    capturing = 0;
                }
            }
        }
    }
    //stopPreviewing();
    close_v4l2_new();
    isRunning = false;
    pthread_exit(NULL);
}
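The encoder log later shows an input size of 1,382,400 bytes per frame. That is a 1280×720 NV12 frame: a full-resolution Y plane plus a 2×2-subsampled interleaved UV plane, i.e. 1.5 bytes per pixel. A quick check of the arithmetic:

```c
#include <stddef.h>

/* NV12 frame size: full-resolution luma plane (w*h bytes) plus an
 * interleaved chroma plane subsampled 2x2 in both axes (w*h/2 bytes). */
static size_t nv12_frame_size(unsigned width, unsigned height)
{
    size_t y_plane  = (size_t)width * height;
    size_t uv_plane = y_plane / 2;
    return y_plane + uv_plane;
}
```

Note the 1,843,200-byte mmap buffers hold the raw YUYV capture (2 bytes/pixel); after conversion to NV12 the encoder input shrinks to 1,382,400 bytes.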

2.6 Stop streaming (STREAM_OFF): stopPreviewing()

int stopPreviewing(void)
{
    enum v4l2_buf_type type;
#ifdef NV12_SUPPORT
    type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
#else
    type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
#endif
    if (-1 == xioctl(vd->fd, VIDIOC_STREAMOFF, &type)) {
        ALOGE("%s: stop previewing failed", __FUNCTION__);
        return -1;
    }
    ALOGI("%s: stop previewing success", __FUNCTION__);
    return 0;
}

2.7 Release the buffers

int close_v4l2_new()
{
    std::lock_guard<std::mutex> lock(m_stopMtx);
    if (vd == nullptr) {
        ALOGW("%s: vd has been released, no need to close", __FUNCTION__);
        return 0;
    }
    video_disable_ex(vd);
    /* If the memory maps are not released the device will remain
       opened even after a call to close(). */
    for (n_buffers = 0; n_buffers < 4; n_buffers++) {
        int result = munmap(buffers[n_buffers].start, buffers[n_buffers].length);
        ALOGI("[native_camera]%d, munmap result : %d", n_buffers, result);
    }
    // Tell the V4L2 driver to release the stream buffers
    v4l2_requestbuffers bufrequest;
#ifdef NV12_SUPPORT
    bufrequest.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
#else
    bufrequest.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
#endif
    bufrequest.memory = V4L2_MEMORY_MMAP;
    bufrequest.count = 0;
    if (ioctl(vd->fd, VIDIOC_REQBUFS, &bufrequest) < 0) {
        ALOGE("[native_camera]%s VIDIOC_REQBUFS failed! %s.", __FUNCTION__, strerror(errno));
    }
    if (vd->tmpbuffer)
        free(vd->tmpbuffer);
    vd->tmpbuffer = NULL;
    free(vd->framebuffer);
    vd->framebuffer = NULL;
    free(vd->videodevice);
    free(vd->status);
    free(vd->pictName);
    vd->videodevice = NULL;
    vd->status = NULL;
    vd->pictName = NULL;
    close(vd->fd);
    free(vd);
    vd = nullptr;
    ALOGI("[native_camera]%s: close vd handler", __FUNCTION__);
    return 0;
}

3. Debug log analysis

// 1. Command arrives to open the virtualized camera
11-09 05:18:56.464  4356 11135 I CamSDK_3rdCamera: [native_camera] OpenCamera: open camera id video101
11-09 05:18:56.464  4356 11135 I CamSDK_3rdCamera: [native_camera] OpenCameraInner: Register sendback function to v4l2 lib, id = video101
// 2. Send a CAN frame to switch the camera
11-09 05:18:56.464  4356 11135 D native_camera_canbus: [native_camera]@@SendTrackLineEvent:0x6d-0x8-0xf0-0xff-0xff-0x3-0x30-0x1-0x0-0x0
11-09 05:18:56.464  4356 11135 D native_camera_canbus: [native_camera]@@SendTrackLineEvent:0x6d-0x8-0xf0-0xff-0xff-0x3-0x30-0x1-0x0-0x0
// 3. Open the camera
11-09 05:18:56.464  4356 11135 I CamSDK_3rdCamera: [native_camera] open camera hicar open video101, real open /dev/video0
// 4. Initialize the H264 encoder
11-09 05:18:56.465  4356 11135 I CamSDK_3rdCamera: [native_camera]Init_264camera: start to init h264 encoder
11-09 05:18:56.465  4356 11135 I H264_Encoder: [native_camera][H264_encoder_init](180) +
11-09 05:18:56.465  4356 11135 I H264_Encoder: [native_camera] ---- g_debug_flag=1 ----
11-09 05:18:56.525  4356 11135 I H264_Encoder: [native_camera][H264_encoder_init](236) Start OMX.Freescale.std.video_encoder.avc.hw-based Success ^_^
// 5. Initialize the camera: configure the output format, request buffers, issue STREAMON
11-09 05:18:56.538  4356 11135 I CamSDK_3rdCamera: [native_camera]Init_264camera: init_device,width=1280, height=720, format=0x34363248, fps=0 !
11-09 05:18:56.538  4356 11135 I CamSDK_3rdCamera: [native_camera]: (1280x720)Requesting format YUYV (0x56595559)
11-09 05:18:56.538  4356 11135 I CamSDK_3rdCamera: [native_camera][init_device] Current output format:  fmt=0x56595559, 1280x720, num_planes=0x1, sizeimage=, bytesperline=
11-09 05:18:56.552  4356 11135 I CamSDK_3rdCamera: [native_camera] /dev/video0 req.count=4
11-09 05:18:56.552  4356 11135 I CamSDK_3rdCamera: [native_camera] Buffer description: index : 0, length: 1843200, map start at 0xe6b6365c4000
11-09 05:18:56.552  4356 11135 I CamSDK_3rdCamera: [native_camera] Buffer description: index : 1, length: 1843200, map start at 0xe6b636402000
11-09 05:18:56.552  4356 11135 I CamSDK_3rdCamera: [native_camera] Buffer description: index : 2, length: 1843200, map start at 0xe6b636240000
11-09 05:18:56.552  4356 11135 I CamSDK_3rdCamera: [native_camera] Buffer description: index : 3, length: 1843200, map start at 0xe6b63607e000
11-09 05:18:56.552  4356 11135 I CamSDK_3rdCamera: [native_camera]start_previewing: VIDIOC_STREAMON ^_^
11-09 05:18:56.828  4356 11135 I CamSDK_3rdCamera: [native_camera]start_previewing: VIDIOC_STREAMON success ^_^
11-09 05:18:56.828  4356 11135 I CamSDK_3rdCamera: [native_camera]Init_264camera: ---start_previewing------success------- !
11-09 05:18:56.857  4356 11135 I CamSDK_3rdCamera: [native_camera] OpenCamera: open camera video101 success
// 6. Capture camera data
11-09 05:18:58.044  4356 18830 I CamSDK_3rdCamera: [native_camera] cap_video  got frame data size =1843200, 0xe6b636240000, real size=1
11-09 05:18:58.044  4356 18830 I CamSDK_3rdCamera: [native_camera]1 cap_video  got data: 0 - 80 - 0 - 80 - e - 80 - 14 - 80 - 11 - 80 - 10 - 80
11-09 05:18:58.044  4356 18830 I CamSDK_3rdCamera: [native_camera] cap_video: index:0x2 type:0x9 bytesused:0x0 flags:0x2001 field:0x0 sequence:0x14  memory:0x1
11-09 05:18:58.044  4356 18830 I CamSDK_3rdCamera: [native_camera] cap_video: 780.240728
// 6.1 Save the camera data as a picture
11-09 05:18:58.115  4356 18830 I CamSDK_3rdCamera: [native_camera] write file /sdcard/video/test_raw_6.nv12 ,size=1843200, real_write=1843200
// 6.2 Start encoding
11-09 05:18:58.281  4356 18830 I H264_Encoder: [native_camera][H264_EnCode](335) queueInputBuffer index=(0) size=(1382400)
// 6.3 Fetch the encoded data and report it
11-09 05:18:58.282  4356 18830 I H264_Encoder: [native_camera][H264_EnCode](359) Got data in buffer 2, size=15488, pts=, debugNumFrames=2247

RawData and H264 files saved during debugging


I won't post screenshots of the result on the actual head unit.

At this point the whole feature works end to end; what remains is optimizing the results.

https://blog.csdn.net/Mark_minGE/article/details/82704149

Reposted from: https://ciellee.blog.csdn.net/article/details/109525546 - if this infringes your copyright, leave a comment with the URL of the original article and this post will be removed; apologies for any inconvenience.

Previous post: [Head-unit xxx video requirement, part 6] - Optimization: add a buffer queue and a dedicated H264 encoding thread
Next post: [Android video hardware encoding] A native-layer MediaCodec H264 encoding demo - encoding images of arbitrary size
