[Head Unit xxx Video Requirement, Part 2] Virtualizing the In-Cabin DMS/AVR/ROA Cameras in Code

This article is part of a 16-article series on this requirement.

As noted in the previous article, the in-cabin DMS/AVR/ROA cameras are arbitrated by the ICM module.

At any given moment the head unit can receive data from only one of these three cameras. So, to select which camera to open, we must send a CAN message to the ICM before opening the camera, asking it to switch to that camera first.

As shown in the figure below:

[Figure: camera-switch flow, a CAN message is sent to the ICM before the camera is opened]

This article implements the virtualization of the in-cabin DMS/AVR/ROA cameras.

The head unit exposes only a single video node, so from its point of view there is just one camera; xxx, however, should perceive three. The goal here is therefore to virtualize three cameras out of that single node and report all three to xxx.

The changes below follow the overall flow of reporting camera capabilities to xxx and handling the open request it sends back down. The functions involved are the ones registered in the handler table below:

int32_t DMSDPGetCameraHandler(DMSDPCameraHandler* cameraHandler)
{
    memset(cameraHandler, 0, sizeof(DMSDPCameraHandler));
    cameraHandler->RegisterSendDataCallback = CameraRegisterSendDataCallback;
    cameraHandler->OpenCamera = OpenCamera;
    cameraHandler->CloseCamera = CloseCamera;
    cameraHandler->UpdateConfig = CameraUpdateConfig;
    cameraHandler->GetCameraCapability = GetCameraCapability;
    cameraHandler->GetCameraCapabilityString = GetCameraCapabilityString;
    cameraHandler->Release = CameraRelease;
    cameraHandler->RegisterListener = CameraRegisterListener;
    cameraHandler->BusinessControl = CameraBusinessControl;
    return DMSDP_NO_ERROR;
}
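For orientation, here is a hypothetical caller-side sketch showing how this handler table lines up with the functions modified in the rest of this article. The real registration is performed inside the DMSDP framework; the harness below is illustrative only.

// Hypothetical harness: exercise the handler table returned by DMSDPGetCameraHandler().
static void IllustrateHandlerUsage()
{
    DMSDPCameraHandler handler;
    if (DMSDPGetCameraHandler(&handler) != DMSDP_NO_ERROR) {
        return;
    }
    // Capability query: this is what triggers the virtual-node reporting described below.
    DMSDPCameraCapabilities* caps = nullptr;
    uint32_t num = 0;
    if (handler.GetCameraCapability(&caps, &num) == DMSDP_NO_ERROR) {
        ALOGI("[native_camera] reported %u camera(s)", num);
    }
    // OpenCamera/CloseCamera would then be driven by xxx using the reported virtual ids.
}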

Right, let's start modifying the code.

It is now 2020/10/28. After a full day of debugging at the office, I found that yesterday's approach has problems.

I reworked it today and the new version tests OK. This article is therefore split into two chapters:
the first is today's working approach, the second is yesterday's approach, which I am keeping for reference rather than deleting.

Part 1: Today's approach (verified working)

Today's changes build on yesterday's but are considerably simpler, so the commentary is correspondingly briefer.

First, define the node names:

video14 is the real video node on the head unit; video101, video102 and video103 are the three virtual video nodes created in the lib and reported to xxx.
PS: the names are somewhat arbitrary; they can be revisited once the feature is working.

// ciellee make three virtual camera device  camera id +++
char id_real[] = "video14";
std::string CAMERA_ID = "video14";
std::string id_101 = "video101";
std::string id_102 = "video102";
std::string id_103 = "video103";
// ciellee make three virtual camera device  camera id ---

1. Virtualize three cameras and report them

1.1 In FindVideoFiles(), find the node the head unit actually uses

std::vector<std::string> FindVideoFiles()
{
    std::vector<std::string> videoFiles;
    DIR *dir = opendir(VIDEO_FILE_DIR.c_str());
    if (dir == nullptr) {
        ALOGE("%s: can not open %s, check path or permission", __FUNCTION__, VIDEO_FILE_DIR.c_str());
        return videoFiles;
    }
    struct dirent *file;
    while ((file = readdir(dir)) != nullptr) {
        if (strcmp(file->d_name, ".") == 0 || strcmp(file->d_name, "..") == 0) {
            continue;
        }
        std::string fileName(file->d_name);
        int pos = fileName.find(VIDEO_FILE_PREF);
        // ciellee make three virtual camera device FindVideoFiles +++
        /*if (pos != fileName.npos && pos == 0
              && !(strcmp(file->d_name, "video0") == 0)
              && !(strcmp(file->d_name, "video1") == 0)
              && !(strcmp(file->d_name, "video2") == 0)) {
            ALOGI("%s: find one video file: %s", __FUNCTION__, file->d_name);
            videoFiles.push_back(fileName);
        }*/
        if (pos != fileName.npos && pos == 0 && strcmp(file->d_name, CAMERA_ID.c_str()) == 0) {
        //if (pos != fileName.npos && pos == 0) {
            ALOGI("[native_camera] %s: find one video file: %s", __FUNCTION__, file->d_name);
            videoFiles.push_back(fileName);
        }
        // ciellee make three virtual camera device FindVideoFiles ---
    }
    closedir(dir);
    return videoFiles;
}

1.2 Virtualize the three cameras

ValidV4L2Drivers() walks the nodes found above; when it reaches the camera we want to virtualize, it pushes video101/video102/video103 into the v4l2Videos vector instead of the real node.

std::vector<std::string> ValidV4L2Drivers(std::vector<std::string> videos, std::vector<DMSDPCameraCapabilitiesInner>& infos)
{
    std::vector<std::string> v4l2Videos;
    for (int i = 0; i < videos.size(); i++) {
        std::string video = videos[i];
        int ret = CheckVideoFile(video, infos);
        // ciellee make three virtual camera device ValidV4L2Drivers +++
        ALOGI("[native_camera]1 %s: %s, push_bak(%s %s %s)", __FUNCTION__, video.c_str(),
              id_101.c_str(), id_102.c_str(), id_103.c_str());
        if (video.compare(CAMERA_ID) == 0) {
            v4l2Videos.push_back(id_101);
            v4l2Videos.push_back(id_102);
            v4l2Videos.push_back(id_103);
        } else {
            v4l2Videos.push_back(video);
        }
        // ciellee make three virtual camera device ValidV4L2Drivers ---
    }
    return v4l2Videos;
}

At the end of the capability-collection step, if the camera is the one being virtualized, its capability entry is copied three times and reported as video101/video102/video103.

int32_t CollectV4L2DriverInfos(std::string id, struct vdIn *vd, std::vector<DMSDPCameraCapabilitiesInner>& infos)
{
    // ......
    // (a large block of code omitted here)
    // ......
    DMSDPCameraAbilityInner cii;
    cii.supportFpsRanges = supportFpsRange;
    cii.supportSizes = supportSizes;
    cii.dataType = DMSDPCameraDataType::H264;
    std::vector<DMSDPCameraAbilityInner> abilities;
    abilities.push_back(cii);
    DMSDPCameraCapabilitiesInner ciis;
    ciis.id = id;
    ciis.abilities = abilities;
    // ciellee make three virtual camera device CollectV4L2DriverInfos +++
    ALOGI("[native_camera]2 %s: real id: %s virtual node: %s, %s, %s", __FUNCTION__, id.c_str(),
          id_101.c_str(), id_102.c_str(), id_103.c_str());
    if (id.compare(CAMERA_ID) == 0) {
        DMSDPCameraCapabilitiesInner ciis_1;
        ciis_1.id = id_101;
        ciis_1.abilities = abilities;
        infos.push_back(ciis_1);
        DMSDPCameraCapabilitiesInner ciis_2;
        ciis_2.id = id_102;
        ciis_2.abilities = abilities;
        infos.push_back(ciis_2);
        DMSDPCameraCapabilitiesInner ciis_3;
        ciis_3.id = id_103;
        ciis_3.abilities = abilities;
        infos.push_back(ciis_3);
    } else {
        infos.push_back(ciis);
    }
    // ciellee make three virtual camera device CollectV4L2DriverInfos ---
    ALOGI("[native_camera] %s: collect info succes, id: %s", __FUNCTION__, id.c_str());
    return 0;
}

2. open_device(): opening the camera

With the virtual cameras reported, the remaining work is in the camera-open path.

Compared with yesterday's approach, this is where today's change differs.

The main idea is:

  1. Since we reported video101, video102 and video103, those are the names xxx sends back down in its open request, so open_device() handles the mapping.
  2. Depending on which of video101/video102/video103 was requested, send a CAN message to the ICM to switch the camera. This will be implemented later; a rough sketch follows the code below.
  3. Map video101/video102/video103 all to video14; the node actually opened is video14.

The open_device() code:
int open_device(std::string i)
{
    // ciellee make three virtual camera device open_device +++
    // 1. Depending on which of video101/video102/video103 was requested, send a CAN message
    //    to the ICM to switch the camera (to be implemented later).
    // 2. Map video101/video102/video103 to video14; the node actually opened is video14.
    if (i.compare(id_101) == 0 || i.compare(id_102) == 0 || i.compare(id_103) == 0) {
        sprintf(dev_name, "/dev/%s", CAMERA_ID.c_str());
    } else {
        sprintf(dev_name, "/dev/%s", i.c_str());
    }
    ALOGI("[native_camera] open camera %s, real open %s", i.c_str(), dev_name);
    // ciellee make three virtual camera device open_device ---
    vd = (struct vdIn *) calloc(1, sizeof(struct vdIn));
    vd->fd = open(dev_name, O_RDWR);
    return 0;
}
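The CAN switch in step 1 is not implemented yet. Purely as an illustration, here is a minimal sketch of what such a helper could look like if the platform exposed the bus through Linux SocketCAN; the interface name "can0", the CAN ID 0x123 and the one-byte payload are placeholders, and the real ICM message layout and transport must come from the vehicle's CAN matrix (the platform may well use a vendor CAN service instead of SocketCAN).

#include <cstdint>
#include <cstring>
#include <unistd.h>
#include <net/if.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <linux/can.h>
#include <linux/can/raw.h>

// Hypothetical helper: ask the ICM to route one of DMS/AVR/ROA to the head unit.
// The interface name, CAN ID and payload layout are placeholders, not the real protocol.
static int SwitchIcmCamera(uint8_t cameraIndex /* e.g. 0 = DMS, 1 = AVR, 2 = ROA */)
{
    int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);
    if (s < 0) {
        return -1;
    }
    struct ifreq ifr;
    std::memset(&ifr, 0, sizeof(ifr));
    std::strncpy(ifr.ifr_name, "can0", IFNAMSIZ - 1);          // placeholder interface name
    if (ioctl(s, SIOCGIFINDEX, &ifr) < 0) {
        close(s);
        return -1;
    }
    struct sockaddr_can addr;
    std::memset(&addr, 0, sizeof(addr));
    addr.can_family = AF_CAN;
    addr.can_ifindex = ifr.ifr_ifindex;
    if (bind(s, reinterpret_cast<struct sockaddr*>(&addr), sizeof(addr)) < 0) {
        close(s);
        return -1;
    }
    struct can_frame frame;
    std::memset(&frame, 0, sizeof(frame));
    frame.can_id = 0x123;                                      // placeholder CAN ID
    frame.can_dlc = 1;
    frame.data[0] = cameraIndex;                               // placeholder payload
    int ret = (write(s, &frame, sizeof(frame)) == (ssize_t)sizeof(frame)) ? 0 : -1;
    close(s);
    return ret;
}

open_device() could then call such a helper with an index derived from id_101/id_102/id_103 before opening /dev/video14.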

Good. That completes today's change.

Today's approach is far simpler than yesterday's, and it has been verified on the bench.

Two points still need investigation:

  1. How to send the CAN message from inside the lib to switch cameras
  2. How to call the platform's hardware encoder from inside the lib to produce an H264 stream

I understand software encoding, but not hardware encoding yet, so studying hardware encoding is tomorrow's priority.

3. Debug logs

Logs from the debugging session follow.

1. Query the video node's information

10-28 09:24:10.700  4194  4194 I CamSDK_3rdCamera: [native_camera] VIDIOC_ENUM_FMT: id=video14 get pixel format: H264, pixel fmt: 875967048
10-28 09:24:10.700  4194  4194 I CamSDK_3rdCamera: [native_camera] CollectV4L2DriverInfos: support width: 1920, height: 1080
10-28 09:24:10.700  4194  4194 I CamSDK_3rdCamera: [native_camera] CollectV4L2DriverInfos: support width: 1280, height: 720
10-28 09:24:10.700  4194  4194 I CamSDK_3rdCamera: [native_camera] CollectV4L2DriverInfos: support width: 640, height: 480
10-28 09:24:10.700  4194  4194 I CamSDK_3rdCamera: [native_camera] CollectV4L2DriverInfos: support width: 640, height: 360

2. Report the virtualized video nodes

The log shows that the three nodes video101, video102 and video103 are each reported:

10-28 09:24:33.545  4194  4585 I DMSDP   : CamSDK_CameraHandle:Return cached camera ability, Usable Android camera num: 0, Native camera num: 3, Camera infos:
[
  {
    "extendInfos": {},
    "frameAbility": { "supportFormats": [], "supportFpsRange": [], "supportSizes": [] },
    "id": "video101",
    "streamAbility": {
      "supportStreamTypes": {
        "H264": {
          "supportFpsRange": [ {"max":30000,"min":30000}, {"max":15000,"min":15000}, {"max":25000,"min":25000} ],
          "supportSizes": [ {"height":1080,"width":1920}, {"height":720,"width":1280}, {"height":480,"width":640},
                            {"height":360,"width":640}, {"height":288,"width":352}, {"height":240,"width":320},
                            {"height":144,"width":176}, {"height":120,"width":160}, {"height":1080,"width":1920} ]
        }
      }
    }
  },
  {
    "extendInfos": {},
    "frameAbility": { "supportFormats": [], "supportFpsRange": [], "supportSizes": [] },
    "id": "video102",
    "streamAbility": {
      "supportStreamTypes": {
        "H264": {
          "supportFpsRange": [ {"max":30000,"min":30000}, {"max":15000,"min":15000}, {"max":25000,"min":25000} ],
          "supportSizes": [ {"height":1080,"width":1920}, {"height":720,"width":1280}, {"height":480,"width":640},
                            {"height":360,"width":640}, {"height":288,"width":352}, {"height":240,"width":320},
                            {"height":144,"width":176}, {"height":120,"width":160}, {"height":1080,"width":1920} ]
        }
      }
    }
  },
  {
    "extendInfos": {},
    "frameAbility": { "supportFormats": [], "supportFpsRange": [], "supportSizes": [] },
    "id": "video103",
    "streamAbility": {
      "supportStreamTypes": {
        "H264": {
          "supportFpsRange": [ {"max":30000,"min":30000}, {"max":15000,"min":15000}, {"max":25000,"min":25000} ],
          "supportSizes": [ {"height":1080,"width":1920}, {"height":720,"width":1280}, {"height":480,"width":640},
                            {"height":360,"width":640}, {"height":288,"width":352}, {"height":240,"width":320},
                            {"height":144,"width":176}, {"height":120,"width":160}, {"height":1080,"width":1920} ]
        }
      }
    }
  }
]

3. Open the camera

Before opening the camera, the data callback is registered first; the virtual video101 node is then mapped to video14, and /dev/video14 is what is actually opened.

10-28 09:25:55.669  4194 10012 I CamSDK_3rdCamera: [native_camera] OpenCameraInner: Register sendback function to v4l2 lib, id = video101
10-28 09:25:55.669  4194 10012 I CamSDK_3rdCamera: Register sendback h264 frame func
10-28 09:25:55.669  4194 10012 I CamSDK_3rdCamera: [native_camera] open camera video101, real open /dev/video14
10-28 09:25:55.669  4194 10012 I CamSDK_3rdCamera: Init_264camera: ------open_device--success-- !

video103 behaves the same way:

10-28 09:25:28.432  4194 10877 I CamSDK_3rdCamera: [native_camera] OpenCameraInner: Register sendback function to v4l2 lib, id = video103
10-28 09:25:28.432  4194 10877 I CamSDK_3rdCamera: Register sendback h264 frame func
10-28 09:25:28.432  4194 10877 I CamSDK_3rdCamera: [native_camera] open camera video103, real open /dev/video14
10-28 09:25:28.432  4194 10877 I CamSDK_3rdCamera: Init_264camera: ------open_device--success-- !

Part 2: Yesterday's approach (has problems)

1. GetCameraCapability(): initialize and report all camera information on the head unit

When the head unit boots and xxx initializes, xxx calls GetCameraCapability() to obtain every camera node on the head unit together with its capabilities.

The code works as follows:

  1. On the first scan it calls CollectCameraInfo(), which walks all the video nodes, collects the valid ones, and stores the information in h264CamInfos.
  2. On subsequent calls it reports directly from the global vector h264CamInfos, which a periodic thread rescans and keeps up to date.
int32_t GetCameraCapability(DMSDPCameraCapabilities** capabilities, uint32_t* num)
{
    // On the first scan, call CollectCameraInfo()
    if (!finishFullCollect) {
        ALOGI("%s: do the full camera h264 collect", __FUNCTION__);
        CollectCameraInfo();
        finishFullCollect = true;
    }
    *num = h264CamInfos.size();
    ALOGI("%s: valid h264 camera num: %d", __FUNCTION__, *num);
    return GetCameraCapabilityInner(capabilities);
}

(1) CollectCameraInfo(): gather information for all valid video nodes

  1. Walk all the camera video nodes; the result is a vector of strings.
  2. Open each video node in turn, issue ioctls to query its information, and keep the valid nodes in vector h264CamInfos.
  3. Copy the videoFiles and validOnes lists into mVideoDrivers and mH264Drivers.
void CollectCameraInfo()
{
    ALOGI("%s: collect camera infos", __FUNCTION__);
    // 1. Walk all the camera video nodes; the result is a vector of strings
    std::vector<std::string> videoFiles = FindVideoFiles();
    // 2. Open each video node, query its information, and keep the valid ones in h264CamInfos
    std::vector<std::string> validOnes = ValidV4L2Drivers(videoFiles, h264CamInfos);
    int videoNum = videoFiles.size();
    int validVideosNum = validOnes.size();
    ALOGI("[native_camera] %s: video files num: %d, valid v4l2 ones: %d", __FUNCTION__, videoNum, validVideosNum);
    // 3. Copy videoFiles and validOnes into mVideoDrivers and mH264Drivers
    mVideoDrivers.clear();
    mVideoDrivers.assign(videoFiles.begin(), videoFiles.end());
    mH264Drivers.clear();
    mH264Drivers.assign(validOnes.begin(), validOnes.end());
    ALOGI("%s: save video drivers info", __FUNCTION__);
    ALOGI("%s: start thread to monitor video drivers", __FUNCTION__);
}
1.1 FindVideoFiles(): enumerate the video nodes (modified)

The overall flow of FindVideoFiles is:

  1. Open the /dev directory.
  2. Skip the "." and ".." entries and push every node whose name starts with "video" into the videoFiles vector.

Every node placed in videoFiles is later opened so its information can be queried via ioctl, and that can cause a problem.

The scan runs during boot and takes time. If xxx scans a node it will never actually use, then while xxx holds that node open other applications cannot open it, which can show up as a bug.

It is therefore better to filter down to the nodes that may actually be used rather than push_back every video node.

The global camera node and the virtualized camera ids are defined as follows:

// ciellee make three virtual camera device  camera id +++
std::string CAMERA_ID = "video14";
std::string id_101 = "video101";
std::string id_102 = "video102";
std::string id_103 = "video103";
// ciellee make three virtual camera device  camera id ---

The FindVideoFiles code is shown below.

As you can see, only the /dev/video14 node is scanned; all other nodes are skipped.

std::vector<std::string> FindVideoFiles()
{
    std::vector<std::string> videoFiles;
    // 1. Open the /dev/ directory
    DIR* dir = opendir(VIDEO_FILE_DIR.c_str());
    struct dirent* file;
    while ((file = readdir(dir)) != nullptr) {
        if (strcmp(file->d_name, ".") == 0 || strcmp(file->d_name, "..") == 0) {
            continue;
        }
        std::string fileName(file->d_name);
        int pos = fileName.find(VIDEO_FILE_PREF);
        // ciellee make three virtual camera device FindVideoFiles +++
        /*if (pos != fileName.npos && pos == 0
              && !(strcmp(file->d_name, "video0") == 0)
              && !(strcmp(file->d_name, "video1") == 0)
              && !(strcmp(file->d_name, "video2") == 0)) {
            ALOGI("%s: find one video file: %s", __FUNCTION__, file->d_name);
            videoFiles.push_back(fileName);
        }*/
        // Scan only the designated video node on the vehicle
        if (pos != fileName.npos && pos == 0 && strcmp(file->d_name, CAMERA_ID.c_str()) == 0) {
        // Scan all video nodes on the vehicle
        //if (pos != fileName.npos && pos == 0) {
            ALOGI("[native_camera] %s: find one video file: %s", __FUNCTION__, file->d_name);
            videoFiles.push_back(fileName);
        }
        // ciellee make three virtual camera device FindVideoFiles ---
    }
    closedir(dir);
    return videoFiles;
}
1.2 ValidV4L2Drivers(): filter the valid video nodes (modified)

  1. For each entry in videos, call CheckVideoFile to gather its information.
  2. If a node is the one we want to virtualize, store the predefined virtual nodes video101, video102 and video103 in v4l2Videos instead.
std::vector<std::string> ValidV4L2Drivers(std::vector<std::string> videos, std::vector<DMSDPCameraCapabilitiesInner>& infos)
{
    std::vector<std::string> v4l2Videos;
    // 1. For each entry in videos, call CheckVideoFile to gather its information
    for (int i = 0; i < videos.size(); i++) {
        std::string video = videos[i];
        int ret = CheckVideoFile(video, infos);
        if (ret != 0) {
            ALOGI("%s: video file not valid v4l2 driver: %s", __FUNCTION__, video.c_str());
            continue;
        }
        // 2. If the node is the one to virtualize, store the predefined virtual nodes
        //    video101/video102/video103 in v4l2Videos instead.
        // ciellee make three virtual camera device ValidV4L2Drivers +++
        if (video.compare(CAMERA_ID) == 0) {
            ALOGI("[native_camera] %s: %s, push_bak(%s %s %s)", __FUNCTION__, video.c_str(),
                  id_101.c_str(), id_102.c_str(), id_103.c_str());
            v4l2Videos.push_back(id_101);
            v4l2Videos.push_back(id_102);
            v4l2Videos.push_back(id_103);
        } else {
            v4l2Videos.push_back(video);
        }
        // ciellee make three virtual camera device ValidV4L2Drivers ---
    }
    return v4l2Videos;
}
1.3 CheckVideoFile(): open the device node and query its information

  1. Build the node path, e.g. /dev/video14, then check that the node exists and is a character device.
  2. Open the node for reading and writing.
  3. Query the device's capabilities and check that it supports CAPTURE and STREAMING.
  4. Collect the device information and append it to infos (i.e. h264CamInfos).
int CheckVideoFile(std::string id, std::vector<DMSDPCameraCapabilitiesInner>& infos)
{
    struct stat st;
    // 1. Build the node path, e.g. /dev/video14, then check that it exists and is a character device
    sprintf(dev_name, "/dev/%s", id.c_str());
    ALOGI("[native_camera] %s: %s +++ \n", __func__, dev_name);
    if (-1 == stat(dev_name, &st)) {
        ALOGI("[native_camera] CheckVideoFile: Cannot identify '%s': %d, %s", dev_name, errno, strerror(errno));
        return -1;
    }
    if (!S_ISCHR(st.st_mode)) {
        ALOGI("[native_camera] CheckVideoFile: %s is no device", dev_name);
        return -1;
    }
    struct vdIn* vd = (struct vdIn*)calloc(1, sizeof(struct vdIn));
    // 2. Open the node for reading and writing
    vd->fd = open(dev_name, O_RDWR);
    if (-1 == vd->fd) {
        ALOGI("[native_camera] CheckVideoFile: Cannot open '%s': %d, %s", dev_name, errno, strerror(errno));
        return -1;
    }
    // 3. Query the device's capabilities
    struct v4l2_capability cap;
    if (-1 == xioctl(vd->fd, VIDIOC_QUERYCAP, &cap)) {
        close(vd->fd);
        if (EINVAL == errno) {
            ALOGI("[native_camera] CheckVideoFile: %s is no V4L2 device", dev_name);
            return -1;
        } else {
            ALOGI("[native_camera] CheckVideoFile: %s VIDIOC_QUERYCAP error", dev_name);
            return errnoexit("VIDIOC_QUERYCAP");
        }
    }
    // Check that the device supports CAPTURE and STREAMING
    if (!(cap.capabilities & V4L2_CAP_VIDEO_CAPTURE)) {
        ALOGI("[native_camera] CheckVideoFile: %s is no video capture device", dev_name);
        close(vd->fd);
        return -1;
    }
    if (!(cap.capabilities & V4L2_CAP_STREAMING)) {
        ALOGI("[native_camera] CheckVideoFile: %s does not support streaming id/o", dev_name);
        close(vd->fd);
        return -1;
    }
    // 4. Collect the device information and append it to infos (i.e. h264CamInfos)
    int ret = CollectV4L2DriverInfos(id, vd, infos);
    if (ret != 0) {
        ALOGE("[native_camera] %s: collect camera info failed, ret: %d", __FUNCTION__, ret);
        close(vd->fd);
        return -1;
    }
    ALOGI("[native_camera] %s: find one valid v4l2 driver: %s", __FUNCTION__, id.c_str());
    close(vd->fd);
    return 0;
}
1.4 CollectV4L2DriverInfos(): collect the device information (modified)

  1. Query the device's pixel formats; if the device does not offer H264, return immediately.
     Note / TODO: this should be changed to report the device as soon as a format is read successfully.
     The vehicle's cameras output NV21 RAW data by default, so the H264 filter must not be applied here.
  2. Query the supported fps values and resolutions.
  3. If the current node is the one being virtualized, do the conversion here:
     the single video14 node is reported as the three nodes video101, video102 and video103.
int32_t CollectV4L2DriverInfos(std::string id, struct vdIn* vd, std::vector<DMSDPCameraCapabilitiesInner>& infos)
{
    int ret = -1;
    struct v4l2_fmtdesc fmt;
    memset(&fmt, 0, sizeof(fmt));
    fmt.index = 0;
    fmt.type = V4L2_CAP_VIDEO_CAPTURE;
    bool isHasH264 = false;
    __u32 h264Fmt = 0;
    //__u32 mjpgFmt = 0;
    // 1. Enumerate the device's pixel formats; if it does not offer H264, return immediately
    while ((ret = ioctl(vd->fd, VIDIOC_ENUM_FMT, &fmt)) == 0) {
        fmt.index++;
        std::stringstream stream;
        stream << (char)(fmt.pixelformat & 0xff)
               << (char)((fmt.pixelformat >> 8) & 0xff)
               << (char)((fmt.pixelformat >> 16) & 0xff)
               << (char)((fmt.pixelformat >> 24) & 0xff);
        std::string pixFmt = stream.str();
        ALOGI("[native_camera] VIDIOC_ENUM_FMT: id=%s get pixel format: %s, pixel fmt: %u", id.c_str(), pixFmt.c_str(), fmt.pixelformat);
        // Note / TODO: change this to report the device as soon as a format is read successfully.
        // The vehicle's cameras output NV21 RAW data by default, so do not filter on H264 here.
        if (pixFmt.compare(H264_FMT) == 0) {
            isHasH264 = true;
            h264Fmt = fmt.pixelformat;
            break;
        }
        /*if (pixFmt.compare(LOGI_MJPG_FMT) == 0) {
            mjpgFmt = fmt.pixelformat;
        }*/
    }
    if (!isHasH264) {
        ALOGW("[native_camera] %s: do not support H264", __FUNCTION__);
        return -1;
    }
    // 2. Query the supported fps values and resolutions
    std::vector<DMSDPFpsRange> supportFpsRange;
    std::vector<DMSDPSize> supportSizes;
    /* 1. get resolution info */
    struct v4l2_frmsizeenum frmsize;
    memset(&frmsize, 0, sizeof(frmsize));
    frmsize.index = 0;
    frmsize.pixel_format = h264Fmt;   // H264: 0x34363248
    //frmsize.pixel_format = 0x34363248;
    //frmsize.pixel_format = mjpgFmt;
    // Enumerate the supported widths and heights
    while ((ret = ioctl(vd->fd, VIDIOC_ENUM_FRAMESIZES, &frmsize)) == 0) {
        frmsize.index++;
        DMSDPSize size;
        size.width = frmsize.discrete.width;
        size.height = frmsize.discrete.height;
        ALOGI("[native_camera] %s: support width: %d, height: %d", __FUNCTION__, size.width, size.height);
        supportSizes.push_back(size);
    }
    /* 2. get fps info */
    struct v4l2_streamparm streamparm;
    memset(&streamparm, 0, sizeof(streamparm));
    streamparm.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    ret = ioctl(vd->fd, VIDIOC_G_PARM, &streamparm);
    if (ret != 0) {
        ALOGE("[native_camera] %s: get camera fps info failed, ret: %d", __FUNCTION__, ret);
        return -1;
    }
    ALOGI("[native_camera] %s: support fps ctl: %s, is high quality: %d", __FUNCTION__,
          streamparm.parm.capture.capability == V4L2_CAP_TIMEPERFRAME ? "true" : "false",
          streamparm.parm.capture.capturemode);
    __u32 numerator = streamparm.parm.capture.timeperframe.numerator;
    __u32 denominator = streamparm.parm.capture.timeperframe.denominator;
    ALOGI("[native_camera] %s: fps info, numerator: %d, denominator: %d, fps:%d", __FUNCTION__,
          numerator, denominator, (int)(denominator / numerator));
    int fps = 30;
    if (numerator != 0) {
        fps = (int)(denominator / numerator);
    }
    if (fps < 15) {
        fps = 30;
    }
    DMSDPFpsRange fr;
    fr.min = fps * 1000;
    fr.max = fps * 1000;
    supportFpsRange.push_back(fr);
    if (fps >= 30) {
        DMSDPFpsRange fr15;
        fr15.min = 15000;
        fr15.max = 15000;
        DMSDPFpsRange fr25;
        fr25.min = 25000;
        fr25.max = 25000;
        supportFpsRange.push_back(fr15);
        supportFpsRange.push_back(fr25);
    }
    DMSDPCameraAbilityInner cii;
    cii.supportFpsRanges = supportFpsRange;
    cii.supportSizes = supportSizes;
    cii.dataType = DMSDPCameraDataType::H264;
    std::vector<DMSDPCameraAbilityInner> abilities;
    abilities.push_back(cii);
    DMSDPCameraCapabilitiesInner ciis;
    ciis.id = id;
    ciis.abilities = abilities;
    // 3. If the current node is the one being virtualized, do the conversion here:
    //    the single video14 node is reported as video101/video102/video103.
    // ciellee make three virtual camera device CollectV4L2DriverInfos +++
    if (id.compare(CAMERA_ID) == 0) {
        DMSDPCameraCapabilitiesInner ciis_1;
        ciis_1.id = id_101;
        ciis_1.abilities = abilities;
        infos.push_back(ciis_1);
        DMSDPCameraCapabilitiesInner ciis_2;
        ciis_2.id = id_102;
        ciis_2.abilities = abilities;
        infos.push_back(ciis_2);
        DMSDPCameraCapabilitiesInner ciis_3;
        ciis_3.id = id_103;
        ciis_3.abilities = abilities;
        infos.push_back(ciis_3);
        ALOGI("[native_camera] %s: virtual node: %s, %s, %s", __FUNCTION__, ciis_1.id.c_str(), ciis_2.id.c_str(), ciis_3.id.c_str());
    } else {
        infos.push_back(ciis);
    }
    // ciellee make three virtual camera device CollectV4L2DriverInfos ---
    ALOGI("[native_camera] %s: collect info succes, id: %s", __FUNCTION__, id.c_str());
    return 0;
}

(2) GetCameraCapabilityInner(): report the nodes to xxx

GetCameraCapabilityInner has two overloads.

In the first one, int32_t GetCameraCapabilityInner(DMSDPCameraCapabilities** capabilities), the main work is to call the second overload and to start a monitorThread that runs MonitorH264VideoDrivers() every 500 ms to keep the device state up to date.

int32_t GetCameraCapabilityInner(DMSDPCameraCapabilities** capabilities)
{
    int32_t ret = GetCameraCapabilityInner(capabilities, h264CamInfos);
    if (!isMonitorThreadStart) {
        std::thread monitorThread(MonitorDriversLoop);
        monitorThread.detach();
        isMonitorThreadStart = true;
    }
    return ret;
}

The second overload, shown below, converts the h264CamInfos entries into the capabilities structure that is ultimately reported to xxx.

int32_t GetCameraCapabilityInner(DMSDPCameraCapabilities** capabilities, std::vector<DMSDPCameraCapabilitiesInner>& h264CamInfos)
{
    ALOGI("%s: start set camera abilities", __FUNCTION__);
    *capabilities = (DMSDPCameraCapabilities*)malloc(sizeof(DMSDPCameraCapabilities) * h264CamInfos.size());
    if (*capabilities == nullptr) {
        ALOGE("%s: no memory for DMSDPCameraCapabilities", __FUNCTION__);
        return DMSDP_ERR_NO_MEMORY;
    }
    memset(*capabilities, 0, sizeof(DMSDPCameraCapabilities) * h264CamInfos.size());
    DMSDPCameraCapabilities *items = *capabilities;
    for (int i = 0; i < h264CamInfos.size(); i++) {
        ALOGI("%s: start set camera idx: %d", __FUNCTION__, i);
        DMSDPCameraCapabilitiesInner itemInner = h264CamInfos[i];
        /* 1. set camera id */
        char* id = (char*) malloc(VENDOR_CAMERA_ID_MAX_LEN);
        if (id == nullptr) {
            ALOGE("%s: no memory for camera id", __FUNCTION__);
            CameraRelease(RELEASE_TYPE_QUERY_ABILITY, *capabilities, h264CamInfos.size());
            return DMSDP_ERR_NO_MEMORY;
        }
        memset(id, 0, VENDOR_CAMERA_ID_MAX_LEN);
        uint32_t idLen = itemInner.id.length();
        memcpy(id, itemInner.id.c_str(), idLen);
        items[i].id = id;
        items[i].idLen = idLen;
        /* 2. set ability */
        uint32_t abilityNum = itemInner.abilities.size();
        DMSDPCameraAbility* ability = (DMSDPCameraAbility*)malloc(sizeof(DMSDPCameraAbility) * abilityNum);
        if (ability == nullptr) {
            ALOGE("%s: no memory for camera ability", __FUNCTION__);
            CameraRelease(RELEASE_TYPE_QUERY_ABILITY, *capabilities, h264CamInfos.size());
            *capabilities = nullptr;
            return DMSDP_ERR_NO_MEMORY;
        }
        memset(ability, 0, sizeof(DMSDPCameraAbility) * abilityNum);
        items[i].abilities = ability;
        items[i].abilityNum = abilityNum;
        for (uint32_t i = 0; i < abilityNum; i++) {
            /* a. set data type */
            ability[i].dataType = H264;
            /* b. set fps range */
            uint32_t fpsNum = itemInner.abilities[i].supportFpsRanges.size();
            DMSDPFpsRange* fpsRanges = (DMSDPFpsRange*)malloc(sizeof(DMSDPFpsRange) * fpsNum);
            if (fpsRanges == nullptr) {
                ALOGE("%s: no memory for camera fpsRanges", __FUNCTION__);
                CameraRelease(RELEASE_TYPE_QUERY_ABILITY, *capabilities, h264CamInfos.size());
                *capabilities = nullptr;
                return DMSDP_ERR_NO_MEMORY;
            }
            memset(fpsRanges, 0, sizeof(DMSDPFpsRange) * fpsNum);
            for (uint32_t j = 0; j < fpsNum; j++) {
                fpsRanges[j].min = itemInner.abilities[i].supportFpsRanges[j].min;
                fpsRanges[j].max = itemInner.abilities[i].supportFpsRanges[j].max;
            }
            ability[i].fpsRangeNum = fpsNum;
            ability[i].supportFpsRanges = fpsRanges;
            /* c. set size */
            uint32_t sizeNum = itemInner.abilities[i].supportSizes.size();
            DMSDPSize* sizes = (DMSDPSize*) malloc(sizeof(DMSDPSize) * sizeNum);
            if (sizes == nullptr) {
                ALOGE("%s: no memory for camera sizes", __FUNCTION__);
                CameraRelease(RELEASE_TYPE_QUERY_ABILITY, *capabilities, h264CamInfos.size());
                *capabilities = nullptr;
                return DMSDP_ERR_NO_MEMORY;
            }
            memset(sizes, 0, sizeof(DMSDPSize) * sizeNum);
            for (uint32_t k = 0; k < sizeNum; k++) {
                sizes[k].width = itemInner.abilities[i].supportSizes[k].width;
                sizes[k].height = itemInner.abilities[i].supportSizes[k].height;
            }
            ability[i].sizeNum = sizeNum;
            ability[i].supportSizes = sizes;
        }
    }
    ALOGI("%s: Finish set camera abilities", __FUNCTION__);
    return DMSDP_NO_ERROR;
}

Good. That completes the changes on the camera-reporting side; next comes the flow in which xxx sends down a request to open a camera.

Since the nodes we report are video101, video102 and video103, which do not actually exist on the head unit (the real device node is video14),

we now need to convert these three virtualized nodes into the physical node video14.

2. CameraRegisterSendDataCallback(): register the data callback (modified)

Before opening the camera, xxx first registers and binds the data callback.

If video101, video102 or video103 is passed down here, we convert it to video14.

int32_t CameraRegisterSendDataCallback(const char* id, uint32_t idLen, const DMSDPSendBackDataCB cb)
{
    /* regist callback senddata */
    ALOGI("[native_camera] CameraRegisterSendDataCallback, id=%s, idLen=%d", id, idLen);
    if (!IsCameraIdValid(id, idLen)) {
        return DMSDP_ERR_INVALID_PARAMETER;
    }
    // ciellee make three virtual camera device CameraRegisterSendDataCallback +++
    if (strcmp(id, id_101.c_str()) == 0 || strcmp(id, id_102.c_str()) == 0 || strcmp(id, id_103.c_str()) == 0) {
        ALOGI("[native_camera] CameraRegisterSendDataCallback, change to id=%s, idLen=%d", CAMERA_ID.c_str(), CAMERA_ID.length());
        RegisterSendDataCallbackInner(CAMERA_ID.c_str(), CAMERA_ID.length(), cb);
    } else {
        RegisterSendDataCallbackInner(id, idLen, cb);
    }
    // ciellee make three virtual camera device CameraRegisterSendDataCallback ---
    return DMSDP_NO_ERROR;
}

The registration itself is simple: sendDataCBMap is a map from a string camera id to its DMSDPSendBackDataCB callback.

std::shared_ptr<std::map<std::string, DMSDPSendBackDataCB>> sendDataCBMap =
    std::make_shared<std::map<std::string, DMSDPSendBackDataCB>>();

void RegisterSendDataCallbackInner(const char* id, uint32_t idLen, const DMSDPSendBackDataCB cb)
{
    std::string cameraId(id, idLen);
    std::string innerCamId = getInnerCamId(cameraId);
    auto iter = sendDataCBMap->find(innerCamId);
    if (iter != sendDataCBMap->end()) {
        sendDataCBMap->erase(innerCamId);
    }
    sendDataCBMap->emplace(innerCamId, cb);
    ALOGI("[native_camera] %s: Register send data callback success, native camera id: %s", __FUNCTION__, id);
}

3. CameraUpdateConfig(): update the device configuration (modified)

Here the approach is the same: convert the id that xxx passes down into our real id.

Below, id is what xxx passes down and id_real is the converted value.

int32_t CameraUpdateConfig(const char* id, uint32_t idLen, int32_t key, char* value, uint32_t valLen)
{
    /* update config fps */
    ALOGI("[native_camera]1 CameraUpdateConfig camera id: %s, key: %d, value: %s", id, key, value);
    // ciellee make three virtual camera device CameraUpdateConfig +++
    std::string id_real;
    if (strcmp(id, id_101.c_str()) == 0 || strcmp(id, id_102.c_str()) == 0 || strcmp(id, id_103.c_str()) == 0) {
        id_real = CAMERA_ID;
    } else {
        id_real = std::string(id);
    }
    ALOGI("[native_camera]2 CameraUpdateConfig camera id: %s, key: %d, value: %s", id_real.c_str(), key, value);
    //UpdateConfigInner(id, idLen, key, value, valLen);
    UpdateConfigInner(id_real.c_str(), id_real.length(), key, value, valLen);
    // ciellee make three virtual camera device CameraUpdateConfig ---
    return DMSDP_NO_ERROR;
}

4. OpenCamera(): open the camera (modified)

Since xxx passes down a pointer to a param struct, the simplest approach here is to make a copy of param

and change its id to the id we need.

int32_t OpenCamera(const DMSDPCameraParam* param)
{
    // ciellee make three virtual camera device OpenCamera +++
    DMSDPCameraParam param_real;
    param_real.height = param->height;
    param_real.width = param->width;
    param_real.fps = param->fps;
    param_real.dataFormat = param->dataFormat;
    if (strcmp(param->id, id_101.c_str()) == 0 || strcmp(param->id, id_102.c_str()) == 0 || strcmp(param->id, id_103.c_str()) == 0) {
        param_real.id = id_real;
        param_real.idLen = strlen(id_real);
        ALOGI("[native_camera] %s: open camera change id %s to %s ", __FUNCTION__, param->id, param_real.id);
    } else {
        param_real.id = param->id;
        param_real.idLen = param->idLen;
    }
    // ciellee make three virtual camera device OpenCamera ---
    //int32_t ret = OpenCameraInner(param);
    int32_t ret = OpenCameraInner(&param_real);
    if (ret != 0) {
        ALOGI("%s: open camera error", __FUNCTION__);
        return DMSDP_ERR_CAMERA_OPEN_ERROR;
    }
    if (mDMSDPListener != nullptr) {
        //int32_t updateRet = mDMSDPListener->UpdateServiceStatus(param->id, param->idLen, DMSDPServiceStatus::UNAVAILABLE);
        int32_t updateRet = mDMSDPListener->UpdateServiceStatus(param_real.id, param_real.idLen, DMSDPServiceStatus::UNAVAILABLE);
        if (updateRet != 0) {
            ALOGE("[native_camera] %s: update camera to busy failed, id: %s", __FUNCTION__, param_real.id);
        }
    }
    ALOGI("[native_camera] %s: open camera %s success", __FUNCTION__, param_real.id);
    return DMSDP_NO_ERROR;
}

In OpenCameraInner():

  1. If the node is already open, it retries up to three times, waiting 100 ms between attempts.
  2. After initializing the camera, it starts the preview.
  3. It creates the data-receiving thread, whose entry function is cap_video.
int32_t OpenCameraInner(const DMSDPCameraParam* param)
{
    ALOGI("[native_camera] %s: start open native camera para.id = %s", __FUNCTION__, param->id);
    int count = 0;
    // 1. If the node is already open, retry up to three times, 100 ms apart
    while (isRunning) {
        ALOGW("%s: camera is running, wait for close", __FUNCTION__);
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
        count++;
        if (count >= 3) {
            break;
        }
    }
    checkAndClear();
    isRunning = true;
    std::string camId(param->id, param->idLen);
    currId = camId;
    std::string id = getInnerCamId(camId);
    auto iter = sendDataCBMap->find(id);
    if (iter == sendDataCBMap->end() || iter->second == nullptr) {
        ALOGE("%s: can not find send data callback func, start camera error, id: %s", __FUNCTION__, param->id); // ciellee log
        return -1;
    }
    currSendDataCB = iter->second;
    ALOGI("%s: Register sendback function to v4l2 lib, id = %s", __FUNCTION__, id.c_str());
    registerSendFrameFunc(currSendDataCB);
    pthread_t pthreadVideo;
    // 2. Initialize the camera, then start the preview
    int32_t ret = Init_264camera(id, param->width, param->height, param->fps);
    if (ret != 0) {
        ALOGE("%s: Init camera failed", __FUNCTION__);
        return -1;
    }
    pthread_mutex_init(&mut, nullptr);
    OpenVideo();
    // 3. Create the data-receiving thread (starts video capture)
    if (pthread_create(&pthreadVideo, nullptr, cap_video, nullptr)) {
        ALOGI("%s: Create cap_video error!", __FUNCTION__);
    }
    if (pthreadVideo != 0) {
        pthread_detach(pthreadVideo);
        ALOGI("%s: capture h264 preview exit", __FUNCTION__);
    }
    return 0;
}

4.1 cap_video(): read camera data in a loop (modified)

As mentioned earlier, the head unit's camera delivers NV21-format RAW data, so after reading a frame here we still need to H264-encode it and only then send the encoded data up to xxx through the SendFrame callback.

With software encoding that could be done right here.

With hardware encoding I am not yet sure how the change should look, so that still needs to be studied (a rough hardware-encoding sketch follows the code below).

The current task is to virtualize three cameras from an H264 UVC source, so this code does not need to change for now.

void* cap_video(void* arg)
{
    int ret;
    struct v4l2_buffer buf;
    struct timeval tv;
    tv.tv_sec = 0;
    tv.tv_usec = 10000;
    fd_set rfds;
    int retval = 0;
    while (capturing) {
        CLEAR(buf);
        buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        buf.memory = V4L2_MEMORY_MMAP;
        FD_ZERO(&rfds);
        FD_SET(vd->fd, &rfds);
        retval = select(vd->fd + 1, &rfds, NULL, NULL, &tv);
        if (retval < 0) {
            ALOGI("select error\n");
        } else {   // data is available
            ret = ioctl(vd->fd, VIDIOC_DQBUF, &buf);
            if (ret < 0) {
                ALOGI("Unable to dequeue buffer, stop capture!\n");
                //exit(1);
                {
                    std::lock_guard<std::mutex> lock(m_captureMtx);
                    capturing = 0;
                }
            }
            // TODO: take the frame here and encode it
            //fwrite(buffers[buf.index].start, buf.bytesused, 1, rec_fp1);
            SendFrame((const uint8_t*)buffers[buf.index].start, buf.bytesused);
            ret = ioctl(vd->fd, VIDIOC_QBUF, &buf);
            if (ret < 0) {
                ALOGI("Unable to requeue buffer, stop capture");
                //exit(1);
                {
                    std::lock_guard<std::mutex> lock(m_captureMtx);
                    capturing = 0;
                }
            }
        }
    }
    //stopPreviewing();
    close_v4l2_new();
    isRunning = false;
    pthread_exit(NULL);
}

5. CloseCamera(): close the camera (modified)

CloseCamera receives the id, so here we build a new string holding the converted id and forward that.

int32_t CloseCamera(const char* id, uint32_t idLen)
{
    // ciellee make three virtual camera device CloseCamera +++
    std::string id_real;
    if (strcmp(id, id_101.c_str()) == 0 || strcmp(id, id_102.c_str()) == 0 || strcmp(id, id_103.c_str()) == 0) {
        id_real = CAMERA_ID;
    } else {
        id_real = std::string(id);
    }
    ALOGI("[native_camera] %s: start close camera id = %s", __FUNCTION__, id_real.c_str());
    //CloseCameraInner(id, idLen);
    CloseCameraInner(id_real.c_str(), id_real.length());
    if (mDMSDPListener != nullptr) {
        //int32_t updateRet = mDMSDPListener->UpdateServiceStatus(id, idLen, DMSDPServiceStatus::AVAILABLE);
        int32_t updateRet = mDMSDPListener->UpdateServiceStatus(id_real.c_str(), id_real.length(), DMSDPServiceStatus::AVAILABLE);
        if (updateRet != 0) {
            ALOGE("%s: update camera to free failed, id: %s", __FUNCTION__, id_real.c_str());
        }
    }
    // ciellee make three virtual camera device CloseCamera ---
    return DMSDP_NO_ERROR;
}

Source: https://ciellee.blog.csdn.net/article/details/109306189
