[Android Video Hardware Encoding] Analyzing the Encoding Flow in screenrecord.cpp
Published: 2021-06-29 14:55:08 · Category: Technical Articles



This article studies how to do hardware video encoding from the native C++ layer, using \frameworks\av\cmds\screenrecord\screenrecord.cpp as the reference.

We focus only on the encoding-related parts; everything else can be ignored for now. To keep the analysis readable, unrelated code has been removed. With that said, let's look at the code. (The reference code is from a Qualcomm SDM660 BSP Android tree.)

I. ScreenRecord.cpp

1. Android.mk

First, look at Android.mk. It shows that screenrecord links against the libstagefright library.


```makefile
# frameworks/av/cmds/screenrecord/Android.mk
LOCAL_PATH:= $(call my-dir)
include $(CLEAR_VARS)

LOCAL_SRC_FILES := \
	screenrecord.cpp \
	EglWindow.cpp \
	FrameOutput.cpp \
	TextRenderer.cpp \
	Overlay.cpp \
	Program.cpp

LOCAL_SHARED_LIBRARIES := \
	libstagefright libmedia libutils libbinder libstagefright_foundation \
	libjpeg libgui libcutils liblog libEGL libGLESv2

LOCAL_C_INCLUDES := \
	frameworks/av/media/libstagefright \
	frameworks/av/media/libstagefright/include \
	$(TOP)/frameworks/native/include/media/openmax \
	external/jpeg

LOCAL_CFLAGS += -Wno-multichar
#LOCAL_CFLAGS += -UNDEBUG

LOCAL_MODULE_TAGS := optional
LOCAL_MODULE:= screenrecord

include $(BUILD_EXECUTABLE)
```

2. The main() function

Its main work:

  1. Parse the command-line options; break out of the parsing loop when done (or when parsing fails)
  2. Configure the video width/height (gVideoWidth/gVideoHeight), bit rate (gBitRate), and output format (gOutputFormat) from the options
  3. Take the last argument as the output file name (fileName)
  4. Call recordScreen() to start recording; the encoded output is written to fileName
```cpp
// frameworks/av/cmds/screenrecord/screenrecord.cpp
static const uint32_t kMinBitRate = 100000;         // 0.1Mbps  minimum bit rate
static const uint32_t kMaxBitRate = 200 * 1000000;  // 200Mbps  maximum bit rate

int main(int argc, char* const argv[]) {
    static const struct option longOptions[] = {
        { "size",          required_argument, NULL, 's' },  // video width/height
        { "bit-rate",      required_argument, NULL, 'b' },  // bit rate
        { "output-format", required_argument, NULL, 'o' },  // output format
    };
    int optionIndex = 0;

    while (true) {
        // 1. Parse the next option; break out of the loop when parsing is done
        int ic = getopt_long(argc, argv, "", longOptions, &optionIndex);
        if (ic == -1) break;

        // 2. Configure gVideoWidth/gVideoHeight, gBitRate and gOutputFormat
        //    from the parsed options
        switch (ic) {
        case 's':
            parseWidthHeight(optarg, &gVideoWidth, &gVideoHeight);
            gSizeSpecified = true;
            break;
        case 'b':
            parseValueWithUnit(optarg, &gBitRate);
            break;
        case 'o':
            if (strcmp(optarg, "mp4") == 0) {
                gOutputFormat = FORMAT_MP4;
            } else if (strcmp(optarg, "h264") == 0) {
                gOutputFormat = FORMAT_H264;
            } else if (strcmp(optarg, "frames") == 0) {
                gOutputFormat = FORMAT_FRAMES;
            } else if (strcmp(optarg, "raw-frames") == 0) {
                gOutputFormat = FORMAT_RAW_FRAMES;
            }
            break;
        }
    }

    // 3. The last argument is the output file name
    const char* fileName = argv[optind];

    // 4. Call recordScreen() to start recording; encoded output goes to fileName
    status_t err = recordScreen(fileName);
    if (err == NO_ERROR) {
        notifyMediaScanner(fileName);
    }
    ALOGD(err == NO_ERROR ? "success" : "failed");
    return (int) err;
}
```
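The `--bit-rate` option accepts values such as `4000000` or `4M`. The real `parseValueWithUnit()` lives in screenrecord.cpp; as an illustration, here is a simplified, self-contained sketch of such a helper (the `K` suffix is an illustrative addition, not necessarily supported by the real tool):

```cpp
#include <cstdint>
#include <cstdlib>
#include <cstring>

// Simplified sketch of a parseValueWithUnit()-style helper: accepts a plain
// number ("4000000") or a number with an 'M'/'K' suffix ("4M", "500K").
// Returns true on success and stores the parsed value in *pValue.
static bool parseValueWithUnit(const char* str, uint32_t* pValue) {
    char* endptr;
    long value = strtol(str, &endptr, 10);
    if (endptr == str || value < 0) {
        return false;                           // no digits, or negative
    }
    if (*endptr == '\0') {
        *pValue = (uint32_t) value;             // plain number
        return true;
    } else if (strcmp(endptr, "M") == 0 || strcmp(endptr, "m") == 0) {
        *pValue = (uint32_t) (value * 1000000); // megabits per second
        return true;
    } else if (strcmp(endptr, "K") == 0 || strcmp(endptr, "k") == 0) {
        *pValue = (uint32_t) (value * 1000);    // kilobits per second
        return true;
    }
    return false;                               // unrecognized suffix
}
```

The parsed value is then clamped against kMinBitRate/kMaxBitRate before use.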

II. recordScreen(fileName): Screen-Recording Flow

recordScreen() mainly configures the encoder (codec) and the A/V muxer:

  1. Start the Binder thread pool
  2. Get the main display parameters: width, height, frame rate, orientation
  3. If no size was specified, pick gVideoWidth/gVideoHeight based on the display orientation
  4. Initialize the encoder
  5. Configure the virtual display
  6. Run the encoder loop
```cpp
/*
 * Main "do work" start point.
 *
 * Configures codec, muxer, and virtual display, then starts moving bits around.
 */
static status_t recordScreen(const char* fileName) {
    status_t err;

    // 1. Start the Binder thread pool.  MediaCodec needs to be able to
    //    receive messages from mediaserver.
    sp<ProcessState> self = ProcessState::self();
    self->startThreadPool();

    // 2. Get main display parameters: width, height, fps, orientation.
    sp<IBinder> mainDpy = SurfaceComposerClient::getBuiltInDisplay(
            ISurfaceComposer::eDisplayIdMain);
    DisplayInfo mainDpyInfo;
    err = SurfaceComposerClient::getDisplayInfo(mainDpy, &mainDpyInfo);
    printf("Main display is %dx%d @%.2ffps (orientation=%u)\n",
            mainDpyInfo.w, mainDpyInfo.h, mainDpyInfo.fps,
            mainDpyInfo.orientation);

    // 3. If no size was specified, pick gVideoWidth/gVideoHeight based on
    //    the display orientation.
    bool rotated = isDeviceRotated(mainDpyInfo.orientation);
    if (gVideoWidth == 0) gVideoWidth = rotated ? mainDpyInfo.h : mainDpyInfo.w;
    if (gVideoHeight == 0) gVideoHeight = rotated ? mainDpyInfo.w : mainDpyInfo.h;

    // 4. Configure and start the encoder.
    sp<MediaCodec> encoder;
    sp<FrameOutput> frameOutput;
    sp<IGraphicBufferProducer> encoderInputSurface;
    // Any format other than FORMAT_FRAMES / FORMAT_RAW_FRAMES needs an encoder.
    if (gOutputFormat != FORMAT_FRAMES && gOutputFormat != FORMAT_RAW_FRAMES) {
        // Prepare the encoder.
        err = prepareEncoder(mainDpyInfo.fps, &encoder, &encoderInputSurface);
        // From here on, we must explicitly release() the encoder before it goes
        // out of scope, or we will get an assertion failure from stagefright
        // later on in a different thread.
    } else {
        // We're not using an encoder at all.  The "encoder input surface" we
        // hand to SurfaceFlinger will just feed directly to us.
        frameOutput = new FrameOutput();
        err = frameOutput->createInputSurface(gVideoWidth, gVideoHeight,
                &encoderInputSurface);
    }

    // Use the encoder's input surface as the virtual display surface.
    sp<IGraphicBufferProducer> bufferProducer = encoderInputSurface;

    // 5. Configure virtual display.
    sp<IBinder> dpy;
    err = prepareVirtualDisplay(mainDpyInfo, bufferProducer, &dpy);

    sp<MediaMuxer> muxer = NULL;
    FILE* rawFp = NULL;
    switch (gOutputFormat) {
        case FORMAT_MP4: {
            // Configure muxer.  We have to wait for the CSD blob from the
            // encoder before we can start it.
            err = unlink(fileName);
            int fd = open(fileName, O_CREAT | O_LARGEFILE | O_TRUNC | O_RDWR,
                    S_IRUSR | S_IWUSR);
            muxer = new MediaMuxer(fd, MediaMuxer::OUTPUT_FORMAT_MPEG_4);
            close(fd);
            break;
        }
        case FORMAT_H264:
        case FORMAT_FRAMES:
        case FORMAT_RAW_FRAMES: {
            rawFp = prepareRawOutput(fileName);
            break;
        }
    }

    // 6. Main encoder loop, taking the encoder and muxer as arguments.
    err = runEncoder(encoder, muxer, rawFp, mainDpy, dpy,
            mainDpyInfo.orientation);
    printf("Stopping encoder and muxer\n");

    // Shut everything down, starting with the producer side.
    encoderInputSurface = NULL;
    SurfaceComposerClient::destroyDisplay(dpy);
    if (encoder != NULL) encoder->stop();
    if (muxer != NULL) {
        err = muxer->stop();        // MP4
    } else if (rawFp != stdout) {
        fclose(rawFp);              // H264
    }
    if (encoder != NULL) encoder->release();
    return err;
}
```
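Step 3 above swaps the default width and height when the panel is rotated 90 or 270 degrees. Outside the Android tree, the logic can be sketched as follows (the orientation constants are assumed values for illustration; the real ones come from ui/DisplayInfo.h):

```cpp
#include <cstdint>

// Illustrative orientation constants: 0/1/2/3 = 0/90/180/270 degrees.
enum { ORIENT_0 = 0, ORIENT_90 = 1, ORIENT_180 = 2, ORIENT_270 = 3 };

// Mirrors screenrecord's isDeviceRotated(): 90/270 swap width and height.
static bool isDeviceRotated(int orientation) {
    return orientation != ORIENT_0 && orientation != ORIENT_180;
}

// Pick the default video size from the panel size and orientation, as
// recordScreen() does when --size was not given on the command line.
static void pickVideoSize(uint32_t panelW, uint32_t panelH, int orientation,
                          uint32_t* videoW, uint32_t* videoH) {
    bool rotated = isDeviceRotated(orientation);
    *videoW = rotated ? panelH : panelW;
    *videoH = rotated ? panelW : panelH;
}
```

So a 1080x1920 portrait panel held in landscape yields a 1920x1080 video.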

2.1 Configuring the encoder: prepareEncoder()

Its three parameters are the display frame rate (displayFps), the codec out-parameter (pCodec), and the encoder's input buffer producer (pBufferProducer). The steps:

  1. Set up the encoder format parameters
  2. Create and start the looper
  3. Create the MediaCodec encoder object
  4. Configure the encoder
  5. Create the encoder input surface
  6. Start the encoder
```cpp
static const char* kMimeTypeAvc = "video/avc";

static status_t prepareEncoder(float displayFps, sp<MediaCodec>* pCodec,
        sp<IGraphicBufferProducer>* pBufferProducer) {
    status_t err;
    printf("Configuring recorder for %dx%d %s at %.2fMbps\n",
            gVideoWidth, gVideoHeight, kMimeTypeAvc, gBitRate / 1000000.0);

    // 1. Set up the encoder format parameters.
    sp<AMessage> format = new AMessage;
    format->setInt32("width", gVideoWidth);
    format->setInt32("height", gVideoHeight);
    format->setString("mime", kMimeTypeAvc);
    format->setInt32("color-format", OMX_COLOR_FormatAndroidOpaque);
    format->setInt32("bitrate", gBitRate);
    format->setFloat("frame-rate", displayFps);
    format->setInt32("i-frame-interval", 10);

    // 2. Create and start the looper.
    sp<ALooper> looper = new ALooper;
    looper->setName("screenrecord_looper");
    looper->start();

    // 3. Create the MediaCodec encoder object.
    ALOGV("Creating codec");
    sp<MediaCodec> codec = MediaCodec::CreateByType(looper, kMimeTypeAvc, true);

    // 4. Configure the encoder.
    err = codec->configure(format, NULL, NULL, MediaCodec::CONFIGURE_FLAG_ENCODE);

    // 5. Create the encoder input surface.
    ALOGV("Creating encoder input surface");
    sp<IGraphicBufferProducer> bufferProducer;
    err = codec->createInputSurface(&bufferProducer);

    // 6. Start the encoder.
    ALOGV("Starting codec");
    err = codec->start();

    ALOGV("Codec prepared");
    *pCodec = codec;
    *pBufferProducer = bufferProducer;
    return 0;
}
```

2.1.1 Configuring encoder parameters: MediaCodec::configure()

```cpp
status_t MediaCodec::configure(
        const sp<AMessage> &format,
        const sp<Surface> &surface,
        const sp<ICrypto> &crypto,
        uint32_t flags) {
    sp<AMessage> msg = new AMessage(kWhatConfigure, this);

    if (mIsVideo) {
        format->findInt32("width", &mVideoWidth);
        format->findInt32("height", &mVideoHeight);
    }

    msg->setMessage("format", format);
    msg->setInt32("flags", flags);
    msg->setObject("surface", surface);
    msg->setPointer("crypto", crypto.get());

    // save msg for reset
    mConfigureMsg = msg;

    status_t err;
    Vector<MediaResource> resources;
    // Codec type: secure or non-secure.
    MediaResource::Type type = (mFlags & kFlagIsSecure) ?
            MediaResource::kSecureCodec : MediaResource::kNonSecureCodec;
    MediaResource::SubType subtype =
            mIsVideo ? MediaResource::kVideoCodec : MediaResource::kAudioCodec;
    resources.push_back(MediaResource(type, subtype, 1));
    // Don't know the buffer size at this point, but it's fine to use 1 because
    // the reclaimResource call doesn't consider the requester's buffer size for now.
    resources.push_back(MediaResource(MediaResource::kGraphicMemory, 1));

    for (int i = 0; i <= kMaxRetry; ++i) {
        if (i > 0) {
            // Don't try to reclaim resource for the first time.
            if (!mResourceManagerService->reclaimResource(resources)) {
                break;
            }
        }

        sp<AMessage> response;
        // Send the configuration to the codec.
        err = PostAndAwaitResponse(msg, &response);
        if (err != OK && err != INVALID_OPERATION) {
            // MediaCodec now set state to UNINITIALIZED upon any fatal error.
            // To maintain backward-compatibility, do a reset() to put codec
            // back into INITIALIZED state.
            // But don't reset if the err is INVALID_OPERATION, which means
            // the configure failure is due to wrong state.
            ALOGE("configure failed with err 0x%08x, resetting...", err);
            reset();
        }
        if (!isResourceError(err)) {
            break;
        }
    }
    return err;
}
```
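Both configure() and start() wrap PostAndAwaitResponse() in the same retry loop: on a resource error, ask the ResourceManagerService to reclaim codec resources and try again, up to kMaxRetry extra attempts. Stripped of the Android types, the pattern looks like this (names here are illustrative, not part of the real API):

```cpp
// Generic shape of MediaCodec's configure()/start() retry loop: run the
// operation; on a resource error, reclaim and retry, up to kMaxRetry
// additional attempts.  Reclaiming is skipped on the first attempt.
static const int kMaxRetry = 2;

template <typename Op, typename Reclaim, typename IsResourceError>
int runWithReclaim(Op op, Reclaim reclaim, IsResourceError isResourceError) {
    int err = 0;
    for (int i = 0; i <= kMaxRetry; ++i) {
        if (i > 0) {
            // Don't try to reclaim resources on the first attempt.
            if (!reclaim()) break;         // nothing reclaimable; give up
        }
        err = op();
        if (!isResourceError(err)) break;  // success, or a non-resource error
    }
    return err;
}
```

The early `break` when reclaim fails is what prevents the loop from spinning uselessly when no other process holds a codec that could be freed.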

2.1.2 Creating the input surface: MediaCodec::createInputSurface()

```cpp
status_t MediaCodec::createInputSurface(
        sp<IGraphicBufferProducer>* bufferProducer) {
    sp<AMessage> msg = new AMessage(kWhatCreateInputSurface, this);

    sp<AMessage> response;
    status_t err = PostAndAwaitResponse(msg, &response);
    if (err == NO_ERROR) {
        // unwrap the sp<IGraphicBufferProducer>
        sp<RefBase> obj;
        bool found = response->findObject("input-surface", &obj);
        CHECK(found);
        sp<BufferProducerWrapper> wrapper(
                static_cast<BufferProducerWrapper*>(obj.get()));
        *bufferProducer = wrapper->getBufferProducer();
    }
    return err;
}
```

2.1.3 Starting the encoder: MediaCodec::start()

```cpp
status_t MediaCodec::start() {
    sp<AMessage> msg = new AMessage(kWhatStart, this);

    status_t err;
    Vector<MediaResource> resources;
    MediaResource::Type type = (mFlags & kFlagIsSecure) ?
            MediaResource::kSecureCodec : MediaResource::kNonSecureCodec;
    MediaResource::SubType subtype =
            mIsVideo ? MediaResource::kVideoCodec : MediaResource::kAudioCodec;
    resources.push_back(MediaResource(type, subtype, 1));
    // Don't know the buffer size at this point, but it's fine to use 1 because
    // the reclaimResource call doesn't consider the requester's buffer size for now.
    resources.push_back(MediaResource(MediaResource::kGraphicMemory, 1));

    for (int i = 0; i <= kMaxRetry; ++i) {
        if (i > 0) {
            // Don't try to reclaim resource for the first time.
            if (!mResourceManagerService->reclaimResource(resources)) {
                break;
            }
            // Recover codec from previous error before retry start.
            err = reset();
            sp<AMessage> response;
            err = PostAndAwaitResponse(mConfigureMsg, &response);
        }

        sp<AMessage> response;
        err = PostAndAwaitResponse(msg, &response);
        if (!isResourceError(err)) {
            break;
        }
    }
    return err;
}
```

2.2 Preparing the virtual display: prepareVirtualDisplay()

```cpp
/*
 * Configures the virtual display.  When this completes, virtual display
 * frames will start arriving from the buffer producer.
 */
static status_t prepareVirtualDisplay(const DisplayInfo& mainDpyInfo,
        const sp<IGraphicBufferProducer>& bufferProducer,
        sp<IBinder>* pDisplayHandle) {
    sp<IBinder> dpy = SurfaceComposerClient::createDisplay(
            String8("ScreenRecorder"), false /*secure*/);

    SurfaceComposerClient::openGlobalTransaction();
    SurfaceComposerClient::setDisplaySurface(dpy, bufferProducer);
    setDisplayProjection(dpy, mainDpyInfo);
    SurfaceComposerClient::setDisplayLayerStack(dpy, 0);    // default stack
    SurfaceComposerClient::closeGlobalTransaction();

    *pDisplayHandle = dpy;
    return NO_ERROR;
}
```
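The setDisplayProjection() call above maps the physical display into the video frame while preserving aspect ratio (letterboxing or pillarboxing as needed). A simplified sketch of that math, ignoring rotation and assuming the real helper's behavior, looks like this:

```cpp
#include <cstdint>

struct Rect { uint32_t left, top, right, bottom; };

// Simplified sketch of the projection math: scale the source display into a
// videoW x videoH frame, preserving aspect ratio and centering the content
// (letterbox top/bottom or pillarbox left/right).  Rotation is ignored here.
static Rect computeDisplayRect(uint32_t srcW, uint32_t srcH,
                               uint32_t videoW, uint32_t videoH) {
    float srcAspect = (float) srcW / srcH;
    float dstAspect = (float) videoW / videoH;
    uint32_t outW, outH;
    if (srcAspect > dstAspect) {
        // Source is wider than the video frame: fill the width.
        outW = videoW;
        outH = (uint32_t) (videoW / srcAspect);
    } else {
        // Source is taller (or the same shape): fill the height.
        outH = videoH;
        outW = (uint32_t) (videoH * srcAspect);
    }
    uint32_t offX = (videoW - outW) / 2;
    uint32_t offY = (videoH - outH) / 2;
    return Rect{ offX, offY, offX + outW, offY + outH };
}
```

For example, a square 1080x1080 source placed into a 1920x1080 frame ends up centered with 420-pixel bars on each side.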

2.3 Opening the output file: prepareRawOutput()

```cpp
/*
 * Raw H.264 byte stream output requested.  Send the output to stdout
 * if desired.  If the output is a tty, reconfigure it to avoid the
 * CRLF line termination that we see with "adb shell" commands.
 */
static FILE* prepareRawOutput(const char* fileName) {
    FILE* rawFp = NULL;

    // 1. Open the output file.
    rawFp = fopen(fileName, "w");

    // 2. Get the file descriptor behind the stream.
    int fd = fileno(rawFp);
    if (isatty(fd)) {
        // best effort -- reconfigure tty for "raw"
        ALOGD("raw video output to tty (fd=%d)", fd);
        struct termios term;
        if (tcgetattr(fd, &term) == 0) {
            cfmakeraw(&term);
            if (tcsetattr(fd, TCSANOW, &term) == 0) {
                ALOGD("tty successfully configured for raw");
            }
        }
    }
    return rawFp;
}
```
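The termios fix-up only applies when the output is a terminal; regular files and pipes are left untouched, which is why piping `screenrecord --output-format=h264 - | ...` through adb works. The decision itself is just:

```cpp
#include <cstdio>
#include <unistd.h>

// Mirrors prepareRawOutput()'s decision: only a tty needs the termios
// "raw mode" fix-up; regular files and pipes are left alone.
static bool outputNeedsRawTty(FILE* fp) {
    return isatty(fileno(fp)) != 0;
}
```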

2.4 Running the encoder: runEncoder()

  1. Get the encoder's output buffers
  2. Dequeue an encoded buffer, obtaining its index and size
  3. Write the data at that index in buffers to the output (muxer or raw file)
  4. Release the buffer at that index back to the encoder
```cpp
/*
 * Runs the MediaCodec encoder, sending the output to the MediaMuxer.
 * The input frames are coming from the virtual display as fast as
 * SurfaceFlinger wants to send them.
 *
 * Exactly one of muxer or rawFp must be non-null.
 * The muxer must *not* have been started before calling.
 */
static status_t runEncoder(const sp<MediaCodec>& encoder,
        const sp<MediaMuxer>& muxer, FILE* rawFp, const sp<IBinder>& mainDpy,
        const sp<IBinder>& virtualDpy, uint8_t orientation) {
    assert((rawFp == NULL && muxer != NULL) || (rawFp != NULL && muxer == NULL));

    status_t err;
    ssize_t trackIdx = -1;
    uint32_t debugNumFrames = 0;
    int64_t startWhenNsec = systemTime(CLOCK_MONOTONIC);

    // 1. Get the encoder's output buffers.
    Vector<sp<ABuffer> > buffers;
    err = encoder->getOutputBuffers(&buffers);

    // Run until we're signaled.
    while (!gStopRequested) {
        size_t bufIndex, offset, size;
        int64_t ptsUsec;
        uint32_t flags;

        // 2. Dequeue an encoded buffer: get its index and size.
        ALOGV("Calling dequeueOutputBuffer");
        err = encoder->dequeueOutputBuffer(&bufIndex, &offset, &size, &ptsUsec,
                &flags, kTimeout);
        ALOGV("dequeueOutputBuffer returned %d", err);
        switch (err) {
        case NO_ERROR:
            // got a buffer
            if ((flags & MediaCodec::BUFFER_FLAG_CODECCONFIG) != 0) {
                ALOGV("Got codec config buffer (%zu bytes)", size);
                if (muxer != NULL) {
                    // ignore this -- we passed the CSD into MediaMuxer when
                    // we got the format change notification
                    size = 0;
                }
            }
            // The encoded data lives at index bufIndex.
            if (size != 0) {
                ALOGV("Got data in buffer %zu, size=%zu, pts=%" PRId64,
                        bufIndex, size, ptsUsec);

                // 3. Write the data from buffers[bufIndex] out.
                if (muxer == NULL) {
                    fwrite(buffers[bufIndex]->data(), 1, size, rawFp);
                    // Flush the data immediately in case we're streaming.
                    // We don't want to do this if all we've written is
                    // the SPS/PPS data because mplayer gets confused.
                    if ((flags & MediaCodec::BUFFER_FLAG_CODECCONFIG) == 0) {
                        fflush(rawFp);
                    }
                } else {
                    // The MediaMuxer docs are unclear, but it appears that we
                    // need to pass either the full set of BufferInfo flags, or
                    // (flags & BUFFER_FLAG_SYNCFRAME).
                    //
                    // If this blocks for too long we could drop frames.  We may
                    // want to queue these up and do them on a different thread.
                    ATRACE_NAME("write sample");
                    assert(trackIdx != -1);
                    err = muxer->writeSampleData(buffers[bufIndex], trackIdx,
                            ptsUsec, flags);
                }
                debugNumFrames++;
            }

            // 4. Release the buffer at this index back to the encoder.
            err = encoder->releaseOutputBuffer(bufIndex);
            if ((flags & MediaCodec::BUFFER_FLAG_EOS) != 0) {
                // Not expecting EOS from SurfaceFlinger.  Go with it.
                ALOGI("Received end-of-stream");
                gStopRequested = true;
            }
            break;
        case -EAGAIN:                       // INFO_TRY_AGAIN_LATER
            ALOGV("Got -EAGAIN, looping");
            break;
        case INFO_FORMAT_CHANGED:           // INFO_OUTPUT_FORMAT_CHANGED
            {
                // Format includes CSD, which we must provide to muxer.
                ALOGV("Encoder format changed");
                sp<AMessage> newFormat;
                encoder->getOutputFormat(&newFormat);
                if (muxer != NULL) {
                    trackIdx = muxer->addTrack(newFormat);
                    ALOGV("Starting muxer");
                    err = muxer->start();
                }
            }
            break;
        case INFO_OUTPUT_BUFFERS_CHANGED:
            // Not expected for an encoder; handle it anyway.
            ALOGV("Encoder buffers changed");
            err = encoder->getOutputBuffers(&buffers);
            break;
        }
    }

    ALOGV("Encoder stopping (req=%d)", gStopRequested);
    printf("Encoder stopping; recorded %u frames in %" PRId64 " seconds\n",
            debugNumFrames, nanoseconds_to_seconds(
                    systemTime(CLOCK_MONOTONIC) - startWhenNsec));
    return NO_ERROR;
}
```

III. MediaCodec

Source location: frameworks\av\media\libstagefright\MediaCodec.cpp

3.1 The MediaCodec API

  1. Creating a MediaCodec:
  • createDecoderByType/createEncoderByType: create a codec for a given MIME type (e.g. "video/avc").
  • createByCodecName: create a codec by its exact component name (e.g. OMX.google.mp3.decoder) when you know it; MediaCodecList can enumerate the available names.
  • configure: configure the decoder or encoder.
  • start: call once the component has been configured successfully.
  2. Buffer-handling calls:
  • dequeueInputBuffer: take a buffer from the input queue to fill with data to encode/decode.
  • queueInputBuffer: hand a filled input buffer back to the codec.
  • dequeueOutputBuffer: take an encoded/decoded buffer from the output queue.
  • releaseOutputBuffer: return the buffer to the codec when you are done with it.
  • getInputBuffers: get the input buffer set (a ByteBuffer array in the Java API).
  • getOutputBuffers: get the output buffer set (a ByteBuffer array in the Java API).
  • flush: flush the input and output ports.
  • stop: end the decode/encode session.
  • release: free the resources used by the codec instance.


Reprinted from: https://ciellee.blog.csdn.net/article/details/109353339

