MongoDB Source Code Reading Notes: The Journal Mechanism
Published: 2021-11-11 09:16:16



Files involved

The files related to durability in MongoDB are mainly the following:

 

dur.h: dur.cpp, dur_preplogbuffer.cpp, dur_writetodatafiles.cpp
dur_commitjob.h: dur_commitjob.cpp
dur_journal.h: dur_journal.cpp
dur_recover.h: dur_recover.cpp
durable_mapped_file.h: durable_mapped_file.cpp
mmap.h: mmap.cpp, mmap_win.cpp, mmap_mm.cpp, mmap_posix.cpp

Some notes

dur.cpp contains some explanatory comments that are well worth reading alongside the MongoDB manual. The code walkthrough below follows the same phases: PREPLOGBUFFER, WRITETOJOURNAL, WRITETODATAFILES, and REMAPPRIVATEVIEW.

/*phases:

     PREPLOGBUFFER

       we will build an output buffer ourself and then use O_DIRECT

       we could be in read lock for this

       for very large objects write directly to redo log in situ?

     WRITETOJOURNAL

       we could be unlocked (the main db lock that is...) for this, with sufficient care, but there is some complexity have to handle falling behind which would use too much ram (going back into a read lock would suffice to stop that). for now (1.7.5/1.8.0) we are in read lock which is not ideal.

     WRITETODATAFILES

       actually write to the database data files in this phase.  currently done by memcpy'ing the writes back to the non-private MMF.  alternatively one could write to the files the traditional way; however the way our storage engine works that isn't any faster (actually measured a tiny bit slower).

     REMAPPRIVATEVIEW

       we could in a write lock quickly flip readers back to the main view, then stay in read lock and do our real remapping. with many files (e.g., 1000), remapping could be time consuming (several ms), so we don't want to be too frequent.

       there could be a slow down immediately after remapping as fresh copy-on-writes for commonly written pages will be required.  so doing these remaps fractionally is helpful.

*/

PREPLOGBUFFER

dur_preplogbuffer.cpp contains the function:

    void PREPLOGBUFFER(/*out*/ JSectHeader& h, AlignedBuilder& ab) {
        assertLockedForCommitting();
        Timer t;
        j.assureLogFileOpen(); // so fileId is set
        _PREPLOGBUFFER(h, ab); // calls straight into this function
        stats.curr->_prepLogBufferMicros += t.micros();
    }

Note that the function returns a JSectHeader through the out parameter h; ab is the buffer the phases use to hand data to each other.

    static void _PREPLOGBUFFER(JSectHeader& h, AlignedBuilder& bb) {
        ……
        resetLogBuffer(/*out*/h, bb); // adds the JSectHeader; one section corresponds to one group commit
        // ops other than basic writes (DurOp's); what the ops are for is not yet clear
        {
            for( vector< shared_ptr<DurOp> >::iterator i = commitJob.ops().begin(); i != commitJob.ops().end(); ++i ) {
                (*i)->serialize(bb);
            }
        }
        prepBasicWrites(bb); // this is where the writes go into bb
        return;
    }

In the mongo source a write is called a write intent. prepBasicWrites appends the intents into the buffer, the bb variable here, and everything that follows works off this bb.

Next, look at prepBasicWrites:

    static void prepBasicWrites(AlignedBuilder& bb) {
        ……
        RelativePath lastDbPath;
        ……
        const vector<WriteIntent>& _intents = commitJob.getIntentsSorted(); // fetch the sorted write intents to process
        ……
        WriteIntent last;
        for( vector<WriteIntent>::const_iterator i = _intents.begin(); i != _intents.end(); i++ ) {
            // last is empty on the first pass, so last = *i below
            if( i->start() < last.end() ) { // overlapping intents are joined into one via absorb
                last.absorb(*i);
            }
            else { // no overlap: flush the previous intent
                if( i != _intents.begin() )
                    prepBasicWrite_inlock(bb, &last, lastDbPath); // process a single intent
                last = *i;
            }
        }
        prepBasicWrite_inlock(bb, &last, lastDbPath);
    }

prepBasicWrites reads the write intents, merges any that overlap, and then hands each one off for individual processing. That processing happens in prepBasicWrite_inlock.

Entering prepBasicWrite_inlock:

    static void prepBasicWrite_inlock(AlignedBuilder& bb, const WriteIntent *i, RelativePath& lastDbPath) {
        size_t ofs = 1;
        DurableMappedFile *mmf = findMMF_inlock(i->start(), /*out*/ofs); // find the memory-mapped file; this should be the private view
        if( unlikely(!mmf->willNeedRemap()) ) {
            // tag this mmf as needing a remap of its private view later.
            // usually it will already be dirty/already set, so we do the if above first
            // to avoid possibility of cpu cache line contention
            mmf->willNeedRemap() = true; // mark it for remapping later
        }
        JEntry e; // a JEntry is a single write operation within a group commit; it is applied either entirely or not at all
        e.len = min(i->length(), (unsigned)(mmf->length() - ofs)); // don't write past end of file
        ……
        e.setFileNo( mmf->fileSuffixNo() );
        if( mmf->relativePath() == local ) {
            e.setLocalDbContextBit();
        }
        else if( mmf->relativePath() != lastDbPath ) {
            lastDbPath = mmf->relativePath();
            JDbContext c;
            bb.appendStruct(c); // append the db context to bb
            bb.appendStr(lastDbPath.toString()); // append the db path to the journal
        }
        bb.appendStruct(e); // append the JEntry to the journal
        ……
        bb.appendBuf(i->start(), e.len); // append the content of the write intent to bb
        ……
    }

That is all PREPLOGBUFFER does: it packs the write intents into the bb buffer.

WRITETOJOURNAL

The WRITETOJOURNAL code lives mostly in dur_journal.cpp.

    void WRITETOJOURNAL(JSectHeader h, AlignedBuilder& uncompressed) {
        Timer t;
        j.journal(h, uncompressed); // calls Journal::journal
        stats.curr->_writeToJournalMicros += t.micros();
    }

Nothing much to see: it calls straight through to the journal function.

    void Journal::journal(const JSectHeader& h, const AlignedBuilder& uncompressed) {
        ……
        static AlignedBuilder b(32*1024*1024); // buffer used for writing the file
        const unsigned headTailSize = sizeof(JSectHeader) + sizeof(JSectFooter); // size of the section header and footer
        const unsigned max = maxCompressedLength(uncompressed.len()) + headTailSize; // worst-case size of the section
        b.reset(max); // reset b to that size
        {
            ……
            b.appendStruct(h); // append the section header to the buffer b
        }
        size_t compressedLength = 0;
        rawCompress(uncompressed.buf(), uncompressed.len(), b.cur(), &compressedLength); // compress the buffer holding the job into b; returns the compressed length
        ……
        b.skip(compressedLength); // skip past compressedLength bytes, ready for the next append
        unsigned L = 0xffffffff;
        {
            ……
            JSectFooter f(b.buf(), b.len()); // computes checksum
            b.appendStruct(f); // append the section footer
            ……
            b.skip(L - lenUnpadded); // pad out to length L
            ……
        }
        try {
            SimpleMutex::scoped_lock lk(_curLogFileMutex);
            ……
            _curLogFile->synchronousAppend((const void *) b.buf(), L); // write the data to the journal file via LogFile::synchronousAppend
            ……
        }
        catch(std::exception& e) {
            ……
        }
    }

The actual write is performed by LogFile::synchronousAppend, which on Windows uses WriteFile to carry out the whole journal write.

WRITETODATAFILES

This phase writes the data to the data files, or more precisely to the memory-mapped write view of the data files (_view_write). The code lives mainly in dur_writetodatafiles.cpp.

WRITETODATAFILES in dur.cpp calls straight into WRITETODATAFILES_Impl1 in dur_writetodatafiles.cpp:

    static void WRITETODATAFILES_Impl1(const JSectHeader& h, AlignedBuilder& uncompressed) {
        LOG(3) << "journal WRITETODATAFILES 1" << endl;
        RecoveryJob::get().processSection(&h, uncompressed.buf(), uncompressed.len(), 0); // the real work starts here
        LOG(3) << "journal WRITETODATAFILES 2" << endl;
    }

WRITETODATAFILES_Impl1 in turn calls RecoveryJob::processSection:

    void RecoveryJob::processSection(const JSectHeader *h, const void *p, unsigned len, const JSectFooter *f) {
        ……
        auto_ptr<JournalSectionIterator> i;
        if( _recovering /* true when recovering; false for WRITETODATAFILES */ ) {
            ……
        }
        else { // the WRITETODATAFILES case
            i = auto_ptr<JournalSectionIterator>(new JournalSectionIterator(*h, /*after header*/p, /*w/out header*/len));
            // wrap the buffer in a JournalSectionIterator
        }
        static vector<ParsedJournalEntry> entries; // the JEntry's are parsed and collected into entries
        entries.clear();
        ParsedJournalEntry e;
        while( !i->atEof() ) {
            i->next(e); // parse the next entry out of the bb data
            entries.push_back(e); // collect them one by one
        }
        ……
        // got all the entries for one group commit. apply them:
        applyEntries(entries); // apply the entries
    }

The write intents in the buffer are parsed into ParsedJournalEntry objects and then applied one at a time in applyEntries:

    void RecoveryJob::applyEntries(const vector<ParsedJournalEntry>& entries) {
        ……
        for( vector<ParsedJournalEntry>::const_iterator i = entries.begin(); i != entries.end(); ++i ) {
            applyEntry(last, *i, apply, dump); // apply one entry at a time
        }
        ……
    }

Into applyEntry:

    void RecoveryJob::applyEntry(Last& last, const ParsedJournalEntry& entry, bool apply, bool dump) {
        if( entry.e ) { // entry.e is set for a basic write operation
            ……
            if( apply ) { // true for WRITETODATAFILES
                write(last, entry); // perform the single write here
            }
        }
        ……
    }

And into the write function:

    void RecoveryJob::write(Last& last, const ParsedJournalEntry& entry) {
        ……
        DurableMappedFile *mmf = last.newEntry(entry, *this); // get the mapped file to write to
        if ((entry.e->ofs + entry.e->len) <= mmf->length()) {
            ……
            void* dest = (char*)mmf->view_write() + entry.e->ofs; // the destination is _view_write
            memcpy(dest, entry.e->srcData(), entry.e->len); // memcpy the journaled bytes into _view_write, i.e. the data file
            ……
        }
        ……
    }

With that, the write to the data file is complete.

REMAPPRIVATEVIEW

After the data files have been written, the private view (_view_private) must be remapped:

    void REMAPPRIVATEVIEW() {
        // remap the private view
        Timer t;
        _REMAPPRIVATEVIEW(); // goes straight in
        stats.curr->_remapPrivateViewMicros += t.micros();
    }

Straight into _REMAPPRIVATEVIEW:

    static void _REMAPPRIVATEVIEW() {
        ……
        set<MongoFile*>& files = MongoFile::getAllFiles(); // get all the files to be remapped
        ……
        const set<MongoFile*>::iterator b = files.begin();
        const set<MongoFile*>::iterator e = files.end();
        Timer t;
        for( unsigned x = 0; x < ntodo; x++ ) {
            ……
            if( (*i)->isDurableMappedFile() ) { // is this a DurableMappedFile? the subclass overrides this
                DurableMappedFile *mmf = (DurableMappedFile*) *i; // the inheritance makes the downcast from MongoFile safe
                verify(mmf);
                if( mmf->willNeedRemap() ) { // if it needs remapping
                    mmf->willNeedRemap() = false;
                    mmf->remapThePrivateView(); // remap it
                }
                i++;
                if( i == e ) i = b;
            }
        }
        ……
    }

The remap itself is done by remapThePrivateView, in durable_mapped_file.cpp:

    void DurableMappedFile::remapThePrivateView() {
        ……
        void *old = _view_private;
        _view_private = remapPrivateView(_view_private); // remap the private view, i.e. _view_private
        ……
    }

remapPrivateView is implemented in both mmap_win.cpp and mmap_posix.cpp; the mmap_win.cpp version is used here:

    void* MemoryMappedFile::remapPrivateView(void *oldPrivateAddr) {
        ……
        void* newPrivateView = MapViewOfFileEx(  // remap
                maphandle,          // file mapping handle
                FILE_MAP_READ,      // access
                0, 0,               // file offset, high and low
                0,                  // bytes to map, 0 == all
                oldPrivateAddr );   // we want the same address we had before
        ……
        return newPrivateView;
    }

So a remap clearly takes place, through a handle called maphandle. To check the manual's claim that the private view is remapped against the shared view (_view_write), keep following the code.

Back in durable_mapped_file.cpp there is a create method that opens the mapped file:

    bool DurableMappedFile::create(const std::string& fname, unsigned long long& len, bool sequentialHint) {
        LOG(3) << "mmf create " << fname << endl;
        setPath(fname);
        _view_write = map(fname.c_str(), len, sequentialHint ? SEQUENTIAL : 0);
        return finishOpening();
    }

It calls map and assigns the result to _view_write. Again using the mmap_win.cpp implementation of map:

    void* MemoryMappedFile::map(const char *filenameIn, unsigned long long &length, int options) {
        ……
        DWORD flProtect = PAGE_READWRITE; //(options & READONLY)?PAGE_READONLY:PAGE_READWRITE;
        maphandle = CreateFileMappingW(fd, NULL, flProtect,
                            length >> 32 /*maxsizehigh*/,
                            (unsigned) length /*maxsizelow*/,
                            NULL /*lpName*/); // maphandle is set when the data file is mapped
        ……
        void *view = 0;
        {
            ……
            view = MapViewOfFileEx( // create the view and return it
                            maphandle,      // file mapping handle
                            access,         // access
                            0, 0,           // file offset, high and low
                            0,              // bytes to map, 0 == all
                            thisAddress );  // address to place file
            ……
        }
        ……
        return view;
    }

So maphandle is the file mapping behind _view_write; in other words, maphandle is the shared view (the database data files).

posted on 2014-02-20 13:37

Reprinted from: https://www.cnblogs.com/Amaranthus/p/3557475.html

Reprint source: https://blog.csdn.net/dida4700/article/details/101644481

