(Custom Version) Lesson 11: Spark Streaming Source Code Walkthrough
Topics covered in this lesson:
1. The architecture design of ReceiverTracker
2. The message loop system
3. The concrete implementation of ReceiverTracker
The previous lesson covered how a Receiver continuously receives data and reports the metadata of the received data to the ReceiverTracker. Let's now look at what the ReceiverTracker actually does and how it is implemented.
The main responsibilities of ReceiverTracker:
Start the Receivers on the Executors.
Stop the Receivers.
Update the rate at which each Receiver ingests data (i.e., rate limiting).
Continuously monitor the Receivers' running state and restart any Receiver that stops; this is the Receiver fault-tolerance mechanism (see the sketch after this list).
Accept registration from Receivers.
Manage the metadata of the data received by Receivers, with the help of ReceivedBlockTracker.
Report errors sent back by Receivers.
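On the fault-tolerance point above: in the Spark 1.x source, the job that runs a Receiver is long-lived, and when it completes or fails while the tracker still expects the Receiver to be up, the endpoint simply sends itself a RestartReceiver message. A condensed sketch, simplified from ReceiverTracker.startReceiver:

// The receiver runs as a long-lived Spark job; when that job ends while
// the receiver should still be running, schedule a relaunch by
// messaging our own endpoint.
future.onComplete {
  case Success(_) =>
    if (!shouldStartReceiver) {
      onReceiverJobFinish(receiverId)       // clean shutdown
    } else {
      logInfo(s"Restarting Receiver $receiverId")
      self.send(RestartReceiver(receiver))  // fault tolerance: relaunch
    }
  case Failure(e) =>
    if (!shouldStartReceiver) {
      onReceiverJobFinish(receiverId)
    } else {
      logError("Receiver has been stopped. Try to restart it.", e)
      self.send(RestartReceiver(receiver))
    }
}(submitJobThreadPool)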
ReceiverTracker manages a message endpoint, ReceiverTrackerEndpoint, which it uses to exchange messages with the Receivers and with itself.
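For orientation, the messages flowing through this endpoint look roughly like this in the Spark 1.x source (abridged):

// Messages from Receivers to the driver:
private[streaming] sealed trait ReceiverTrackerMessage
private[streaming] case class RegisterReceiver(
    streamId: Int, typ: String, host: String, executorId: String,
    receiverEndpoint: RpcEndpointRef) extends ReceiverTrackerMessage
private[streaming] case class AddBlock(receivedBlockInfo: ReceivedBlockInfo)
  extends ReceiverTrackerMessage
private[streaming] case class ReportError(streamId: Int, message: String, error: String)
  extends ReceiverTrackerMessage
private[streaming] case class DeregisterReceiver(streamId: Int, msg: String, error: String)
  extends ReceiverTrackerMessage

// Local messages the tracker sends to its own endpoint:
private[streaming] sealed trait ReceiverTrackerLocalMessage
private[streaming] case class StartAllReceivers(receivers: Seq[Receiver[_]])
  extends ReceiverTrackerLocalMessage
private[streaming] case class RestartReceiver(receiver: Receiver[_])
  extends ReceiverTrackerLocalMessage
private[streaming] case class UpdateReceiverRateLimit(streamUID: Int, newRate: Long)
  extends ReceiverTrackerLocalMessage
private[streaming] case object StopAllReceivers extends ReceiverTrackerLocalMessage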
In ReceiverTracker's start method, the ReceiverTrackerEndpoint is instantiated, and the Receivers are launched on the Executors.
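A sketch of start, lightly simplified from the Spark 1.x source:

/** Start the endpoint and launch the receivers. */
def start(): Unit = synchronized {
  if (isTrackerStarted) {
    throw new SparkException("ReceiverTracker already started")
  }
  if (!receiverInputStreams.isEmpty) {
    // Register the message endpoint with the driver's RpcEnv ...
    endpoint = ssc.env.rpcEnv.setupEndpoint(
      "ReceiverTracker", new ReceiverTrackerEndpoint(ssc.env.rpcEnv))
    // ... and distribute the receivers to the executors.
    if (!skipReceiverLaunch) launchReceivers()
    logInfo("ReceiverTracker started")
    trackerState = Started
  }
}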
Launching a Receiver actually means that ReceiverTracker sends a local message to ReceiverTrackerEndpoint, which wraps the Receiver in an RDD and submits it to the cluster as a Spark job.
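Condensed from the 1.x source, the launch path is roughly as follows (scheduling policy and error handling omitted):

/** Collect the receivers from all input streams and hand them to the endpoint. */
private def launchReceivers(): Unit = {
  val receivers = receiverInputStreams.map { nis =>
    val rcvr = nis.getReceiver()
    rcvr.setReceiverId(nis.id)
    rcvr
  }
  runDummySparkJob()  // warm-up job so receivers spread across executors
  endpoint.send(StartAllReceivers(receivers))  // local message to ourselves
}

// In ReceiverTrackerEndpoint, each receiver is then wrapped in a
// one-partition RDD and submitted as a long-running job
// (from startReceiver, abridged):
val receiverRDD: RDD[Receiver[_]] = ssc.sc.makeRDD(Seq(receiver), 1)
val future = ssc.sparkContext.submitJob[Receiver[_], Unit, Unit](
  receiverRDD, startReceiverFunc, Seq(0), (_, _) => Unit, ())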
After a Receiver starts, it registers itself with ReceiverTracker; only when registration succeeds is the Receiver considered officially started.
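Registration is a blocking ask/reply exchange; roughly, simplified from the 1.x source:

// Receiver side (ReceiverSupervisorImpl): the receiver is started only
// once the driver accepts the registration.
override protected def onReceiverStart(): Boolean = {
  val msg = RegisterReceiver(
    streamId, receiver.getClass.getSimpleName, host, executorId, endpoint)
  trackerEndpoint.askWithRetry[Boolean](msg)
}

// Driver side (ReceiverTrackerEndpoint.receiveAndReply):
case RegisterReceiver(streamId, typ, host, executorId, receiverEndpoint) =>
  val successful = registerReceiver(
    streamId, typ, host, executorId, receiverEndpoint, context.senderAddress)
  context.reply(successful)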
When the Receiver has accumulated data and certain conditions are met, it writes the data to the BlockManager and reports the block metadata to ReceiverTracker.
/** Store block and report it to driver */
def pushAndReportBlock(
    receivedBlock: ReceivedBlock,
    metadataOption: Option[Any],
    blockIdOption: Option[StreamBlockId]
  ) {
  val blockId = blockIdOption.getOrElse(nextBlockId)
  val time = System.currentTimeMillis
  val blockStoreResult = receivedBlockHandler.storeBlock(blockId, receivedBlock)
  logDebug(s"Pushed block $blockId in ${(System.currentTimeMillis - time)} ms")
  val numRecords = blockStoreResult.numRecords
  val blockInfo = ReceivedBlockInfo(streamId, numRecords, metadataOption, blockStoreResult)
  trackerEndpoint.askWithRetry[Boolean](AddBlock(blockInfo))
  logDebug(s"Reported block $blockId")
}
When ReceiverTracker receives the metadata (an AddBlock message), it records it; if batching of write-ahead-log writes is enabled on the driver, the write is handed to a thread in walBatchingThreadPool so the RPC thread is not blocked:
case AddBlock(receivedBlockInfo) =>
  if (WriteAheadLogUtils.isBatchingEnabled(ssc.conf, isDriver = true)) {
    walBatchingThreadPool.execute(new Runnable {
      override def run(): Unit = Utils.tryLogNonFatalError {
        if (active) {
          context.reply(addBlock(receivedBlockInfo))
        } else {
          throw new IllegalStateException("ReceiverTracker RpcEndpoint shut down.")
        }
      }
    })
  } else {
    context.reply(addBlock(receivedBlockInfo))
  }
The block metadata itself is managed by ReceivedBlockTracker.
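ReceivedBlockTracker.addBlock first writes a BlockAdditionEvent to the write-ahead log (when the WAL is enabled) and only then enqueues the metadata; simplified from the 1.x source:

/** Add received block. Returns true if the WAL write (if any) succeeded. */
def addBlock(receivedBlockInfo: ReceivedBlockInfo): Boolean = {
  try {
    val writeResult = writeToLog(BlockAdditionEvent(receivedBlockInfo))
    if (writeResult) {
      synchronized {
        // One queue of unallocated block infos per input stream.
        getReceivedBlockQueue(receivedBlockInfo.streamId) += receivedBlockInfo
      }
    }
    writeResult
  } catch {
    case NonFatal(e) =>
      logError(s"Error adding block $receivedBlockInfo", e)
      false
  }
}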
The metadata ultimately ends up in streamIdToUnallocatedBlockQueues, with one queue of block information per input stream.
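The two structures involved, approximately as declared in the 1.x source:

private type ReceivedBlockQueue = mutable.Queue[ReceivedBlockInfo]

// Blocks reported by receivers but not yet assigned to any batch:
private val streamIdToUnallocatedBlockQueues =
  new mutable.HashMap[Int, ReceivedBlockQueue]
// Blocks that have been assigned to a batch, keyed by batch time:
private val timeToAllocatedBlocks = new mutable.HashMap[Time, AllocatedBlocks]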
Whenever Spark Streaming triggers a job, the blocks waiting in these queues are allocated to a batch, and the allocation is recorded in the timeToAllocatedBlocks structure.
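The allocation step is allocateBlocksToBatch, invoked when a new batch is generated; simplified from the 1.x source:

/** Drain all unallocated queues and pin their contents to the given batch. */
def allocateBlocksToBatch(batchTime: Time): Unit = synchronized {
  if (lastAllocatedBatchTime == null || batchTime > lastAllocatedBatchTime) {
    val streamIdToBlocks = streamIds.map { streamId =>
      (streamId, getReceivedBlockQueue(streamId).dequeueAll(x => true))
    }.toMap
    val allocatedBlocks = AllocatedBlocks(streamIdToBlocks)
    // Persist the allocation to the WAL before exposing it to the batch.
    if (writeToLog(BatchAllocationEvent(batchTime, allocatedBlocks))) {
      timeToAllocatedBlocks.put(batchTime, allocatedBlocks)
      lastAllocatedBatchTime = batchTime
    }
  }
}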
In outline, the flow is: the Receiver stores a block via the BlockManager and reports its metadata to ReceiverTracker (AddBlock); ReceiverTracker hands the metadata to ReceivedBlockTracker, which queues it in streamIdToUnallocatedBlockQueues; when a batch fires, the queued blocks are moved into timeToAllocatedBlocks.