Spark 1.4 Source Code Walkthrough Notes: Pattern Matching
Pattern matching inside RDD.scala:
The first snippet is from zip: the two partition iterators must be exhausted at the same time, otherwise the RDDs cannot be zipped element by element.

def hasNext: Boolean = (thisIter.hasNext, otherIter.hasNext) match {
  case (true, true) => true     // both iterators still have elements
  case (false, false) => false  // both ran out at the same time
  case _ => throw new SparkException("Can only zip RDDs with " +
    "same number of elements in each partition")
}
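The same tuple-matching idea works on ordinary Scala iterators. Below is a minimal standalone sketch (not Spark code; SparkException is swapped for IllegalArgumentException so it compiles without Spark on the classpath) that zips two iterators and fails as soon as their lengths diverge:

object ZipStrict {
  // Advance two iterators in lock-step; throw if one runs out before the other.
  def zipStrict[A, B](thisIter: Iterator[A], otherIter: Iterator[B]): Iterator[(A, B)] =
    new Iterator[(A, B)] {
      def hasNext: Boolean = (thisIter.hasNext, otherIter.hasNext) match {
        case (true, true)   => true
        case (false, false) => false
        case _ => throw new IllegalArgumentException(
          "Can only zip collections with the same number of elements")
      }
      def next(): (A, B) = (thisIter.next(), otherIter.next())
    }

  def main(args: Array[String]): Unit = {
    println(zipStrict(Iterator(1, 2, 3), Iterator("a", "b", "c")).toList)  // List((1,a), (2,b), (3,c))
    // zipStrict(Iterator(1, 2), Iterator("a")) would throw once the lengths diverge
  }
}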
The next one merges per-partition results on the driver (as in reduce); Option distinguishes "no result yet" from a value that still has to be combined with f.

jobResult = jobResult match {
  case Some(value) => Some(f(value, taskResult.get))  // combine the new partition result with the accumulated one
  case None => taskResult                             // first non-empty result seen so far
}
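Here is a minimal sketch of the same Option-merging pattern outside Spark, assuming a sequence of per-partition results that may be empty (mergeAll and partitionResults are hypothetical names, not Spark API):

object MergeResults {
  // Fold per-partition results into a single Option, treating None as "no result yet".
  def mergeAll[T](partitionResults: Seq[Option[T]], f: (T, T) => T): Option[T] = {
    var jobResult: Option[T] = None
    for (taskResult <- partitionResults if taskResult.isDefined) {
      jobResult = jobResult match {
        case Some(value) => Some(f(value, taskResult.get))
        case None        => taskResult
      }
    }
    jobResult
  }

  def main(args: Array[String]): Unit = {
    println(mergeAll(Seq(Some(3), None, Some(4), Some(5)), (a: Int, b: Int) => a + b))  // Some(12)
    println(mergeAll(Seq.empty[Option[Int]], (a: Int, b: Int) => a + b))                // None
  }
}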
first() is built on take(1) plus an Array extractor: a one-element array yields that element, anything else means the RDD is empty.

take(1) match {
  case Array(t) => t  // exactly one element was taken
  case _ => throw new UnsupportedOperationException("empty collection")
}
The following snippet, which walks an RDD's dependency list while building the debug string, is easier to follow:
val len = rdd.dependencies.length
len match {
  case 0 => Seq.empty
  case 1 =>
    val d = rdd.dependencies.head
    debugString(d.rdd, prefix, d.isInstanceOf[ShuffleDependency[_, _, _]], true)
  case _ => // two or more dependencies: everything else lands here
    val frontDeps = rdd.dependencies.take(len - 1)
    val frontDepStrings = frontDeps.flatMap(
      d => debugString(d.rdd, prefix, d.isInstanceOf[ShuffleDependency[_, _, _]]))
    val lastDep = rdd.dependencies.last
    val lastDepStrings =
      debugString(lastDep.rdd, prefix, lastDep.isInstanceOf[ShuffleDependency[_, _, _]], true)
    (frontDepStrings ++ lastDepStrings)
}
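The 0 / 1 / many split on a collection's length is a common way to render a tree where the last child is treated specially. Below is a standalone sketch under assumed types (Node and this simplified debugString are hypothetical, not Spark's), keeping only the length match and the frontDeps / lastDep split:

object DebugTree {
  final case class Node(name: String, children: Seq[Node])

  // Render a small dependency-like tree as indented lines, recursing over the children.
  def debugString(node: Node, prefix: String = ""): Seq[String] = {
    val header = Seq(prefix + node.name)
    val len = node.children.length
    val childLines = len match {
      case 0 => Seq.empty
      case 1 => debugString(node.children.head, prefix + "  ")
      case _ =>
        // all but the last child, then the last one (mirroring frontDeps / lastDep above)
        val front = node.children.take(len - 1).flatMap(c => debugString(c, prefix + "  "))
        val last  = debugString(node.children.last, prefix + "  ")
        front ++ last
    }
    header ++ childLines
  }

  def main(args: Array[String]): Unit = {
    val tree = Node("ShuffledRDD",
      Seq(Node("MapPartitionsRDD", Seq(Node("ParallelCollectionRDD", Nil)))))
    debugString(tree).foreach(println)
  }
}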