Scala Pattern Matching and the Type System

Date: 2023-12-21 12:53:20

1. Pattern matching is far more powerful than Java's switch/case: it can match on values, types, collections, and, most commonly, case classes. Spark's Master.scala contains a great deal of pattern matching.

Case "_"表示不满足上面的所有情况的体验,举个例子:

def bigdata(data: String): Unit = {
  data match {
    case "Spark"  => println("WOW!!!")
    case "Hadoop" => println("OK")
    case _        => println("sorry")
  }
}

bigdata("Spack")    //Wow!!!

A guard condition can be added to a case:

def bigData(data: String): Unit = {
  data match {
    case "Spark"  => println("Wow!!")
    case "hadoop" => println("ok")
    case _ if data == "Flink" => println("Flink")
    case _ => println("other")
  }
}

bigdata("Flink")  //Flink

Matching on types:

import java.io.FileNotFoundException

def exception(e: Exception): Unit = {
  e match {
    case fileException: FileNotFoundException => println("File not found: " + fileException)
    case _: Exception => println("Exception: " + e)
  }
}

exception(new FileNotFoundException("oop!!!"))    //> File not found: java.io.FileNotFoundException: oop!!!

Matching on collections:

def data(array: Array[String]): Unit = {
  array match {
    case Array("Scala") => println("Scala")
    case Array(spark, hadoop, flink) => println(spark + " : " + hadoop + " : " + flink + " : ")
    case Array("Spark", _*) => println("Spark...")
    case _ => println("Unknown")
  }
}                                                //> data: (array: Array[String])Unit

data(Array("Spark"))                              //> Spark...

data(Array("Scala"))                              //> Scala

data(Array("Scala","Spark","kafaka"))             //> Scala : Spark : kafaka :

Matching on case classes:

scala> case class Person(name: String)
defined class Person

scala> Person("Spark")
res0: worksheetest.Person = Person(Spark)

1. A case class is comparable to a JavaBean in Java; its constructor parameters are vals, so only getters are generated.

2. Instantiation automatically goes through the companion object's apply method.
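
A minimal sketch of what these two points mean in practice, reusing the Person class from above:

case class Person(name: String)

val p  = Person("Spark")         // no `new`: the companion object's apply is invoked
println(p.name)                  // name is a val, so only a getter is generated
val p2 = p.copy(name = "Flink")  // copy, equals, hashCode and toString come for free
println(p == Person("Spark"))    //> true: structural equality, not reference equality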

class Person
case class Worker(name: String, salary: Double) extends Person
case class Student(name: String, score: Double) extends Person

def sayHi(person: Person): Unit = {
  person match {
    case Student(name, score) => println("I am Student :" + name + "," + score)
    case Worker(name, salary) => println("I am Worker :" + name + "," + salary)
    case _ => println("Unknown")
  }
}                                                 //> sayHi: (person: worksheetest.Person)Unit

sayHi(Worker("Worker",6.5))                       //> I am Worker :Worker,6.5

sayHi(Student("Student",6.5))                     //> I am Student :Student,6.5

From the DeployMessages source:

case class ExecutorStateChanged(
    appId: String,
    execId: Int,
    state: ExecutorState,
    message: Option[String],
    exitStatus: Option[Int])
  extends DeployMessage
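
As a hedged sketch (not the actual Master.scala code) of how such a message is typically consumed, a receive-style handler can destructure all the fields in one pattern. The stand-in types below are simplified so the snippet runs on its own:

// Simplified stand-ins; the real types live in org.apache.spark.deploy.
sealed trait DeployMessage
case class ExecutorStateChanged(
    appId: String,
    execId: Int,
    state: String,              // simplified: Spark uses ExecutorState here
    message: Option[String],
    exitStatus: Option[Int]) extends DeployMessage

def handleMessage(msg: DeployMessage): Unit = msg match {
  case ExecutorStateChanged(appId, execId, state, _, _) =>
    // one pattern binds the fields we need and ignores the rest
    println(s"executor $execId of app $appId is now $state")
  case _ => println("unhandled message")
}

handleMessage(ExecutorStateChanged("app-1", 0, "RUNNING", None, None))    //> executor 0 of app app-1 is now RUNNING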

Each use of a case class constructs a new instance (every apply call allocates a fresh object).

A case object, by contrast, is itself an instance and is globally unique.
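
A minimal sketch contrasting the two (Heartbeat and Register are made-up names):

case object Heartbeat                      // one global instance; a typical parameterless message
case class Register(id: Int)               // a new instance per apply call

println(Heartbeat eq Heartbeat)            //> true:  always the very same object
println(Register(1) eq Register(1))        //> false: two distinct instances
println(Register(1) == Register(1))        //> true:  structural equality still holds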

Scala's type parameters are heavyweight material: probably the hardest part of the language, but extremely useful, and they appear throughout the Spark source.

Example: RDD[T: ClassTag]

Generics: a parameter's type can itself be parameterized. Scala offers both generic classes and generic functions.

class Person[T](val content: T) {
  def getContent(id: T) = id + " _ " + content
}

val p = new Person[String]("Spark")               //> p : worksheetest.Person[String] = worksheetest$Person@50134894

p.getContent("Scala")                             //> res0: String = Scala _ Spark
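
Person above is a generic class; a generic function declares the type parameter on the method itself. A minimal sketch (pair is a made-up name):

def pair[T](a: T, b: T): (T, T) = (a, b)    // T is inferred at each call site

pair("Spark", "Hadoop")                     //> (Spark,Hadoop): T = String
pair(1, 2)                                  //> (1,2): T = Int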

A type parameter may be prefixed with + or - (a variance annotation):


Covariance: if S is a subtype of T and List[S] is consequently a subtype of List[T], the relationship is called covariance. class Person[+T] // declares T covariant

C[+T]: if A is a subclass of B, then C[A] is a subclass of C[B] (covariance).
C[-T]: if A is a subclass of B, then C[B] is a subclass of C[A] (contravariance).
C[T]: whatever the relationship between A and B, C[A] and C[B] have no subtype relationship (invariance).
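
A minimal sketch of all three cases (Animal, Dog, Box, Sink and Cell are made-up names):

class Animal
class Dog extends Animal

class Box[+T]                              // covariant
class Sink[-T]                             // contravariant
class Cell[T]                              // invariant

val b: Box[Animal] = new Box[Dog]          // OK: Box[Dog] <: Box[Animal]
val s: Sink[Dog]   = new Sink[Animal]      // OK: Sink[Animal] <: Sink[Dog]
// val c: Cell[Animal] = new Cell[Dog]     // does not compile: Cell is invariant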

Note: read the source of Spark's RDD, HadoopRDD, SparkContext, Master and Worker, and analyze every use of pattern matching and type parameters in them.

Summary:

T <% Writable: ClassTag
T can be implicitly converted to Writable (a view bound);
ClassTag injects an implicit value from the surrounding context (a context bound).
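
A hedged sketch of what that signature desugars to: the view bound becomes an implicit conversion parameter T => Writable and the context bound an implicit ClassTag[T]. The Writable trait below is a stand-in for illustration, not Hadoop's org.apache.hadoop.io.Writable:

import scala.language.implicitConversions
import scala.reflect.ClassTag

trait Writable                                     // stand-in trait, illustration only

def save[T](items: Array[T])(implicit toWritable: T => Writable, ct: ClassTag[T]): Unit = {
  val writables = items.map(toWritable)            // every T can be viewed as a Writable
  println(s"saving ${writables.length} ${ct.runtimeClass.getSimpleName} records")
}

case class Row(s: String)
implicit def rowToWritable(r: Row): Writable = new Writable {}

save(Array(Row("a"), Row("b")))                    //> saving 2 Row records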

Regarding the Manifest context bound: [T: Manifest] has evolved into ClassTag. T: ClassTag carries the needed runtime class information (despite type erasure) into the running code.

Seq[Dependency[_]] is equivalent to Seq[Dependency[T]] for some unknown T; the underscore is a wildcard (existential) type.
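
A minimal sketch of what the wildcard allows (Dependency here is a stand-in for Spark's class):

class Dependency[T]

// each element may wrap a different, unknown type parameter:
val deps: Seq[Dependency[_]] = Seq(new Dependency[Int], new Dependency[String])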

There is also an important comment (from the ClassTag scaladoc):

{{{
* scala> def mkArray[T : ClassTag](elems: T*) = Array[T](elems: _*)
* mkArray: [T](elems: T*)(implicit evidence$1: scala.reflect.ClassTag[T])Array[T]
*
* scala> mkArray(42, 13)
* res0: Array[Int] = Array(42, 13)
*
* scala> mkArray("Japan","Brazil","Germany")
* res1: Array[String] = Array(Japan, Brazil, Germany)
* }}}

It demonstrates how ClassTag works through the implicit-parameter mechanism (the evidence$1 value above), which lets mkArray construct an Array[T] at run time.