Spark's Column.isin function does not take a List

Time: 2022-11-01 22:59:51

I am trying to filter out rows from my Spark Dataframe.


val sequence = Seq(1,2,3,4,5)
df.filter(df("column").isin(sequence))

Unfortunately, I get an unsupported literal type error


java.lang.RuntimeException: Unsupported literal type class scala.collection.immutable.$colon$colon List(1,2,3,4,5)

According to the documentation, it takes a scala.collection.Seq.


I guess I don't want a literal? Then what can I pass in, some sort of wrapper class?


2 Solutions

#1 (7 votes)

@JustinPihony's answer is correct but incomplete. The isin function takes a repeated (vararg) parameter, so you need to expand the sequence with ": _*" when passing it:


scala> val df = sc.parallelize(Seq(1,2,3,4,5,6,7,8,9)).toDF("column")
// df: org.apache.spark.sql.DataFrame = [column: int]

scala> val sequence = Seq(1,2,3,4,5)
// sequence: Seq[Int] = List(1, 2, 3, 4, 5)

scala> val result = df.filter(df("column").isin(sequence : _*))
// result: org.apache.spark.sql.DataFrame = [column: int]

scala> result.show
// +------+
// |column|
// +------+
// |     1|
// |     2|
// |     3|
// |     4|
// |     5|
// +------+
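
As a side note, on Spark 2.4 or later there is also Column.isInCollection, which accepts a Scala Iterable directly, so no vararg expansion is needed. A minimal sketch, reusing the df and sequence defined above:

// Requires Spark 2.4+ (an assumption about your version).
// isInCollection takes an Iterable, so the Seq can be passed as-is.
val result2 = df.filter(df("column").isInCollection(sequence))
result2.show()  // prints the same rows, 1 through 5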

#2 (0 votes)

This is happening because the underlying Scala implementation uses varargs, so the documentation in Java is not quite correct. It is using the @varargs annotation, so you can just pass in an array.

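For illustration, a minimal sketch of what this looks like from Scala, reusing the df and sequence from the first answer (the Array variant is only there to show that an array can be expanded the same way):

// Expand either a Seq or an Array into the vararg parameter with ": _*".
val fromSeq   = df.filter(df("column").isin(sequence: _*))
val fromArray = df.filter(df("column").isin(sequence.toArray: _*))
fromSeq.show()   // rows 1 through 5
fromArray.show() // same result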
