How can I add a constant column to a Spark DataFrame?

Posted: 2021-05-24 16:52:25

I want to add a column in a DataFrame with some arbitrary value (that is the same for each row). I get an error when I use withColumn as follows:

dt.withColumn('new_column', 10).head(5)

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-50-a6d0257ca2be> in <module>()
      1 dt = (messages
      2     .select(messages.fromuserid, messages.messagetype, floor(messages.datetime/(1000*60*5)).alias("dt")))
----> 3 dt.withColumn('new_column', 10).head(5)

/Users/evanzamir/spark-1.4.1/python/pyspark/sql/dataframe.pyc in withColumn(self, colName, col)
   1166         [Row(age=2, name=u'Alice', age2=4), Row(age=5, name=u'Bob', age2=7)]
   1167         """
-> 1168         return self.select('*', col.alias(colName))
   1169 
   1170     @ignore_unicode_prefix

AttributeError: 'int' object has no attribute 'alias'

It seems that I can trick the function into working as I want by adding and subtracting one of the other columns (so they add to zero) and then adding the number I want (10 in this case):

dt.withColumn('new_column', dt.messagetype - dt.messagetype + 10).head(5)

[Row(fromuserid=425, messagetype=1, dt=4809600.0, new_column=10),
 Row(fromuserid=47019141, messagetype=1, dt=4809600.0, new_column=10),
 Row(fromuserid=49746356, messagetype=1, dt=4809600.0, new_column=10),
 Row(fromuserid=93506471, messagetype=1, dt=4809600.0, new_column=10),
 Row(fromuserid=80488242, messagetype=1, dt=4809600.0, new_column=10)]

This is supremely hacky, right? I assume there is a more legit way to do this?

2 Solutions

#1 (118 votes)

Spark 2.2+

Spark 2.2 introduces typedLit to support Seq, Map, and Tuples (SPARK-19254), and the following calls should be supported (Scala):

import org.apache.spark.sql.functions.typedLit

df.withColumn("some_array", typedLit(Seq(1, 2, 3)))
df.withColumn("some_struct", typedLit(("foo", 1, .0.3)))
df.withColumn("some_map", typedLit(Map("key1" -> 1, "key2" -> 2)))

Spark 1.3+ (lit), 1.4+ (array, struct), 2.0+ (map):

The second argument of DataFrame.withColumn should be a Column, so you have to use a literal:

from pyspark.sql.functions import lit

df.withColumn('new_column', lit(10))
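
Applied to the snippet from the question, the failing call works once the constant is wrapped (a sketch reusing the question's dt DataFrame, with lit imported as above):

# lit(10) turns the plain Python int into a Column, which withColumn expects
dt.withColumn('new_column', lit(10)).head(5)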

If you need complex columns, you can build them using building blocks like array:

from pyspark.sql.functions import array, create_map, struct

df.withColumn("some_array", array(lit(1), lit(2), lit(3)))
df.withColumn("some_struct", struct(lit("foo"), lit(1), lit(.3)))
df.withColumn("some_map", create_map(lit("key1"), lit(1), lit("key2"), lit(2)))

Exactly the same methods can be used in Scala.

import org.apache.spark.sql.functions.{array, lit, map, struct}

df.withColumn("new_column", lit(10))
df.withColumn("map", map(lit("key1"), lit(1), lit("key2"), lit(2)))

It is also possible, although slower, to use a UDF.
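
A minimal sketch of the UDF route, assuming an integer constant (the constant_ten name is just illustrative); lit is still preferred, since a Python UDF is opaque to the Catalyst optimizer and adds serialization overhead:

from pyspark.sql.functions import udf
from pyspark.sql.types import IntegerType

# A zero-argument UDF that always returns 10
constant_ten = udf(lambda: 10, IntegerType())
df.withColumn('new_column', constant_ten())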

#2 (3 votes)

In Spark 2.2 there are two ways to add a constant value to a column in a DataFrame:

1) Using lit

2) Using typedLit.

The difference between the two is that typedLit can also handle parameterized Scala types, e.g. List, Seq, and Map.

Sample DataFrame:

val df = spark.createDataFrame(Seq((0,"a"),(1,"b"),(2,"c"))).toDF("id", "col1")

+---+----+
| id|col1|
+---+----+
|  0|   a|
|  1|   b|
|  2|   c|
+---+----+

1) Using lit: adding a constant string value in a new column named newcol:

import org.apache.spark.sql.functions.lit
val newdf = df.withColumn("newcol", lit("myval"))

Result:

+---+----+------+
| id|col1|newcol|
+---+----+------+
|  0|   a| myval|
|  1|   b| myval|
|  2|   c| myval|
+---+----+------+

2) Using typedLit:

import org.apache.spark.sql.functions.typedLit
df.withColumn("newcol", typedLit(("sample", 10, .044)))

Result:

+---+----+-----------------+
| id|col1|           newcol|
+---+----+-----------------+
|  0|   a|[sample,10,0.044]|
|  1|   b|[sample,10,0.044]|
|  2|   c|[sample,10,0.044]|
+---+----+-----------------+
