I'm using the PySpark DataFrame API to analyze data coming from Apache Kafka. I've run into a problem and could use some help.
from pyspark.sql import functions

# selected_df is a streaming DataFrame read from Kafka via spark.readStream.format("kafka")...
windowed_group_1 = selected_df.withWatermark("kafka_time", "10 minutes") \
    .groupBy(functions.window("kafka_time", "10 seconds", "5 seconds"))
windowed_group_2 = selected_df.withWatermark("kafka_time", "10 minutes") \
    .groupBy(functions.window("kafka_time", "10 seconds", "5 seconds"))

Are these two groupings the same window operation? They are built with the same options.
If not, how can I accomplish this?
windowed_group_1 == windowed_group_2

Thanks in advance for your help.
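As an aside, the == check above is effectively an identity comparison between two GroupedData objects, so it will not tell you whether the two groupings are equivalent. One way to confirm that the two aggregations really agree is to run them over a small batch DataFrame and diff the outputs. The sketch below does that; the sample rows and the schema (kafka_time, proto) are made up for illustration.

from pyspark.sql import SparkSession, functions as func

spark = SparkSession.builder.master("local[*]").appName("window-compare").getOrCreate()

# Hypothetical batch rows standing in for the Kafka stream; the schema is an assumption.
rows = [
    ("2019-04-17 15:49:01", "TCP"),
    ("2019-04-17 15:49:07", "UDP"),
    ("2019-04-17 15:49:13", "TCP"),
]
df = (spark.createDataFrame(rows, ["kafka_time", "proto"])
          .withColumn("kafka_time", func.to_timestamp("kafka_time")))

# Run the same sliding-window aggregation twice.
agg_1 = df.groupBy(func.window("kafka_time", "10 seconds", "5 seconds"), "proto").count()
agg_2 = df.groupBy(func.window("kafka_time", "10 seconds", "5 seconds"), "proto").count()

# If neither result has rows the other lacks, the two aggregations are equivalent.
same = agg_1.exceptAll(agg_2).count() == 0 and agg_2.exceptAll(agg_1).count() == 0
print(same)  # expected: True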
Posted on 2019-04-17 15:49:41
Maybe this is what I was looking for: whenever a time window is used, the window function takes 1970-01-01T00:00:00 as its reference frame by default.
from pyspark.sql import functions as func

# Aggregate by hourly window and protocol, then print up to 100 rows.
a = labeled_df.groupBy(func.window("timestamp", "60 minute"), "proto").count()
a.show(100, truncate=False)
b = labeled_df.groupBy(func.window("timestamp", "60 minute"), "proto").count()
b.show(100, truncate=False)

Results a and b are identical:
a
+------------------------------------------+---------+-----+
|window                                    |proto    |count|
+------------------------------------------+---------+-----+
|[2010-06-13 08:00:00, 2010-06-13 09:00:00]|UDP      |1803 |
|[2010-06-13 02:00:00, 2010-06-13 03:00:00]|TCP      |22579|
|[2010-06-13 09:00:00, 2010-06-13 10:00:00]|TCP      |2637 |
|[2010-06-13 02:00:00, 2010-06-13 03:00:00]|IPv6-ICMP|453  |
|[2010-06-13 02:00:00, 2010-06-13 03:00:00]|UDP      |1183 |
|[2010-06-13 03:00:00, 2010-06-13 04:00:00]|UDP      |1467 |
b
+------------------------------------------+---------+-----+
|window                                    |proto    |count|
+------------------------------------------+---------+-----+
|[2010-06-13 08:00:00, 2010-06-13 09:00:00]|UDP      |1803 |
|[2010-06-13 02:00:00, 2010-06-13 03:00:00]|TCP      |22579|
|[2010-06-13 09:00:00, 2010-06-13 10:00:00]|TCP      |2637 |
|[2010-06-13 02:00:00, 2010-06-13 03:00:00]|IPv6-ICMP|453  |
|[2010-06-13 02:00:00, 2010-06-13 03:00:00]|UDP      |1183 |
|[2010-06-13 03:00:00, 2010-06-13 04:00:00]|UDP      |1467 |

https://stackoverflow.com/questions/55154017
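The point about 1970-01-01T00:00:00 is why the two results agree: Spark anchors every time window to the Unix epoch (optionally shifted by the startTime argument of window()), so a given timestamp always lands in the same bucket no matter how many times the aggregation is run. Below is a rough sketch of that bucketing arithmetic in plain Python, not Spark itself; the helper name tumbling_window_start is my own, and the rounding rule is an assumed model of the behavior.

from datetime import datetime, timezone

def tumbling_window_start(ts, window_seconds, start_offset_seconds=0):
    # Assumed model of window assignment: round the timestamp down to the
    # nearest multiple of the window size, measured from the Unix epoch
    # (1970-01-01T00:00:00 UTC) plus the optional startTime offset.
    epoch_seconds = ts.timestamp()
    start = epoch_seconds - (epoch_seconds - start_offset_seconds) % window_seconds
    return datetime.fromtimestamp(start, tz=timezone.utc)

ts = datetime(2010, 6, 13, 2, 37, 5, tzinfo=timezone.utc)
print(tumbling_window_start(ts, 60 * 60))
# 2010-06-13 02:00:00+00:00, i.e. the same kind of hourly bucket as the [02:00:00, 03:00:00] rows above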