Forward filling in Spark
Recently I faced the challenge of figuring out the status of a certain person in our database on any possible date, while the only thing we store is when the status of a person changes. The query I need to answer is similar to the following SQL statement:
SELECT status FROM person_statuses WHERE person = "John Doe" AND time = current_date()
Because the table only contains the moments at which a status changes, this query only returns a result if the status happened to change on exactly that date. That is why the data needs to be forward filled, or queried differently, as shown at the end of this post.
Step 1 - Create the data¶
To create the dummy data I will create an event generator. The generated events will be the input for the forward fill exercise. I will use the code I have written for the data-pipeline-project from this repo.
from datetime import datetime, timedelta
from faker import Faker
import json
DATE_END = datetime.now()
DATE_START = DATE_END - timedelta(days=31)
NUM_EVENTS = 10
class EventGenerator:
    """ Defines the EventGenerator """

    MIN_LIVES = 1
    MAX_LIVES = 99
    CHARACTERS = ["Mario", "Luigi", "Peach", "Toad"]

    def __init__(self, num_events, output_type, start_date, end_date, output_file=None):
        """ Initialize the EventGenerator """
        self.faker = Faker()
        self.num_events = num_events
        self.output_type = output_type
        self.output_file = output_file
        self.start_date = start_date
        self.end_date = end_date

    def _get_date_between(self, date_start, date_end):
        """ Get a date between start and end date """
        return self.faker.date_between_dates(date_start=date_start, date_end=date_end)

    def _generate_events(self):
        """ Generate the metric data """
        for _ in range(self.num_events):
            yield {
                "character": self.faker.random_element(self.CHARACTERS),
                "world": self.faker.random_int(min=1, max=8, step=1),
                "level": self.faker.random_int(min=1, max=4, step=1),
                "lives": self.faker.random_int(
                    min=self.MIN_LIVES, max=self.MAX_LIVES, step=1
                ),
                "time": str(self._get_date_between(self.start_date, self.end_date)),
            }

    def store_events(self):
        if self.output_type == "jl":
            with open(self.output_file, "w") as outputfile:
                for event in self._generate_events():
                    outputfile.write(f"{json.dumps(event)}\n")
        elif self.output_type == "list":
            return list(self._generate_events())
I only want 10 events to keep the dataframe we use in Spark small.
params = {
"num_events": NUM_EVENTS,
"output_type": "list",
"start_date": DATE_START,
"end_date": DATE_END,
}
# Create the event generator
generator = EventGenerator(**params)
# Create and store the events
events = generator.store_events()
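As a side note, the same generator can also write the events to a JSON-lines file instead of returning a list, by using the "jl" branch of store_events. A small usage sketch with a hypothetical file name:
# Hypothetical example: write the events to a JSON-lines file instead of a list
file_params = {
    "num_events": NUM_EVENTS,
    "output_type": "jl",
    "output_file": "events.jl",
    "start_date": DATE_START,
    "end_date": DATE_END,
}
EventGenerator(**file_params).store_events()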
Step 2 - Analyze the data¶
The events represent the persons (Nintendo characters) and their status (current world and level). From looking at the data it is obvious that there are big gaps in time before the characters advance to the next level. How do we know where Mario was on a given date without storing the daily world/level status?
import pandas as pd
pd.DataFrame(events).sort_values(["character", "time"])
Step 3 - Create Spark Dataframe¶
To create the Spark Dataframe a SparkSession is used. I don't use any external libraries right now, since we will use plain (Py)Spark.
from pyspark.sql import SparkSession, SQLContext
from pyspark.sql import functions as F
from pyspark.sql.window import Window
import os
os.environ['PYSPARK_PYTHON'] = "/data/jupyter/bin/python"
spark = (SparkSession
.builder
.appName("Spark Forward Fill")
.getOrCreate())
sc = SQLContext(spark)
df = sc.createDataFrame(events)
df.show()
Step 4 - Correct the data¶
Ensure we have the correct datatypes:
df.dtypes
df = df.withColumn("time", F.to_date(F.to_timestamp("time")))
df.dtypes
As the status I combine the world and the level:
df = df.withColumn("status", F.concat(F.col('world'), F.lit('-'), F.col('level')))
df.show()
Step 5 - Forward fill the data¶
The data should be partitioned by character and ordered by the time. For this I will use a simple window function to go through the data.
w = Window().partitionBy("character").orderBy("time")
Apply the window to create a new column where we add the time of the next status update per character.
df = df.withColumn("next_status", F.lead("time").over(w))
df.show()
With the sequence function I create a column containing the dates between the time of the status change and the time of the next_status. The sequence column will contain a list of dates that can be exploded to create a row for each date in the list.
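To get a feeling for what sequence produces on its own, here is a minimal standalone illustration with hypothetical dates (not part of the original script); it returns all dates from 2020-05-01 up to and including 2020-05-04:
# Standalone illustration of the sequence function (hypothetical dates)
spark.sql(
    "SELECT sequence(to_date('2020-05-01'), to_date('2020-05-04'), interval 1 day) AS dates"
).show(truncate=False)
Applied to the dataframe this becomes: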
df = df.withColumn("sequence", F.when(F.col("next_status").isNotNull(),
F.expr("sequence(to_date(time), date_sub(to_date(next_status),1), interval 1 day)"))\
.otherwise(F.array("time")))
df.show()
Now explode is applied to the sequence column, which gives the result shown below. As we can see, the events are filled from the first status change until the latest status change.
df.select("character", "status", F.explode("sequence").alias("time")).show()
One thing missing in the approach above is that we only know the status up to the latest change for each character. What I need is to know what the status is today, so I need to modify the code and create an artificial end date when there is no next status (or the next status is in the past). That way I can fill the dates after the last status change up until today with that last status. For completeness I will repeat the code that I have created before.
# Create the dataframe
df = sc.createDataFrame(events)
# Modify the data type
df = df.withColumn("time",
F.to_date(F.to_timestamp("time")))
# Create the status column
df = df.withColumn("status",
F.concat(F.col('world'),
F.lit('-'),
F.col('level')))
# Apply the window
df = df.withColumn("next_status",
F.lead("time").over(w))
# Fill in the empty `next_status` column with today's date
df = df.withColumn("next_status",
F.when(F.col("next_status").isNull(),
F.expr("current_date()"))\
.otherwise(F.col("next_status")))
# Apply the sequence
df = df.withColumn("sequence",
F.when(F.col("next_status").isNotNull(),
F.expr("sequence(to_date(time), date_sub(to_date(next_status), 1), interval 1 day)"))\
.otherwise(F.array("time")))
# Select the columns and explode the sequence
df.select("character", "status", F.explode("sequence").alias("time")).show()
Step 6 - Avoid forward filling!¶
From this small test it is clear that it is not wise to use sequence to create an event for every date for every character. This will easily become massive and should not be used in reality. The main idea behind trying the forward filling was to compare the complexity with Python pandas. In pandas you can forward fill by simply using ffill.
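A minimal standalone illustration of ffill on a small series (hypothetical values, not part of the original script):
# ffill propagates the last known value forward into the gaps
s = pd.Series([1.0, None, None, 4.0, None])
s.ffill()  # -> 1.0, 1.0, 1.0, 4.0, 4.0
I used the following code to retrieve the same result as in the Spark script above.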
pdf = pd.DataFrame(events)
# Convert datatypes
pdf['time'] = pd.to_datetime(pdf['time'])
# Add status column
pdf['status'] = pdf.apply(lambda x: f"{x['world']}-{x['level']}", axis=1)
# Create full time range as frame
timeframe = pd.date_range(start=min(pdf['time']),
end=datetime.now().date()).to_frame().reset_index(drop=True).rename(columns={0: 'time'})
# Merge timeframe into original frame
pdf = pdf.merge(timeframe,
left_on='time',
right_on='time',
how='right')
# 1. Pivot to get dates on rows and characters as columns
# 2. Forward fill values per character
pdf = pdf.pivot(index='time',
columns='character',
values='status')
pdf = pdf.ffill()
# Drop NaN column and reset the index
pdf = pdf.loc[:, pdf.columns.notnull()].reset_index()
# Melt the columns back
pdf = pd.melt(pdf,
id_vars='time',
value_name='status')
print(f"Original length: {len(events)}, new length: {len(pdf)}")
pdf.head(10)
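With the forward-filled frame in place, the original question (what was the status on a given date?) can be answered with a simple filter; the date below is hypothetical and only returns rows if it falls inside the generated range:
# Look up every character's status on one specific (hypothetical) date
pdf[pdf['time'] == '2020-05-24']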
Step 7 - Be smarter¶
Instead of using the forward fill approach to create all this data, it is a better idea to only add one more column to the table in which the end of the status is recorded. For example, if Mario was in level 1-1 seven days ago and today he finally made it to level 1-2, there is no need to create six events containing level 1-1 and one event for level 1-2. Instead it would be better to have one event containing 1-1 with a start date seven days ago and an end date of yesterday, plus one event for level 1-2 with a start date of today and an end date somewhere in the future.
Coming back to the initial SQL query, I can easily rewrite it to a query with BETWEEN:
SELECT status FROM person_statuses WHERE person = "John Doe" AND time = current_date()
is rewritten to
SELECT status FROM person_statuses WHERE person = "John Doe" AND current_date() BETWEEN time AND endtime
I use a similar script, but will subtract one day from the endtime column. There is no explode needed with this approach.
# Create the dataframe
df = sc.createDataFrame(events)
# Modify the data type
df = df.withColumn("time",
F.to_date(F.to_timestamp("time")))
# Create the status column
df = df.withColumn("status",
F.concat(F.col('world'),
F.lit('-'),
F.col('level')))
# Apply the window
df = df.withColumn("endtime",
F.lead("time").over(w))
# Subtract one day from the endtime
df = df.withColumn("endtime",
F.expr("date_sub(to_date(endtime), 1)"))
# Fill in the empty `endtime` column with today's date
df = df.withColumn("endtime",
F.when(F.col("endtime").isNull(),
F.expr("current_date()"))\
.otherwise(F.col("endtime")))
# Select the relevant columns
df.select("character", "status", "time", "endtime").show()
Store the dataframe as a temporary view so that I can query it with Spark SQL:
df.createOrReplaceTempView("df")
sc.sql("SELECT status, time, endtime FROM df WHERE character = 'Peach'").show()
sc.sql("SELECT status FROM df WHERE character = 'Peach' AND '2020-05-24' BETWEEN time AND endtime").show()
Instead of using the PySpark DataFrame API to build the dataframe, the same can be achieved using SQL.
# Create the events view
eventsdf = sc.createDataFrame(events)
eventsdf.createOrReplaceTempView("events")
sc.sql("""
WITH statuses AS (
SELECT
character,
CONCAT(world, '-', level) AS status,
to_date(time) AS start,
to_date(DATE_SUB(LEAD(time) OVER (PARTITION BY character ORDER BY time), 1)) AS end
FROM
events
)
SELECT
character,
status,
start,
IF(end IS NOT NULL, end, current_date()) AS end
FROM statuses
""").show()
And that's it. This was a good exercise to understand how easy it is to make things too complex.