Pyspark – Parse a Column of JSON Strings

In this article, we are going to discuss how to parse a column of JSON strings into their own separate columns. Here we will read JSON strings present in a CSV file and convert them into multiple DataFrame columns using Python PySpark.

Example 1: Parse a Column of JSON Strings Using pyspark.sql.functions.from_json

To parse the JSON strings we’ll use the from_json() SQL function, which parses a column containing a JSON string into a StructType with the specified schema. If a string is unparseable, it returns null.

The movie_input.csv file contains 15 records with movie details (title, rating, releaseYear and genre) stored as a JSON string. We want to read this file and parse the JSON string to extract the movie details into their own separate columns: title, rating, releaseYear and genre.

This function takes two required parameters:

  • col: the name of the column that contains the JSON string.
  • schema: a StructType, or an ArrayType of StructType, to use when parsing the JSON column.

Python3

from pyspark.sql import SparkSession

if __name__ == "__main__":

    spark = SparkSession.builder.appName(
        'Parse a column of json strings').getOrCreate()

    # Read the CSV file; the JSON payload stays in a single string column
    df = spark.read.load('movie_input.csv', header=True, format="csv")
    df.show()
    df.printSchema()


Output:

Once we have read the data into a DataFrame, let’s convert the JSON column into multiple columns using from_json(). As mentioned above, this function takes the column name containing the JSON string and the JSON schema as arguments, so let’s first create the schema that represents our data.

Python3

import pyspark.sql.types as T
 
# Schema to represent our JSON data
schema = T.StructType(
  [
    T.StructField('title', T.StringType(), True),
    T.StructField('rating', T.StringType(), True),
    T.StructField('releaseYear', T.StringType(), True),
    T.StructField('genre', T.StringType(), True)
  ]
)


Now, let’s use the from_json() function, which returns a struct Column containing all the parsed JSON fields.

Python3

import pyspark.sql.functions as F
 
# Replace the raw JSON string with a parsed struct column
mapped_df = df.withColumn("movie", F.from_json("movie", schema))
mapped_df.show(truncate=False)
mapped_df.printSchema()


Output:

And finally, we flatten the JSON struct using the select method. We could have selected the columns individually (movie.title, movie.rating, etc.), but the better approach is the * wildcard, which selects all the fields nested under the movie struct.

Python3

import pyspark.sql.functions as F
 
# 'movie.*' expands every field of the struct into its own column
parsed_df = mapped_df.select(F.col('id'), F.col("movie.*"))
parsed_df.show(truncate=False)
parsed_df.printSchema()


Output:

