How to Create SQLContext in Spark Using Scala?

Scala stands for "scalable language". It was developed in 2003 by Martin Odersky. It is an object-oriented language that also supports the functional programming paradigm. Everything in Scala is an object. It is a statically typed language, although unlike other statically typed languages such as C, C++, or Java, it does not always require explicit type annotations in the code; types are inferred and verified at compile time. Static typing helps build safe systems by default: smart built-in checks and actionable error messages, combined with thread-safe data structures and collections, prevent many tricky bugs before the program first runs.

This article focuses on discussing steps to create SQLContext in Spark using Scala.

Table of Contents

  • What is SQLContext?
  • Creating SQLContext
    • 1. Using SparkContext
    • 2. Using Existing SQLContext Object
    • 3. Using SparkSession

What is SQLContext?

The official definition in the Spark documentation is:

The entry point for running relational queries using Spark. Allows the creation of SchemaRDD objects and the execution of SQL queries.

The purpose of SQLContext is to enable processing of structured data in Spark. Before it, Spark only had RDDs to manipulate data. An RDD is simply a collection of rows (notice the absence of columns) that can be manipulated using lambda functions and other transformations. SQLContext introduced objects that attach a schema (column names and data types) to the data, making it similar to a relational database table. This additional information about the data also opens the door to optimizations during data processing.
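
To see what this means in practice, here is a minimal sketch of the pre-SQLContext style of processing, where all the work is done with lambdas over an RDD of tuples and Spark knows nothing about column names or types. The sc variable is assumed to be an existing SparkContext, and the data is made up for illustration:

Scala
// A plain RDD: rows of tuples with no schema and no column names.
// `sc` is assumed to be an existing SparkContext; the data is illustrative.
val salesRdd = sc.parallelize(Seq(("apple", 3), ("banana", 5), ("apple", 2)))

// All manipulation happens through lambdas; Spark cannot optimize beyond
// the physical operations we spell out here.
val totalsPerProduct = salesRdd.reduceByKey(_ + _)
totalsPerProduct.collect().foreach(println)

With SQLContext, the same data can instead be given named, typed columns and queried declaratively, as the rest of this article shows.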

Looking further at the documentation, SQLContext is a class introduced in version 1.0.0 that provides a set of functions for creating and manipulating SchemaRDD objects. Here is the list of functions (a short sketch of a few of them follows the list):

  • cacheTable
  • createParquetFile
  • createSchemaRDD
  • logicalPlanToSparkQuery
  • parquetFile
  • registerRDDAsTable
  • sparkContext
  • sql
  • table
  • uncacheTable
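
A few of these calls (cacheTable, table, sql, uncacheTable, and sparkContext) are still available on SQLContext in current Spark releases. Here is a minimal sketch of how they fit together, assuming an existing sqlContext (created in any of the ways shown later in this article) and a hypothetical registered table named "records":

Scala
// Assumes an existing `sqlContext` and a temporary table named "records";
// the table name and its contents are hypothetical, purely for illustration.
sqlContext.cacheTable("records")                  // cache the table in memory
val records = sqlContext.table("records")         // look the table up as a DataFrame
val bigKeys = sqlContext.sql("SELECT * FROM records WHERE key > 10")
sqlContext.uncacheTable("records")                // drop it from the in-memory cache
println(sqlContext.sparkContext.appName)          // the underlying SparkContext is exposed too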

The APIs revolve around converting between Parquet files and SchemaRDD objects. A SchemaRDD is an RDD of Row objects with an associated schema. In addition to standard RDD functions, SchemaRDDs can be used in relational queries, as shown below:

Scala
// One method for defining the schema of an RDD is to make a case class with the desired column
// names and types.
case class Record(key: Int, value: String)

val sc: SparkContext // An existing SparkContext.
val sqlContext = new SQLContext(sc)

// Importing the SQL context gives access to all the SQL functions and implicit conversions.
import sqlContext._

val rdd = sc.parallelize((1 to 100).map(i => Record(i, s"val_$i")))
// Any RDD containing case classes can be registered as a table. The schema of the table is
// automatically inferred using Scala reflection.
rdd.registerAsTable("records")

val results: SchemaRDD = sql("SELECT * FROM records")

The above code would not run on the latest versions of Spark because SchemaRDDs are now obsolete.

Currently, SQLContext itself is rarely used directly; instead, SparkSession provides a unified interface over the different contexts such as SQLContext, SparkContext, and HiveContext. Inside SparkSession, the SQLContext is still present. Also, instead of SchemaRDDs, Spark now uses Datasets and DataFrames to represent structured data.
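
For reference, here is a rough modern equivalent of the SchemaRDD example above, written against the SparkSession and DataFrame API. The object name ModernRecords and the query shown are illustrative choices, not part of the original example:

Scala
import org.apache.spark.sql.SparkSession

// Case class defining the column names and types, as in the older example.
case class Record(key: Int, value: String)

object ModernRecords {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession
      .builder()
      .appName("modernRecords")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._ // enables .toDF() on local Scala collections

    // A DataFrame whose schema is inferred from the case class.
    val recordsDf = (1 to 100).map(i => Record(i, s"val_$i")).toDF()

    // Register it as a temporary view and query it with SQL.
    recordsDf.createOrReplaceTempView("records")
    spark.sql("SELECT * FROM records WHERE key < 5").show()

    spark.stop()
  }
}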

Creating SQLContext

1. Using SparkContext

We can create an SQLContext from a SparkContext. The constructor is as follows:

public SQLContext(SparkContext sparkContext)

We can create a simple SparkContext object with “master” (the cluster URL) set to “local[*]” (use the current machine with all available cores) and “appName” set to “createSQLContext”. We can then supply this SparkContext to the SQLContext constructor.

Scala
import org.apache.spark.SparkContext
import org.apache.spark.sql.SQLContext

object createSQLContext {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext("local[*]", "createSQLContext")
    val sqlc = new SQLContext(sc)
    println(sqlc)
  }
}

Output:

The SQLContext Object

Explanation:

As you can see above, we have created a new SQLContext object. Although this works, the approach is deprecated: SQLContext has been replaced by SparkSession and is kept in newer versions only for backward compatibility.

2. Using Existing SQLContext Object

We can also use an existing SQLContext object to create a new SQLContext object. Every SQLContext provides a newSession API to create a new object based on the same SparkContext object. The API is as follows:

def newSession(): SQLContext

// Returns a SQLContext as new session, with separated SQL configurations, temporary tables, registered functions, but sharing the same SparkContext, cached data and other things

Below is the Scala program to implement the approach:

Scala
import org.apache.spark.SparkContext
import org.apache.spark.sql.SQLContext

object createSQLContext {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext("local[*]", "createSQLContext")
    val sqlc = new SQLContext(sc)
    val nsqlc = sqlc.newSession()
    println(nsqlc)
  }
}

Output:

The SQLContext Object

Explanation:

As you can see above, we have created a new SQLContext object as a separate session sharing the same underlying SparkContext. As with the previous approach, this works but is deprecated: SQLContext has been replaced by SparkSession and is kept in newer versions only for backward compatibility.
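
To illustrate what "separated SQL configurations ... but sharing the same SparkContext" means, here is a small sketch built on the sqlc and nsqlc objects from the program above; the configuration key is just a convenient example:

Scala
// A value set on one session does not leak into the other.
sqlc.setConf("spark.sql.shuffle.partitions", "4")
nsqlc.setConf("spark.sql.shuffle.partitions", "16")
println(sqlc.getConf("spark.sql.shuffle.partitions"))  // 4
println(nsqlc.getConf("spark.sql.shuffle.partitions")) // 16

// Both sessions are still backed by the same SparkContext.
println(sqlc.sparkContext eq nsqlc.sparkContext)       // true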

3. Using SparkSession

The latest way (as of version 3.5.0) is to use the SparkSession object. SparkSession consolidates the various earlier contexts and provides a unified interface for all of them. We can create a SparkSession object using the builder API and then access the SQLContext object from it as follows:

Scala
import org.apache.spark.sql.SparkSession

object createSQLContext {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession
      .builder()
      .appName("createSQLContext")
      .master("local[*]")
      .getOrCreate()
    println(spark.sqlContext)
  }
}

Output:

The SQLContext Object

Explanation:

As you can see, we accessed the SQLContext object from inside the SparkSession object.
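
The SQLContext obtained this way is fully usable, although new code would normally call spark.sql directly. As a small sketch (the temporary view name nums is made up for illustration):

Scala
// Create a tiny Dataset and expose it to SQL under a temporary view name.
spark.range(5).createOrReplaceTempView("nums")

// Both calls run the same query; spark.sql is the preferred modern entry point.
spark.sqlContext.sql("SELECT id FROM nums WHERE id > 2").show()
spark.sql("SELECT id FROM nums WHERE id > 2").show()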


