Spark SQL CSV Library

A library for parsing and querying CSV data with Spark SQL.


Requirements

This library requires Spark 1.3 or newer.

Linking

You can link against this library in your program at the following coordinates:

groupId: com.databricks
artifactId: spark-csv_2.10
version: 1.0.0
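If you build with sbt, the Maven coordinates above translate to a dependency line like the following (a sketch; adjust the version and Scala binary suffix to match your build):

```scala
// build.sbt — resolve spark-csv from Maven Central.
// The artifact id carries the Scala binary version (_2.10) explicitly here,
// matching the coordinates listed above.
libraryDependencies += "com.databricks" % "spark-csv_2.10" % "1.0.0"
```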

The spark-csv assembly JAR file can also be added to a Spark application using the --jars command-line option. For example, to include it when starting the Spark shell:

$ bin/spark-shell --jars spark-csv-assembly-1.0.0.jar

Features

This package allows reading CSV files from local or distributed filesystems as Spark DataFrames. When reading files, the API accepts several options:

  • path: location of the files. As with other Spark file sources, this can be a wildcard path.
  • header: when set to true, the first line of each file is used to name the columns and is not included in the data. All columns are typed as string.
  • delimiter: fields are delimited with ',' by default, but the delimiter can be set to any character.
  • quote: the quote character is '"' by default, but it can be set to any character. Delimiters inside quotes are ignored.
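The quoting rule above can be illustrated with a minimal sketch. This is not the library's parser — the function name and logic here are purely illustrative:

```scala
// Illustration only: split one CSV line, ignoring delimiters inside quotes.
// Hypothetical helper; spark-csv's real parser handles more cases (escapes, etc.).
def splitCsvLine(line: String, delimiter: Char = ',', quote: Char = '"'): Seq[String] = {
  val fields = scala.collection.mutable.ArrayBuffer(new StringBuilder)
  var inQuotes = false
  for (c <- line) {
    if (c == quote) inQuotes = !inQuotes                              // toggle quoted state
    else if (c == delimiter && !inQuotes) fields += new StringBuilder // field boundary
    else fields.last.append(c)                                        // ordinary character
  }
  fields.map(_.toString)
}

splitCsvLine("""2012,Tesla,"Model S, P85"""") // the quoted comma does not split the field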

The package also supports saving simple (non-nested) DataFrames. When saving, you can specify the delimiter and whether a header row should be generated for the table (each column is named c$i, where $i is the column index). See the following examples for more details.

These examples use a CSV file available for download here:

$ wget https://github.com/databricks/spark-csv/raw/master/src/test/resources/cars.csv

SQL API

CSV data can be queried in pure SQL by registering the data as a (temporary) table.

CREATE TABLE cars
USING com.databricks.spark.csv
OPTIONS (path "cars.csv", header "true")

You can also specify column names and types in DDL.

CREATE TABLE cars (yearMade double, carMake string, carModel string, comments string, blank string)
USING com.databricks.spark.csv
OPTIONS (path "cars.csv", header "true")
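Once registered, the table can be queried like any other. A minimal sketch in Scala, using the typed schema from the DDL above:

```scala
// Query the CSV-backed table registered above; column names come from the DDL.
val teslas = sqlContext.sql("SELECT yearMade, carModel FROM cars WHERE carMake = 'Tesla'")
teslas.show()
```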

Scala API

The recommended way to load CSV data is to use the load/save functions in SQLContext.

import org.apache.spark.sql.SQLContext
import com.databricks.spark.csv._

val sqlContext = new SQLContext(sc)
val df = sqlContext.load("cars.csv", Map("header" -> "true"))
df.select("year", "model").save("newcars.csv", Map("header" -> "false", "delimiter" -> "\t"))

You can also use the implicits from com.databricks.spark.csv._.

import org.apache.spark.sql.SQLContext
import com.databricks.spark.csv._

val sqlContext = new SQLContext(sc)

val cars = sqlContext.csvFile("cars.csv")
cars.select("year", "model").saveAsCsvFile("newcars.tsv", Map("header" -> "false", "delimiter" -> "\t"))

Java API

As in Scala, we recommend the load/save functions in SQLContext.

import java.util.HashMap;

import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;

SQLContext sqlContext = new SQLContext(sc);

HashMap<String, String> options = new HashMap<String, String>();
options.put("header", "true");

DataFrame df = sqlContext.load("cars.csv", options);
df.select("year", "model").save("newcars.csv", options);

In Java (as well as Scala), CSV files can also be read using functions in CsvParser.

import com.databricks.spark.csv.CsvParser;
SQLContext sqlContext = new org.apache.spark.sql.SQLContext(sc);

DataFrame cars = (new CsvParser()).withUseHeader(true).csvFile(sqlContext, "cars.csv");
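The equivalent CsvParser call in Scala looks much the same — a sketch mirroring the Java example above:

```scala
import com.databricks.spark.csv.CsvParser

val sqlContext = new org.apache.spark.sql.SQLContext(sc)

// Builder-style configuration, then read the file into a DataFrame.
val cars = new CsvParser().withUseHeader(true).csvFile(sqlContext, "cars.csv")
```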

Python API

In Python you can read and save CSV files using load/save functions.

from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)

df = sqlContext.load("cars.csv", header = True)
df.select("year", "model").save("newcars.csv", header = False, delimiter = "\t")

Building From Source

This library is built with SBT, which is downloaded automatically by the included shell script. To build a JAR file, run sbt/sbt assembly from the project root.
