Test against Spark 1.6.0
Since [Spark 1.6.0](http://spark.apache.org/docs/1.6.0/) has been released and this project remains a third-party library, it would be better to test against Spark 1.6.0 as well.
Fortunately, the existing tests pass against this version.

Author: hyukjinkwon <[email protected]>

Closes databricks#227 from HyukjinKwon/spark-version-up.
HyukjinKwon authored and falaki committed Jan 6, 2016
1 parent 4a5ee81 commit 44964a2
Showing 4 changed files with 12 additions and 5 deletions.
7 changes: 7 additions & 0 deletions .travis.yml
```diff
@@ -26,6 +26,13 @@ matrix:
   - jdk: openjdk7
     scala: 2.11.7
     env: TEST_SPARK_VERSION="1.5.0"
+  # Spark 1.6.0
+  - jdk: openjdk7
+    scala: 2.10.5
+    env: TEST_SPARK_VERSION="1.6.0"
+  - jdk: openjdk7
+    scala: 2.11.7
+    env: TEST_SPARK_VERSION="1.6.0"
 script:
   - sbt -Dspark.testVersion=$TEST_SPARK_VERSION ++$TRAVIS_SCALA_VERSION coverage test
   - sbt ++$TRAVIS_SCALA_VERSION assembly
```
2 changes: 1 addition & 1 deletion build.sbt
```diff
@@ -10,7 +10,7 @@ spName := "databricks/spark-csv"
 
 crossScalaVersions := Seq("2.10.5", "2.11.7")
 
-sparkVersion := "1.5.0"
+sparkVersion := "1.6.0"
 
 val testSparkVersion = settingKey[String]("The version of Spark to test against.")
```
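The `-Dspark.testVersion=$TEST_SPARK_VERSION` flag in the Travis script pairs with the `testSparkVersion` setting declared above. A minimal sketch of how such a setting can be wired up in `build.sbt` follows; the fallback logic and the dependency line are assumptions for illustration, not necessarily this repository's exact build code:

```scala
// build.sbt sketch (assumed wiring): read the JVM property passed via
// `sbt -Dspark.testVersion=...`, falling back to the declared sparkVersion.
val testSparkVersion = settingKey[String]("The version of Spark to test against.")

testSparkVersion := sys.props.getOrElse("spark.testVersion", sparkVersion.value)

// Pull in the requested Spark version for tests only, so the published
// artifact's dependencies are unaffected.
libraryDependencies += "org.apache.spark" %% "spark-sql" % testSparkVersion.value % "test"
```

This is why the commit only needs to bump `sparkVersion` once and add Travis matrix entries: the test dependency tracks whichever version the CI job requests.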
```diff
@@ -244,7 +244,6 @@ private class StringIteratorReader(val iter: Iterator[String]) extends java.io.R
        }
      }
    }
-
    n
  }
```
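For context, the class touched here, `StringIteratorReader`, adapts an `Iterator[String]` to `java.io.Reader` so a CSV parser can consume lines as one character stream. A simplified sketch of that idea (illustrative only — the class name `IteratorReaderSketch` and the one-line-at-a-time buffering are assumptions, not the library's actual implementation):

```scala
import java.io.Reader

// Illustrative sketch: expose an Iterator[String] as a character stream,
// inserting '\n' between the underlying strings.
class IteratorReaderSketch(iter: Iterator[String]) extends Reader {
  private var current: String = ""
  private var pos = 0

  override def read(cbuf: Array[Char], off: Int, len: Int): Int = {
    if (pos >= current.length) {          // current line exhausted
      if (!iter.hasNext) return -1        // end of stream
      current = iter.next() + "\n"        // fetch next line, re-add newline
      pos = 0
    }
    val n = math.min(len, current.length - pos)
    current.getChars(pos, pos + n, cbuf, off)
    pos += n
    n                                     // number of chars copied
  }

  override def close(): Unit = ()
}
```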
```diff
@@ -25,7 +25,8 @@ import org.apache.spark.sql.types._
 private[csv] object InferSchema {
 
   /**
-   * Similar to the JSON schema inference. [[org.apache.spark.sql.json.InferSchema]]
+   * Similar to the JSON schema inference.
+   * [[org.apache.spark.sql.execution.datasources.json.InferSchema]]
    * 1. Infer type of each row
    * 2. Merge row types to find common type
    * 3. Replace any null types with string type
@@ -35,11 +36,11 @@ private[csv] object InferSchema {
     val startType: Array[DataType] = Array.fill[DataType](header.length)(NullType)
     val rootTypes: Array[DataType] = tokenRdd.aggregate(startType)(inferRowType, mergeRowTypes)
 
-    val stuctFields = header.zip(rootTypes).map { case (thisHeader, rootType) =>
+    val structFields = header.zip(rootTypes).map { case (thisHeader, rootType) =>
       StructField(thisHeader, rootType, nullable = true)
     }
 
-    StructType(stuctFields)
+    StructType(structFields)
   }
 
   private def inferRowType(rowSoFar: Array[DataType], next: Array[String]): Array[DataType] = {
```
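The three steps in the doc comment (infer a type per field, merge row types to a common type, default remaining nulls to string) can be sketched without Spark, using a plain collection in place of the RDD. Everything below is a hypothetical simplification — the type lattice, `inferField`, and the merge rules are illustrative stand-ins, not the library's code:

```scala
// Simplified sketch of the schema-inference steps; not the library's code.
object InferSchemaSketch {
  sealed trait DT
  case object NullT extends DT
  case object IntT extends DT
  case object DoubleT extends DT
  case object StringT extends DT

  // Step 1: infer the type of a single field value.
  def inferField(s: String): DT =
    if (s == null || s.isEmpty) NullT
    else if (s.forall(_.isDigit)) IntT
    else if (scala.util.Try(s.toDouble).isSuccess) DoubleT
    else StringT

  // Step 2: merge two inferred types into their common supertype.
  def merge(a: DT, b: DT): DT = (a, b) match {
    case (NullT, t)                        => t
    case (t, NullT)                        => t
    case (x, y) if x == y                  => x
    case (IntT, DoubleT) | (DoubleT, IntT) => DoubleT
    case _                                 => StringT
  }

  // Steps 1-3 over all rows: fold per-column, then default nulls to string.
  def infer(rows: Seq[Array[String]], numCols: Int): Seq[DT] = {
    val start = Array.fill[DT](numCols)(NullT)
    val merged = rows.foldLeft(start) { (acc, row) =>
      acc.zip(row.map(inferField)).map { case (t, u) => merge(t, u) }
    }
    merged.map(t => if (t == NullT) StringT else t).toSeq
  }
}
```

In the real code, the per-column fold is distributed via `tokenRdd.aggregate(startType)(inferRowType, mergeRowTypes)`, as seen in the hunk above.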
