[SPARK-17711][TEST-HADOOP2.2] Fix hadoop2.2 compilation error
## What changes were proposed in this pull request?

Fix the hadoop2.2 compilation error by replacing `IOUtils.read` with Guava's `ByteStreams.read` in `Utils.scala`, since the commons-io version pulled in by the hadoop-2.2 profile does not provide `IOUtils.read`.

## How was this patch tested?

Existing tests.

cc tdas zsxwing

Author: Yu Peng <[email protected]>

Closes apache#15537 from loneknightpy/fix-17711.
loneknightpy authored and zsxwing committed Oct 19, 2016
1 parent 5f20ae0 commit 2629cd7
Showing 1 changed file with 2 additions and 3 deletions.
5 changes: 2 additions & 3 deletions core/src/main/scala/org/apache/spark/util/Utils.scala

```diff
@@ -42,7 +42,6 @@ import scala.util.control.{ControlThrowable, NonFatal}
 import com.google.common.cache.{CacheBuilder, CacheLoader, LoadingCache}
 import com.google.common.io.{ByteStreams, Files => GFiles}
 import com.google.common.net.InetAddresses
-import org.apache.commons.io.IOUtils
 import org.apache.commons.lang3.SystemUtils
 import org.apache.hadoop.conf.Configuration
 import org.apache.hadoop.fs.{FileSystem, FileUtil, Path}
@@ -1486,10 +1485,10 @@ private[spark] object Utils extends Logging {
       val gzInputStream = new GZIPInputStream(new FileInputStream(file))
       val bufSize = 1024
       val buf = new Array[Byte](bufSize)
-      var numBytes = IOUtils.read(gzInputStream, buf)
+      var numBytes = ByteStreams.read(gzInputStream, buf, 0, bufSize)
       while (numBytes > 0) {
         fileSize += numBytes
-        numBytes = IOUtils.read(gzInputStream, buf)
+        numBytes = ByteStreams.read(gzInputStream, buf, 0, bufSize)
       }
       fileSize
     } catch {
```
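The patched loop computes the uncompressed size of a gzip file by repeatedly filling a buffer until EOF. Below is a minimal, self-contained Java sketch of the same pattern; to keep it JDK-only, a hand-rolled `readFully` stands in for Guava's `ByteStreams.read(in, buf, off, len)` (which blocks until the requested length is read or the stream ends). The class and method names here are illustrative, not Spark's actual code.

```java
import java.io.*;
import java.util.zip.*;

public class GzipSize {
    // Reads up to len bytes into buf starting at off, looping until the
    // slice is full or EOF, mirroring ByteStreams.read's contract.
    static int readFully(InputStream in, byte[] buf, int off, int len) throws IOException {
        int total = 0;
        while (total < len) {
            int n = in.read(buf, off + total, len - total);
            if (n == -1) break; // end of stream
            total += n;
        }
        return total;
    }

    // Decompresses a gzip file and returns the uncompressed byte count,
    // using the same loop shape as the patched Spark code.
    static long uncompressedSize(File file) throws IOException {
        long size = 0;
        try (GZIPInputStream gz = new GZIPInputStream(new FileInputStream(file))) {
            byte[] buf = new byte[1024];
            int n = readFully(gz, buf, 0, buf.length);
            while (n > 0) {
                size += n;
                n = readFully(gz, buf, 0, buf.length);
            }
        }
        return size;
    }

    public static void main(String[] args) throws IOException {
        // Write 4096 zero bytes through a gzip stream, then measure them back.
        File f = File.createTempFile("demo", ".gz");
        try (GZIPOutputStream out = new GZIPOutputStream(new FileOutputStream(f))) {
            out.write(new byte[4096]);
        }
        System.out.println(uncompressedSize(f)); // prints 4096
        f.delete();
    }
}
```

Note that a plain `in.read(buf)` may return fewer bytes than the buffer holds even before EOF; the fill-until-EOF-or-full loop is what makes `numBytes > 0` a safe termination test in the Spark code above.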
