To build SparkR on Windows, the following steps are required:

1. Install R (>= 3.1) and Rtools. Make sure to include Rtools and R in `PATH`.

2. Install JDK8 and set `JAVA_HOME` in the system environment variables.

3. Download and install Maven. Also include the `bin` directory in Maven in `PATH`.

4. Set `MAVEN_OPTS` as described in Building Spark.

5. Open a command shell (`cmd`) in the Spark directory and build Spark with Maven, including the `-Psparkr` profile to build the R package. For example, to use the default Hadoop versions you can run

    ```
    mvn.cmd -DskipTests -Psparkr package
    ```

    `.\build\mvn` is a shell script, so `mvn.cmd` should be used directly on Windows. See the sketch after this list for a combined example of these steps.
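As a rough sketch, the setup above might look like the following `cmd` session. All install paths are hypothetical placeholders for wherever you actually installed R, Rtools, the JDK, and Maven, and the `MAVEN_OPTS` value is only an example; use the settings recommended in Building Spark.

```
rem All install paths below are hypothetical; substitute your actual locations.
set PATH=C:\Program Files\R\R-3.4.1\bin;C:\Rtools\bin;C:\apache-maven-3.5.0\bin;%PATH%
set JAVA_HOME=C:\Program Files\Java\jdk1.8.0_144

rem Example memory settings; see Building Spark for the current recommendation.
set MAVEN_OPTS=-Xmx2g -XX:ReservedCodeCacheSize=512m

rem From the Spark directory, build Spark and the R package (step 5).
mvn.cmd -DskipTests -Psparkr package
```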
To run the SparkR unit tests on Windows, the following steps are required, assuming you are in the Spark root directory and do not have Apache Hadoop installed already:

1. Create a folder to download Hadoop related files for Windows. For example, `cd ..` and `mkdir hadoop`.

2. Download the relevant Hadoop bin package from steveloughran/winutils. While these are not official ASF artifacts, they are built from the ASF release git hashes by a Hadoop PMC member on a dedicated Windows VM. For further reading, consult Windows Problems on the Hadoop wiki.

3. Install the files into `hadoop\bin`; make sure that `winutils.exe` and `hadoop.dll` are present.

4. Set the environment variable `HADOOP_HOME` to the full path to the newly created `hadoop` directory.

5. Run unit tests for SparkR by running the command below (a combined sketch of all five steps follows this list). You need to install the needed packages following the instructions under Running R Tests first:

    ```
    .\bin\spark-submit2.cmd --conf spark.hadoop.fs.defaultFS="file:///" R\pkg\tests\run-all.R
    ```
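Putting the steps together, a minimal `cmd` session might look like the following. The Hadoop version (`hadoop-2.7.1`), the download URLs, and the use of `curl.exe` are illustrative assumptions; fetch whichever winutils release matches the Hadoop version your Spark build targets, by any means you prefer.

```
rem Run from the Spark root directory. Remember where it is so we can return.
set SPARK_DIR=%CD%

rem Step 1: create a sibling folder for the Hadoop files.
cd ..
mkdir hadoop\bin

rem Steps 2-3 (hypothetical URLs, Hadoop 2.7.1 as an example): fetch the binaries.
curl.exe -L -o hadoop\bin\winutils.exe https://github.com/steveloughran/winutils/raw/master/hadoop-2.7.1/bin/winutils.exe
curl.exe -L -o hadoop\bin\hadoop.dll https://github.com/steveloughran/winutils/raw/master/hadoop-2.7.1/bin/hadoop.dll

rem Step 4: point HADOOP_HOME at the new folder.
set HADOOP_HOME=%CD%\hadoop

rem Step 5: run the SparkR unit tests from the Spark root.
cd /d "%SPARK_DIR%"
.\bin\spark-submit2.cmd --conf spark.hadoop.fs.defaultFS="file:///" R\pkg\tests\run-all.R
```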