---
title: Submit jobs from R Tools for Visual Studio - Azure HDInsight
description: Submit R jobs from your local Visual Studio machine to an HDInsight cluster.
services: hdinsight
ms.service: hdinsight
author: maxluk
ms.author: maxluk
ms.reviewer: jasonh
ms.custom: hdinsightactive
ms.topic: conceptual
ms.date: 06/27/2018
---

# Submit jobs from R Tools for Visual Studio
R Tools for Visual Studio (RTVS) is a free, open-source extension for the Community (free), Professional, and Enterprise editions of both Visual Studio 2017 and Visual Studio 2015 Update 3 or higher.

RTVS enhances your R workflow by offering tools such as the R Interactive window (REPL), IntelliSense (code completion), plot visualization through R libraries such as ggplot2 and ggvis, R code debugging, and more.
## Prerequisites

- Install R Tools for Visual Studio.
- In the Visual Studio installer, select the **Data science and analytical applications** workload, then select the **R language support**, **Runtime support for R development**, and **Microsoft R Client** options.
- You need public and private keys for SSH authentication.
- Install ML Server on your machine. ML Server provides the `RevoScaleR` and `RxSpark` functions.
- Install PuTTY to provide a compute context to run `RevoScaleR` functions from your local client to your HDInsight cluster.
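The SSH key prerequisite can be satisfied from any machine with OpenSSH. The following is a minimal sketch; the key file name `hdi_cluster_key` is just an example, and in practice you should protect the private key with a passphrase:

```shell
# Generate a 2048-bit RSA key pair (example path; -N "" means no passphrase,
# which is convenient for testing but not recommended for real clusters).
ssh-keygen -t rsa -b 2048 -N "" -f ~/.ssh/hdi_cluster_key

# ~/.ssh/hdi_cluster_key      -> private key (PuTTY users convert this to
#                                .ppk format with PuTTYgen)
# ~/.ssh/hdi_cluster_key.pub  -> public key, supplied when you create the cluster
```

You provide the public key when creating the cluster, and later point `mySshSwitches` at the private key (in `.ppk` form when using PuTTY's plink).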
You have the option to apply the Data Science Settings to your Visual Studio environment, which provides a new workspace layout for the R tools.

1. To save your current Visual Studio settings, use the **Tools** > **Import and Export Settings** command, then select **Export selected environment settings** and specify a file name. To restore those settings, use the same command and select **Import selected environment settings**.

2. Go to the **R Tools** menu item, then select **Data Science Settings...**.

   > [!NOTE]
   > Using the approach in step 1, you can also save and restore your personalized data scientist layout, rather than repeating the **Data Science Settings** command.
1. Create your HDInsight ML Services cluster.

2. Install the RTVS extension.

3. Download the samples zip file.

4. Open `examples/Examples.sln` to launch the solution in Visual Studio.

5. Open the `1-Getting Started with R.R` file in the `A first look at R` solution folder.

6. Starting at the top of the file, press **Ctrl+Enter** to send each line, one at a time, to the R Interactive window. Some lines might take a while as they install packages.

7. After running all the lines in the script, you should see output similar to this:
Using Microsoft ML Server or Microsoft R Client from a Windows computer equipped with PuTTY, you can create a compute context that runs distributed `RevoScaleR` functions from your local client on your HDInsight cluster. Use `RxSpark` to create the compute context, specifying your username, the Apache Hadoop cluster's edge node, SSH switches, and so forth.
1. To find your edge node's host name, open your HDInsight ML Services cluster pane in the Azure portal, then select **Secure Shell (SSH)** on the top menu of the **Overview** pane.

2. Copy the **Edge node host name** value.

3. Paste the following code into the R Interactive window in Visual Studio, altering the values of the setup variables to match your environment.
    ```r
    # Setup variables that connect the compute context to your HDInsight cluster
    mySshHostname <- 'r-cluster-ed-ssh.azurehdinsight.net' # HDI secure shell hostname
    mySshUsername <- 'sshuser' # HDI SSH username
    mySshClientDir <- "C:\\Program Files (x86)\\PuTTY"
    mySshSwitches <- '-i C:\\Users\\azureuser\\r.ppk' # Path to your private ssh key
    myHdfsShareDir <- paste("/user/RevoShare", mySshUsername, sep = "/")
    myShareDir <- paste("/var/RevoShare", mySshUsername, sep = "/")
    mySshProfileScript <- "/usr/lib64/microsoft-r/3.3/hadoop/RevoHadoopEnvVars.site"

    # Create the Spark cluster compute context
    mySparkCluster <- RxSpark(
        sshUsername = mySshUsername,
        sshHostname = mySshHostname,
        sshSwitches = mySshSwitches,
        sshProfileScript = mySshProfileScript,
        consoleOutput = TRUE,
        hdfsShareDir = myHdfsShareDir,
        shareDir = myShareDir,
        sshClientDir = mySshClientDir
    )

    # Set the current compute context as the Spark compute context defined above
    rxSetComputeContext(mySparkCluster)
    ```
4. Execute the following commands in the R Interactive window:

    ```r
    rxHadoopCommand("version") # should return version information
    rxHadoopMakeDir("/user/RevoShare/newUser") # creates a new folder in your storage account
    rxHadoopCopy("/example/data/people.json", "/user/RevoShare/newUser") # copies file to new folder
    ```
    You should see output similar to the following:
5. Verify that `rxHadoopCopy` successfully copied the `people.json` file from the example data folder to the newly created `/user/RevoShare/newUser` folder:

    1. From your HDInsight ML Services cluster pane in the Azure portal, select **Storage accounts** from the left-hand menu.

    2. Select the default storage account for your cluster, making note of the container/directory name.

    3. Select **Containers** from the left-hand menu on your storage account pane.

    4. Select your cluster's container name, browse to the **user** folder (you might have to select **Load more** at the bottom of the list), then select **RevoShare**, then **newUser**. The `people.json` file should be displayed in the `newUser` folder.
6. After you are finished using the current Apache Spark context, you must stop it. You cannot run multiple contexts at once:

    ```r
    rxStopEngine(mySparkCluster)
    ```
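If you prefer to confirm the copy from the R Interactive window rather than the portal, RevoScaleR also exposes Hadoop file-system helpers. A minimal sketch, run while the Spark compute context is still active (the path matches the folder created earlier in this walkthrough):

```r
# List the contents of the target folder on the cluster's storage;
# people.json should appear in the listing. This requires the RxSpark
# compute context created above and cannot run without a cluster.
rxHadoopListFiles("/user/RevoShare/newUser")
```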
## Next steps

- Compute context options for ML Services on HDInsight
- Combining ScaleR and SparkR provides an example of airline flight delay predictions.