---
title: "Step 5: Deploy the Machine Learning Web service | Microsoft Docs"
description: "Step 5 of the Develop a predictive solution walkthrough: Deploy a predictive experiment in Machine Learning Studio as a Web service."
services: machine-learning
documentationcenter: ''
author: garyericson
manager: jhubbard
editor: cgronlun
ms.assetid: 3fca74a3-c44b-4583-a218-c14c46ee5338
ms.service: machine-learning
ms.workload: data-services
ms.tgt_pltfrm: na
ms.devlang: na
ms.topic: article
ms.date: 12/16/2016
ms.author: garye
---

# Walkthrough Step 5: Deploy the Azure Machine Learning Web service

This is the fifth step of the walkthrough, Develop a predictive analytics solution in Azure Machine Learning:

  1. Create a Machine Learning workspace
  2. Upload existing data
  3. Create a new experiment
  4. Train and evaluate the models
  5. Deploy the Web service
  6. Access the Web service

To give others a chance to use the predictive model we've developed in this walkthrough, we can deploy it as a Web service on Azure.

Up to this point we've been experimenting with training our model. But the deployed service won't do any more training; it generates predictions by scoring the user's input with our model. So we need to do some preparation to convert this experiment from a training experiment to a predictive experiment.

This is a two-step process:

  1. Convert the training experiment we've created into a predictive experiment
  2. Deploy the predictive experiment as a Web service

But first, we need to trim this experiment down a little. We currently have two different models in the experiment, but we only want one model when we deploy this as a Web service.

Let's say we've decided that the boosted tree model performed better than the SVM model. So the first thing to do is remove the Two-Class Support Vector Machine module and the modules that were used for training it. You may want to make a copy of the experiment first by clicking Save As at the bottom of the experiment canvas.

We need to delete the following modules:

  • Two-Class Support Vector Machine
  • The Train Model and Score Model modules that were connected to it
  • Evaluate Model (we're finished evaluating the models)

Select each module and press the Delete key, or right-click the module and select Delete.

Now we're ready to deploy this model using the Two-Class Boosted Decision Tree.

## Convert the training experiment to a predictive experiment

Converting to a predictive experiment involves three steps:

  1. Save the model we've trained and then replace our training modules
  2. Trim the experiment to remove modules that were only needed for training
  3. Define where the Web service will accept input and where it generates the output

We could do this manually, but fortunately all three steps can be accomplished by clicking Set Up Web service at the bottom of the experiment canvas (and selecting the Predictive Web service option).

Tip

If you want more details on what happens when you convert a training experiment to a predictive experiment, see Convert a Machine Learning training experiment to a predictive experiment.

When you click Set Up Web service, several things happen:

  • The trained model is converted to a single Trained Model module and stored in the module palette to the left of the experiment canvas (you can find it under Trained Models)
  • Modules that were used for training are removed; specifically: Two-Class Boosted Decision Tree, Train Model, Split, and the second Execute R Script module that was used to prepare the test data
  • The saved trained model is added back into the experiment
  • Web service input and Web service output modules are added (these identify where the user's data will enter the model, and what data is returned, when the Web service is accessed)

Note

You can see that the experiment is saved in two parts under tabs that have been added at the top of the experiment canvas. The original training experiment is under the tab Training experiment, and the newly created predictive experiment is under Predictive experiment. The predictive experiment is the one we'll deploy as a Web service.

We need to take one additional step with this particular experiment. We added two Execute R Script modules to provide a weighting function to the data. That was just a trick we needed for training and testing, so we can take those modules out of the final model.

Machine Learning Studio removed one Execute R Script module when it removed the Split module. Now we can remove the other and connect Metadata Editor directly to Score Model.

Our experiment should now look like this:

Scoring the trained model

Note

You may be wondering why we left the UCI German Credit Card Data dataset in the predictive experiment. The service is going to score the user's data, not the original dataset, so why leave the original dataset in the model?

It's true that the service doesn't need the original credit card data. But it does need the schema for that data, which includes information such as how many columns there are and which columns are numeric. This schema information is necessary to interpret the user's data. We leave these components connected so that the scoring module has the dataset schema when the service is running. The data isn't used, just the schema.
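Since the request sent to the deployed service has to mirror that schema, it can help to see what the payload will look like. Here's a rough sketch, assuming the classic request format; the column names are hypothetical placeholders for whatever your dataset actually uses:

```python
# A sketch of the request body the deployed service will expect. The column
# names below are hypothetical placeholders; the real ones come from the
# schema of the dataset left connected in the predictive experiment.
payload = {
    "Inputs": {
        "input1": {
            "ColumnNames": [
                "Status of checking account",  # hypothetical column name
                "Duration in months",          # hypothetical column name
                # ...the rest of the feature columns in your dataset...
            ],
            # One row of input values, given as strings, one per column
            "Values": [
                ["A11", "6"],
            ],
        }
    },
    "GlobalParameters": {},
}
```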

Run the experiment one last time (click Run). If you want to verify that the model is still working, click the output of the Score Model module and select View Results. You'll see that the original data is displayed, along with the credit risk value ("Scored Labels") and the scoring probability value ("Scored Probabilities").

## Deploy the Web service

You can deploy the experiment as either a classic Web service or a new Web service that's based on Azure Resource Manager.

### Deploy as a classic Web service

To deploy a classic Web service derived from our experiment, click Deploy Web Service below the canvas and select Deploy Web Service [Classic]. Machine Learning Studio deploys the experiment as a Web service and takes you to the dashboard for that Web service. From here, you can return to the experiment (View snapshot or View latest) and run a simple test of the Web service (see Test the Web service below). There is also information here for creating applications that can access the Web service (more on that in the next step of this walkthrough).

Web service dashboard

You can configure the service by clicking the CONFIGURATION tab. Here you can modify the service name (it's given the experiment name by default) and give it a description. You can also give the input and output data friendlier labels.

Configure the Web service

### Deploy as a New Web service

To deploy a New Web service derived from our experiment:

  1. Click Deploy Web Service below the canvas and select Deploy Web Service [New]. Machine Learning Studio transfers you to the Azure Machine Learning Web services Deploy Experiment page.

  2. Enter a name for the Web service.

  3. For Price Plan, select an existing pricing plan, or select "Create new", give the new plan a name, and choose the monthly plan option. The plan tiers default to the plans for your default region, and your Web service is deployed to that region.

  4. Click Deploy.

After a few minutes, the Quickstart page for your Web service opens.

You can configure the service by clicking the Configure tab. Here you can modify the service title and give it a description.

To test the Web service, click the Test tab (see Test the Web service below). For information on creating applications that can access the Web service, click the Consume tab (the next step in this walkthrough goes into more detail).

Tip

You can update the Web service after you've deployed it. For example, if you want to change your model, then you can edit the training experiment, tweak the model parameters, and click Deploy Web Service, selecting Deploy Web Service [Classic] or Deploy Web Service [New]. When you deploy the experiment again, it replaces the Web service, now using your updated model.

## Test the Web service

When the Web service is accessed, the user's data enters through the Web service input module where it's passed to the Score Model module and scored. The way we've set up the predictive experiment, the model expects data in the same format as the original credit risk dataset.

The results are then returned to the user from the Web service through the Web service output module.
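To make the round trip concrete, here's a minimal sketch of calling the Request-Response endpoint from Python, reusing the payload dict sketched earlier. The URL and API key are placeholders; the real values come from the Web service dashboard (classic) or the Consume page (new), which the next step of this walkthrough covers:

```python
# A minimal sketch of calling the Request-Response endpoint. The URL and
# API key below are placeholders, not real values.
import requests

url = ("https://<region>.services.azureml.net/workspaces/<workspace-id>"
       "/services/<service-id>/execute?api-version=2.0")
api_key = "<your-api-key>"

headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer " + api_key,
}

# `payload` is the schema-mirroring request body sketched earlier
response = requests.post(url, headers=headers, json=payload)
response.raise_for_status()

print(response.json())  # the scored results come back as JSON
```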

Tip

The way we have the predictive experiment configured, the entire set of results from the Score Model module is returned. This includes all the input data plus the credit risk value and the scoring probability. If you want to return something different, for example only the credit risk value, you can insert a Project Columns module between Score Model and the Web service output to eliminate the columns you don't want the Web service to return.
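As a rough illustration of the difference, assuming the classic JSON response shape (the column list is illustrative, not exact):

```python
# Hypothetical shape of the default response: every input column is echoed
# back, followed by the two columns that Score Model adds.
default_response = {
    "Results": {
        "output1": {
            "value": {
                "ColumnNames": [
                    # ...all the original input columns...
                    "Scored Labels",         # the predicted credit risk value
                    "Scored Probabilities",  # the scoring probability
                ],
                "Values": [["1", "0.779"]],  # illustrative values only
            }
        }
    }
}

# With a Project Columns module trimming the output, ColumnNames could be
# reduced to just ["Scored Labels"].
```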

### Test a classic Web service

You can test the Web service in Machine Learning Studio or in the Azure Machine Learning Web Services portal. Testing in the Azure Machine Learning Web Services portal has the advantage of allowing you to enable sample data that you can use to test the Request-Response service (see the Tip below).

#### Test in Machine Learning Studio

  1. On the DASHBOARD page, click the Test button under Default Endpoint. A dialog pops up and asks you for the input data for the service. These are the same columns that appeared in the original credit risk dataset.

  2. Enter a set of data and then click OK.

#### Test in the Azure Machine Learning Web Services portal

  1. On the DASHBOARD page, click the Test preview link under Default Endpoint. The test page in the Azure Machine Learning Web Services portal for the Web service endpoint opens and asks you for the input data for the service. These are the same columns that appeared in the original credit risk dataset.

  2. Click Test Request-Response.

### Test a new Web service

  1. In the Azure Machine Learning Web services portal, click Test at the top of the page. The Test page opens and you can input data for the service. The input fields displayed correspond to the columns that appeared in the original credit risk dataset.

  2. Enter a set of data and then click Test Request-Response.

The results of the test are displayed on the right-hand side of the page in the output column.

Tip

When testing in the Azure Machine Learning Web Services portal, you can have the portal create sample data that you can use to test the Request-Response service. On the Configure page, select "Yes" for Sample Data Enabled?. When you open the Request-Response tab on the Test page, the portal fills in sample data taken from the original credit risk dataset.

## Manage the Web service

### Manage a Classic Web service in the Azure classic portal

Once you've deployed your Classic Web service, you can manage it from the Azure classic portal.

  1. Sign in to the Azure classic portal
  2. In the Microsoft Azure services panel, click MACHINE LEARNING
  3. Click your workspace
  4. Click the Web services tab
  5. Click the Web service we created
  6. Click the "default" endpoint

From here, you can do things like monitor how the Web service is doing and make performance tweaks by changing how many concurrent calls the service can handle.

For more details, see the Azure Machine Learning documentation.

### Manage a Web service in the Azure Machine Learning Web Services portal

Once you've deployed your Web service, whether Classic or New, you can manage it from the Azure Machine Learning Web services portal.

To monitor the performance of your Web service:

  1. Sign in to the Azure Machine Learning Web services portal
  2. Click Web services
  3. Click your Web service
  4. Click the Dashboard

Next: Access the Web service