
Various doc enhancements from bug bash (microsoft#3228)
* Close microsoft#3208 resolving Customise VA doc for csharp/typescript

* Close microsoft#3211 DLS tutorial adds BF Emulator and updates Cognitive Services client

* Sync Calendar Skill doc with latest from next branch

* Close microsoft#3221 clarifying running the meeting room script
ryanisgrig authored Mar 25, 2020
1 parent 190078e commit 11100fd
Showing 10 changed files with 41 additions and 25 deletions.
Original file line number Diff line number Diff line change
@@ -2,15 +2,19 @@
layout: tutorial
category: Clients and Channels
subcategory: Extend to Direct Line Speech
title: Build speech sample app
title: Select a Direct Line Speech client
order: 4
---

# Tutorial: {{page.subcategory}}

## Integrating with the Speech Channel
## Option A: Using the Bot Framework Emulator
1. Download the [latest release from the Bot Framework Emulator repository](https://github.com/Microsoft/botframework-emulator/).
![Bot Framework Emulator with Direct Line Speech Configuration]({{site.baseurl}}/assets/images/dlspeech_emulator.png)

1. Download the [latest release from the Direct Line Speech Client repository](https://github.com/Azure-Samples/Cognitive-Services-Direct-Line-Speech-Client/releases).
1. Follow the [quickstart instructions](https://github.com/Azure-Samples/Cognitive-Services-Direct-Line-Speech-Client#quickstart) to set up your environment and connect to your Virtual Assistant.

## Option B: Using a sample Cognitive Services Voice Assistant client
1. Download the [latest release from the Cognitive Services Voice Assistant repository](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant/releases).
1. Follow the [instructions on getting started](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant/tree/master/samples/clients/csharp-wpf#getting-started) to set up your environment and connect to your Virtual Assistant.

![Direct Line Speech Client Configuration]({{site.baseurl}}/assets/images/dlspeechclient.png)
27 changes: 19 additions & 8 deletions docs/_docs/skills/samples/calendar.md
@@ -172,9 +172,9 @@ To use a Google account follow these steps:

The Calendar skill provides additional support for searching and booking meeting rooms. Because search limitations in Microsoft Graph constrain the experience, we leverage Azure Search to provide fuzzy meeting room name matching, floor-level filtering, and more.

1. To simplify the process of extracting your meeting room data and inserting it into Azure Search, we have provided an example PowerShell script. However, you should ensure that `displayName`, `emailAddress`, `building` and `floorNumber` are populated within your Office 365 tenant (example below). You can do this through the [Graph Explorer]() using the query shown below; this information is required for the meeting room booking experience.
1. To simplify the process of extracting your meeting room data and inserting it into Azure Search, we have provided an example PowerShell script. However, you should ensure that `displayName`, `emailAddress`, `building` and `floorNumber` are populated within your Office 365 tenant (example below). You can do this through the [Graph Explorer](https://developer.microsoft.com/en-us/graph/graph-explorer/preview) using the query shown below; this information is required for the meeting room booking experience.

`https://graph.microsoft.com/beta/me/findrooms`
`https://graph.microsoft.com/beta/places/microsoft.graph.room`
```json
{
"value": [
@@ -196,16 +196,26 @@
}
```
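Before running the script, it can help to verify that every room entry actually carries the four required fields. Below is a minimal sketch (TypeScript; a hypothetical helper, not part of the skill) that checks a response of the shape shown above:

```typescript
// Shape of a room entry from the beta places endpoint; only the
// fields the PowerShell script relies on are modeled here.
interface GraphRoom {
  displayName?: string;
  emailAddress?: string;
  building?: string;
  floorNumber?: number;
}

// Return the rooms missing any required field so the gaps can be
// fixed in the Office 365 tenant before indexing into Azure Search.
function findIncompleteRooms(rooms: GraphRoom[]): GraphRoom[] {
  return rooms.filter(
    (r) =>
      !r.displayName ||
      !r.emailAddress ||
      !r.building ||
      r.floorNumber === undefined
  );
}
```

A room with no `building` set, for example, would be returned here and can be corrected in the tenant before the indexing step.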

2. Configure the settings of your registered app in Azure App Registration portal
- Make sure your account has permission to access your tenant's meeting room data, testing the previous query will validate this.
- In Authentication, set "Treat application as a public client" as "Yes"
- In API Permissions, add Scope: **Place.Read.All**
1. In the **Azure Portal**, configure the settings of your registered Calendar Skill app under **Azure Active Directory** > **App registrations**
- This app will request permission for the **Place.Read.All** scope. There are two ways to grant consent:
1. In the **API permissions** tab, add a permission for the **Place.Read.All** scope, and grant admin consent for your organization.
2. Make sure your account has permission to access your tenant's meeting room data so that you can consent on behalf of your organization in the login step; testing the previous query will validate this.
- In the **Authentication** tab
- Toggle **Default client type** > **Treat application as a public client** to "Yes"
- Set **Supported account types** according to your own requirements

1. Run the following command to install the module:
```powershell
Install-Module -Name CosmosDB
```

3. Run the following command:
1. Run the following command:
```powershell
./Deployment/Scripts/enable_findmeetingroom.ps1
```

![A successful run of the Meeting Room script]({{site.baseurl}}/assets/images/calendar-meeting-room-script.png)

### What do these parameters mean?

|Parameter|Description|Required|
@@ -214,6 +224,7 @@
|---|---|---|
|cosmosDbAccount | The account name of an existing CosmosDb deployment where the meeting room data will be stored, this will then be used as a data source by Azure Search. | Yes |
|primaryKey | The primary key of the CosmosDB deployment | Yes |
|appId | AppId of an authorised Azure AD application which can access Meeting room data | Yes |
|tenantId | The tenantId corresponding to the application. If you have set "Supported account types" to "Multitenant" and your account belongs to a different tenant, use "common". | Yes |

You can access all the required parameters from the [Deployment](#deployment) step. <br>
**Note:** When running the script, you will be asked to sign in with your account which can access the meeting room data in the MSGraph.
@@ -227,4 +238,4 @@ Learn how to use [events]({{site.baseurl}}/virtual-assistant/handbook/events) to
## Download a transcript
{:.toc}

<a class="btn btn-primary" href="{{site.baseurl}}/assets/transcripts/skills-calendar.transcript">Download</a>
<a class="btn btn-primary" href="{{site.baseurl}}/assets/transcripts/skills-calendar.transcript">Download</a>
@@ -13,7 +13,7 @@ order: 2

The assistant's greeting uses an [Adaptive Card](https://adaptivecards.io/), an open framework that lets you describe your content as you see fit and deliver it beautifully wherever your customers are.

1. Copy and paste the following JSON payload to demonstrate how you can start customizing the look and feel of your Assistant. Note the inline references (`@{NewUserIntroCardTitle()}`) to other LG elements to further adapt the contents at runtime.
1. Copy and paste the following JSON payload to demonstrate how you can start customizing the look and feel of your Assistant. Note the inline references (`${NewUserIntroCardTitle()}`) to other LG elements to further adapt the contents at runtime.

```json
{
@@ -43,7 +43,7 @@
"size": "Large",
"weight": "Bolder",
"color": "Light",
"text": "@{NewUserIntroCardTitle()}",
"text": "${NewUserIntroCardTitle()}",
"wrap": true
},
{
@@ -78,7 +78,7 @@
],
"$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
"version": "1.0",
"speak": "@{NewUserIntroCardTitle()}"
"speak": "${NewUserIntroCardTitle()}"
}
```
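The LG engine resolves these `${...}` references when the card is rendered. As a simplified illustration (TypeScript; the real Language Generation library also handles nested templates, parameters, and locale fallback), the mechanism looks like this:

```typescript
// Replace ${TemplateName()} references in a payload using a lookup
// table of template bodies. Unknown references are left untouched.
function expandTemplates(
  payload: string,
  templates: Record<string, () => string>
): string {
  return payload.replace(/\$\{(\w+)\(\)\}/g, (match, name) =>
    name in templates ? templates[name]() : match
  );
}
```

For example, `expandTemplates('{"text": "${NewUserIntroCardTitle()}"}', { NewUserIntroCardTitle: () => "Hi, I am your assistant!" })` yields the payload with the title substituted in, which is analogous to what happens to the `text` and `speak` properties above at runtime.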

@@ -25,6 +25,7 @@ Within `Startup.cs` in your project root directory add the newly created LG file

```csharp
var localizedTemplates = new Dictionary<string, List<string>>();
var templateFiles = new List<string>() { "MainResponses", "OnboardingResponses" };
var supportedLocales = new List<string>() { "en-us", "de-de", "es-es", "fr-fr", "it-it", "zh-cn" };
```

## Multiple Responses

@@ -38,12 +39,12 @@ Within MainResponses we provide an example of occasionally using the user's name

```
# ConfusedMessage
- I'm sorry, I didn’t understand that. Can you give me some more information?
- Sorry, I didn't get that. Can you tell me more?
- Sorry@{RandomName()}, I didn't get that. Can you tell me more?
- Sorry ${RandomName()}, I didn't get that. Can you tell me more?
- Apologies, I didn't quite understand. Can you give me more information?

# RandomName
- IF: @{Name && rand(0, 1000) > 500}
- @{concat(' ', Name)}
- IF: ${Name && rand(0, 1000) > 500}
- ${concat(' ', Name)}
- ELSE:
-
```
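The `RandomName` template above appends the user's name roughly half of the time. Its logic can be mirrored as follows (TypeScript, illustrative only; in the bot this is evaluated by the LG engine):

```typescript
// Mirror of the RandomName LG template: when a name is known, include
// it (with a leading space) for about half of all calls. The random
// source is injected so tests can be deterministic.
function randomName(
  name: string | undefined,
  rand: () => number = () => Math.floor(Math.random() * 1001)
): string {
  return name && rand() > 500 ? ` ${name}` : "";
}
```

The result is interpolated into the `ConfusedMessage` variant, so the user sometimes sees "Sorry Darren, I didn't get that" and sometimes just "Sorry, I didn't get that".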
@@ -39,7 +39,7 @@ You may wish to add an additional [QnA Maker](https://www.qnamaker.ai/) knowledgebase

```json
}
```

The `kbID`, `hostName` and `endpoint key` can all be found within the **Publish** page on the [QnA Maker portal](https://qnamaker.ai). The subscription key is available from your QnA resource in the Azure Portal.
The `kbID`, `hostname` and `endpointKey` can all be found within the **Publish** page on the [QnA Maker portal](https://qnamaker.ai). The subscription key is available from your QnA resource in the Azure Portal.

1. The final step is to update your dispatch model and associated strongly typed class (LuisGen). We have provided the `update_cognitive_models.ps1` script to simplify this for you. The optional `-RemoteToLocal` parameter will generate the matching LU file on disk for your new knowledgebase (if you created it using the portal). The script will then refresh the dispatcher.

@@ -59,6 +59,6 @@ As you build out your assistant you will likely update the LUIS models and QnA Maker knowledgebases

Run the following command from within PowerShell (pwsh.exe) in your **project directory**.

```shell
./Deployment/Scripts/update_cognitive_models.ps1 -RemoteToLocal
```
```shell
./Deployment/Scripts/update_cognitive_models.ps1 -RemoteToLocal
```
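Those three values are enough to call the knowledgebase directly, which is a quick way to verify them before updating the dispatch model. The sketch below (TypeScript) assembles the request the way the Publish page presents it; treat the URL pattern as an assumption and compare it against your own Publish page output:

```typescript
// Assemble a generateAnswer request from the values on the QnA Maker
// Publish page. No network call is made here; hand the result to any
// HTTP client. `hostname` is the full host URL from the Publish page
// (it usually already ends in /qnamaker).
function buildQnaRequest(
  hostname: string,
  kbId: string,
  endpointKey: string,
  question: string
): { url: string; headers: Record<string, string>; body: string } {
  return {
    url: `${hostname}/knowledgebases/${kbId}/generateAnswer`,
    headers: {
      Authorization: `EndpointKey ${endpointKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ question }),
  };
}
```

If a POST built this way returns an answer, the three settings in `cognitivemodels.json` are correct.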
@@ -28,14 +28,14 @@ You may wish to add an additional [QnA Maker](https://www.qnamaker.ai/) knowledgebase

```json
}
```

The `kbID`, `hostName` and `endpoint key` can all be found within the Publish page on the [QnAMaker portal](https://qnamaker.ai). Subscription Key is available from your QnA resource in the Azure Portal.
The `kbID`, `hostname` and `endpointKey` can all be found within the **Publish** page on the [QnA Maker portal](https://qnamaker.ai). The subscription key is available from your QnA resource in the Azure Portal.

1. The final step is to update your Dispatcher and associated strongly typed class (LuisGen). We have provided the `update_cognitive_models.ps1` script to simplify this for you. The optional `-RemoteToLocal` parameter will generate the matching LU file on disk for your new knowledgebase (if you created it using the portal). The script will then refresh the dispatcher.

Run the following command from within PowerShell (pwsh.exe) in your **project directory**.

```shell
.\Deployment\Scripts\update_cognitive_models.ps1 -RemoteToLocal
./Deployment/Scripts/update_cognitive_models.ps1 -RemoteToLocal
```

1. Update the `./src/dialogs/mainDialog.ts` file to include the corresponding Dispatch intent for your new QnA source following the example provided.
@@ -47,5 +47,5 @@ As you build out your assistant you will likely update the LUIS models and QnA Maker knowledgebases
Run the following command from within PowerShell (pwsh.exe) in your **project directory**.

```shell
.\Deployment\Scripts\update_cognitive_models.ps1 -RemoteToLocal
./Deployment/Scripts/update_cognitive_models.ps1 -RemoteToLocal
```
Binary file added docs/assets/images/dlspeech_emulator.png
Binary file modified docs/assets/images/dlspeechclient.png
Binary file modified docs/assets/images/dlspeechclientsettings.png
