architecture and detailed overview
Darren Jefford committed Sep 23, 2018
1 parent 9568504 commit 637d33b
Showing 3 changed files with 28 additions and 13 deletions.
@@ -21,6 +21,11 @@ npm install -g botdispatch chatdown ludown luis-apis luisgen msbot qnamaker
az extension add -n botservice
```
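As an optional sanity check (a minimal sketch, assuming npm and the Azure CLI are already installed), you can confirm the tools and the `botservice` extension are now available:

```shell
# List globally installed npm packages and registered Azure CLI extensions
npm list -g --depth=0
az extension list --output table
```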

- Retrieve your LUIS Authoring Key
- Review [this documentation page](https://docs.microsoft.com/en-us/azure/cognitive-services/luis/luis-reference-regions) to find the correct LUIS portal for the region you plan to deploy to.
- Once signed in, click your name in the top right-hand corner.
- Choose Settings and make a note of the Authoring Key for the next step.
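If you want to confirm the key is valid for your chosen authoring region, a minimal sketch using the LUIS authoring REST API is shown below (the `westus` host and `YOUR_AUTHORING_KEY` are placeholders; substitute the endpoint for your region and it assumes curl is available):

```shell
# Lists the LUIS apps visible to this authoring key; a JSON array in the response confirms the key works for this region
curl -s -H "Ocp-Apim-Subscription-Key: YOUR_AUTHORING_KEY" "https://westus.api.cognitive.microsoft.com/luis/api/v2.0/apps/"
```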

### Clone the Repo

The first step is to clone the [Microsoft Conversational AI GitHub Repo](https://github.com/Microsoft/AI). You'll find the Virtual Assistant solution within the `solutions\Virtual-Assistant` folder.
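For example (assuming Git is installed; the folder name is simply the default created by `git clone`):

```shell
git clone https://github.com/Microsoft/AI.git
cd AI\solutions\Virtual-Assistant
```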
@@ -74,6 +79,8 @@ Your Virtual Assistant project has a deployment recipe enabling the `msbot clone services` command.

To deploy your Virtual Assistant, including all dependencies (e.g. Cosmos DB, Application Insights, etc.), run the following command from a command prompt within your project folder. Ensure you update the authoring key from the previous step and choose the Azure datacenter location you wish to use.

> Ensure the LUIS authoring key retrieved in the previous step is for the region you specify below.
```shell
msbot clone services --name "MyCustomAssistantName" --luisAuthoringKey "YOUR_AUTHORING_KEY" --folder "DeploymentScripts\msbotClone" --location "westus"
```
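After the clone completes, you can review the services it provisioned and wrote into the generated `.bot` file. A minimal sketch, assuming the file is named `MyCustomAssistantName.bot` and that you kept the bot secret printed at the end of the deployment:

```shell
# Show the services recorded in the bot file (the secret decrypts service keys)
msbot list --bot MyCustomAssistantName.bot --secret YOUR_BOT_SECRET
```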
@@ -2,16 +2,24 @@

## Custom Assistant Architecture

Bot
Web service / ASP.NET Core, plug in existing services, databases

## Azure Capabilities

-

## Skills

## Deployment

## Edge

An architecture diagram of the Virtual Assistant is shown below.

![Virtual Assistant Architecture](./media/virtualassistant-architecture.jpg)

- End users can make use of the Virtual Assistant through the supported Azure Bot Service Channels or through the Direct Line API, which provides the ability to integrate your assistant directly into a device, mobile app or any other client experience.
- Device integration requires creation of a lightweight host app that runs on the device. We have successfully built native apps across multiple platforms along with HTML5 apps. This app is responsible for the following:
- Opening and closing the microphone as indicated through the InputHint on messages returned by the Assistant
- Audio playback of responses created by the Text-to-Speech service
- Rendering of Adaptive Cards on the device through a broad range of Renderers supplied with the Adaptive Cards SDK
- Processing events received from the Assistant, often to perform on-device operations (e.g. changing the navigation destination)
- Accessing the on-device secret store to store and retrieve a token for communication with the assistant
- Integration with the Unified Speech SDK where on-device speech capabilities are required
- The Assistant makes use of a number of Middleware Components to process incoming messages
- Telemetry Middleware leverages Application Insights to store telemetry for incoming messages, LUIS evaluation and QnA activities. Power BI can then use this data to surface conversational insights.
- Event Processing Middleware processes events sent by the device
- Content Moderator Middleware uses the Content Moderator Cognitive Service to detect inappropriate / PII content
- The Dispatcher is trained on a variety of Natural Language data sources to provide a unified NLU-powered dispatch capability. LUIS models from the Assistant, each configured Skill and questions from QnA Maker are all ingested. The Dispatcher then recommends the component that should handle a given utterance. When a dialog is active the Dispatcher model is only used to identify top-level intents such as Cancel for interruption.
- Dialogs represent conversational topics that the Assistant can handle. The `SkillDialog` is provided with the Virtual Assistant to handle the invocation of Skills when the Dispatcher identifies that an utterance should be passed to a skill. Subsequent messages are routed to the active dialog for processing until the dialog has ended.
- The Assistant and Skills can then make use of any APIs or data sources in the same way any web page or service would.
- Skills can request authentication tokens for a given user when they are activated; this request is passed as an event to the Assistant, which then uses the Azure Bot Service authentication capability to surface an authentication request to the user if a token isn't found in the secure store.
- Linked Accounts is an example web application that shows how a user can link their Assistant to their digital properties (e.g. Office 365, Google, etc.) on a companion device (mobile phone or website). This would be done as part of the on-boarding process and avoids authentication prompts during voice scenarios.
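To make the Direct Line path above concrete, the sketch below starts a conversation and posts a message using the Direct Line 3.0 REST API (the secret, conversation id and user id are placeholders; a production device host would normally exchange the secret for a short-lived token rather than embedding it):

```shell
# Start a conversation; the response contains a conversationId and a token
curl -s -X POST "https://directline.botframework.com/v3/directline/conversations" -H "Authorization: Bearer YOUR_DIRECT_LINE_SECRET"

# Post a message activity into that conversation
curl -s -X POST "https://directline.botframework.com/v3/directline/conversations/YOUR_CONVERSATION_ID/activities" -H "Authorization: Bearer YOUR_DIRECT_LINE_SECRET" -H "Content-Type: application/json" -d "{\"type\": \"message\", \"from\": { \"id\": \"user1\" }, \"text\": \"hello\"}"
```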