
Commit

…t-pr into localMaster

Conflicts:
	README.md
v-aljenk committed Feb 7, 2014
2 parents 9854c84 + 83da162 commit 5f1e214
Showing 35 changed files with 239 additions and 1,135 deletions.
38 changes: 38 additions & 0 deletions README.md
@@ -6,6 +6,7 @@ Learn the basics of contributing articles to the azure-content repository for Wi

Thank you for your interest in Windows Azure documentation. Before we can accept your pull request, we need you to sign a Contribution License Agreement (CLA). Full details are available at [http://windowsazure.github.io/guidelines.html#cla](http://windowsazure.github.io/guidelines.html#cla). Please email a copy of the signed CLA to [[email protected]](mailto:[email protected]).

###Who needs a CLA?
* Members of the Microsoft Open Technologies group.
* Contributors who don't work for Microsoft.
@@ -45,6 +46,43 @@ Use the following syntax to reference an include file in your article:

##Repository organization

The content in the azure-content repository follows the organization of documentation on WindowsAzure.com. This repository contains two root folders:

### \articles

The *\articles* folder contains the documentation articles formatted as markdown files with an *.md* extension. Articles are published to WindowsAzure.com in the path *http://www.windowsazure.com/en-us/documentation/articles/{article-name-without-md}/*.

* **Article filenames:** Begin with the service name, such as *hdinsight*, and include the development language and a description of the subject matter. Use all lowercase letters and dashes (-) to separate the words.

* **Media subfolders:** The *\articles* folder contains the *\media* folder, inside which are subfolders with the images for each article. The article image folders are named identically to the article file, minus the *.md* file extension.
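
For example, an article named *hdinsight-use-blob-storage.md* (one of the files changed in this commit) follows this layout and is published at *http://www.windowsazure.com/en-us/documentation/articles/hdinsight-use-blob-storage/*:

    \articles\hdinsight-use-blob-storage.md
    \articles\media\hdinsight-use-blob-storage\<article images>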

### \includes

Content authors can create reusable content sections to be included in one or more articles. An include file is a simple markdown (.md) file that can contain any valid markdown content, including text, links, and images. All include markdown files must be contained in the *\includes* directory in the root of this repository.

* **Media subfolders:** The *\includes* folder contains a *\media* folder, inside which are folders for the images in each include. The include image folders are named identically to the include files, minus the *.md* file extension.

## Working with Windows Azure articles

### Article template

*Information to come.*

### Referencing include files

Use the following syntax to reference an include file in your article:

[WACOM.INCLUDE [include-short-name](../includes/include-file-name.md)]


**Note:** An include file cannot reference other includes.
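
For instance, a hypothetical include file saved as *\includes\hdinsight-storage-note.md* would be referenced from an article as:

    [WACOM.INCLUDE [hdinsight-storage-note](../includes/hdinsight-storage-note.md)]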

## Using GitHub, Git and this repository

**Note:** Most of the information in this section can be found in [GitHub Help][] articles. If you are familiar with Git and GitHub, skip to the "Contribute and edit content" section for the particulars of the code/content flow of this repository.

### Setting up your fork of the repository
14 changes: 8 additions & 6 deletions articles/architecture-overview.md
@@ -5,37 +5,39 @@ Learn how to implement common design patterns in Windows Azure.

##Design patterns

-###[Competing Consumers](http://msdn.microsoft.com/en-us/library/windowsazure/dn568101.aspx)
+###[Competing Consumers](http://msdn.microsoft.com/en-us/library/dn568101.aspx)

![Competing Consumers][competing_consumers]

Enable multiple concurrent consumers to process messages received on the same messaging channel. This pattern enables a system to process multiple messages concurrently to optimize throughput, to improve scalability and availability, and to balance the workload.
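
As a rough, framework-free sketch of the idea (illustrative only; an Azure implementation would typically consume messages from a Service Bus or storage queue rather than an in-process queue), several worker threads competing for messages on one shared channel might look like this:

    import queue
    import threading

    work = queue.Queue()

    def consumer(worker_id):
        # Each consumer competes for messages arriving on the same channel.
        while True:
            message = work.get()
            if message is None:          # sentinel value: stop this worker
                work.task_done()
                return
            print("worker %d processed %s" % (worker_id, message))
            work.task_done()

    # Three competing consumers share the workload of one queue.
    workers = [threading.Thread(target=consumer, args=(i,)) for i in range(3)]
    for w in workers:
        w.start()

    for n in range(10):
        work.put("order-%d" % n)

    for _ in workers:                    # one sentinel per worker
        work.put(None)
    work.join()
    for w in workers:
        w.join()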

-###[Command and Query Responsibility Segregation](http://msdn.microsoft.com/en-us/library/windowsazure/dn568103.aspx)
+###[Command and Query Responsibility Segregation](http://msdn.microsoft.com/en-us/library/dn568103.aspx)

![Command and Query Responsibility Segregation][cqrs]

Segregate operations that read data from operations that update data by using separate interfaces. This pattern can maximize performance, scalability, and security; support evolution of the system over time through higher flexibility; and prevent update commands from causing merge conflicts at the domain level.

-###[Leader Election](http://msdn.microsoft.com/en-us/library/windowsazure/dn568104.aspx)
+###[Leader Election](http://msdn.microsoft.com/en-us/library/dn568104.aspx)

![Leader Election][leader_election]

Coordinate the actions performed by a collection of collaborating task instances in a distributed application by electing one instance as the leader that assumes responsibility for managing the other instances. This pattern can help to ensure that task instances do not conflict with each other, cause contention for shared resources, or inadvertently interfere with the work that other task instances are performing.

-###[Pipes and Filters](http://msdn.microsoft.com/en-us/library/windowsazure/dn568100.aspx)
+###[Pipes and Filters](http://msdn.microsoft.com/en-us/library/dn568100.aspx)

![Pipes and Filters][pipes_and_filters]

Decompose a task that performs complex processing into a series of discrete elements that can be reused. This pattern can improve performance, scalability, and reusability by allowing task elements that perform the processing to be deployed and scaled independently.
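
A minimal sketch of the idea (illustrative only, using plain Python generators rather than any Azure messaging infrastructure): each filter is an independent stage that consumes a stream and yields a transformed stream, so stages can be reused, rearranged, or scaled independently:

    def parse(lines):
        # Filter 1: normalize raw input.
        for line in lines:
            yield line.strip().lower()

    def keep_words(tokens):
        # Filter 2: discard tokens that are not plain words.
        for token in tokens:
            if token.isalpha():
                yield token

    def count(tokens):
        # Filter 3: aggregate the stream into word counts.
        totals = {}
        for token in tokens:
            totals[token] = totals.get(token, 0) + 1
        return totals

    # Compose the pipeline; each stage could be replaced or reused elsewhere.
    print(count(keep_words(parse(["Alpha", "beta!", "alpha "]))))   # {'alpha': 2}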

-###[Valet Key](http://msdn.microsoft.com/en-us/library/windowsazure/dn568102.aspx)
+###[Valet Key](http://msdn.microsoft.com/en-us/library/dn568102.aspx)

![Valet Key][valet_key]

Use a token or key that provides clients with restricted direct access to a specific resource or service in order to offload data transfer operations from the application code. This pattern is particularly useful in applications that use cloud-hosted storage systems or queues, and can minimize cost and maximize scalability and performance.
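
As a rough sketch of the mechanism (illustrative only, and deliberately much simpler than the real Windows Azure Shared Access Signature format), an application can hand out a time-limited, HMAC-signed URL instead of its account key:

    import hashlib
    import hmac
    import time

    SECRET = b"account-key-known-only-to-the-application"  # hypothetical key

    def make_valet_url(resource, ttl_seconds=300):
        # Sign the resource path plus an expiry time with the secret key.
        expires = int(time.time()) + ttl_seconds
        payload = "%s?expires=%d" % (resource, expires)
        sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
        return "%s&sig=%s" % (payload, sig)

    def verify_valet_url(url):
        # The storage service recomputes the signature and checks the expiry,
        # so the client never needs the account key itself.
        payload, _, sig = url.rpartition("&sig=")
        expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
        expires = int(payload.rpartition("expires=")[2])
        return hmac.compare_digest(expected, sig) and time.time() < expires

    print(verify_valet_url(make_valet_url("/container/blob.vhd")))  # True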

-###[See all](http://msdn.microsoft.com/en-us/library/windowsazure/dn568099.aspx)
+### Additional Guidance
+
+For information on more common design patterns in Windows Azure, see [Cloud Design Patterns](http://msdn.microsoft.com/en-us/library/dn568099.aspx).


[competing_consumers]: ./media/architecture-overview/CompetingConsumers.png
2 changes: 1 addition & 1 deletion articles/cloud-services-continuous-delivery-use-vso.md
@@ -5,7 +5,7 @@

# Continuous delivery to Windows Azure using Visual Studio Online

-Visual Studio Online (previously Team Foundation Service) is a cloud-hosted service version of Microsoft's popular Team Foundation Server (TFS) software that provides highly customizable source code and build management, agile development and team process workflow, issue and work item tracking, and more. You can configure your Visual Studio Online team projects to automatically build and deploy to Windows Azure web sites or cloud services. For information on how to set up a continuous build and deploy system using an on-premises Team Foundation Server, see [Continuous Delivery for Cloud Applications in Windows Azure](http://www.windowsazure.com/en-us/develop/net/common-tasks/continuous-delivery/).
+Visual Studio Online (previously Team Foundation Service) is a cloud-hosted service version of Microsoft's popular Team Foundation Server (TFS) software that provides highly customizable source code and build management, agile development and team process workflow, issue and work item tracking, and more. You can configure your Visual Studio Online team projects to automatically build and deploy to Windows Azure web sites or cloud services. For information on how to set up a continuous build and deploy system using an on-premises Team Foundation Server, see [Continuous Delivery for Cloud Applications in Windows Azure](../cloud-services-continuous-delivery-use-vso).

This tutorial assumes you have Visual Studio 2013 and the Windows Azure SDK installed. If you don't already have Visual Studio 2013, download it [here](http://www.microsoft.com/visualstudio/eng/downloads). Install the Windows Azure SDK from [here](http://go.microsoft.com/fwlink/?LinkId=239540).

@@ -895,7 +895,7 @@ In the [next tutorial][tut2], you'll download the sample project, configure your

For links to additional resources for working with Windows Azure Storage tables, queues, and blobs, see the end of [the last tutorial in this series][tut5].

<div><a href="../2-download-and-run/" class="site-arrowboxcta download-cta">Tutorial 2</a></div>
<div><a href="/en-us/develop/net/tutorials/multi-tier-web-site/2-download-and-run/" class="site-arrowboxcta download-cta">Tutorial 2</a></div>

[tut2]: /en-us/develop/net/tutorials/multi-tier-web-site/2-download-and-run/
[tut3]: /en-us/develop/net/tutorials/multi-tier-web-site/3-web-role/
2 changes: 1 addition & 1 deletion articles/command-line-tools.md
@@ -562,7 +562,7 @@ Some systems impose per-process file descriptor limits. If this limit is exceede

This command allows you to upload a vm disk

-    ~$ azure vm disk upload ???http://sourcestorage.blob.core.windows.net/vhds/sample.vhd??? ???http://destinationstorage.blob.core.windows.net/vhds/sample.vhd??? ???DESTINATIONSTORAGEACCOUNTKEY???
+    ~$ azure vm disk upload "http://sourcestorage.blob.core.windows.net/vhds/sample.vhd" "http://destinationstorage.blob.core.windows.net/vhds/sample.vhd" "DESTINATIONSTORAGEACCOUNTKEY"
info: Executing command vm disk upload
info: Uploading 12351.5 KB
info: vm disk upload command OK
29 changes: 23 additions & 6 deletions articles/hdinsight-administer-use-management-portal.md
@@ -4,7 +4,7 @@

# Administer HDInsight clusters using Management Portal

-In this topic, you will learn how to use Windows Azure Management Portal to create an HDInsight cluster, and how to access the Hadoop command console on the cluster. There are also other tools available for administrating HDInsight in addition to the portal.
+Using the Windows Azure Management Portal, you can provision HDInsight clusters, change the Hadoop user password, and enable RDP so that you can access the Hadoop command console on the cluster. Other tools are also available for administering HDInsight in addition to the portal.

- For more information on administering HDInsight using the Cross-platform Command-line Tools, see [Administer HDInsight Using Cross-platform Command-line Interface][hdinsight-admin-cross-platform].

@@ -19,17 +19,18 @@ Before you begin this article, you must have the following:

##In this article

-* [Create an HDInsight cluster](#create)
+* [Provision HDInsight clusters](#create)
* [Change HDInsight cluster username and password](#password)
* [Enable remote desktop access](#enablerdp)
* [Open Hadoop command console](#hadoopcmd)
* [Next steps](#nextsteps)

##<a id="create"></a> Create an HDInsight cluster
##<a id="create"></a> Provision HDInsight clusters

An HDInsight cluster uses a Windows Azure Blob Storage container as the default file system. For more information about how Windows Azure Blob Storage provides a seamless experience with HDInsight clusters, see [Use Windows Azure Blob Storage with HDInsight][hdinsight-storage].


-**To create an HDInsight cluster**
+**To provision an HDInsight cluster**

1. Sign in to the [Windows Azure Management Portal][azure-management-portal].
2. Click **+ NEW** on the bottom of the page, click **DATA SERVICES**, click **HDINSIGHT**, and then click **QUICK CREATE**.
@@ -38,7 +39,7 @@ An HDInsight cluster uses a Windows Azure Blob Storage container as the default

![HDI.QuickCreate][image-cluster-quickcreate]

-When using the Quick Create option to create a cluster, the default username for the administrator account is *admin*. To give the account a different username, you must use the Custom Create option instead of Quick Create option.
+When using the Quick Create option to create a cluster, the default username for the administrator account is *admin*. To give the account a different username, you can use the Custom Create option instead of the Quick Create option, or change the account name after the cluster is provisioned.

When using the Quick Create option to create a cluster, a new container with the name of the HDInsight cluster is created automatically in the specified storage account. If you want to customize the name of the default container used by the cluster, you must use the Custom Create option.

@@ -51,6 +52,22 @@ An HDInsight cluster uses a Windows Azure Blob Storage container as the default

![HDI.ClusterLanding][image-cluster-landing]

##<a id="password"></a> Change the HDInsight cluster username and password
An HDInsight cluster can have two user accounts. The HDInsight cluster user account is created during the provisioning process. You can also create an RDP user account for accessing the cluster via RDP; see [Enable remote desktop](#enablerdp).

**To change HDInsight cluster username and password**

1. Sign in to the [Windows Azure Management Portal][azure-management-portal].
2. Click **HDINSIGHT** on the left pane. You will see a list of deployed HDInsight clusters.
3. Click the HDInsight cluster whose username and password you want to reset.
4. From the top of the page, click **CONFIGURATION**.
5. Click **OFF** next to **HADOOP SERVICES**.
6. Click **SAVE** on the bottom of the page, and wait for the disabling to complete.
7. After the service has been disabled, click **ON** next to **HADOOP SERVICES**.
8. Enter a **USER NAME** and **NEW PASSWORD**. These will be the new username and password for the cluster.
9. Click **SAVE**.



##<a id="enablerdp"></a> Enable remote desktop

@@ -59,7 +76,7 @@ The credentials for the cluster that you provided at its creation give access to
**To enable remote desktop**

1. Sign in to the [Windows Azure Management Portal][azure-management-portal].
-2. Click **HDINSIGHT** on the left pane. You will see a list of deployed Hadoop clusters.
+2. Click **HDINSIGHT** on the left pane. You will see a list of deployed HDInsight clusters.
3. Click the HDInsight cluster that you want to connect to.
4. From the top of the page, click **CONFIGURATION**.
5. From the bottom of the page, click **ENABLE REMOTE**.
2 changes: 1 addition & 1 deletion articles/hdinsight-develop-deploy-java-mapreduce.md
@@ -1,4 +1,4 @@
<properties linkid="manage-services-hdinsight-develop-deploy-Java-MapReduce-program" urlDisplayName="HDInsight Tutorials" pageTitle="Develop and deploy Java MapReduce jobs | Windows Azure" metaKeywords="hdinsight, hdinsight development, hadoop development, hdinsight deployment, development, deployment, tutorial, MapReduce, Java" description="Develop MapReduce jobs on HDInsight emulator, and deploy them to Windows Azure HDInsight." title="Develop and deploy Java MapReduce jobs to HDInsight" umbracoNaviHide="0" disqusComments="1" writer="jgao" editor="cgronlun" manager="paulettm" />
<properties linkid="manage-services-hdinsight-develop-deploy-Java-MapReduce-program" urlDisplayName="HDInsight Tutorials" pageTitle="Develop and deploy Java MapReduce jobs | Windows Azure" metaKeywords="hdinsight, hdinsight development, hadoop development, hdinsight deployment, development, deployment, tutorial, MapReduce, Java" description="Follow this end-to-end scenario to learn how to develop and test a word-counting MapReduce job on HDInsight Emulator, and then deploy and run it on HDInsight." title="Develop and deploy Java MapReduce jobs to HDInsight" umbracoNaviHide="0" disqusComments="1" writer="jgao" editor="cgronlun" manager="paulettm" />

# Develop and deploy Java MapReduce jobs to HDInsight
This tutorial walks you through an end-to-end scenario from developing and testing a word counting MapReduce job on HDInsight Emulator, to deploying and running it on Windows Azure HDInsight.
2 changes: 1 addition & 1 deletion articles/hdinsight-use-blob-storage.md
@@ -34,7 +34,7 @@ HDInsight provides access to the distributed file system that is locally attache

In addition, HDInsight provides the ability to access data stored in Blob storage. The syntax to access Blob storage is:

-    WASB
+    WASB[S]://<containername>@<accountname>.blob.core.windows.net/<path>
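
For example, a file stored under the path *example/data.txt* in a hypothetical container named *mycontainer* on a storage account named *myaccount* would be addressed as follows, where the optional *S* requests the connection over SSL:

    wasb://mycontainer@myaccount.blob.core.windows.net/example/data.txt
    wasbs://mycontainer@myaccount.blob.core.windows.net/example/data.txt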


Hadoop supports a notion of the default file system. The default file system implies a default scheme and authority; it can also be used to resolve relative paths. During the HDInsight provisioning process, the user must specify a Blob storage container used as the default file system.