Commit
[stg] article owner tamram > marsma
mmacy committed Dec 8, 2016
1 parent ff0c9f9 commit 931503f
Showing 18 changed files with 48 additions and 43 deletions.
2 changes: 1 addition & 1 deletion articles/storage/storage-dotnet-how-to-use-blobs.md
@@ -192,7 +192,7 @@ When you call **ListBlobs** on the _photos_ container (as in the above sample),
Directory: https://<accountname>.blob.core.windows.net/photos/2011/
Block blob of length 505623: https://<accountname>.blob.core.windows.net/photos/photo1.jpg

- Optionally, you can set the **UseFlatBlobListing** parameter of of the **ListBlobs** method to
+ Optionally, you can set the **UseFlatBlobListing** parameter of the **ListBlobs** method to
**true**. In this case, every blob in the container is returned as a **CloudBlockBlob** object. The call to **ListBlobs** to return a flat listing looks like this:

```csharp
// Representative sketch: pass true for useFlatBlobListing so that
// every item in the container is returned as a CloudBlockBlob.
foreach (IListBlobItem item in container.ListBlobs(null, true))
{
    CloudBlockBlob blob = (CloudBlockBlob)item;
    Console.WriteLine("Block blob of length {0}: {1}",
        blob.Properties.Length, blob.Uri);
}
```
2 changes: 1 addition & 1 deletion articles/storage/storage-dotnet-how-to-use-tables.md
@@ -30,7 +30,7 @@ You can use Table storage to store flexible datasets, such as user data for web
### About this tutorial
This tutorial shows how to write .NET code for some common scenarios using Azure Table storage, including creating and deleting a table and inserting, updating, deleting, and querying table data.

- **Prerequisities:**
+ **Prerequisites:**

* [Microsoft Visual Studio](https://www.visualstudio.com/en-us/visual-studio-homepage-vs.aspx)
* [Azure Storage Client Library for .NET](https://www.nuget.org/packages/WindowsAzure.Storage/)
@@ -59,7 +59,7 @@ Additionally, you will need to use a SAS to authenticate the source object in a
## Types of shared access signatures
Version 2015-04-05 of Azure Storage introduces a new type of shared access signature, the account SAS. You can now create either of two types of shared access signatures:

- * **Account SAS.** The account SAS delegates access to resources in one or more of the storage services. All of the operations available via a service SAS are also available via an account SAS. Additionally, with the account SAS, you can delegate access to operations that apply to a given service, such as **Get/Set Service Properties** and **Get Service Stats**. You can also delegate access to read, write, and delete operations on blob containers, tables, queues, and file shares that are not permitted with a service SAS. See [Constructing an Account SAS](https://msdn.microsoft.com/library/mt584140.aspx) for in-depth information about about constructing the account SAS token.
+ * **Account SAS.** The account SAS delegates access to resources in one or more of the storage services. All of the operations available via a service SAS are also available via an account SAS. Additionally, with the account SAS, you can delegate access to operations that apply to a given service, such as **Get/Set Service Properties** and **Get Service Stats**. You can also delegate access to read, write, and delete operations on blob containers, tables, queues, and file shares that are not permitted with a service SAS. See [Constructing an Account SAS](https://msdn.microsoft.com/library/mt584140.aspx) for in-depth information about constructing the account SAS token.
* **Service SAS.** The service SAS delegates access to a resource in just one of the storage services: the Blob, Queue, Table, or File service. See [Constructing a Service SAS](https://msdn.microsoft.com/library/dn140255.aspx) and [Service SAS Examples](https://msdn.microsoft.com/library/dn140256.aspx) for in-depth information about constructing the service SAS token.
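Both token types above are ultimately carried as URL query parameters. For orientation, a minimal stdlib-only Java sketch (the class name and sample parameter values are illustrative; this is not part of the Azure SDK) that splits a token string into its individual fields:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class SasTokenFields {
    // Split a SAS token such as
    // "sv=2015-04-05&ss=b&srt=co&sp=rl&se=2016-12-31T23:59:59Z&sig=..."
    // into its individual query parameters (no URL-decoding is performed).
    public static Map<String, String> fields(String sasToken) {
        Map<String, String> out = new LinkedHashMap<>();
        for (String pair : sasToken.split("&")) {
            int eq = pair.indexOf('=');
            if (eq < 0) continue; // skip malformed fragments
            out.put(pair.substring(0, eq), pair.substring(eq + 1));
        }
        return out;
    }
}
```

A real token should still be treated as an opaque credential; this split is only useful for inspection, and values such as the signature remain URL-encoded.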

## How a shared access signature works
@@ -78,7 +78,7 @@ The account SAS and service SAS tokens include some common parameters, and also

### Parameters common to account SAS and service SAS tokens
* **Api version** An optional parameter that specifies the storage service version to use to execute the request.
- * **Service version** A required parameter that specifies the storage service version to use to authenticate the request.
+ * **Service version** A required parameter that specifies the storage service version to use to authenticate the request.
* **Start time.** This is the time at which the SAS becomes valid. The start time for a shared access signature is optional; if omitted, the SAS is effective immediately. Must be expressed in UTC (Coordinated Universal Time), with a special UTC designator ("Z") i.e. 1994-11-05T13:15:30Z.
* **Expiry time.** This is the time after which the SAS is no longer valid. Best practices recommend that you either specify an expiry time for a SAS, or associate it with a stored access policy. Must be expressed in UTC (Coordinated Universal Time), with a special UTC designator ("Z") i.e. 1994-11-05T13:15:30Z (see more below).
* **Permissions.** The permissions specified on the SAS indicate what operations the client can perform against the storage resource using the SAS. Available permissions differ for an account SAS and a service SAS.
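The start and expiry times above must be ISO-8601 UTC timestamps with the trailing "Z" designator. A small stdlib-only Java sketch (an illustrative helper, not an SDK API) that renders an instant in exactly that form:

```java
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

public class SasTimes {
    // ISO-8601 UTC with seconds precision and the 'Z' designator,
    // e.g. 1994-11-05T13:15:30Z, as required for SAS start/expiry times.
    private static final DateTimeFormatter SAS_FORMAT =
        DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ss'Z'")
            .withZone(ZoneOffset.UTC);

    public static String format(Instant t) {
        return SAS_FORMAT.format(t);
    }
}
```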
@@ -208,9 +208,9 @@ When you use shared access signatures in your applications, you need to be aware
The following recommendations for using shared access signatures will help balance these risks:

1. **Always use HTTPS** to create a SAS or to distribute a SAS. If a SAS is passed over HTTP and intercepted, an attacker performing a man-in-the-middle attack will be able to read the SAS and then use it just as the intended user could have, potentially compromising sensitive data or allowing for data corruption by the malicious user.
- 2. **Reference stored access policies where possible.** Stored access policies give you the option to revoke permissions without having to regenerate the storage account keys. Set the expiration on these to be a very long time (or infinite) and make sure that it is regularly updated to move it farther into the future.
+ 2. **Reference stored access policies where possible.** Stored access policies give you the option to revoke permissions without having to regenerate the storage account keys. Set the expiration on these to be a very long time (or infinite) and make sure that it is regularly updated to move it farther into the future.
3. **Use near-term expiration times on an ad hoc SAS.** In this way, even if a SAS is compromised unknowingly, it will only be viable for a short time duration. This practice is especially important if you cannot reference a stored access policy. This practice also helps limit the amount of data that can be written to a blob by limiting the time available to upload to it.
- 4. **Have clients automatically renew the SAS if necessary.** Clients should renew the SAS well before the expected expiration, in order to allow time for retries if the service providing the SAS is unavailable. If your SAS is meant to be used for a small number of immediate, short-lived operations, which are expected to be completed within the expiration time given, then this may not be necessary, as the SAS is not expected be renewed. However, if you have client that is routinely making requests via SAS, then the possibility of expiration comes into play. The key consideration is to balance the need for the SAS to be short-lived (as stated above) with the need to ensure that the client is requesting renewal early enough to avoid disruption due to the SAS expiring prior to successful renewal.
+ 4. **Have clients automatically renew the SAS if necessary.** Clients should renew the SAS well before the expiration, in order to allow time for retries if the service providing the SAS is unavailable. If your SAS is meant to be used for a small number of immediate, short-lived operations that are expected to be completed within the expiration period, then this may be unnecessary as the SAS is not expected to be renewed. However, if you have client that is routinely making requests via SAS, then the possibility of expiration comes into play. The key consideration is to balance the need for the SAS to be short-lived (as stated above) with the need to ensure that the client is requesting renewal early enough to avoid disruption due to the SAS expiring prior to successful renewal.
5. **Be careful with SAS start time.** If you set the start time for a SAS to **now**, then due to clock skew (differences in current time according to different machines), failures may be observed intermittently for the first few minutes. In general, set the start time to be at least 15 minutes ago, or don't set it at all, which will make it valid immediately in all cases. The same generally applies to expiry time as well - remember that you may observe up to 15 minutes of clock skew in either direction on any request. Note for clients using a REST version prior to 2012-02-12, the maximum duration for a SAS that does not reference a stored access policy is 1 hour, and any policies specifying longer term than that will fail.
6. **Be specific with the resource to be accessed.** A typical security best practice is to provide a user with the minimum required privileges. If a user only needs read access to a single entity, then grant them read access to that single entity, and not read/write/delete access to all entities. This also helps mitigate the threat of the SAS being compromised, as the SAS has less power in the hands of an attacker.
7. **Understand that your account will be billed for any usage, including that done with SAS.** If you provide write access to a blob, a user may choose to upload a 200GB blob. If you've given them read access as well, they may choose to download it 10 times, incurring 2TB in egress costs for you. Again, provide limited permissions, to help mitigate the potential of malicious users. Use short-lived SAS to reduce this threat (but be mindful of clock skew on the end time).
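The renewal guidance in item 4 reduces to comparing the remaining SAS lifetime against a safety margin. A minimal stdlib-only Java sketch of that check (an illustrative helper, not an SDK API):

```java
import java.time.Duration;
import java.time.Instant;

public class SasRenewal {
    // Renew when the remaining lifetime drops below the safety margin,
    // leaving room for retries if the SAS-issuing service is briefly unavailable.
    public static boolean shouldRenew(Instant now, Instant expiry, Duration margin) {
        return now.plus(margin).isAfter(expiry);
    }
}
```

The margin should be generous enough to cover several retries against the SAS-issuing service, while keeping the overall token lifetime short.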
@@ -32,7 +32,7 @@ In this tutorial, we'll focus on creating shared access signatures for container
## Part 1: Create a Console Application to Generate Shared Access Signatures
First, ensure that you have the Azure Storage Client Library for .NET installed. You can install the [NuGet package](http://nuget.org/packages/WindowsAzure.Storage/ "NuGet package") containing the most up-to-date assemblies for the client library; this is the recommended method for ensuring that you have the most recent fixes. You can also download the client library as part of the most recent version of the [Azure SDK for .NET](https://azure.microsoft.com/downloads/).

- In Visual Studio, create a new Windows console application and name it **GenerateSharedAccessSignatures**. Add references to **Microsoft.WindowsAzure.Configuration.dll** and **Microsoft.WindowsAzure.Storage.dll**, using either of the following approaches:
+ In Visual Studio, create a new Windows console application and name it **GenerateSharedAccessSignatures**. Add references to **Microsoft.WindowsAzure.Configuration.dll** and **Microsoft.WindowsAzure.Storage.dll**, using either of the following approaches:

* If you want to install the NuGet package, first install the [NuGet Client](https://docs.nuget.org/consume/installing-nuget). In Visual Studio, select **Project | Manage NuGet Packages**, search online for **Azure Storage**, and follow the instructions to install.
* Alternatively, locate the assemblies in your installation of the Azure SDK and add references to them.
4 changes: 2 additions & 2 deletions articles/storage/storage-getting-started-guide.md
@@ -52,7 +52,7 @@ You may want to review the source code before running the application. To review

Next, run the sample application:

- 1. In Visual Studio, select **Solution Explorer** on the **View** menu. Open the App.config file and comment out the connection string for the Azure storage emulator:
+ 1. In Visual Studio, select **Solution Explorer** on the **View** menu. Open the App.config file and comment out the connection string for the Azure storage emulator:

`<!--<add key="StorageConnectionString" value = "UseDevelopmentStorage=true;"/>-->`

@@ -79,7 +79,7 @@ To try it, let’s create a simple Azure Storage application using one of the Az

![Azure Quick Starts][Image1]

- 4. In Visual Studio, select **Solution Explorer** on the **View** menu. Open the App.config file and comment out the connection string for your Azure storage account if you have already added one. Then uncomment the connection string for the Azure storage emulator:
+ 4. In Visual Studio, select **Solution Explorer** on the **View** menu. Open the App.config file and comment out the connection string for your Azure storage account if you have already added one. Then uncomment the connection string for the Azure storage emulator:

`<add key="StorageConnectionString" value = "UseDevelopmentStorage=true;"/>`

2 changes: 1 addition & 1 deletion articles/storage/storage-introduction.md
@@ -101,7 +101,7 @@ A storage account can contain any number of queues. A queue can contain any numb
## File Storage
Azure File storage offers cloud-based SMB file shares, so that you can migrate legacy applications that rely on file shares to Azure quickly and without costly rewrites. With Azure File storage, applications running in Azure virtual machines or cloud services can mount a file share in the cloud, just as a desktop application mounts a typical SMB share. Any number of application components can then mount and access the File storage share simultaneously.

- Since a File storage share is a standard SMB file share, applications running in Azure can access data in the share via file sytem I/O APIs. Developers can therefore leverage their existing code and skills to migrate existing applications. IT Pros can use PowerShell cmdlets to create, mount, and manage File storage shares as part of the administration of Azure applications.
+ Since a File storage share is a standard SMB file share, applications running in Azure can access data in the share via file system I/O APIs. Developers can therefore leverage their existing code and skills to migrate existing applications. IT Pros can use PowerShell cmdlets to create, mount, and manage File storage shares as part of the administration of Azure applications.

Like the other Azure storage services, File storage exposes a REST API for accessing data in a share. On-premises applications can call the File storage REST API to access data in a file share. This way, an enterprise can choose to migrate some legacy applications to Azure and continue running others from within their own organization. Note that mounting a file share is only possible for applications running in Azure; an on-premises application may only access the file share via the REST API.

5 changes: 2 additions & 3 deletions articles/storage/storage-java-how-to-use-blob-storage.md
@@ -51,8 +51,7 @@
```java
import com.microsoft.azure.storage.blob.*;
```

## Set up an Azure Storage connection string
- An Azure Storage client uses a storage connection string to store
- endpoints and credentials for accessing data management services. When running in a client application, you must provide the storage connection string in the following format, using the name of your storage account and the Primary access key for the storage account listed in the [Azure Portal](https://portal.azure.com) for the *AccountName* and *AccountKey* values. The following example shows how you can declare a static field to hold the connection string.
+ An Azure Storage client uses a storage connection string to store endpoints and credentials for accessing data management services. When running in a client application, you must provide the storage connection string in the following format, using the name of your storage account and the Primary access key for the storage account listed in the [Azure portal](https://portal.azure.com) for the *AccountName* and *AccountKey* values. The following example shows how you can declare a static field to hold the connection string.

```java
// Define the connection-string with your values
public static final String storageConnectionString =
    "DefaultEndpointsProtocol=http;" +
    "AccountName=your_storage_account;" +
    "AccountKey=your_storage_account_key";
```
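A connection string like the one above is a semicolon-delimited list of key=value settings. For illustration, a stdlib-only Java sketch (a hypothetical helper, not part of the storage client library) that splits one into a map. Account keys are base64 and can contain '=', so each pair is split on the first '=' only:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ConnectionStringParser {
    // Split "Key1=Value1;Key2=Value2;..." into an ordered map.
    // Each pair is split on the first '=' only, because values
    // (such as base64 account keys) may themselves contain '='.
    public static Map<String, String> parse(String connectionString) {
        Map<String, String> settings = new LinkedHashMap<>();
        for (String pair : connectionString.split(";")) {
            int eq = pair.indexOf('=');
            if (eq < 0) continue; // skip empty or malformed fragments
            settings.put(pair.substring(0, eq), pair.substring(eq + 1));
        }
        return settings;
    }
}
```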

- In an application running within a role in Microsoft Azure, this string can be stored in the service configuration file, *ServiceConfiguration.cscfg*, and can be accessed with a call to the **RoleEnvironment.getConfigurationSettings** method. The followng example gets the connection string from a **Setting** element named *StorageConnectionString* in the service configuration file.
+ In an application running within a role in Microsoft Azure, this string can be stored in the service configuration file, *ServiceConfiguration.cscfg*, and can be accessed with a call to the **RoleEnvironment.getConfigurationSettings** method. The following example gets the connection string from a **Setting** element named *StorageConnectionString* in the service configuration file.

```java
// Retrieve storage account from connection-string.
CloudStorageAccount storageAccount =
    CloudStorageAccount.parse(storageConnectionString);
```
4 changes: 2 additions & 2 deletions articles/storage/storage-java-how-to-use-table-storage.md
@@ -46,8 +46,8 @@
```java
import com.microsoft.azure.storage.table.*;
import com.microsoft.azure.storage.table.TableQuery.*;
```

- ## Setup an Azure storage connection string
- An Azure storage client uses a storage connection string to store endpoints and credentials for accessing data management services. When running in a client application, you must provide the storage connection string in the following format, using the name of your storage account and the Primary access key for the storage account listed in the [Azure Portal](https://portal.azure.com) for the *AccountName* and *AccountKey* values. This example shows how you can declare a static field to hold the connection string:
+ ## Set up an Azure storage connection string
+ An Azure storage client uses a storage connection string to store endpoints and credentials for accessing data management services. When running in a client application, you must provide the storage connection string in the following format, using the name of your storage account and the Primary access key for the storage account listed in the [Azure portal](https://portal.azure.com) for the *AccountName* and *AccountKey* values. This example shows how you can declare a static field to hold the connection string:

```java
// Define the connection-string with your values.
public static final String storageConnectionString =
    "DefaultEndpointsProtocol=http;" +
    "AccountName=your_storage_account;" +
    "AccountKey=your_storage_account_key";
```
12 changes: 6 additions & 6 deletions articles/storage/storage-manage-access-to-resources.md
@@ -4,7 +4,7 @@ description: Learn how to make containers and blobs available for anonymous acce
services: storage
documentationcenter: ''
author: mmacy
- manager: carmonm
+ manager: timlt
editor: tysonn

ms.assetid: a2cffee6-3224-4f2a-8183-66ca23b2d2d7
@@ -13,7 +13,7 @@ ms.workload: storage
ms.tgt_pltfrm: na
ms.devlang: na
ms.topic: article
- ms.date: 11/18/2016
+ ms.date: 12/08/2016
ms.author: marsma

---
@@ -34,18 +34,18 @@ Containers provide the following options for managing container access:

You can set container permissions in the following ways:

- * From the [Azure Portal](https://portal.azure.com).
+ * From the [Azure portal](https://portal.azure.com).
* Programmatically, by using the storage client library or the REST API.
* By using PowerShell. To learn about setting container permissions from Azure PowerShell, see [Using Azure PowerShell with Azure Storage](storage-powershell-guide-full.md#how-to-manage-azure-blobs).

- ### Setting container permissions from the Azure Portal
- To set container permissions from the [Azure Portal](https://portal.azure.com), follow these steps:
+ ### Setting container permissions from the Azure portal
+ To set container permissions from the [Azure portal](https://portal.azure.com), follow these steps:

1. Navigate to the dashboard for your storage account.
2. Select the container name from the list. Clicking the name exposes the blobs in the chosen container.
3. Select **Access policy** from the toolbar.
4. In the **Access type** field, select your desired level of permissions as shown in the screenshot below.

![Edit Container Metadata dialog](./media/storage-manage-access-to-resources/storage-manage-access-to-resources-0.png)

### Setting container permissions programmatically using .NET
