---
title: How to create a blob in Azure Storage using the Node.js SDK v2
description: Create a storage account and a container in object (Blob) storage. Then use the Azure Storage client library for Node.js v2 to upload a blob to Azure Storage, download a blob, and list the blobs in a container.
services: storage
author: tamram
ms.custom: mvc
ms.service: storage
ms.topic: conceptual
ms.date: 11/14/2018
ms.author: tamram
---
In this how-to guide, you learn how to use Node.js to upload, download, and list blobs and manage containers with Azure Blob storage.
If you don't have an Azure subscription, create a free account before you begin.
Create an Azure storage account in the Azure portal. For help creating the account, see Create a storage account.
The sample application is a simple Node.js console application. To begin, clone the repository to your machine using the following command:
git clone https://github.com/Azure-Samples/storage-blobs-node-quickstart.git
To open the application, look for the storage-blobs-node-quickstart folder and open it in your favorite code editing environment.
[!INCLUDE storage-copy-connection-string-portal]
Before running the application, you must provide the connection string for your storage account. The sample repository includes a file named .env.example. You can rename this file by removing the .example extension, which results in a file named .env. Inside the .env file, add your connection string value after the AZURE_STORAGE_CONNECTION_STRING key.
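For reference, the finished .env file contains a single key-value pair; the placeholder stands in for your own account's connection string:

```
AZURE_STORAGE_CONNECTION_STRING=<your-storage-connection-string>
```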
In the application directory, run npm install to install the required packages for the application.
npm install
Now that the dependencies are installed, you can run the sample by issuing the following command:
npm start
The output from the script will be similar to the following:
Containers:
- container-one
- container-two
Container "demo" is created
Blob "quickstart.txt" is uploaded
Local file "./readme.md" is uploaded
Blobs in "demo" container:
- quickstart.txt
- readme.md
Downloaded blob content: "hello Node SDK"
Blob "quickstart.txt" is deleted
Container "demo" is deleted
Done
If you are using a new storage account for this example, then you may not see any container names listed under the label "Containers".
The first expression loads values into environment variables when the app is not running in production:
if (process.env.NODE_ENV !== 'production') {
require('dotenv').load();
}
The dotenv module loads environment variables when running the app locally for debugging. Values are defined in a file named .env and loaded into the current execution context. In production, the server configuration provides these values, which is why this code runs only when the script is not running in a "production" context.
const path = require('path');
const storage = require('azure-storage');
The purpose of the modules is as follows:
- path is required to determine the absolute file path of the file to upload to blob storage
- azure-storage is the Azure Storage SDK module for Node.js
Next, the blobService variable is initialized as a new instance of the Azure Blob service.
const blobService = storage.createBlobService();
In the following implementation, each of the blobService functions is wrapped in a Promise, which allows access to JavaScript's async function and await operator to streamline the callback nature of the Azure Storage API. When a successful response returns for each function, the promise resolves with relevant data along with a message specific to the action.
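As an illustrative sketch of this wrapping pattern (the promisify helper below is not part of the sample), any Node-style callback function can be converted into one that returns a Promise:

```javascript
// Illustrative helper: wraps a Node-style callback function
// (whose last argument is a (err, data) => ... callback)
// into a function that returns a Promise.
const promisify = (fn) => (...args) =>
  new Promise((resolve, reject) => {
    fn(...args, (err, data) => {
      if (err) {
        reject(err);
      } else {
        resolve(data);
      }
    });
  });

// Example with a stand-in callback API:
const addLater = (a, b, callback) => callback(null, a + b);
const addAsync = promisify(addLater);
addAsync(2, 3).then((sum) => console.log(sum)); // logs 5
```

Node.js also ships util.promisify, which implements the same idea for standard callback APIs; the sample wraps each blobService call by hand so it can attach an action-specific message to each result.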
The listContainers function calls listContainersSegmented, which returns collections of containers in groups.
const listContainers = async () => {
return new Promise((resolve, reject) => {
blobService.listContainersSegmented(null, (err, data) => {
if (err) {
reject(err);
} else {
resolve({ message: `${data.entries.length} containers`, containers: data.entries });
}
});
});
};
The size of the groups is configurable via ListContainersOptions. Calling listContainersSegmented returns container metadata as an array of ContainerResult instances. Results are returned in batches (segments) of up to 5,000. If there are more than 5,000 containers in the account, then the results include a value for continuationToken. To list subsequent segments, you can pass the continuation token back into listContainersSegmented as the first argument.
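The paging pattern can be sketched as a loop that keeps requesting segments until no continuation token is returned. The service object below is a stand-in that mimics the listContainersSegmented(currentToken, callback) signature, so the sketch is runnable without a storage account:

```javascript
// Stand-in for blobService that returns results two entries at a time,
// mimicking listContainersSegmented(currentToken, callback).
const fakeService = {
  entries: [{ name: 'a' }, { name: 'b' }, { name: 'c' }, { name: 'd' }, { name: 'e' }],
  listContainersSegmented(currentToken, callback) {
    const start = currentToken ? currentToken.start : 0;
    const entries = this.entries.slice(start, start + 2);
    const continuationToken =
      start + 2 < this.entries.length ? { start: start + 2 } : null;
    callback(null, { entries, continuationToken });
  }
};

// Collect every segment by feeding the continuation token back in.
const listAllContainers = (service) =>
  new Promise((resolve, reject) => {
    const results = [];
    const fetchSegment = (token) => {
      service.listContainersSegmented(token, (err, data) => {
        if (err) return reject(err);
        results.push(...data.entries);
        if (data.continuationToken) {
          fetchSegment(data.continuationToken); // more segments remain
        } else {
          resolve(results); // last segment reached
        }
      });
    };
    fetchSegment(null); // a null token requests the first segment
  });

listAllContainers(fakeService).then((all) =>
  console.log(all.map((c) => c.name).join(', ')));
// logs: a, b, c, d, e
```

With the real SDK, the same loop applies; only the source of the segments changes.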
The createContainer function calls createContainerIfNotExists and sets the appropriate access level for the blob.
const createContainer = async (containerName) => {
return new Promise((resolve, reject) => {
blobService.createContainerIfNotExists(containerName, { publicAccessLevel: 'blob' }, err => {
if (err) {
reject(err);
} else {
resolve({ message: `Container '${containerName}' created` });
}
});
});
};
The second parameter (options) for createContainerIfNotExists accepts a value for publicAccessLevel. The value blob for publicAccessLevel specifies that the data of individual blobs is publicly readable. This setting is in contrast to container-level access, which also grants the ability to list the contents of the container.
The use of createContainerIfNotExists allows the application to run the createContainer command multiple times without returning errors when the container already exists. In a production environment, you typically call createContainerIfNotExists only once, as the same container is used throughout the application. In these cases, you can create the container ahead of time through the portal or via the Azure CLI.
The uploadString function calls createBlockBlobFromText to write (or overwrite) an arbitrary string to the blob container.
const uploadString = async (containerName, blobName, text) => {
return new Promise((resolve, reject) => {
blobService.createBlockBlobFromText(containerName, blobName, text, err => {
if (err) {
reject(err);
} else {
resolve({ message: `Text "${text}" is written to blob storage` });
}
});
});
};
The uploadLocalFile function uses createBlockBlobFromLocalFile to upload and write (or overwrite) a file from the file system into blob storage.
const uploadLocalFile = async (containerName, filePath) => {
return new Promise((resolve, reject) => {
const fullPath = path.resolve(filePath);
const blobName = path.basename(filePath);
blobService.createBlockBlobFromLocalFile(containerName, blobName, fullPath, err => {
if (err) {
reject(err);
} else {
resolve({ message: `Local file "${filePath}" is uploaded` });
}
});
});
};
Other approaches available to upload content into blobs include working with text and streams. To verify the file is uploaded to your blob storage, you can use the Azure Storage Explorer to view the data in your account.
The listBlobs function calls the listBlobsSegmented method to return a list of blob metadata in a container.
const listBlobs = async (containerName) => {
return new Promise((resolve, reject) => {
blobService.listBlobsSegmented(containerName, null, (err, data) => {
if (err) {
reject(err);
} else {
resolve({ message: `${data.entries.length} blobs in '${containerName}'`, blobs: data.entries });
}
});
});
};
Calling listBlobsSegmented returns blob metadata as an array of BlobResult instances. Results are returned in batches (segments) of up to 5,000. If there are more than 5,000 blobs in a container, then the results include a value for continuationToken. To list subsequent segments from the blob container, you can pass the continuation token back into listBlobsSegmented as the second argument.
The downloadBlob function uses getBlobToText to download the contents of the blob as a string.
const downloadBlob = async (containerName, blobName) => {
return new Promise((resolve, reject) => {
blobService.getBlobToText(containerName, blobName, (err, data) => {
if (err) {
reject(err);
} else {
resolve({ message: `Blob downloaded "${data}"`, text: data });
}
});
});
};
The implementation shown here returns the contents of the blob as a string. You can also download the blob as a stream, or directly to a local file.
The deleteBlob function calls the deleteBlobIfExists function. As the name implies, this function does not return an error if the blob does not exist.
const deleteBlob = async (containerName, blobName) => {
return new Promise((resolve, reject) => {
blobService.deleteBlobIfExists(containerName, blobName, err => {
if (err) {
reject(err);
} else {
resolve({ message: `Block blob '${blobName}' deleted` });
}
});
});
};
Containers are deleted by calling the deleteContainer method on the blob service and passing in the container name.
const deleteContainer = async (containerName) => {
return new Promise((resolve, reject) => {
blobService.deleteContainer(containerName, err => {
if (err) {
reject(err);
} else {
resolve({ message: `Container '${containerName}' deleted` });
}
});
});
};
In order to support JavaScript's async/await syntax, all the calling code is wrapped in a function named execute. Then execute is called and handled as a promise.
async function execute() {
// commands
}
execute().then(() => console.log("Done")).catch((e) => console.log(e));
All of the following code runs inside the execute function where the // commands comment is placed.
First, the relevant variables are declared to assign names, sample content and to point to the local file to upload to Blob storage.
const containerName = "demo";
const blobName = "quickstart.txt";
const content = "hello Node SDK";
const localFilePath = "./readme.md";
let response;
To list the containers in the storage account, the listContainers function is called and the returned list of containers is logged to the output window.
console.log("Containers:");
response = await listContainers();
response.containers.forEach((container) => console.log(` - ${container.name}`));
Once the list of containers is available, you can use the Array findIndex method to determine whether the container you want to create already exists. If the container does not exist, it is created.
const containerDoesNotExist = response.containers.findIndex((container) => container.name === containerName) === -1;
if (containerDoesNotExist) {
await createContainer(containerName);
console.log(`Container "${containerName}" is created`);
}
Next, a string and a local file are uploaded to Blob storage.
await uploadString(containerName, blobName, content);
console.log(`Blob "${blobName}" is uploaded`);
response = await uploadLocalFile(containerName, localFilePath);
console.log(response.message);
The process to list blobs is the same as listing containers. The call to listBlobs returns an array of blobs in the container, which are then logged to the output window.
console.log(`Blobs in "${containerName}" container:`);
response = await listBlobs(containerName);
response.blobs.forEach((blob) => console.log(` - ${blob.name}`));
To download a blob, the response is captured and used to access the contents of the blob. The returned text is logged to the output window.
response = await downloadBlob(containerName, blobName);
console.log(`Downloaded blob content: "${response.text}"`);
Finally, the blob and container are deleted from the storage account.
await deleteBlob(containerName, blobName);
console.log(`Blob "${blobName}" is deleted`);
await deleteContainer(containerName);
console.log(`Container "${containerName}" is deleted`);
All data written to the storage account by the sample is deleted at the end of the script, so no data remains in the account.
See these additional resources for Node.js development with Blob storage:
- View and install the Node.js client library source code for Azure Storage on GitHub.
- See the Node.js API reference for more information about the Node.js client library.
- Explore Blob storage samples written using the Node.js client library.
This article demonstrates how to upload and download files between a local disk and Azure Blob storage using Node.js. To learn more about working with Blob storage, continue to the GitHub repository.
[!div class="nextstepaction"] Azure Storage SDK for JavaScript repository