---
title: Copy data from/to a file system by using Azure Data Factory
description: Learn how to copy data from file system to supported sink data stores (or) from supported source data stores to file system by using Azure Data Factory.
services: data-factory
documentationcenter: ''
author: linda33wj
manager: shwang
ms.reviewer: douglasl
ms.service: data-factory
ms.workload: data-services
ms.topic: conceptual
ms.date: 06/12/2020
ms.author: jingwang
---
[!div class="op_single_selector" title1="Select the version of Data Factory service you are using:"]
- Version 1
- Current version

[!INCLUDE appliesto-adf-asa-md]
# Copy data to or from a file system by using Azure Data Factory

This article outlines how to copy data to and from a file system. To learn about Azure Data Factory, read the introductory article.

## Supported capabilities

This file system connector is supported for the following activities:
- Copy activity with supported source/sink matrix
- Lookup activity
- GetMetadata activity
- Delete activity
Specifically, this file system connector supports:
- Copying files from/to a local machine or a network file share. To use a Linux file share, install Samba on your Linux server.
- Copying files using Windows authentication.
- Copying files as-is or parsing/generating files with the supported file formats and compression codecs.
## Prerequisites

[!INCLUDE data-factory-v2-integration-runtime-requirements]

## Getting started

[!INCLUDE data-factory-v2-connector-get-started]
The following sections provide details about properties that are used to define Data Factory entities specific to the file system.

## Linked service properties

The following properties are supported for the file system linked service:
| Property | Description | Required |
| --- | --- | --- |
| type | The type property must be set to: **FileServer**. | Yes |
| host | Specifies the root path of the folder that you want to copy. Use the escape character `\` for special characters in the string. See Sample linked service and dataset definitions for examples. | Yes |
| userid | Specify the ID of the user who has access to the server. | Yes |
| password | Specify the password for the user (userid). Mark this field as a SecureString to store it securely in Data Factory, or reference a secret stored in Azure Key Vault. | Yes |
| connectVia | The Integration Runtime to be used to connect to the data store. Learn more from the Prerequisites section. If not specified, it uses the default Azure Integration Runtime. | No |
Scenario | "host" in linked service definition | "folderPath" in dataset definition |
---|---|---|
Local folder on Integration Runtime machine: Examples: D:\* or D:\folder\subfolder\* |
In JSON: D:\\ On UI: D:\ |
In JSON: .\\ or folder\\subfolder On UI: .\ or folder\subfolder |
Remote shared folder: Examples: \\myserver\share\* or \\myserver\share\folder\subfolder\* |
In JSON: \\\\myserver\\share On UI: \\myserver\share |
In JSON: .\\ or folder\\subfolder On UI: .\ or folder\subfolder |
Note
When authoring via UI, you don't need to input double backslash (\\
) to escape like you do via JSON, specify single backslash.
**Example:**

```json
{
    "name": "FileLinkedService",
    "properties": {
        "type": "FileServer",
        "typeProperties": {
            "host": "<host>",
            "userid": "<domain>\\<user>",
            "password": {
                "type": "SecureString",
                "value": "<password>"
            }
        },
        "connectVia": {
            "referenceName": "<name of Integration Runtime>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
```
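For instance, a linked service that points at a remote shared folder uses the escaped host form from the table above. A minimal fragment, using the sample `\\myserver\share` path (substitute your own server and share names):

```json
"typeProperties": {
    "host": "\\\\myserver\\share",
    "userid": "<domain>\\<user>",
    "password": {
        "type": "SecureString",
        "value": "<password>"
    }
}
```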
## Dataset properties

For a full list of sections and properties available for defining datasets, see the Datasets article.

[!INCLUDE data-factory-v2-file-formats]

The following properties are supported for file system under `location` settings in the format-based dataset:
| Property | Description | Required |
| --- | --- | --- |
| type | The type property under `location` in the dataset must be set to **FileServerLocation**. | Yes |
| folderPath | The path to the folder. If you want to use a wildcard to filter the folder, skip this setting and specify it in the activity source settings. | No |
| fileName | The file name under the given folderPath. If you want to use a wildcard to filter files, skip this setting and specify it in the activity source settings. | No |
**Example:**

```json
{
    "name": "DelimitedTextDataset",
    "properties": {
        "type": "DelimitedText",
        "linkedServiceName": {
            "referenceName": "<File system linked service name>",
            "type": "LinkedServiceReference"
        },
        "schema": [ < physical schema, optional, auto retrieved during authoring > ],
        "typeProperties": {
            "location": {
                "type": "FileServerLocation",
                "folderPath": "root/folder/subfolder"
            },
            "columnDelimiter": ",",
            "quoteChar": "\"",
            "firstRowAsHeader": true,
            "compressionCodec": "gzip"
        }
    }
}
```
## Copy activity properties

For a full list of sections and properties available for defining activities, see the Pipelines article. This section provides a list of properties supported by the file system source and sink.

### File system as source

[!INCLUDE data-factory-v2-file-formats]

The following properties are supported for file system under `storeSettings` settings in the format-based copy source:
| Property | Description | Required |
| --- | --- | --- |
| type | The type property under `storeSettings` must be set to **FileServerReadSettings**. | Yes |
| ***Locate the files to copy:*** | | |
| OPTION 1: static path | Copy from the given folder/file path specified in the dataset. If you want to copy all files from a folder, additionally specify `wildcardFileName` as `*`. | |
| OPTION 2: server-side filter<br>- fileFilter | File-server-side native filter, which provides better performance than the OPTION 3 wildcard filter. Use `*` to match zero or more characters and `?` to match zero or a single character. Learn more about the syntax and notes from the Remarks under this section. | No |
| OPTION 3: client-side filter<br>- wildcardFolderPath | The folder path with wildcard characters to filter source folders. This filter happens on the ADF side: ADF enumerates the folders/files under the given path, then applies the wildcard filter. Allowed wildcards are: `*` (matches zero or more characters) and `?` (matches zero or a single character); use `^` to escape if your actual folder name has a wildcard or this escape character inside. See more examples in Folder and file filter examples. | No |
| OPTION 3: client-side filter<br>- wildcardFileName | The file name with wildcard characters under the given folderPath/wildcardFolderPath to filter source files. This filter happens on the ADF side: ADF enumerates the files under the given path, then applies the wildcard filter. Allowed wildcards are: `*` (matches zero or more characters) and `?` (matches zero or a single character); use `^` to escape if your actual file name has a wildcard or this escape character inside. See more examples in Folder and file filter examples. | Yes, if `fileName` isn't specified in the dataset |
| OPTION 4: a list of files<br>- fileListPath | Indicates to copy a given file set. Point to a text file that includes a list of files you want to copy, one file per line, which is the relative path to the path configured in the dataset. When using this option, don't specify a file name in the dataset. See more examples in File list examples. | No |
| ***Additional settings:*** | | |
| recursive | Indicates whether the data is read recursively from the subfolders or only from the specified folder. Note that when recursive is set to true and the sink is a file-based store, an empty folder or subfolder isn't copied or created at the sink. Allowed values are **true** (default) and **false**. This property doesn't apply when you configure `fileListPath`. | No |
| deleteFilesAfterCompletion | Indicates whether the binary files will be deleted from the source store after successfully moving to the destination store. The file deletion is per file, so when the copy activity fails, you'll see that some files have already been copied to the destination and deleted from the source, while others still remain in the source store. This property is only valid in the binary copy scenario, where the data source stores are Blob, ADLS Gen1, ADLS Gen2, S3, Google Cloud Storage, File, Azure File, SFTP, or FTP. The default value: false. | No |
| modifiedDatetimeStart | Files are filtered based on the attribute: Last Modified. The files will be selected if their last modified time is within the time range between `modifiedDatetimeStart` and `modifiedDatetimeEnd`. The time is applied to the UTC time zone in the format of "2018-12-01T05:00:00Z".<br/><br/>The properties can be NULL, which means no file attribute filter will be applied to the dataset. When `modifiedDatetimeStart` has a datetime value but `modifiedDatetimeEnd` is NULL, the files whose last modified attribute is greater than or equal to the datetime value will be selected. When `modifiedDatetimeEnd` has a datetime value but `modifiedDatetimeStart` is NULL, the files whose last modified attribute is less than the datetime value will be selected. This property doesn't apply when you configure `fileListPath`. | No |
| modifiedDatetimeEnd | Same as above. | No |
| maxConcurrentConnections | The number of connections to connect to the storage store concurrently. Specify this only when you want to limit concurrent connections to the data store. | No |
**Example:**

```json
"activities":[
    {
        "name": "CopyFromFileSystem",
        "type": "Copy",
        "inputs": [
            {
                "referenceName": "<Delimited text input dataset name>",
                "type": "DatasetReference"
            }
        ],
        "outputs": [
            {
                "referenceName": "<output dataset name>",
                "type": "DatasetReference"
            }
        ],
        "typeProperties": {
            "source": {
                "type": "DelimitedTextSource",
                "formatSettings":{
                    "type": "DelimitedTextReadSettings",
                    "skipLineCount": 10
                },
                "storeSettings":{
                    "type": "FileServerReadSettings",
                    "recursive": true,
                    "wildcardFolderPath": "myfolder*A",
                    "wildcardFileName": "*.csv"
                }
            },
            "sink": {
                "type": "<sink type>"
            }
        }
    }
]
```
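If you'd rather have the file server do the filtering (OPTION 2 above), or narrow the copy by last-modified time, only the `storeSettings` block changes. A minimal sketch, with an illustrative filter pattern and time window:

```json
"storeSettings": {
    "type": "FileServerReadSettings",
    "recursive": true,
    "fileFilter": "*.csv",
    "modifiedDatetimeStart": "2018-12-01T05:00:00Z",
    "modifiedDatetimeEnd": "2018-12-01T06:00:00Z"
}
```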
### File system as sink

[!INCLUDE data-factory-v2-file-formats]

The following properties are supported for file system under `storeSettings` settings in the format-based copy sink:
| Property | Description | Required |
| --- | --- | --- |
| type | The type property under `storeSettings` must be set to **FileServerWriteSettings**. | Yes |
| copyBehavior | Defines the copy behavior when the source is files from a file-based data store.<br/><br/>Allowed values are:<br/><b>- PreserveHierarchy (default)</b>: Preserves the file hierarchy in the target folder. The relative path of the source file to the source folder is identical to the relative path of the target file to the target folder.<br/><b>- FlattenHierarchy</b>: All files from the source folder are in the first level of the target folder. The target files have autogenerated names.<br/><b>- MergeFiles</b>: Merges all files from the source folder to one file. If the file name is specified, the merged file name is the specified name. Otherwise, it's an autogenerated file name. | No |
| maxConcurrentConnections | The number of connections to connect to the data store concurrently. Specify this only when you want to limit concurrent connections to the data store. | No |
**Example:**

```json
"activities":[
    {
        "name": "CopyToFileSystem",
        "type": "Copy",
        "inputs": [
            {
                "referenceName": "<input dataset name>",
                "type": "DatasetReference"
            }
        ],
        "outputs": [
            {
                "referenceName": "<Parquet output dataset name>",
                "type": "DatasetReference"
            }
        ],
        "typeProperties": {
            "source": {
                "type": "<source type>"
            },
            "sink": {
                "type": "ParquetSink",
                "storeSettings":{
                    "type": "FileServerWriteSettings",
                    "copyBehavior": "PreserveHierarchy"
                }
            }
        }
    }
]
```
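To merge all source files into a single output file instead, only the `copyBehavior` value changes. A minimal sketch of the sink's `storeSettings`:

```json
"storeSettings": {
    "type": "FileServerWriteSettings",
    "copyBehavior": "MergeFiles"
}
```

As noted in the table above, the merged file takes the name specified in the dataset if one is set; otherwise ADF autogenerates it.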
## Folder and file filter examples

This section describes the resulting behavior of the folder path and file name with wildcard filters.

| folderPath | fileName | recursive | Source folder structure and filter result (files in **bold** are retrieved) |
| --- | --- | --- | --- |
| `Folder*` | (empty, use default) | false | FolderA<br/>&nbsp;&nbsp;&nbsp;&nbsp;**File1.csv**<br/>&nbsp;&nbsp;&nbsp;&nbsp;**File2.json**<br/>&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File3.csv<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4.json<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File5.csv<br/>AnotherFolderB<br/>&nbsp;&nbsp;&nbsp;&nbsp;File6.csv |
| `Folder*` | (empty, use default) | true | FolderA<br/>&nbsp;&nbsp;&nbsp;&nbsp;**File1.csv**<br/>&nbsp;&nbsp;&nbsp;&nbsp;**File2.json**<br/>&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**File3.csv**<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**File4.json**<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**File5.csv**<br/>AnotherFolderB<br/>&nbsp;&nbsp;&nbsp;&nbsp;File6.csv |
| `Folder*` | `*.csv` | false | FolderA<br/>&nbsp;&nbsp;&nbsp;&nbsp;**File1.csv**<br/>&nbsp;&nbsp;&nbsp;&nbsp;File2.json<br/>&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File3.csv<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4.json<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File5.csv<br/>AnotherFolderB<br/>&nbsp;&nbsp;&nbsp;&nbsp;File6.csv |
| `Folder*` | `*.csv` | true | FolderA<br/>&nbsp;&nbsp;&nbsp;&nbsp;**File1.csv**<br/>&nbsp;&nbsp;&nbsp;&nbsp;File2.json<br/>&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**File3.csv**<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4.json<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**File5.csv**<br/>AnotherFolderB<br/>&nbsp;&nbsp;&nbsp;&nbsp;File6.csv |
## File list examples

This section describes the resulting behavior of using a file list path in the copy activity source.

Assume that you have the following source folder structure and want to copy the files in bold:

| Sample source structure | Content in FileListToCopy.txt | ADF configuration |
| --- | --- | --- |
| root<br/>&nbsp;&nbsp;&nbsp;&nbsp;FolderA<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**File1.csv**<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File2.json<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**File3.csv**<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4.json<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**File5.csv**<br/>&nbsp;&nbsp;&nbsp;&nbsp;Metadata<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;FileListToCopy.txt | File1.csv<br/>Subfolder1/File3.csv<br/>Subfolder1/File5.csv | **In dataset:**<br/>- Folder path: `root/FolderA`<br/><br/>**In copy activity source:**<br/>- File list path: `root/Metadata/FileListToCopy.txt`<br/><br/>The file list path points to a text file in the same data store that includes a list of files you want to copy, one file per line, with the relative path to the path configured in the dataset. |
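Expressed as JSON, the corresponding copy activity source might look like the following minimal fragment (the `DelimitedTextSource` type is illustrative; use whichever format source matches your dataset):

```json
"source": {
    "type": "DelimitedTextSource",
    "storeSettings": {
        "type": "FileServerReadSettings",
        "fileListPath": "root/Metadata/FileListToCopy.txt"
    }
}
```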
## recursive and copyBehavior examples

This section describes the resulting behavior of the Copy operation for different combinations of recursive and copyBehavior values.

| recursive | copyBehavior | Source folder structure | Resulting target |
| --- | --- | --- | --- |
| true | preserveHierarchy | Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File2<br/>&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File3<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File5 | The target folder Folder1 is created with the same structure as the source:<br/><br/>Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File2<br/>&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File3<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File5 |
| true | flattenHierarchy | Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File2<br/>&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File3<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File5 | The target Folder1 is created with the following structure:<br/><br/>Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;autogenerated name for File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;autogenerated name for File2<br/>&nbsp;&nbsp;&nbsp;&nbsp;autogenerated name for File3<br/>&nbsp;&nbsp;&nbsp;&nbsp;autogenerated name for File4<br/>&nbsp;&nbsp;&nbsp;&nbsp;autogenerated name for File5 |
| true | mergeFiles | Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File2<br/>&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File3<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File5 | The target Folder1 is created with the following structure:<br/><br/>Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1 + File2 + File3 + File4 + File5 contents are merged into one file with an autogenerated file name. |
| false | preserveHierarchy | Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File2<br/>&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File3<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File5 | The target folder Folder1 is created with the following structure:<br/><br/>Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File2<br/><br/>Subfolder1 with File3, File4, and File5 isn't picked up. |
| false | flattenHierarchy | Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File2<br/>&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File3<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File5 | The target folder Folder1 is created with the following structure:<br/><br/>Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;autogenerated name for File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;autogenerated name for File2<br/><br/>Subfolder1 with File3, File4, and File5 isn't picked up. |
| false | mergeFiles | Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File2<br/>&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File3<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File5 | The target folder Folder1 is created with the following structure:<br/><br/>Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1 + File2 contents are merged into one file with an autogenerated file name.<br/><br/>Subfolder1 with File3, File4, and File5 isn't picked up. |
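As a concrete illustration, the false/flattenHierarchy row corresponds to a copy activity whose `typeProperties` might look like this minimal sketch (the delimited-text source and sink types are illustrative):

```json
"typeProperties": {
    "source": {
        "type": "DelimitedTextSource",
        "storeSettings": {
            "type": "FileServerReadSettings",
            "recursive": false
        }
    },
    "sink": {
        "type": "DelimitedTextSink",
        "storeSettings": {
            "type": "FileServerWriteSettings",
            "copyBehavior": "FlattenHierarchy"
        }
    }
}
```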
## Lookup activity properties

To learn details about the properties, check Lookup activity.

## GetMetadata activity properties

To learn details about the properties, check GetMetadata activity.

## Delete activity properties

To learn details about the properties, check Delete activity.
## Legacy models

> [!NOTE]
> The following models are still supported as-is for backward compatibility. We recommend that you use the new model mentioned in the preceding sections going forward; the ADF authoring UI has switched to generating the new model.
### Legacy dataset model

| Property | Description | Required |
| --- | --- | --- |
| type | The type property of the dataset must be set to: **FileShare** | Yes |
| folderPath | Path to the folder. A wildcard filter is supported. Allowed wildcards are: `*` (matches zero or more characters) and `?` (matches zero or a single character); use `^` to escape if your actual folder name has a wildcard or this escape character inside.<br/><br/>Examples: rootfolder/subfolder/. See more examples in Sample linked service and dataset definitions and Folder and file filter examples. | No |
| fileName | Name or wildcard filter for the files under the specified "folderPath". If you don't specify a value for this property, the dataset points to all files in the folder.<br/><br/>For the filter, allowed wildcards are: `*` (matches zero or more characters) and `?` (matches zero or a single character).<br/>- Example 1: `"fileName": "*.csv"`<br/>- Example 2: `"fileName": "???20180427.txt"`<br/>Use `^` to escape if your actual file name has a wildcard or this escape character inside.<br/><br/>When fileName isn't specified for an output dataset and **preserveHierarchy** isn't specified in the activity sink, the copy activity automatically generates the file name with the following pattern: "*Data.[activity run ID GUID].[GUID if FlattenHierarchy].[format if configured].[compression if configured]*", for example "Data.0a405f8a-93ff-4c6f-b3be-f69616f1df7a.txt.gz". If you copy from a tabular source using a table name instead of a query, the name pattern is "*[table name].[format].[compression if configured]*", for example "MyTable.csv". | No |
| modifiedDatetimeStart | Files are filtered based on the attribute: Last Modified. The files will be selected if their last modified time is within the time range between `modifiedDatetimeStart` and `modifiedDatetimeEnd`. The time is applied to the UTC time zone in the format of "2018-12-01T05:00:00Z".<br/><br/>Be aware that the overall performance of data movement will be affected by enabling this setting when you want to filter huge amounts of files.<br/><br/>The properties can be NULL, which means no file attribute filter will be applied to the dataset. When `modifiedDatetimeStart` has a datetime value but `modifiedDatetimeEnd` is NULL, the files whose last modified attribute is greater than or equal to the datetime value will be selected. When `modifiedDatetimeEnd` has a datetime value but `modifiedDatetimeStart` is NULL, the files whose last modified attribute is less than the datetime value will be selected. | No |
| modifiedDatetimeEnd | Same as above. | No |
| format | If you want to copy files as-is between file-based stores (binary copy), skip the format section in both input and output dataset definitions.<br/><br/>If you want to parse or generate files with a specific format, the following file format types are supported: **TextFormat**, **JsonFormat**, **AvroFormat**, **OrcFormat**, **ParquetFormat**. Set the **type** property under **format** to one of these values. For more information, see the Text Format, Json Format, Avro Format, Orc Format, and Parquet Format sections. | No (only for binary copy scenario) |
| compression | Specify the type and level of compression for the data. For more information, see Supported file formats and compression codecs.<br/>Supported types are: **GZip**, **Deflate**, **BZip2**, and **ZipDeflate**.<br/>Supported levels are: **Optimal** and **Fastest**. | No |
> [!TIP]
> To copy all files under a folder, specify **folderPath** only.<br/>
> To copy a single file with a given name, specify **folderPath** with the folder part and **fileName** with the file name.<br/>
> To copy a subset of files under a folder, specify **folderPath** with the folder part and **fileName** with a wildcard filter.

> [!NOTE]
> If you were using the "fileFilter" property for the file filter, it is still supported as-is, but we suggest that you use the new filter capability added to "fileName" going forward.
**Example:**

```json
{
    "name": "FileSystemDataset",
    "properties": {
        "type": "FileShare",
        "linkedServiceName":{
            "referenceName": "<file system linked service name>",
            "type": "LinkedServiceReference"
        },
        "typeProperties": {
            "folderPath": "folder/subfolder/",
            "fileName": "*",
            "modifiedDatetimeStart": "2018-12-01T05:00:00Z",
            "modifiedDatetimeEnd": "2018-12-01T06:00:00Z",
            "format": {
                "type": "TextFormat",
                "columnDelimiter": ",",
                "rowDelimiter": "\n"
            },
            "compression": {
                "type": "GZip",
                "level": "Optimal"
            }
        }
    }
}
```
### Legacy copy activity source model

| Property | Description | Required |
| --- | --- | --- |
| type | The type property of the copy activity source must be set to: **FileSystemSource** | Yes |
| recursive | Indicates whether the data is read recursively from the subfolders or only from the specified folder. Note that when recursive is set to true and the sink is a file-based store, an empty folder or subfolder won't be copied or created at the sink.<br/>Allowed values are: **true** (default), **false** | No |
| maxConcurrentConnections | The number of connections to connect to the storage store concurrently. Specify this only when you want to limit concurrent connections to the data store. | No |
**Example:**

```json
"activities":[
    {
        "name": "CopyFromFileSystem",
        "type": "Copy",
        "inputs": [
            {
                "referenceName": "<file system input dataset name>",
                "type": "DatasetReference"
            }
        ],
        "outputs": [
            {
                "referenceName": "<output dataset name>",
                "type": "DatasetReference"
            }
        ],
        "typeProperties": {
            "source": {
                "type": "FileSystemSource",
                "recursive": true
            },
            "sink": {
                "type": "<sink type>"
            }
        }
    }
]
```
### Legacy copy activity sink model

| Property | Description | Required |
| --- | --- | --- |
| type | The type property of the copy activity sink must be set to: **FileSystemSink** | Yes |
| copyBehavior | Defines the copy behavior when the source is files from a file-based data store.<br/><br/>Allowed values are:<br/><b>- PreserveHierarchy (default)</b>: Preserves the file hierarchy in the target folder. The relative path of the source file to the source folder is identical to the relative path of the target file to the target folder.<br/><b>- FlattenHierarchy</b>: All files from the source folder are in the first level of the target folder. The target files have autogenerated names.<br/><b>- MergeFiles</b>: Merges all files from the source folder into one file. No record deduplication is performed during the merge. If the file name is specified, the merged file name is the specified name; otherwise, it's an autogenerated file name. | No |
| maxConcurrentConnections | The number of connections to connect to the storage store concurrently. Specify this only when you want to limit concurrent connections to the data store. | No |
**Example:**

```json
"activities":[
    {
        "name": "CopyToFileSystem",
        "type": "Copy",
        "inputs": [
            {
                "referenceName": "<input dataset name>",
                "type": "DatasetReference"
            }
        ],
        "outputs": [
            {
                "referenceName": "<file system output dataset name>",
                "type": "DatasetReference"
            }
        ],
        "typeProperties": {
            "source": {
                "type": "<source type>"
            },
            "sink": {
                "type": "FileSystemSink",
                "copyBehavior": "PreserveHierarchy"
            }
        }
    }
]
```
## Next steps

For a list of data stores supported as sources and sinks by the copy activity in Azure Data Factory, see supported data stores.