BULK INSERT (Transact-SQL)

Applies to: SQL Server (all supported versions), Azure SQL Database

Imports a data file into a database table or view in a user-specified format in SQL Server.

Transact-SQL Syntax Conventions

Syntax

              BULK INSERT
                 { database_name.schema_name.table_or_view_name | schema_name.table_or_view_name | table_or_view_name }
                    FROM 'data_file'
                   [ WITH
                  (
                 [ [ , ] BATCHSIZE = batch_size ]
                 [ [ , ] CHECK_CONSTRAINTS ]
                 [ [ , ] CODEPAGE = { 'ACP' | 'OEM' | 'RAW' | 'code_page' } ]
                 [ [ , ] DATAFILETYPE =
                    { 'char' | 'native'| 'widechar' | 'widenative' } ]
                 [ [ , ] DATA_SOURCE = 'data_source_name' ]
                 [ [ , ] ERRORFILE = 'file_name' ]
                 [ [ , ] ERRORFILE_DATA_SOURCE = 'errorfile_data_source_name' ]
                 [ [ , ] FIRSTROW = first_row ]
                 [ [ , ] FIRE_TRIGGERS ]
                 [ [ , ] FORMATFILE_DATA_SOURCE = 'data_source_name' ]
                 [ [ , ] KEEPIDENTITY ]
                 [ [ , ] KEEPNULLS ]
                 [ [ , ] KILOBYTES_PER_BATCH = kilobytes_per_batch ]
                 [ [ , ] LASTROW = last_row ]
                 [ [ , ] MAXERRORS = max_errors ]
                 [ [ , ] ORDER ( { column [ ASC | DESC ] } [ ,...n ] ) ]
                 [ [ , ] ROWS_PER_BATCH = rows_per_batch ]
                 [ [ , ] ROWTERMINATOR = 'row_terminator' ]
                 [ [ , ] TABLOCK ]

                 -- input file format options
                 [ [ , ] FORMAT = 'CSV' ]
                 [ [ , ] FIELDQUOTE = 'quote_characters']
                 [ [ , ] FORMATFILE = 'format_file_path' ]
                 [ [ , ] FIELDTERMINATOR = 'field_terminator' ]
                 [ [ , ] ROWTERMINATOR = 'row_terminator' ]
                  )]

Arguments

database_name

Is the database name in which the specified table or view resides. If not specified, database_name is the current database.

schema_name

Is the name of the table or view schema. schema_name is optional if the default schema for the user performing the bulk-import operation is the schema of the specified table or view. If schema is not specified and the default schema of the user performing the bulk-import operation is different from the specified table or view, SQL Server returns an error message, and the bulk-import operation is canceled.

table_name

Is the name of the table or view to bulk import data into. Only views in which all columns refer to the same base table can be used. For more information about the restrictions for loading data into views, see INSERT (Transact-SQL).

FROM 'data_file'

Is the full path of the data file that contains data to import into the specified table or view. BULK INSERT can import data from a disk or Azure Blob storage (including network, floppy disk, hard disk, and so on).

data_file must specify a valid path from the server on which SQL Server is running. If data_file is a remote file, specify the Universal Naming Convention (UNC) name. A UNC name has the form \\Systemname\ShareName\Path\FileName. For example:

              BULK INSERT Sales.Orders FROM '\\SystemX\DiskZ\Sales\data\orders.dat';                          

Applies to: SQL Server 2017 (14.x) CTP 1.1 and Azure SQL Database. Beginning with SQL Server 2017 (14.x) CTP 1.1, the data_file can be in Azure blob storage. In that case, you need to specify the data_source_name option. For an example, see Importing data from a file in Azure blob storage.

Important

Azure SQL Database only supports reading from Azure Blob Storage.

BATCHSIZE = batch_size

Specifies the number of rows in a batch. Each batch is copied to the server as one transaction. If this fails, SQL Server commits or rolls back the transaction for every batch. By default, all data in the specified data file is one batch. For information about performance considerations, see "Remarks," later in this article.
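
For example, the following statement (the table name and file path are hypothetical) loads the file in batches of 5,000 rows, so each batch is committed as its own transaction:

              BULK INSERT dbo.MyOrders                  -- hypothetical target table
              FROM 'C:\import\orders.dat'               -- hypothetical data file
              WITH (BATCHSIZE = 5000);                  -- each 5,000-row batch is copied as one transaction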

CHECK_CONSTRAINTS

Specifies that all constraints on the target table or view must be checked during the bulk-import operation. Without the CHECK_CONSTRAINTS option, any CHECK and FOREIGN KEY constraints are ignored, and after the operation, the constraint on the table is marked as not-trusted.

Note

UNIQUE and PRIMARY KEY constraints are always enforced. When importing into a character column that is defined with a NOT NULL constraint, BULK INSERT inserts a blank string when there is no value in the text file.

At some point, you must examine the constraints on the whole table. If the table was non-empty before the bulk-import operation, the cost of revalidating the constraint may exceed the cost of applying CHECK constraints to the incremental data.

A situation in which you might want constraints disabled (the default behavior) is if the input data contains rows that violate constraints. With CHECK constraints disabled, you can import the data and then use Transact-SQL statements to remove the invalid data.

Note

The MAXERRORS option does not apply to constraint checking.
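
To check CHECK and FOREIGN KEY constraints during the load, specify the option as in the following sketch (the table name and file path are hypothetical):

              BULK INSERT dbo.MyOrders
              FROM 'C:\import\orders.dat'
              WITH (CHECK_CONSTRAINTS);                 -- constraints are validated during the load and remain trusted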

CODEPAGE = { 'ACP' | 'OEM' | 'RAW' | 'code_page' }

Specifies the code page of the data in the data file. CODEPAGE is relevant only if the data contains char, varchar, or text columns with character values greater than 127 or less than 32. For an example, see Specifying a code page.

Important

CODEPAGE is not a supported option on Linux for SQL Server 2017 (14.x). For SQL Server 2019 (15.x), only the 'RAW' option is allowed for CODEPAGE.

Note

Microsoft recommends that you specify a collation name for each column in a format file.

CODEPAGE value Description
ACP Columns of char, varchar, or text data type are converted from the ANSI/Microsoft Windows code page (ISO 1252) to the SQL Server code page.
OEM (default) Columns of char, varchar, or text data type are converted from the system OEM code page to the SQL Server code page.
RAW No conversion from one code page to another occurs; this is the fastest option.
code_page Specific code page number, for example, 850.

Important

Versions prior to SQL Server 2016 (13.x) do not support code page 65001 (UTF-8 encoding).

DATAFILETYPE = { 'char' | 'native' | 'widechar' | 'widenative' }

Specifies that BULK INSERT performs the import operation using the specified data-file type value.

DATAFILETYPE value All data represented in:
char (default) Character format.

For more information, see Use Character Format to Import or Export Data (SQL Server).

native Native (database) data types. Create the native data file by bulk importing data from SQL Server using the bcp utility.

The native value offers a higher performance alternative to the char value. Native format is recommended when you bulk transfer data between multiple instances of SQL Server using a data file that does not contain any extended/double-byte character set (DBCS) characters.

For more information, see Use Native Format to Import or Export Data (SQL Server).

widechar Unicode characters.

For more information, see Use Unicode Character Format to Import or Export Data (SQL Server).

widenative Native (database) data types, except in char, varchar, and text columns, in which data is stored as Unicode. Create the widenative data file by bulk importing data from SQL Server using the bcp utility.

The widenative value offers a higher performance alternative to widechar. If the data file contains ANSI extended characters, specify widenative.

For more information, see Use Unicode Native Format to Import or Export Data (SQL Server).

DATA_SOURCE = 'data_source_name'

Applies to: SQL Server 2017 (14.x) CTP 1.1 and Azure SQL Database. Is a named external data source pointing to the Azure Blob storage location of the file that will be imported. The external data source must be created using the TYPE = BLOB_STORAGE option added in SQL Server 2017 (14.x) CTP 1.1. For more information, see CREATE EXTERNAL DATA SOURCE. For an example, see Importing data from a file in Azure blob storage.

ERRORFILE = 'error_file_path'

Specifies the file used to collect rows that have formatting errors and cannot be converted to an OLE DB rowset. These rows are copied into this error file from the data file "as is."

The error file is created when the command is executed. An error occurs if the file already exists. Additionally, a control file that has the extension .ERROR.txt is created. This references each row in the error file and provides error diagnostics. As soon as the errors have been corrected, the data can be loaded. Applies to: SQL Server 2017 (14.x) CTP 1.1. Beginning with SQL Server 2017 (14.x), the error_file_path can be in Azure blob storage.
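
For example, the following statement (the table name and paths are hypothetical) writes rows that cannot be converted to an error file; the companion file with the .ERROR.txt extension is created next to it:

              BULK INSERT dbo.MyOrders
              FROM 'C:\import\orders.dat'
              WITH (ERRORFILE = 'C:\import\orders_errors.log');   -- the error file must not already exist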

ERRORFILE_DATA_SOURCE

'errorfile_data_source_name' Applies to: SQL Server 2017 (14.x) CTP 1.1. Is a named external data source pointing to the Azure Blob storage location of the error file that will contain errors found during the import. The external data source must be created using the TYPE = BLOB_STORAGE option added in SQL Server 2017 (14.x) CTP 1.1. For more information, see CREATE EXTERNAL DATA SOURCE.

FIRSTROW = first_row

Specifies the number of the first row to load. The default is the first row in the specified data file. FIRSTROW is 1-based.

Note

The FIRSTROW attribute is not intended to skip column headers. Skipping headers is not supported by the BULK INSERT statement. When skipping rows, the SQL Server Database Engine looks only at the field terminators, and does not validate the data in the fields of skipped rows.
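
For example, the following statement (the table name and file path are hypothetical) starts loading at the second row of the data file:

              BULK INSERT dbo.MyOrders
              FROM 'C:\import\orders.csv'
              WITH (FIRSTROW = 2                        -- begin with the second row of the data file
                    , FIELDTERMINATOR = ',');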

FIRE_TRIGGERS

Specifies that any insert triggers defined on the destination table execute during the bulk-import operation. If triggers are defined for INSERT operations on the target table, they are fired for every completed batch.

If FIRE_TRIGGERS is not specified, no insert triggers execute.

FORMATFILE_DATA_SOURCE = 'data_source_name'

Applies to: SQL Server 2017 (14.x) CTP 1.1. Is a named external data source pointing to the Azure Blob storage location of the format file that will define the schema of imported data. The external data source must be created using the TYPE = BLOB_STORAGE option added in SQL Server 2017 (14.x) CTP 1.1. For more information, see CREATE EXTERNAL DATA SOURCE.

KEEPIDENTITY

Specifies that identity value or values in the imported data file are to be used for the identity column. If KEEPIDENTITY is not specified, the identity values for this column are verified but not imported and SQL Server automatically assigns unique values based on the seed and increment values specified during table creation. If the data file does not contain values for the identity column in the table or view, use a format file to specify that the identity column in the table or view is to be skipped when importing data; SQL Server automatically assigns unique values for the column. For more information, see DBCC CHECKIDENT (Transact-SQL).

For more information about keeping identity values, see Keep Identity Values When Bulk Importing Data (SQL Server).

KEEPNULLS

Specifies that empty columns should retain a null value during the bulk-import operation, instead of having any default values for the columns inserted. For more information, see Keep Nulls or Use Default Values During Bulk Import (SQL Server).
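
For example, the following statement (the table name and file path are hypothetical) keeps both the identity values and the empty (NULL) columns exactly as they appear in the data file:

              BULK INSERT dbo.MyOrders
              FROM 'C:\import\orders.dat'
              WITH (KEEPIDENTITY                        -- use the identity values from the data file
                    , KEEPNULLS);                       -- keep NULLs instead of applying column defaults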

KILOBYTES_PER_BATCH = kilobytes_per_batch

Specifies the approximate number of kilobytes (KB) of data per batch as kilobytes_per_batch. By default, KILOBYTES_PER_BATCH is unknown. For information about performance considerations, see "Remarks," later in this article.

LASTROW = last_row

Specifies the number of the last row to load. The default is 0, which indicates the last row in the specified data file.

MAXERRORS = max_errors

Specifies the maximum number of syntax errors allowed in the data before the bulk-import operation is canceled. Each row that cannot be imported by the bulk-import operation is ignored and counted as one error. If max_errors is not specified, the default is 10.

Note

The MAX_ERRORS option does not apply to constraint checks or to converting money and bigint data types.
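
For example, the following statement (the table name and file path are hypothetical) cancels the import if more than 50 rows fail to load:

              BULK INSERT dbo.MyOrders
              FROM 'C:\import\orders.dat'
              WITH (MAXERRORS = 50);                    -- allow up to 50 bad rows before the operation is canceled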

ORDER ( { column [ ASC | DESC ] } [ ,... n ] )

Specifies how the data in the data file is sorted. Bulk import performance is improved if the data being imported is sorted according to the clustered index on the table, if any. If the data file is sorted in a different order, that is other than the order of a clustered index key, or if there is no clustered index on the table, the ORDER clause is ignored. The column names supplied must be valid column names in the destination table. By default, the bulk insert operation assumes the data file is unordered. For optimized bulk import, SQL Server also validates that the imported data is sorted.

n Is a placeholder that indicates that multiple columns can be specified.
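
For example, if the destination table has a clustered index on an OrderID column (a hypothetical schema) and the data file is already sorted by that column, the sort order can be declared as follows:

              BULK INSERT dbo.MyOrders
              FROM 'C:\import\orders.dat'
              WITH (ORDER (OrderID ASC));               -- the data file is presorted by the clustered index key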

ROWS_PER_BATCH = rows_per_batch

Indicates the approximate number of rows of data in the data file.

By default, all the data in the data file is sent to the server as a single transaction, and the number of rows in the batch is unknown to the query optimizer. If you specify ROWS_PER_BATCH (with a value > 0), the server uses this value to optimize the bulk-import operation. The value specified for ROWS_PER_BATCH should be approximately the same as the actual number of rows. For information about performance considerations, see "Remarks," later in this article.
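
For example, if the data file is known to contain roughly one million rows (a hypothetical figure), the hint can be supplied as follows; ROWS_PER_BATCH does not split the load into transactions, it only informs the optimizer:

              BULK INSERT dbo.MyOrders
              FROM 'C:\import\orders.dat'
              WITH (ROWS_PER_BATCH = 1000000);          -- optimizer hint; the load is still sent as a single transaction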

TABLOCK

Specifies that a table-level lock is acquired for the duration of the bulk-import operation. A table can be loaded concurrently by multiple clients if the table has no indexes and TABLOCK is specified. By default, locking behavior is determined by the table option table lock on bulk load. Holding a lock for the duration of the bulk-import operation reduces lock contention on the table, and in some cases can significantly improve performance. For information about performance considerations, see "Remarks," later in this article.

For a columnstore index, the locking behavior is different because it is internally divided into multiple rowsets. Each thread loads data exclusively into each rowset by taking an X lock on the rowset, allowing parallel data load with concurrent data load sessions. The use of the TABLOCK option will cause the thread to take an X lock on the table (unlike the BU lock for traditional rowsets), which will prevent other concurrent threads from loading data concurrently.
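
For example, the following statement (the table name and file path are hypothetical) takes a table-level lock for the duration of the load:

              BULK INSERT dbo.MyOrders
              FROM 'C:\import\orders.dat'
              WITH (TABLOCK);                           -- hold a table-level lock for the duration of the bulk-import operation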

Input file format options

FORMAT = 'CSV'

Applies to: SQL Server 2017 (14.x) CTP 1.1. Specifies a comma-separated values file compliant with the RFC 4180 standard.

              BULK INSERT Sales.Orders
              FROM '\\SystemX\DiskZ\Sales\data\orders.csv'
              WITH ( FORMAT='CSV');

FIELDQUOTE = 'field_quote'

Applies to: SQL Server 2017 (14.x) CTP 1.1. Specifies a character that will be used as the quote character in the CSV file. If not specified, the quote character (") will be used as the quote character as defined in the RFC 4180 standard.

FORMATFILE = 'format_file_path'

Specifies the full path of a format file. A format file describes the data file that contains stored responses created by using the bcp utility on the same table or view. The format file should be used if:

  • The data file contains greater or fewer columns than the table or view.
  • The columns are in a different order.
  • The column delimiters vary.
  • There are other changes in the data format. Format files are typically created by using the bcp utility and modified with a text editor as needed. For more information, see bcp Utility and Create a format file.

Applies to: SQL Server 2017 (14.x) CTP 1.1 and Azure SQL Database. Beginning with SQL Server 2017 (14.x) CTP 1.1, the format_file_path can be in Azure blob storage.

FIELDTERMINATOR = 'field_terminator'

Specifies the field terminator to be used for char and widechar data files. The default field terminator is \t (tab character). For more information, see Specify Field and Row Terminators (SQL Server).

ROWTERMINATOR = 'row_terminator'

Specifies the row terminator to be used for char and widechar data files. The default row terminator is \r\n (newline character). For more information, see Specify Field and Row Terminators (SQL Server).

Compatibility

BULK INSERT enforces strict data validation and data checks of data read from a file that could cause existing scripts to fail when they are executed on invalid data. For example, BULK INSERT verifies that:

  • The native representations of float or real data types are valid.
  • Unicode data has an even-byte length.

Data Types

String-to-Decimal Data Type Conversions

The string-to-decimal data type conversions used in BULK INSERT follow the same rules as the Transact-SQL CONVERT function, which rejects strings representing numeric values that use scientific notation. Therefore, BULK INSERT treats such strings as invalid values and reports conversion errors.

To work around this behavior, use a format file to bulk import scientific notation float data into a decimal column. In the format file, explicitly describe the column as real or float data. For more information about these data types, see float and real (Transact-SQL).

Example of Importing a Numeric Value that Uses Scientific Notation

This example uses the following table in the bulktest database:

              CREATE TABLE dbo.t_float(c1 FLOAT, c2 DECIMAL (5,4));

The user wants to bulk import data into the t_float table. The data file, C:\t_float-c.dat, contains scientific notation float data; for example:

              8.0000000000000002E-2 8.0000000000000002E-2

When copying this sample, be aware of different text editors and encodings that save tab characters (\t) as spaces. A tab character is expected later in this sample.

However, BULK INSERT cannot import this data directly into t_float, because its second column, c2, uses the decimal data type. Therefore, a format file is necessary. The format file must map the scientific notation float data to the decimal format of column c2.

The following format file uses the SQLFLT8 data type to map the second data field to the second column:

              <?xml version="1.0"?>
              <BCPFORMAT xmlns="http://schemas.microsoft.com/sqlserver/2004/bulkload/format" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
              <RECORD>
              <FIELD ID="1" xsi:type="CharTerm" TERMINATOR="\t" MAX_LENGTH="30"/>
              <FIELD ID="2" xsi:type="CharTerm" TERMINATOR="\r\n" MAX_LENGTH="30"/>
              </RECORD>
              <ROW>
              <COLUMN SOURCE="1" NAME="c1" xsi:type="SQLFLT8"/>
              <COLUMN SOURCE="2" NAME="c2" xsi:type="SQLFLT8"/>
              </ROW>
              </BCPFORMAT>

To use this format file (using the file name C:\t_floatformat-c-xml.xml) to import the test data into the test table, issue the following Transact-SQL statement:

              BULK INSERT bulktest.dbo.t_float FROM 'C:\t_float-c.dat' WITH (FORMATFILE='C:\t_floatformat-c-xml.xml');                          

Important

Azure SQL Database only supports reading from Azure Blob Storage.

Data Types for Bulk Exporting or Importing SQLXML Documents

To bulk export or import SQLXML data, use one of the following data types in your format file:

Data type Effect
SQLCHAR or SQLVARCHAR The data is sent in the client code page or in the code page implied by the collation. The effect is the same as specifying DATAFILETYPE = 'char' without specifying a format file.
SQLNCHAR or SQLNVARCHAR The data is sent as Unicode. The effect is the same as specifying DATAFILETYPE = 'widechar' without specifying a format file.
SQLBINARY or SQLVARBIN The data is sent without any conversion.

General Remarks

For a comparison of the BULK INSERT statement, the INSERT ... SELECT * FROM OPENROWSET(BULK...) statement, and the bcp command, see Bulk Import and Export of Data (SQL Server).

For information about preparing data for bulk import, see Prepare Data for Bulk Export or Import (SQL Server).

The BULK INSERT statement can be executed within a user-defined transaction to import data into a table or view. Optionally, to use multiple batches for bulk importing data, a transaction can specify the BATCHSIZE clause in the BULK INSERT statement. If a multiple-batch transaction is rolled back, every batch that the transaction has sent to SQL Server is rolled back.
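
As a minimal sketch (the table name and file path are hypothetical), the load can be wrapped in a user-defined transaction so that it succeeds or fails together with other work:

              BEGIN TRANSACTION;

              BULK INSERT dbo.MyOrders
              FROM 'C:\import\orders.dat'
              WITH (BATCHSIZE = 5000);      -- optional; if the outer transaction is rolled back, every batch already sent is rolled back

              -- other statements that must commit or roll back together with the import
              COMMIT TRANSACTION;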

Interoperability

Importing Data from a CSV file

Beginning with SQL Server 2017 (14.x) CTP 1.1, BULK INSERT supports the CSV format, as does Azure SQL Database. Before SQL Server 2017 (14.x) CTP 1.1, comma-separated value (CSV) files are not supported by SQL Server bulk-import operations. However, in some cases, a CSV file can be used as the data file for a bulk import of data into SQL Server. For information about the requirements for importing data from a CSV data file, see Prepare Data for Bulk Export or Import (SQL Server).

Logging Behavior

For information about when row-insert operations that are performed by bulk import into SQL Server are logged in the transaction log, see Prerequisites for Minimal Logging in Bulk Import. Minimal logging is not supported in Azure SQL Database.

Restrictions

When using a format file with BULK INSERT, you can specify up to 1024 fields only. This is the same as the maximum number of columns allowed in a table. If you use a format file with BULK INSERT with a data file that contains more than 1024 fields, BULK INSERT generates the 4822 error. The bcp utility does not have this limitation, so for data files that contain more than 1024 fields, use BULK INSERT without a format file or use the bcp command.

Performance Considerations

If the number of pages to be flushed in a single batch exceeds an internal threshold, a full scan of the buffer pool might occur to identify which pages to flush when the batch commits. This full scan can hurt bulk-import performance. A likely case of exceeding the internal threshold occurs when a large buffer pool is combined with a slow I/O subsystem. To avoid buffer overflows on large machines, either do not use the TABLOCK hint (which will remove the bulk optimizations) or use a smaller batch size (which preserves the bulk optimizations).

Because computers vary, we recommend that you test various batch sizes with your data load to find out what works best for you.

With Azure SQL Database, consider temporarily increasing the performance level of the database or instance prior to the import if you are importing a large volume of data.

Security

Security Account Delegation (Impersonation)

If a user uses a SQL Server login, the security profile of the SQL Server process account is used. A login using SQL Server authentication cannot be authenticated outside of the Database Engine. Therefore, when a BULK INSERT command is initiated by a login using SQL Server authentication, the connection to the data is made using the security context of the SQL Server process account (the account used by the SQL Server Database Engine service). To successfully read the source data, you must grant the account used by the SQL Server Database Engine access to the source data. In contrast, if a SQL Server user logs on by using Windows Authentication, the user can read only those files that can be accessed by the user account, regardless of the security profile of the SQL Server process.

When executing the BULK INSERT statement by using sqlcmd or osql, from one computer, inserting data into SQL Server on a second computer, and specifying a data_file on a third computer by using a UNC path, you may receive a 4861 error.

To resolve this error, use SQL Server Authentication and specify a SQL Server login that uses the security profile of the SQL Server process account, or configure Windows to enable security account delegation. For information about how to enable a user account to be trusted for delegation, see Windows Help.

For more information about this and other security considerations for using BULK INSERT, see Import Bulk Data by Using BULK INSERT or OPENROWSET(BULK...) (SQL Server).

When importing from Azure Blob storage and the data is not public (anonymous access), create a DATABASE SCOPED CREDENTIAL based on a SAS key encrypted with a MASTER KEY, and then create an external database source for use in your BULK INSERT command. For an example, see Importing data from a file in Azure blob storage.

Permissions

Requires INSERT and ADMINISTER BULK OPERATIONS permissions. In Azure SQL Database, INSERT and ADMINISTER DATABASE BULK OPERATIONS permissions are required. ADMINISTER BULK OPERATIONS permissions or the bulkadmin role is not supported for SQL Server on Linux. Only the sysadmin can perform bulk inserts for SQL Server on Linux.

Additionally, ALTER TABLE permission is required if one or more of the following is true:

  • Constraints exist and the CHECK_CONSTRAINTS option is not specified.

    Note

    Disabling constraints is the default behavior. To check constraints explicitly, use the CHECK_CONSTRAINTS option.

  • Triggers exist and the FIRE_TRIGGERS option is not specified.

    Note

    By default, triggers are not fired. To fire triggers explicitly, use the FIRE_TRIGGERS option.

  • You use the KEEPIDENTITY option to import identity values from the data file.
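
As an illustrative sketch (the login, user, and table names are hypothetical), the permissions described above can be granted as follows:

              -- Server-level permission (SQL Server); grant to a login in the master database
              GRANT ADMINISTER BULK OPERATIONS TO ImportLogin;

              -- Table-level INSERT permission; grant to the corresponding database user
              GRANT INSERT ON dbo.MyOrders TO ImportUser;

              -- Azure SQL Database uses a database-scoped permission instead:
              -- GRANT ADMINISTER DATABASE BULK OPERATIONS TO ImportUser;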

Examples

A. Using pipes to import data from a file

The following example imports order detail information into the AdventureWorks2012.Sales.SalesOrderDetail table from the specified data file by using a pipe (|) as the field terminator and |\n as the row terminator.

              BULK INSERT AdventureWorks2012.Sales.SalesOrderDetail
                 FROM 'f:\orders\lineitem.tbl'
                 WITH
                    (
                       FIELDTERMINATOR =' |'
                       , ROWTERMINATOR =' |\n'
                    );

Important

Azure SQL Database only supports reading from Azure Blob Storage.

B. Using the FIRE_TRIGGERS argument

The following example specifies the FIRE_TRIGGERS argument.

              BULK INSERT AdventureWorks2012.Sales.SalesOrderDetail
                 FROM 'f:\orders\lineitem.tbl'
                 WITH
                   (
                       FIELDTERMINATOR =' |'
                       , ROWTERMINATOR = ':\n'
                       , FIRE_TRIGGERS
                    );

Important

Azure SQL Database only supports reading from Azure Blob Storage.

C. Using line feed as a row terminator

The following example imports a file that uses the line feed as a row terminator such as a UNIX output:

              DECLARE @bulk_cmd VARCHAR(1000);
              SET @bulk_cmd = 'BULK INSERT AdventureWorks2012.Sales.SalesOrderDetail
              FROM ''<drive>:\<path>\<filename>''
              WITH (ROWTERMINATOR = '''+CHAR(10)+''')';
              EXEC(@bulk_cmd);

Note

Due to how Microsoft Windows treats text files (\n automatically gets replaced with \r\n).

Important

Azure SQL Database only supports reading from Azure Blob Storage.

D. Specifying a code page

The following example shows how to specify a code page.

              BULK INSERT MyTable
              FROM 'D:\data.csv'
              WITH
              ( CODEPAGE = '65001'
                 , DATAFILETYPE = 'char'
                 , FIELDTERMINATOR = ','
              );

Important

Azure SQL Database only supports reading from Azure Blob Storage.

E. Importing data from a CSV file

The following example shows how to specify a CSV file, skipping the header (first row), using ; as field terminator and 0x0a as line terminator:

              BULK INSERT Sales.Invoices
              FROM '\\share\invoices\inv-2016-07-25.csv'
              WITH (FORMAT = 'CSV'
                    , FIRSTROW=2
                    , FIELDQUOTE = '\'
                    , FIELDTERMINATOR = ';'
                    , ROWTERMINATOR = '0x0a');

Important

Azure SQL Database only supports reading from Azure Blob Storage.

F. Importing data from a file in Azure blob storage

The following example shows how to load data from a csv file in an Azure Blob storage location on which you have created a SAS key. The Azure Blob storage location is configured as an external data source. This requires a database scoped credential using a shared access signature that is encrypted using a master key in the user database.

              --> Optional - a MASTER KEY is not required if a DATABASE SCOPED CREDENTIAL is not required because the blob is configured for public (anonymous) access!
              CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'YourStrongPassword1';
              GO
              --> Optional - a DATABASE SCOPED CREDENTIAL is not required because the blob is configured for public (anonymous) access!
              CREATE DATABASE SCOPED CREDENTIAL MyAzureBlobStorageCredential
               WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
               SECRET = '******srt=sco&sp=rwac&se=2017-02-01T00:55:34Z&st=2016-12-29T16:55:34Z***************';

               -- NOTE: Make sure that you don't have a leading ? in SAS token, and
               -- that you have at least read permission on the object that should be loaded srt=o&sp=r, and
               -- that expiration period is valid (all dates are in UTC time)

              CREATE EXTERNAL DATA SOURCE MyAzureBlobStorage
              WITH ( TYPE = BLOB_STORAGE,
                        LOCATION = 'https://****************.blob.core.windows.net/invoices'
                        , CREDENTIAL= MyAzureBlobStorageCredential --> CREDENTIAL is not required if a blob is configured for public (anonymous) access!
              );

              BULK INSERT Sales.Invoices
              FROM 'inv-2017-12-08.csv'
              WITH (DATA_SOURCE = 'MyAzureBlobStorage');

Important

Azure SQL Database only supports reading from Azure Blob Storage.

G. Importing data from a file in Azure blob storage and specifying an error file

The following example shows how to load data from a csv file in an Azure blob storage location, which has been configured as an external data source, and also how to specify an error file. This requires a database scoped credential using a shared access signature. Note that if running on Azure SQL Database, the ERRORFILE option should be accompanied by ERRORFILE_DATA_SOURCE, otherwise the import might fail with a permissions error. The file specified in ERRORFILE should not exist in the container.

              BULK INSERT Sales.Invoices
              FROM 'inv-2017-12-08.csv'
              WITH (
                       DATA_SOURCE = 'MyAzureInvoices'
                       , FORMAT = 'CSV'
                       , ERRORFILE = 'MyErrorFile'
                       , ERRORFILE_DATA_SOURCE = 'MyAzureInvoices');

For complete BULK INSERT examples including configuring the credential and external data source, see Examples of Bulk Access to Data in Azure Blob Storage.

Additional Examples

Other BULK INSERT examples are provided in the following articles:

  • Examples of Bulk Import and Export of XML Documents (SQL Server)
  • Keep Identity Values When Bulk Importing Data (SQL Server)
  • Keep Nulls or Use Default Values During Bulk Import (SQL Server)
  • Specify Field and Row Terminators (SQL Server)
  • Use a Format File to Bulk Import Data (SQL Server)
  • Use Character Format to Import or Export Data (SQL Server)
  • Use Native Format to Import or Export Data (SQL Server)
  • Use Unicode Character Format to Import or Export Data (SQL Server)
  • Use Unicode Native Format to Import or Export Data (SQL Server)
  • Use a Format File to Skip a Table Column (SQL Server)
  • Use a Format File to Map Table Columns to Data-File Fields (SQL Server)

See Also

  • Bulk Import and Export of Data (SQL Server)
  • bcp Utility
  • Format Files for Importing or Exporting Data (SQL Server)
  • INSERT (Transact-SQL)
  • OPENROWSET (Transact-SQL)
  • Prepare Data for Bulk Export or Import (SQL Server)
  • sp_tableoption (Transact-SQL)