COPY INTO Snowflake from S3 Parquet

COPY INTO is an easy-to-use and highly configurable command: it gives you the option to specify a subset of files to copy based on a prefix, pass an explicit list of files to copy, validate files before loading, and purge files after loading. Before starting, you should have a basic awareness of role-based access control and object ownership in Snowflake, including the object hierarchy and how these concepts are implemented. (Loading through the web interface is also possible, but limited.)

COPY commands contain complex syntax and sensitive information, such as credentials. They are often stored in scripts or worksheets, which could lead to sensitive information being inadvertently exposed. For this reason, prefer a storage integration, which delegates authentication responsibility for external cloud storage to a Snowflake identity and access management entity: credentials are entered once and securely stored, minimizing the potential for exposure. You can load files from your user stage or from a named external stage that you created previously using the CREATE STAGE command; for external stages only (Amazon S3, Google Cloud Storage, or Microsoft Azure), the file path is set by concatenating the URL in the stage definition with the path given in the COPY statement.

Snowflake tracks load metadata, so you cannot COPY the same file again in the next 64 days unless you specify FORCE = TRUE. We also recommend that you list staged files periodically (using LIST) and manually remove successfully loaded files, if any exist.

For semi-structured data such as Parquet, the MATCH_BY_COLUMN_NAME copy option loads data into columns in the target table that match corresponding columns represented in the data, even if the column values are cast to arrays (using the TO_ARRAY function). Alternatively, everything can land in a single VARIANT column; string, number, and Boolean values can all be loaded into a variant column.

Depending on the file format type specified (FILE_FORMAT = ( TYPE = ... )), you can include one or more format-specific options, for example:

- FIELD_OPTIONALLY_ENCLOSED_BY: the value can be NONE, a single quote character ('), or a double quote character ("). To use the single quote character, use the octal or hex representation (0x27) or the double single-quoted escape (''). Note that any space within the quotes is preserved.
- RECORD_DELIMITER and FIELD_DELIMITER: for text formats (CSV, JSON, etc.) these determine the rows and fields of the data to load; the default record delimiter is the new line character.
- TRUNCATECOLUMNS: alternative syntax for ENFORCE_LENGTH with reverse logic (for compatibility with other systems).
- ESCAPE: if this option is set, it overrides the escape character set for ESCAPE_UNENCLOSED_FIELD.
- REPLACE_INVALID_CHARACTERS: replaces invalid UTF-8 characters with the Unicode replacement character.

You can also specify one or more copy options for the loaded data, such as ON_ERROR and FILES (a list of one or more file names, separated by commas, to be loaded). Note that some errors will stop the COPY operation even if you set the ON_ERROR option to continue or skip the file. If a Column-level Security masking policy is set on a column, the masking policy is applied to the data, so queried or unloaded results reflect the masked values for unauthorized roles.

Two unloading notes up front, since many options mirror each other: to avoid data duplication in the target stage, we recommend setting the INCLUDE_QUERY_ID = TRUE copy option instead of OVERWRITE = TRUE and removing all data files in the target stage and path (or using a different path for each unload operation) between unload jobs. The UUID that appears in unloaded file names is the query ID of the COPY statement used to unload the data files. On S3, AWS_SSE_S3 server-side encryption requires no additional encryption settings.
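To make the moving parts concrete, here is a minimal sketch that ties them together: a storage integration, an external stage over an S3 path, and a COPY INTO that matches Parquet columns by name. The object names, bucket, and role ARN are illustrative placeholders, not values from this article:

    -- Delegate S3 authentication to a Snowflake-managed entity (all names hypothetical).
    CREATE STORAGE INTEGRATION my_s3_int
      TYPE = EXTERNAL_STAGE
      STORAGE_PROVIDER = 'S3'
      ENABLED = TRUE
      STORAGE_AWS_ROLE_ARN = 'arn:aws:iam::000000000000:role/my_snowflake_role'
      STORAGE_ALLOWED_LOCATIONS = ('s3://my-bucket/parquet/');

    -- The stage reuses the integration, so no credentials appear in COPY statements.
    CREATE STAGE my_parquet_stage
      URL = 's3://my-bucket/parquet/'
      STORAGE_INTEGRATION = my_s3_int
      FILE_FORMAT = (TYPE = PARQUET);

    -- Load Parquet columns into same-named columns of the target table.
    COPY INTO my_table
      FROM @my_parquet_stage
      MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
      ON_ERROR = SKIP_FILE;

Because the integration and stage are created once by an administrator, day-to-day COPY statements stay free of secrets.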
If you must use permanent credentials, use external stages, for which credentials are entered once and securely stored; avoid COPY statements that specify the cloud storage URL and access settings directly in the statement. For details, see Additional Cloud Provider Parameters (in this topic).

A typical Parquet walkthrough runs as follows. Download a Snowflake-provided Parquet data file (save the link/file to your local file system), then upload the file to Amazon S3 using AWS utilities. Once you have uploaded the Parquet file to the stage, use the COPY INTO <table> command to load it into the Snowflake database table. When using a query as the source for the COPY command (for example, when loading a subset of data columns or reordering data columns), $1 in the SELECT query refers to the single column where the Parquet data is stored. Selecting data from files this way is supported only by named stages (internal or external) and user stages; the named file format determines the format type, and additional parameters might be required.

To reload data that has already been loaded, you must either specify FORCE = TRUE or modify the file and stage it again, which changes its checksum. (A modified file counts as new, so needing FORCE after genuinely changing a file would be strange; that shouldn't be the case.)

You can also run the COPY first in VALIDATION_MODE: the COPY command tests the files for errors but does not load them. VALIDATION_MODE does not support COPY statements that transform data during a load.

Smaller points worth knowing:

- SKIP_BYTE_ORDER_MARK is a Boolean that specifies whether to skip any BOM (byte order mark) present in an input file.
- If DATE_FORMAT is not specified or is AUTO, the value of the DATE_INPUT_FORMAT session parameter is used.
- A delimiter is limited to a maximum of 20 characters. You can use the ESCAPE character to interpret instances of the FIELD_DELIMITER or RECORD_DELIMITER characters in the data as literals, and when a field contains the enclosing character, escape it using the same character.
- To supply more than one string (for example, for NULL_IF), enclose the list of strings in parentheses and use commas to separate each value. The value cannot be a SQL variable.

The documentation's unloading examples cover the mirror-image tasks: specifying a maximum size for each unloaded file; retaining SQL NULL and empty fields in unloaded files; unloading all rows to a single data file using the SINGLE copy option; including the UUID in the names of unloaded files by setting the INCLUDE_QUERY_ID copy option to TRUE (if TRUE, a UUID is added to the names of unloaded files); and executing COPY in validation mode to return the result of a query and view the data that would be unloaded from the orderstiny table. In partitioned unloads, filenames are prefixed with data_ and include the partition column values.
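Here is a minimal sketch of the transformation path, assuming the stage from the previous example and illustrative field names inside the Parquet files:

    -- Each Parquet row arrives as one variant value ($1); extract fields and cast them.
    CREATE FILE FORMAT my_parquet_format TYPE = PARQUET;

    COPY INTO my_table (id, name, created_date)
    FROM (
      SELECT $1:id,                     -- path access into the variant
             $1:name::VARCHAR,
             $1:created_date::DATE      -- explicit cast instead of relying on AUTO
      FROM @my_parquet_stage
    )
    FILE_FORMAT = (FORMAT_NAME = 'my_parquet_format');

Note that this transformed form cannot be combined with VALIDATION_MODE, per the restriction above; validate the files with a plain COPY before adding the SELECT.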
Snowflake also provides account-level parameters that restrict unloading; for example, PREVENT_UNLOAD_TO_INTERNAL_STAGES prevents data unload operations to any internal stage, including user stages. (Related tutorials: Getting Started with Snowflake - Zero to Snowflake, and Loading JSON Data into a Relational Table.)

In the JSON-to-relational tutorial, the verification query against the target table returns output like this:

    CONTINENT     | COUNTRY | CITY
    --------------+---------+-----------------------------------------------------------------
    Europe        | France  | ["Paris", "Nice", "Marseilles", "Cannes"]
    Europe        | Greece  | ["Athens", "Piraeus", "Hania", "Heraklion", "Rethymnon", "Fira"]
    North America | Canada  | ["Toronto", "Vancouver", "St. John's", "Saint John", "Montreal",
                  |         |  "Halifax", "Winnipeg", "Calgary", "Saskatoon", "Ottawa",
                  |         |  "Yellowknife"]

Step 6: Remove the Successfully Copied Data Files. The load metadata can be used to monitor and manage the loading process, including deleting files after upload completes; you can also monitor the status of each COPY INTO <table> command on the History page of the classic web interface.

Before loading your data, you can validate that the data in the uploaded files will load correctly. The COPY command can skip the first line (a header) in the data files, but a file containing records of varying length returns an error regardless of the value specified for that option.

Loading Parquet files into Snowflake tables can be done in two ways: 1. load the raw data into a single VARIANT column, or 2. split it into separate relational columns, either with a transformation query (as above) or with MATCH_BY_COLUMN_NAME, the string option that specifies whether to load semi-structured data into columns in the target table that match corresponding columns represented in the data.

Operational notes:

- Step 1 of bulk loading assumes the data files have already been staged in an S3 bucket, and not in an archival storage class: files in, for example, the Amazon S3 Glacier Flexible Retrieval or Glacier Deep Archive storage class, or Microsoft Azure Archive Storage, must be restored before they can be read.
- The compression algorithm is detected automatically, except for Brotli-compressed files, which cannot currently be detected automatically. RAW_DEFLATE means files are compressed using raw Deflate (without header, RFC 1951).
- If a MASTER_KEY value is provided without an encryption type, Snowflake assumes TYPE = AWS_CSE (client-side encryption). If temporary credentials expire, you must generate a new set of valid temporary credentials.
- Load rate scales with warehouse size: a 3X-large warehouse, which is twice the scale of a 2X-large, loaded the same CSV data at a rate of 28 TB/hour; conversely, an X-large loaded at ~7 TB/hour.
- If an unload job is retried, any new files written to the stage have the retried query ID as the UUID.
- A partitioned unload splits the table rows based on the partition expression and determines the number of files to create based on the amount of data and the parallelism available. Currently, nested data in VARIANT columns cannot be unloaded successfully in Parquet format.

When unloading, you specify the source of the data to be unloaded, which can be either a table or a query, and the internal or external location where the data files are unloaded (for example, a named internal stage). The record delimiter for unloaded files can be one or more singlebyte or multibyte characters. Because COPY commands contain complex syntax and sensitive information, such as credentials, we highly recommend the use of storage integrations. And to load the data into Snowflake in the first place, you need to set up the appropriate permissions and Snowflake resources, plus an ENCODING option chosen as the character encoding for your data files to ensure each character is interpreted correctly. For more details, see Format Type Options (in this topic).
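For the unload direction, here is a sketch of a partitioned Parquet unload with query-ID-stamped file names; the stage, path, and region column are again illustrative:

    -- One file tree per region; each file name embeds the COPY statement's query ID.
    COPY INTO @my_unload_stage/sales/
    FROM my_table
    PARTITION BY ('region=' || region)
    FILE_FORMAT = (TYPE = PARQUET)
    INCLUDE_QUERY_ID = TRUE
    MAX_FILE_SIZE = 32000000;   -- upper bound, in bytes, per unloaded file

Partitioning on a low-cardinality, non-sensitive column keeps the file tree manageable, which is why the guidance later in this article favors dates or timestamps over raw string or integer values.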
NULL handling works in both directions. When unloading, Snowflake converts SQL NULL values to the first value in the NULL_IF list; when loading, Snowflake replaces those strings in the data load source with SQL NULL, and the option can include empty strings. On encodings: ISO-8859-15 is identical to ISO-8859-1 except for 8 characters, including the Euro currency symbol. After a load completes, execute a query against the target table to verify the data was copied.

COPY statements that reference a stage can fail when the object list includes directory blobs, essentially paths that end in a forward slash character (/). When an unload operation writes multiple files to a stage, Snowflake appends a suffix that ensures each file name is unique across parallel execution threads (e.g. data_0_1_0). Two more operational caveats: starting a suspended warehouse can take a few minutes, and if the purge operation fails for any reason, no error is returned currently.

More option details:

- FILE_EXTENSION: a string that specifies the extension for files unloaded to a stage. It accepts common escape sequences or singlebyte or multibyte characters, and the value is ignored for data loading.
- If additional non-matching columns are present in the target table, the COPY operation inserts NULL values into these columns.
- If the PARTITION BY expression evaluates to NULL, the partition path in the output filename is _NULL_. Certain copy option values are not supported in combination with PARTITION BY, and including an ORDER BY clause in the SQL statement in combination with PARTITION BY does not guarantee that the specified order is preserved.
- ESCAPE is a singlebyte character used as the escape character for enclosed field values only.
- Be deliberate with ON_ERROR: skipping large files due to a small number of errors could result in delays and wasted credits.
- A column list specifies an explicit set of fields/columns (separated by commas) to load from the staged data files.

For a column to match under MATCH_BY_COLUMN_NAME, the column represented in the data must have the exact same name as the column in the table; the copy option is supported for a specific set of data formats. For details about data loading transformations, including examples, see the usage notes in Transforming Data During a Load. You can also load files from a table stage into the table using pattern matching; for example, to only load uncompressed CSV files whose names include a common string.

Temporary S3 credentials are generated by AWS Security Token Service (STS) and consist of three components; all three are required to access a private bucket. You should be familiar with basic concepts of cloud storage solutions such as AWS S3, Azure ADLS Gen2, or GCP buckets, and understand how they integrate with Snowflake as external stages, with client-side or server-side encryption as needed. For an example, see Partitioning Unloaded Rows to Parquet Files (in this topic). To pull results back down, use the GET statement to download files from the internal stage to your local file system.

Finally: CSV is the default file format type; Snowflake uses the COMPRESSION option to detect how already-compressed data files were compressed so that they can be extracted for loading; throughput depends on the amount of data and number of parallel operations, distributed among the compute resources in the warehouse; a file counts as unchanged for load-metadata purposes if it has the same checksum as when it was first loaded; and it is safer to partition unloaded data on common data types such as dates or timestamps rather than potentially sensitive string or integer values.
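A short sketch of the pattern-matching load plus post-load cleanup, and the GET download mentioned above (names, pattern, and local path are illustrative):

    -- Load only matching Parquet files under a prefix; purge each file once loaded.
    COPY INTO my_table
    FROM @my_parquet_stage/2024/
    PATTERN = '.*sales_.*[.]parquet'
    FILE_FORMAT = (TYPE = PARQUET)
    MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
    PURGE = TRUE;   -- best effort; a failed purge currently returns no error

    -- Download unloaded files from an internal stage to the local machine (SnowSQL).
    GET @my_unload_stage/sales/ file:///tmp/sales/;

PURGE = TRUE replaces the LIST-and-remove routine for the common case, but periodic LIST checks remain useful precisely because purge failures are silent.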
Security and location parameters first: STORAGE_INTEGRATION or CREDENTIALS only applies if you are unloading directly into a private storage location (Amazon S3, Google Cloud Storage, or Microsoft Azure), and it is only necessary to include one of these two. Files can also be unloaded to a specified external location (an Azure container, for example). For encryption, AWS_SSE_KMS is server-side encryption that accepts an optional KMS_KEY_ID value; for Google Cloud Storage customer-managed keys, see the Google Cloud Platform documentation: https://cloud.google.com/storage/docs/encryption/customer-managed-keys and https://cloud.google.com/storage/docs/encryption/using-customer-managed-keys.

Data copy from S3 is done using a COPY INTO command that looks similar to a copy command used in a command prompt or any scripting language, and it can be driven programmatically as well: pip install snowflake-connector-python, and make sure you have a Snowflake user account that has USAGE permission on the stage you created earlier.

Loading behavior notes:

- If the input file contains records with fewer fields than columns in the table, the non-matching columns in the table are loaded with NULL values. The default NULL_IF value is \\N.
- The SELECT list in a transformation defines a numbered set of fields/columns in the data files you are loading from; Parquet raw data can be loaded into only one column, and if a match is found, the values in the data files are loaded into the column or columns.
- ON_ERROR specifies the action to perform if errors are encountered in a file during loading. COMPRESSION is a string (constant) that specifies the current compression algorithm for the data files to be loaded, and it must be specified when loading Brotli-compressed files. (On unload, COMPRESSION = NONE specifies that the unloaded files are not compressed.)
- PATTERN (a regular expression, or a common string used as a prefix) limits the set of files to load. Note that the regular expression is applied differently to bulk data loads versus Snowpipe data loads, and using it helps avoid unexpected behaviors when the file list for a stage includes directory blobs. Escape-style options accept common escape sequences, octal values (prefixed by \\), or hex values (prefixed by 0x or \x).
- RECORD_DELIMITER may include the carriage return character; new line is logical, such that \r\n is understood as a single new line for files on a Windows platform.
- SIZE_LIMIT caps how much data one COPY statement loads: in the documentation's example, if multiple COPY statements each set SIZE_LIMIT to 25000000 (25 MB) against the same staged file set, each would load 3 files.
- Add FORCE = TRUE to a COPY command to reload (duplicate) data from a set of staged data files that have not changed (i.e. that have the same checksum as when they were first loaded).
- Snowflake doesn't insert a separator implicitly between the path and file names; include the trailing slash yourself.
- For XML, a Boolean option specifies whether the parser disables recognition of Snowflake semi-structured data tags.
- If the source table contains 0 rows, then the COPY operation does not unload a data file. The default FILE_EXTENSION is null, meaning the file extension is determined by the format type.
- If an unload fails partway and is retried, the retry writes additional files to the stage without first removing any files that were previously written by the first attempt, which is another reason to prefer INCLUDE_QUERY_ID = TRUE.

Staged data can even drive a MERGE rather than a plain COPY, as the next sketch shows.
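The mangled fragment of SQL in the original source (bar ON foo.fooKey = bar.barKey ...) is recognizable as this pattern: querying the staged files directly as the USING side of a MERGE. A sketch, keeping the fragment's placeholder names foo, bar, fooKey, barKey, and newVal, with a hypothetical field layout:

    -- Upsert from staged Parquet instead of copying into a landing table first.
    MERGE INTO foo USING (
      SELECT $1:barKey::NUMBER  AS barKey,
             $1:newVal::VARCHAR AS newVal
      FROM @my_parquet_stage (FILE_FORMAT => 'my_parquet_format',
                              PATTERN     => '.*[.]parquet')
    ) bar
    ON foo.fooKey = bar.barKey
    WHEN MATCHED THEN UPDATE SET val = bar.newVal;

Note that querying a stage this way reads the files on every execution; unlike COPY, it keeps no load metadata, so idempotence is up to the MERGE condition.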
On escaping in more depth: an escape character invokes an alternative interpretation on subsequent characters in a character sequence. The ESCAPE option defaults to NULL, which assumes the ESCAPE_UNENCLOSED_FIELD value of \\ (its default). The escape character can also be used to escape instances of itself in the data, and you can use the ESCAPE character to interpret instances of the FIELD_OPTIONALLY_ENCLOSED_BY character in the data as literals: when a field contains that character, escape it using the same character, for example the octal or hex representation (0x27) or the double single-quoted escape ('') for a single quote. An empty string is inserted into columns of type STRING, and BINARY_FORMAT-style options only apply when loading data into binary columns in a table.

If you encounter errors while running the COPY command, you can validate the files that produced the errors after the command completes. SIZE_LIMIT, described above, is the number (> 0) that specifies the maximum size (in bytes) of data to be loaded for a given COPY statement.

On the unload side: you can optionally specify the ID for the Cloud KMS-managed key that is used to encrypt files unloaded into the bucket; when the Parquet file type is specified, the COPY INTO <location> command unloads data to a single column by default; numeric values are written by setting the smallest precision that accepts all of the values (this behavior applies only when unloading data to Parquet files); and any client-side master key you provide can only be a symmetric key. If PURGE is set to TRUE on a load, note that a best effort is made to remove successfully loaded data files. In the rare event of a machine or network failure, the unload job is retried; the UUID in each file name (identical to the query ID) tells the attempts apart, and INCLUDE_QUERY_ID = TRUE is not supported when either the SINGLE = TRUE or OVERWRITE = TRUE copy option is set.

For network isolation, you can create an Amazon S3 VPC endpoint: in the AWS console, choose Endpoints in the left navigation pane, then choose Create Endpoint, and follow the steps.

Because COPY statements embed credentials when written carelessly and are executed frequently, the tutorials stage everything instead: files can be staged using the PUT command, the scripts create the sf_tut_parquet_format file format, and the JSON variant copies the staged JSON data into the target table. In a streaming variant, to load the data inside the Snowflake table using a stream, we first need to write new Parquet files to the stage to be picked up by the stream. COPY INTO <location>, the mirror-image command, unloads data from a table (or query) into one or more files in one of the following locations: a named internal stage (or table/user stage), a named external stage, or an external location. DATE_FORMAT, finally, is the string that defines the format of date values in the data files to be loaded.
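Here is the internal-stage flow as one sketch, reusing the tutorial's sf_tut_parquet_format name; the local path, table, and stage references are placeholders:

    -- Create the Parquet file format used by the tutorial.
    CREATE FILE FORMAT sf_tut_parquet_format TYPE = PARQUET;

    -- Upload a local file into the table's own stage (run from SnowSQL).
    PUT file:///tmp/cities.parquet @%my_table;

    -- Load the staged file; with no transformation, the target needs a VARIANT column.
    COPY INTO my_table
    FROM @%my_table
    FILE_FORMAT = (FORMAT_NAME = 'sf_tut_parquet_format');

Table stages (@%table) need no CREATE STAGE of their own, which keeps the tutorial short; named stages are the better habit once multiple tables share files.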
As a result, the load operation in the internal-stage tutorial boils down to: PUT uploads the file to a Snowflake internal stage, the script creates a target table for the JSON data, and COPY moves the staged data in (the staged JSON array in that tutorial comprises three objects separated by new lines). Snowflake retains historical data for COPY INTO commands executed within the previous 14 days.

A few remaining reference items:

- TIME_FORMAT is the string that defines the format of time values in the unloaded data files; for Azure specifics, see the Microsoft Azure documentation.
- BINARY_FORMAT is a string (constant) that defines the encoding format for binary input or output.
- AZURE_SAS_TOKEN specifies the SAS (shared access signature) token for connecting to Azure and accessing the private/protected container where the files containing data are staged.
- Snowflake provides a set of parameters to further restrict data unloading operations: PREVENT_UNLOAD_TO_INLINE_URL prevents ad hoc data unload operations to external cloud storage locations (i.e. COPY INTO <location> statements that specify the cloud storage URL and access settings directly in the statement), complementing PREVENT_UNLOAD_TO_INTERNAL_STAGES above.
- An unload example that uses a named file format (myformat) with gzip compression is functionally equivalent to the same example with the options spelled out inline; only where the format is defined differs.
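Because that history is queryable, you can check recent load activity with the INFORMATION_SCHEMA table function; the table name below is a placeholder:

    -- Files loaded in the last 24 hours, with per-file status and first error.
    SELECT file_name, status, row_count, first_error_message
    FROM TABLE(INFORMATION_SCHEMA.COPY_HISTORY(
      TABLE_NAME => 'MY_TABLE',
      START_TIME => DATEADD(hour, -24, CURRENT_TIMESTAMP())
    ));

This is the programmatic counterpart to the classic web interface's History page mentioned earlier.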
