# Snowflake (S3 Staging)
Send processed telemetry data to Snowflake using Amazon S3 as the staging location.
## Synopsis
The Snowflake S3 target stages telemetry files in Amazon S3 and then executes `COPY INTO` commands on Snowflake to load the data into tables.
## Schema

```yaml
targets:
  - name: <string>
    type: amazonsnowflake
    properties:
      account: <string>
      username: <string>
      password: <string>
      database: <string>
      schema: <string>
      warehouse: <string>
      role: <string>
      staging_bucket: <string>
      staging_prefix: <string>
      region: <string>
      key: <string>
      secret: <string>
      session: <string>
      table: <string>
      schema: <string>
      name: <string>
      format: <string>
      compression: <string>
      extension: <string>
      tables: <array>
      batch_size: <integer>
      max_size: <integer>
      timeout: <integer>
      part_size: <integer>
      field_format: <string>
      debug:
        status: <boolean>
        dont_send_logs: <boolean>
```
## Configuration

### Base Target Fields
| Field | Type | Required | Description |
|---|---|---|---|
| `name` | string | Y | Unique identifier for this target |
| `description` | string | N | Human-readable description |
| `type` | string | Y | Must be `amazonsnowflake` |
| `pipelines` | array | N | Pipeline names to apply before sending |
| `status` | boolean | N | Enable (`true`) or disable (`false`) this target |
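A minimal sketch of the base fields. The target name `snowflake_telemetry` and the pipeline name `normalize_logs` are hypothetical, used only for illustration:

```yaml
targets:
  - name: snowflake_telemetry
    description: "Telemetry to Snowflake via S3 staging"
    type: amazonsnowflake
    pipelines:
      - normalize_logs
    status: true
```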
### Snowflake Connection
| Field | Type | Required | Description |
|---|---|---|---|
| `account` | string | Y | Snowflake account identifier (e.g., `abc123.us-east-1`) |
| `username` | string | Y | Snowflake username |
| `password` | string | Y | Snowflake password |
| `database` | string | Y | Snowflake database name |
| `schema` | string | N | Snowflake schema name. Default: `PUBLIC` |
| `warehouse` | string | N | Snowflake virtual warehouse name |
| `role` | string | N | Snowflake role name |
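An illustrative connection block for the hypothetical `snowflake_telemetry` target; the account identifier, credentials, and object names are placeholders, and the optional `warehouse` and `role` fields can be omitted:

```yaml
targets:
  - name: snowflake_telemetry
    type: amazonsnowflake
    properties:
      account: "abc123.us-east-1"
      username: "<username>"
      password: "<password>"
      database: "TELEMETRY"
      schema: "PUBLIC"
      warehouse: "LOAD_WH"
      role: "LOADER_ROLE"
```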
### S3 Staging Configuration
| Field | Type | Required | Description |
|---|---|---|---|
| `staging_bucket` | string | Y | S3 bucket name for staging files |
| `staging_prefix` | string | N | S3 prefix path. Default: `snowflake-staging/` |
| `region` | string | Y | AWS region for S3 bucket |
| `key` | string | N | AWS access key ID (uses default credentials chain if omitted) |
| `secret` | string | N | AWS secret access key |
| `session` | string | N | AWS session token for temporary credentials |
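A sketch of the staging settings with a hypothetical bucket name and placeholder credentials; as noted above, omitting `key`, `secret`, and `session` falls back to the default AWS credentials chain:

```yaml
targets:
  - name: snowflake_telemetry
    type: amazonsnowflake
    properties:
      staging_bucket: "my-telemetry-staging"
      staging_prefix: "snowflake-staging/"
      region: "us-east-1"
      key: "<aws-access-key-id>"        # omit key/secret/session to use the default credentials chain
      secret: "<aws-secret-access-key>"
      session: "<aws-session-token>"    # only needed for temporary credentials
```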
### Table Configuration
| Field | Type | Required | Description |
|---|---|---|---|
| `table` | string | Y* | Catch-all table name for all events |
| `schema` | string | Y* | Avro/Parquet schema definition |
| `name` | string | Y* | File naming template. Default: `vmetric.{{.Timestamp}}.{{.Extension}}` |
| `format` | string | N | File format (`csv`, `json`, `avro`, `orc`, `parquet`, `xml`). Default: `parquet` |
| `compression` | string | N | Compression algorithm |
| `extension` | string | N | File extension override |
| `tables` | array | N | Multiple table configurations (see below) |
| `tables.table` | string | Y | Target table name |
| `tables.schema` | string | Y* | Avro/Parquet schema definition for this table |
| `tables.name` | string | Y | File naming template for this table |
| `tables.format` | string | N | File format for this table |
| `tables.compression` | string | N | Compression algorithm for this table |
| `tables.extension` | string | N | File extension override for this table |

\* At least one of `table` (catch-all) or `tables` (multiple) must be configured. For Avro/Parquet formats, `schema` is required.
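An illustrative routing setup that combines a catch-all table with per-table overrides; the table names, compression value, and non-default naming templates are hypothetical:

```yaml
targets:
  - name: snowflake_telemetry
    type: amazonsnowflake
    properties:
      # Catch-all table for events not routed to a specific table
      table: "EVENTS"
      format: "json"
      name: "vmetric.{{.Timestamp}}.{{.Extension}}"
      # Per-table routing with individual file settings
      tables:
        - table: "WINDOWS_EVENTS"
          format: "json"
          name: "windows.{{.Timestamp}}.{{.Extension}}"
        - table: "SYSLOG_EVENTS"
          format: "csv"
          compression: "gzip"       # illustrative; check the supported algorithms
          extension: "csv.gz"
```

With `avro` or `parquet` formats, add the corresponding `schema` definition as noted in the footnote above.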
### Batch Configuration
| Field | Type | Required | Description |
|---|---|---|---|
| `batch_size` | integer | N | Maximum events per file before flush |
| `max_size` | integer | N | Maximum file size in bytes before flush |
| `timeout` | integer | N | `COPY INTO` command timeout in seconds. Default: 300 |
| `part_size` | integer | N | S3 multipart upload part size in MB |
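A sketch of batch tuning with illustrative values; suitable settings depend on event volume and warehouse sizing:

```yaml
targets:
  - name: snowflake_telemetry
    type: amazonsnowflake
    properties:
      batch_size: 100000     # flush after 100,000 events
      max_size: 134217728    # flush after 128 MB
      timeout: 600           # allow COPY INTO up to 10 minutes
      part_size: 16          # 16 MB multipart upload parts
```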