Cost Governance of the Snowflake Connector for PostgreSQL

Note

The Snowflake Connector for PostgreSQL is subject to the Connector Terms.

This topic provides best practices for cost governance and finding the optimal warehouse size for the Snowflake Connector for PostgreSQL.

Measuring Cost of the Connector

If the connector has a separate account used only for data ingestion and storage, and the account shows no other activity (such as users running queries on the ingested data), you can read the overall cost at the account level. To learn more, refer to Exploring Overall Cost.
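
As a sketch, a dedicated account's daily credit consumption can be broken down by service type with a query like the following against the ACCOUNT_USAGE schema. The 30-day window is an arbitrary example:

  SELECT service_type,
         usage_date,
         SUM(credits_used) AS credits_used
  FROM snowflake.account_usage.metering_daily_history
  WHERE usage_date >= DATEADD(day, -30, CURRENT_DATE())  -- last 30 days; adjust as needed
  GROUP BY service_type, usage_date
  ORDER BY usage_date, service_type;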

If the account is not dedicated only to the connector, or if you need to investigate the costs further, analyze the charged costs for the following three components separately:

  • Compute: credits consumed by the warehouses that the connector uses.

  • Storage: data stored in the connector databases, including Fail-safe storage.

  • Data transfer: data moved into Snowflake through Snowpipe Streaming.

For an introduction to these three components of cost, refer to Understanding Overall Cost.

General Recommendations

To measure the cost generated by the connector, we recommend that you create a separate account used solely by the connector. With a dedicated account, you can attribute all compute, storage, and data transfer costs to the connector.

If you cannot use a separate account for the connector, consider the following:

  • Create a separate database for storing ingested data so that storage costs are easier to track.

  • Allocate a warehouse only for the connector to get the exact compute cost.

  • Use object tags on the databases and the warehouse to build custom cost reports, as shown in the sketch after this list.
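
A minimal sketch of the tagging approach follows. The tag and object names (cost_center, pg_connector_data, pg_connector_wh) are illustrative assumptions, not names created by the connector:

  -- Create a tag and attach it to the connector's objects
  -- (assumes a current database and schema are set; all names are illustrative).
  CREATE TAG cost_center;

  ALTER DATABASE pg_connector_data SET TAG cost_center = 'pg_connector';
  ALTER WAREHOUSE pg_connector_wh SET TAG cost_center = 'pg_connector';

  -- Later, list all objects carrying the tag to drive a custom cost report.
  SELECT object_database, object_name, domain, tag_value
  FROM snowflake.account_usage.tag_references
  WHERE tag_name = 'COST_CENTER';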

Compute Cost

We recommend that you use a pair of dedicated operations and compute warehouses only for the connector. This configuration allows you to create resource monitors on these two warehouses. You can use the monitors to send email alerts and suspend both warehouses, stopping the connector when the set credit quota is exceeded.
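
For example, a resource monitor along the following lines notifies you at 80% of the quota and suspends both warehouses at 100%. The monitor name, warehouse names, and the 100-credit quota are illustrative assumptions:

  -- Monitor with an example quota of 100 credits per month.
  CREATE RESOURCE MONITOR pg_connector_monitor
    WITH CREDIT_QUOTA = 100
         FREQUENCY = MONTHLY
         START_TIMESTAMP = IMMEDIATELY
    TRIGGERS ON 80 PERCENT DO NOTIFY          -- email alert only
             ON 100 PERCENT DO SUSPEND;       -- finish running work, then suspend

  -- Attach the monitor to both connector warehouses (names are illustrative).
  ALTER WAREHOUSE pg_connector_ops_wh SET RESOURCE_MONITOR = pg_connector_monitor;
  ALTER WAREHOUSE pg_connector_compute_wh SET RESOURCE_MONITOR = pg_connector_monitor;

Note that DO SUSPEND lets in-flight statements finish before suspending; DO SUSPEND_IMMEDIATE cancels them instead.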

Note

In configurations that ingest large volumes of data, setting the credit quota too low may prevent the connector from ingesting all data.

For information on how to check credits consumed by the warehouse, refer to Exploring Compute Cost. You can also assign object tags to the warehouse and use the tags to create cost reports.
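
As a sketch, credits consumed by the dedicated warehouses can be read from the WAREHOUSE_METERING_HISTORY view. The warehouse names are illustrative assumptions:

  SELECT warehouse_name,
         DATE_TRUNC('day', start_time) AS usage_day,
         SUM(credits_used) AS credits_used
  FROM snowflake.account_usage.warehouse_metering_history
  WHERE warehouse_name IN ('PG_CONNECTOR_OPS_WH', 'PG_CONNECTOR_COMPUTE_WH')
  GROUP BY warehouse_name, usage_day
  ORDER BY usage_day, warehouse_name;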

Storage Cost

The PostgreSQL 6.0.0 connector stores data in:

  • The connector database, which is created when the connector is installed and which holds the connector's internal state.

  • One or more destination databases, which are created when data sources are configured and where the ingested data is stored.

Data storage is also used by the Snowflake Fail-safe feature. The amount of data stored in Fail-safe depends on table updates done by the connector. Hence, the amount of data increases if table rows ingested from a source database are updated frequently or a whole table is reloaded. Typically, seven to ten days after the connector is set up, the amount of Fail-safe data stabilizes (assuming that no reloads are performed and that the flow of ingested data is at a steady rate).

If you want to check storage usage in Snowsight, we recommend using separate databases for the ingested data. This way, you can filter the storage usage graphs by object, which shows the usage of individual databases. You can also view storage usage by querying the DATABASE_STORAGE_USAGE_HISTORY view and filtering by the databases that the connector uses.
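
A sketch of such a query follows; the database name is an illustrative assumption. The AVERAGE_FAILSAFE_BYTES column also makes the Fail-safe usage discussed above visible:

  SELECT usage_date,
         database_name,
         average_database_bytes / POWER(1024, 3) AS database_gb,
         average_failsafe_bytes / POWER(1024, 3) AS failsafe_gb
  FROM snowflake.account_usage.database_storage_usage_history
  WHERE database_name = 'PG_CONNECTOR_DATA'  -- illustrative name
  ORDER BY usage_date DESC;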

If a database also contains schemas not related to the connector, you can query the storage usage of the specific schema that holds the data ingested by the connector. You can get this information from the TABLE_STORAGE_METRICS view by filtering by database and schema name and aggregating the storage columns.
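
For example (the database and schema names are illustrative assumptions):

  SELECT SUM(active_bytes) / POWER(1024, 3) AS active_gb,
         SUM(time_travel_bytes) / POWER(1024, 3) AS time_travel_gb,
         SUM(failsafe_bytes) / POWER(1024, 3) AS failsafe_gb
  FROM snowflake.account_usage.table_storage_metrics
  WHERE table_catalog = 'PG_CONNECTOR_DATA'  -- database name (illustrative)
    AND table_schema = 'INGESTED_DATA';      -- schema with ingested data (illustrative)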

Data Transfer Cost

The connector uses the Snowflake Snowpipe Streaming feature to transfer data from a source database to a destination database in your Snowflake account.

For information on how to check credits consumed by Snowpipe Streaming, refer to Snowpipe Streaming costs.
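
As a sketch, these credits can be read from the METERING_HISTORY view; the 'SNOWPIPE_STREAMING' service type value is an assumption to verify against the Snowpipe Streaming costs documentation:

  SELECT DATE_TRUNC('day', start_time) AS usage_day,
         SUM(credits_used) AS credits_used
  FROM snowflake.account_usage.metering_history
  WHERE service_type = 'SNOWPIPE_STREAMING'  -- assumed service type value
  GROUP BY usage_day
  ORDER BY usage_day;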

Determining the Optimal Warehouse Size for the Connector Instance

A major benefit of the connector is that the warehouse size can be adjusted to the data volume. The connector typically requires an X-Small operations warehouse and an X-Small compute warehouse, and it does not take advantage of larger warehouses during data ingestion.

To find the optimal warehouse size for the connector, consider the factors that affect the connector's performance, such as the size of the source databases, the volume of changes, and the number of enabled data sources and tables.

We recommend that you define a set of measurable expectations, such as replication lag, and pick the smallest warehouse size that meets them. If you just want to try out the connector with a single table enabled for ingestion, an X-Small warehouse should be sufficient.

To determine if you can downsize the warehouse, see Monitoring warehouse load.
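
A sketch of such a check follows: if the warehouse shows consistently low concurrency and no queuing, a smaller size is likely sufficient. The warehouse name is an illustrative assumption:

  -- Inspect average warehouse load over the past week.
  SELECT DATE_TRUNC('hour', start_time) AS load_hour,
         AVG(avg_running) AS avg_running,
         AVG(avg_queued_load) AS avg_queued
  FROM snowflake.account_usage.warehouse_load_history
  WHERE warehouse_name = 'PG_CONNECTOR_COMPUTE_WH'
    AND start_time >= DATEADD(day, -7, CURRENT_TIMESTAMP())
  GROUP BY load_hour
  ORDER BY load_hour;

  -- Downsize if the load stays low and ingestion still meets expectations.
  ALTER WAREHOUSE pg_connector_compute_wh SET WAREHOUSE_SIZE = XSMALL;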
