
While working on an internal project, the Nimbus Intelligence Data Warehouse, I was mainly responsible for cost-optimization settings in Snowflake so that my team and I wouldn't exceed our budget. After several days of research, I identified seven key actions that I implemented on the project and that could cut your Snowflake bill by as much as 30 percent if you apply them, too.


šŸ—ŗļø Visual first: how does every piece fit together?

graph TD
  %% Observability
  subgraph Observability
    A[WAREHOUSE_EVENTS_HISTORY] -->|actual usage| B[QUERY_ATTRIBUTION_HISTORY]
  end

  %% Governance
  subgraph Governance
    C[Tags & Budgets]
    D[Resource Monitors]
  end

  %% Automation
  E[Triggered Tasks] --> F[Auto-downscale]

  %% Cross-links
  A --> D
  B --> C
  C -->|Alerts / Webhooks| Fin(FinOps)

Paste this snippet into mermaid.live or export a PNG for your cost-committee deck.
ACCOUNT_USAGE tables (blue boxes) are the radar; monitors and budgets (red) are the sirens; triggered tasks fire the corrective actions.

1ļøāƒ£ Right-size and auto-suspend your warehouses

  • Aim for the "just-enough" size: start at S and scale up only when WAREHOUSE_LOAD_HISTORY shows sustained load above 80 %.
  • Enable AUTO_SUSPEND = 60 and AUTO_RESUME = TRUE.
  • Inspect WAREHOUSE_EVENTS_HISTORY to hunt for ghost resumes.
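
A single statement covers the first two bullets (a minimal sketch; etl_wh is an illustrative warehouse name, the same one tip 4 tags):

ALTER WAREHOUSE etl_wh SET
  WAREHOUSE_SIZE = 'SMALL'
  AUTO_SUSPEND = 60      -- seconds idle before the warehouse suspends
  AUTO_RESUME = TRUE;    -- wake automatically on the next query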

2ļøāƒ£ Resource Monitors—cut the card before it melts

Create one monitor per environment and set thresholds:

CREATE RESOURCE MONITOR rm_dev WITH CREDIT_QUOTA = 1000
  TRIGGERS ON 80 PERCENT DO NOTIFY
           ON 100 PERCENT DO SUSPEND;

-- A monitor does nothing until it is attached (dev_wh is illustrative):
ALTER WAREHOUSE dev_wh SET RESOURCE_MONITOR = rm_dev;

Attach it to every warehouse in the environment and they pause before the budget is blown.

3ļøāƒ£ Set Budgets and keep FinOps in the loop

Budgets provide monthly limits and predictive alerts (webhook or email). Use them for serverless spend (Snowpipe, Tasks) that monitors don't cover.
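
If the Budgets feature is enabled on your account, the setup looks roughly like this (a sketch using the SNOWFLAKE.CORE.BUDGET class; nimbus_budget is an illustrative name):

CREATE SNOWFLAKE.CORE.BUDGET nimbus_budget();

-- Monthly spending limit, in credits
CALL nimbus_budget!SET_SPENDING_LIMIT(1000);

-- Put a warehouse under the budget's watch
CALL nimbus_budget!ADD_RESOURCE(
  SELECT SYSTEM$REFERENCE('WAREHOUSE', 'etl_wh', 'SESSION', 'APPLYBUDGET'));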

4ļøāƒ£ Attribute spend with Tags + Query Attribution

Tag databases, schemas and warehouses:

CREATE TAG IF NOT EXISTS cost_center;
ALTER WAREHOUSE etl_wh SET TAG cost_center = 'marketing';

Then query QUERY_ATTRIBUTION_HISTORY to know who spent how much, and why.
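
For example, a query along these lines (a sketch against the ACCOUNT_USAGE view) ranks users by attributed compute over the last 30 days:

SELECT user_name,
       SUM(credits_attributed_compute) AS credits
FROM snowflake.account_usage.query_attribution_history
WHERE start_time >= DATEADD(day, -30, CURRENT_TIMESTAMP())
GROUP BY user_name
ORDER BY credits DESC;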

5ļøāƒ£ Run only when data is present with Triggered Tasks

You used to fire a cron every 5 min; now trigger when a micro-batch arrives:

CREATE OR REPLACE TASK t_load_orders
  WAREHOUSE = ingest_wh
  -- SCHEDULE = 'USING CRON */5 * * * * UTC'  -- "old school": poll even when idle
  WHEN SYSTEM$STREAM_HAS_DATA('stg_orders_stream')
AS
  CALL load_orders();

ALTER TASK t_load_orders RESUME;  -- tasks are created suspended

With Triggered Tasks, the WHEN ... clause replaces the cron: zero idle time, zero wasted credits.

6ļøāƒ£ Shamelessly exploit Zero-Copy Cloning

Clone 10 TB databases in seconds without paying for duplicate storage; you're charged only for blocks that change afterwards. Perfect for dev/QA. Drop stale clones so their diverged blocks stop accruing storage costs.
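
In practice it's a single statement (prod_db and qa_clone are illustrative names):

CREATE DATABASE qa_clone CLONE prod_db;

-- ...run your tests, then clean up:
DROP DATABASE qa_clone;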

7ļøāƒ£ Put a FinOps bot in production

Combine:

  • A stream on WAREHOUSE_EVENTS_HISTORY
  • A triggered task that downsizes or suspends
  • A Slack webhook + Jira ticket

Turn cost post-mortems into preventive actions with zero human intervention.
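
A minimal sketch of the wiring (all object names are hypothetical; since ACCOUNT_USAGE views don't support streams directly, the events are first landed in a local table by a copy job):

-- Local landing table with the same shape as the source view
CREATE TABLE IF NOT EXISTS finops.events_raw AS
SELECT * FROM snowflake.account_usage.warehouse_events_history
WHERE 1 = 0;

CREATE OR REPLACE STREAM finops.events_stream ON TABLE finops.events_raw;

-- Fires only when new events land; zero idle compute
CREATE OR REPLACE TASK finops.t_downscale
  WAREHOUSE = finops_wh
  WHEN SYSTEM$STREAM_HAS_DATA('finops.events_stream')
AS
  CALL finops.downscale_and_alert();  -- hypothetical proc: suspends or downsizes,
                                      -- posts to Slack, opens a Jira ticket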


āœ… Checklist for tomorrow

  1. Review warehouse sizes and AUTO_SUSPEND.
  2. Create Resource Monitors per environment.
  3. Enable Budgets at account level + one per project.
  4. Tag critical objects and start querying QUERY_ATTRIBUTION_HISTORY.
  5. Migrate cron tasks to Triggered Tasks.
  6. Document a Zero-Copy Cloning policy (14-day TTL).
  7. Fire a Slack alert when usage > 80 % of budget.

šŸ’¬ Your turn

Which trick saved you the most Snowflake credits?
