While working on an internal project, Nimbus Intelligence Data Warehouse, I was mainly responsible for cost-optimization settings in Snowflake so that my team and I wouldn't exceed our budget. After several days of research, I identified seven key actions that I implemented on the project and that could cut your Snowflake bill by as much as 30 percent if you apply them, too.
🗺️ Visual first: how does every piece fit together?
graph TD
%% Observability
subgraph Observability
A[WAREHOUSE_EVENTS_HISTORY] -->|actual usage| B[QUERY_ATTRIBUTION_HISTORY]
end
%% Governance
subgraph Governance
C[Tags & Budgets]
D[Resource Monitors]
end
%% Automation
E[Triggered Tasks] --> F[Auto-downscale]
%% Cross-links
A --> D
B --> C
C -->|Alerts / Webhooks| Fin(FinOps)
Paste this snippet into mermaid.live or export a PNG for your cost-committee deck. ACCOUNT_USAGE tables (blue boxes) are the radar; monitors and budgets (red) are the sirens; triggered tasks fire the corrective actions.
1️⃣ Right-size and auto-suspend your warehouses
- Aim for the "just-enough" size: start at S and scale only if WAREHOUSE_LOAD_HISTORY shows sustained load above 80 %.
- Enable AUTO_SUSPEND = 60 and AUTO_RESUME = TRUE.
- Inspect WAREHOUSE_EVENTS_HISTORY to hunt for ghost resumes.
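A minimal sketch of both settings (the warehouse name etl_wh is illustrative):

ALTER WAREHOUSE etl_wh SET
  WAREHOUSE_SIZE = 'SMALL'  -- start small, scale on evidence
  AUTO_SUSPEND = 60         -- suspend after 60 s of inactivity
  AUTO_RESUME = TRUE;       -- wake up on the next query

And to check whether a size bump is actually justified:

SELECT start_time, avg_running, avg_queued_load
FROM TABLE(INFORMATION_SCHEMA.WAREHOUSE_LOAD_HISTORY(
  DATE_RANGE_START => DATEADD('day', -7, CURRENT_TIMESTAMP()),
  WAREHOUSE_NAME   => 'ETL_WH'));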
2️⃣ Resource Monitors: cut the card before it melts
Create one monitor per environment and set thresholds:
CREATE RESOURCE MONITOR rm_dev WITH CREDIT_QUOTA = 1000
TRIGGERS ON 80 PERCENT DO NOTIFY
ON 100 PERCENT DO SUSPEND;
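One catch: a monitor only bites once it is attached to a warehouse (dev_wh here is illustrative):

ALTER WAREHOUSE dev_wh SET RESOURCE_MONITOR = rm_dev;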
Your warehouse pauses before the budget is blown.
3️⃣ Set Budgets and keep FinOps in the loop
Budgets provide monthly limits and predictive alerts (webhook or email). Use them for serverless spend (Snowpipe, Tasks) that monitors don't cover.
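A minimal sketch using the SNOWFLAKE.CORE.BUDGET class (the budget name and limit are illustrative; check that Budgets are available in your edition and region):

CREATE SNOWFLAKE.CORE.BUDGET marketing_budget();  -- create the budget object
CALL marketing_budget!SET_SPENDING_LIMIT(500);    -- monthly limit in credits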
4️⃣ Attribute spend with Tags + Query Attribution
Tag databases, schemas and warehouses:
ALTER WAREHOUSE etl_wh SET TAG cost_center = 'marketing';
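One prerequisite: the tag object itself must exist before you can set it:

CREATE TAG IF NOT EXISTS cost_center;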
Then query QUERY_ATTRIBUTION_HISTORY to know who spent how much, and why.
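For example, a rough per-user breakdown over the last week — a sketch assuming the credits_attributed_compute column documented for this ACCOUNT_USAGE view:

SELECT user_name,
       ROUND(SUM(credits_attributed_compute), 2) AS credits
FROM snowflake.account_usage.query_attribution_history
WHERE start_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
GROUP BY user_name
ORDER BY credits DESC;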
5️⃣ Run only when data is present with Triggered Tasks
You used to fire a cron every 5 min; now trigger when a micro-batch arrives:
CREATE OR REPLACE TASK t_load_orders
  WAREHOUSE = ingest_wh
  -- SCHEDULE = 'USING CRON 0 * * * * UTC'  -- "old school": drop this line to go triggered
  WHEN SYSTEM$STREAM_HAS_DATA('stg_orders_stream')
AS
  CALL load_orders();
With Triggered Tasks, the WHEN … clause replaces the cron: zero idle time, zero wasted credits.
6️⃣ Shamelessly exploit Zero-Copy Cloning
Clone 10 TB databases in seconds without paying for duplicate storage; you're charged only for blocks that change afterwards. Perfect for dev/QA. Drop stale clones before they quietly accumulate storage costs of their own.
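A minimal sketch (database names are illustrative):

CREATE DATABASE qa_orders CLONE prod_orders;  -- seconds, no data copied
-- ... run tests against qa_orders ...
DROP DATABASE qa_orders;                      -- reclaim before it diverges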
7️⃣ Put a FinOps bot in production
Combine:
- A stream on WAREHOUSE_EVENTS_HISTORY
- A triggered task that downsizes or suspends
- A Slack webhook + Jira ticket
Turn cost post-mortems into preventive actions with zero human intervention.
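A minimal sketch of the middle piece, assuming the events are first copied into a local table cost_events (all names hypothetical; ACCOUNT_USAGE views can't carry streams directly, so a scheduled copy job, not shown, feeds the table):

CREATE STREAM cost_events_s ON TABLE cost_events;

CREATE OR REPLACE TASK t_finops_bot
  WAREHOUSE = ops_wh
  WHEN SYSTEM$STREAM_HAS_DATA('cost_events_s')
AS
  ALTER WAREHOUSE etl_wh SET WAREHOUSE_SIZE = 'SMALL';  -- downsize; suspend if you prefer

In practice you would call a stored procedure here that consumes the stream, decides between downsizing and suspending, and posts to Slack/Jira.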
✅ Checklist for tomorrow
- Review warehouse sizes and AUTO_SUSPEND.
- Create Resource Monitors per environment.
- Enable Budgets at account level + one per project.
- Tag critical objects and query QUERY_ATTRIBUTION_HISTORY.
- Migrate cron tasks to Triggered Tasks.
- Document a Zero-Copy Cloning policy (14-day TTL).
- Fire a Slack alert when usage > 80 % of budget.
💬 Your turn
Which trick saved you the most Snowflake credits?