Staging
Health trend
Recommendations (9)
Postgres is reading 11.7% of heap blocks from disk instead of RAM. Disk reads are orders of magnitude slower than buffer-cache hits, which makes a low cache hit ratio one of the biggest causes of slow queries on data-heavy workloads. Stats have been accumulating for 45.0 days — reliable enough to act on, but always verify on production.
Increase `shared_buffers` in postgresql.conf (25% of system RAM is a common starting point). On Render/Heroku you may be hitting plan memory limits — consider upgrading the plan or shrinking the working set so it fits in RAM.
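To double-check the ratio before changing config, a minimal sketch against pg_statio_user_tables (assuming, as seems likely, that this is the view the figure above is derived from):

SELECT round(
         sum(heap_blks_hit)::numeric
         / nullif(sum(heap_blks_hit) + sum(heap_blks_read), 0) * 100, 1
       ) AS heap_hit_pct
FROM pg_statio_user_tables;

Healthy OLTP workloads typically sit above 99% here.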
Index blocks are being read from disk 8.8% of the time. This slows index scans significantly. Stats have been accumulating for 45.0 days — reliable enough to act on, but always verify on production.
Increase `shared_buffers` or reduce index size by dropping unused indexes (see Indexes recommendations).
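The equivalent spot check for index blocks, sketched against pg_statio_user_indexes:

SELECT round(
         sum(idx_blks_hit)::numeric
         / nullif(sum(idx_blks_hit) + sum(idx_blks_read), 0) * 100, 1
       ) AS index_hit_pct
FROM pg_statio_user_indexes;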
These indexes have had 0 scans since stats were last reset. Every write (INSERT/UPDATE/DELETE) still pays the cost of maintaining them. Stats have been accumulating for 45.0 days — reliable enough to act on, but always verify on production.
Generate ready-to-run DROP statements (review before executing):

SELECT 'DROP INDEX CONCURRENTLY ' || indexrelname || ';' AS drop_statement,
       relname AS table_name,
       pg_size_pretty(pg_relation_size(indexrelid)) AS size
FROM pg_stat_user_indexes
JOIN pg_index USING (indexrelid)
WHERE idx_scan = 0
  AND NOT indisprimary
  AND NOT indisunique
ORDER BY pg_relation_size(indexrelid) DESC;

Note that DROP INDEX CONCURRENTLY cannot run inside a transaction block, so execute the generated statements one at a time rather than inside a wrapped migration. Only run on production after at least 2 weeks of stats accumulation.
Over 35.2% of rows across your tables are dead tuples. This inflates table size, slows sequential scans, and degrades query plans.
Run VACUUM ANALYZE on bloated tables immediately. For severe cases, VACUUM FULL reclaims space but requires an exclusive lock — schedule a maintenance window. Long-term: tune autovacuum_vacuum_scale_factor downward for high-write tables.
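As a sketch of that long-term tuning (the table name events is a placeholder for one of your high-write tables):

-- Trigger autovacuum at ~2% dead rows instead of the 20% default:
ALTER TABLE events SET (autovacuum_vacuum_scale_factor = 0.02);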
Queries running longer than 30 seconds hold locks that can block other operations and degrade overall database performance.
Identify and kill blockers:

SELECT pid,
       now() - query_start AS duration,
       state,
       query
FROM pg_stat_activity
WHERE state = 'active'
  AND query_start < now() - interval '30 seconds'
ORDER BY duration DESC;

To kill: SELECT pg_terminate_backend(pid);

Long-term: add statement_timeout to your database.yml or use query timeouts in Rails.
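A server-side alternative to the Rails-level timeout, sketched with a placeholder role name (app_user):

-- New sessions for this role abort any statement over 30s;
-- pick a ceiling above your slowest legitimate query:
ALTER ROLE app_user SET statement_timeout = '30s';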
7 tables have a dead tuple ratio above 10% with live rows present.
Run: VACUUM ANALYZE;

To target specific tables:

SELECT relname,
       n_dead_tup,
       n_live_tup,
       round(n_dead_tup::numeric / nullif(n_live_tup + n_dead_tup, 0) * 100, 1) AS dead_pct
FROM pg_stat_user_tables
WHERE n_dead_tup > n_live_tup * 0.1
  AND n_live_tup > 0
ORDER BY dead_pct DESC;
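To clean up a single offender from that list rather than the whole database (bloated_table is a placeholder name):

-- VERBOSE reports what was scanned and removed per table:
VACUUM (ANALYZE, VERBOSE) bloated_table;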
3 tables have live rows but no recorded vacuum or autovacuum. This can happen after a database restore, pg_dump import, or if autovacuum is disabled. Dead tuples may be accumulating unseen.
Check autovacuum status and run a manual vacuum:

SELECT relname,
       last_vacuum,
       last_autovacuum,
       n_live_tup,
       n_dead_tup
FROM pg_stat_user_tables
WHERE last_vacuum IS NULL
  AND last_autovacuum IS NULL
  AND n_live_tup > 0
ORDER BY n_live_tup DESC;

Then run: VACUUM ANALYZE;

Check autovacuum is enabled: SHOW autovacuum;
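If SHOW autovacuum returns off on self-managed Postgres, a sketch of re-enabling it without a restart (autovacuum only needs a config reload; managed providers usually control this setting for you):

ALTER SYSTEM SET autovacuum = on;
SELECT pg_reload_conf();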
At least one table hasn't been vacuumed in over 3.0 days. Autovacuum may be disabled, suppressed by long transactions, or misconfigured.
Check autovacuum status:

SELECT schemaname,
       relname,
       last_vacuum,
       last_autovacuum
FROM pg_stat_user_tables
ORDER BY greatest(last_vacuum, last_autovacuum) NULLS FIRST
LIMIT 10;

Also check for long-running transactions blocking autovacuum:

SELECT pid,
       now() - xact_start AS age,
       query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY age DESC
LIMIT 5;
Connections are at 82.0% of the 100-connection limit (82 in use). Spikes under load could push you over it.
Consider adding PgBouncer connection pooling before usage hits 85%. Review the pool size in your Rails database.yml: pool size × processes per dyno × dyno count should stay below max_connections.
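To watch usage directly, a minimal query:

SELECT count(*) AS in_use,
       current_setting('max_connections')::int AS max_connections,
       round(count(*)::numeric / current_setting('max_connections')::int * 100, 1) AS pct_used
FROM pg_stat_activity;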
N+1 queries (2)
Queries executing more than twice per request on average.
SELECT "notifications".* FROM "notifications" WHERE "notifications"."user_id" = $1 ORDER BY created_at DESC LIMIT $2
SELECT "notifications".* FROM "notifications" WHERE "notifications"."user_id" = 42 ORDER BY created_at DESC LIMIT 5
This query runs ~47.2x per request in dashboard#index. This looks like an N+1 — loading notifications rows one at a time. In your controller or model, add eager loading:

# Before (N+1): each record.notifications call triggers a query
@records = Parent.all

# After (eager loaded):
@records = Parent.includes(:notifications)
SELECT "attachments".* FROM "attachments" WHERE "attachments"."record_type" = $1 AND "attachments"."record_id" = $1
SELECT "attachments".* FROM "attachments" WHERE "attachments"."record_type" = 'Import' AND "attachments"."record_id" = 73
This query runs ~3.1x per request in imports#show. This looks like an N+1 — loading attachments rows one at a time. In your controller or model, add eager loading:

# Before (N+1): each record.attachment call triggers a query
@records = Parent.all

# After (eager loaded):
@records = Parent.includes(:attachment)
Latest snapshot
e91b4d7 — Add bulk import endpoint for CSV uploads