Demo mode — This is sample data.

Staging — Health score: 38

Category scores:
Cache: 35 — 88.3% buffer hit, 91.2% index hit
Indexes: 20 — 15 unused (2.1 GB)
Bloat: 15 — 35.2% bloat ratio
Queries: 50 — 3 long-running
Vacuum: 30 — 42.6% dead tuples
Connections: 75 — 82/100 (82.0%)

Health trend (May 1, 11:06am → May 3, 10:54am): 62 → 58 → 55 → 52 → 48 → 45 → 43 → 40 → 39 → 38 → 38

Recommendations (9)

Critical Cache Low shared buffer hit ratio (88.3%)

Postgres is reading 11.7% of heap blocks from disk instead of RAM, which is often the biggest cause of slow queries on data-heavy workloads. Stats have been accumulating for 45.0 days — reliable enough to act on, but always verify on production.

What to do
Increase `shared_buffers` in postgresql.conf (try 25% of RAM as a starting point). On Render/Heroku you may be hitting plan memory limits — consider upgrading or reducing dataset size in RAM.
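To verify the ratio yourself before changing config, a standard check against Postgres's cumulative statistics views (run it on a live connection):

```sql
-- Share of heap blocks served from shared_buffers rather than disk
SELECT round(
         sum(heap_blks_hit)::numeric
         / nullif(sum(heap_blks_hit) + sum(heap_blks_read), 0) * 100, 1
       ) AS buffer_hit_pct
FROM pg_statio_user_tables;
```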
Warning Cache Low index hit ratio (91.2%)

Index blocks are being read from disk 8.8% of the time. This slows index scans significantly. Stats have been accumulating for 45.0 days — reliable enough to act on, but always verify on production.

What to do
Increase `shared_buffers` or reduce index size by dropping unused indexes (see Indexes recommendations).
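The index-side equivalent of the buffer check, against `pg_statio_user_indexes` (run on a live connection):

```sql
-- Share of index blocks served from shared_buffers rather than disk
SELECT round(
         sum(idx_blks_hit)::numeric
         / nullif(sum(idx_blks_hit) + sum(idx_blks_read), 0) * 100, 1
       ) AS index_hit_pct
FROM pg_statio_user_indexes;
```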
Critical Indexes 15 unused indexes wasting 2.1 GB

These indexes have had 0 scans since stats were last reset. Every write (INSERT/UPDATE/DELETE) still pays the cost of maintaining them. Stats have been accumulating for 45.0 days — reliable enough to act on, but always verify on production.

What to do
Generate ready-to-run DROP statements (review before executing):

SELECT
  'DROP INDEX CONCURRENTLY ' || quote_ident(schemaname) || '.' || quote_ident(indexrelname) || ';' AS drop_statement,
  relname AS table_name,
  pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_stat_user_indexes
JOIN pg_index USING (indexrelid)
WHERE idx_scan = 0
  AND NOT indisprimary
  AND NOT indisunique
ORDER BY pg_relation_size(indexrelid) DESC;

Only run on production after at least 2 weeks of stats accumulation.
Critical Bloat High table bloat (35.2% dead tuple ratio)

Across your tables, 35.2% of rows are dead tuples. This inflates table size, slows sequential scans, and degrades query plans.

What to do
Run VACUUM ANALYZE on bloated tables immediately. For severe cases, VACUUM FULL reclaims space but requires an exclusive lock — schedule a maintenance window. Long-term: tune autovacuum_vacuum_scale_factor downward for high-write tables.
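For the long-term tuning, per-table settings usually beat a global change. A sketch (the table name `events` is a made-up example):

```sql
-- Trigger autovacuum at ~5% dead rows instead of the 20% default
ALTER TABLE events SET (
  autovacuum_vacuum_scale_factor = 0.05,
  autovacuum_vacuum_threshold = 1000
);
```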
Warning Queries 3 long-running queries (longest: 4.1 min)

Queries running longer than 30 seconds hold locks that can block other operations and degrade overall database performance.

What to do
Identify and kill blockers:
SELECT pid, now() - query_start AS duration, state, query
FROM pg_stat_activity
WHERE state = 'active' AND query_start < now() - interval '30 seconds'
ORDER BY duration DESC;

To kill: SELECT pg_terminate_backend(pid);
Long-term: add statement_timeout to your database.yml or use query timeouts in Rails.
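A sketch of the `database.yml` change, assuming the PostgreSQL adapter (the 30s value is an example; tune it to your slowest legitimate query):

```yaml
# config/database.yml
production:
  adapter: postgresql
  variables:
    statement_timeout: 30s  # server aborts any statement running longer than this
```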
Critical Vacuum 7 tables flagged for vacuum

7 tables have a dead tuple ratio above 10% with live rows present.

What to do
Run: VACUUM ANALYZE;
To target specific tables:
SELECT relname, n_dead_tup, n_live_tup,
  round(n_dead_tup::numeric/nullif(n_live_tup+n_dead_tup,0)*100,1) AS dead_pct
FROM pg_stat_user_tables
WHERE n_dead_tup > n_live_tup * 0.1 AND n_live_tup > 0
ORDER BY dead_pct DESC;
Warning Vacuum 3 tables have never been vacuumed

3 tables have live rows but no recorded vacuum or autovacuum. This can happen after a database restore, pg_dump import, or if autovacuum is disabled. Dead tuples may be accumulating unseen.

What to do
Check autovacuum status and run a manual vacuum:
SELECT relname, last_vacuum, last_autovacuum, n_live_tup, n_dead_tup
FROM pg_stat_user_tables
WHERE last_vacuum IS NULL AND last_autovacuum IS NULL AND n_live_tup > 0
ORDER BY n_live_tup DESC;

Then run: VACUUM ANALYZE;
Check autovacuum is enabled: SHOW autovacuum;
Warning Vacuum No vacuum in 3.0 days

At least one table hasn't been vacuumed in over 3.0 days. Autovacuum may be disabled, suppressed by long transactions, or misconfigured.

What to do
Check autovacuum status:
SELECT schemaname, relname, last_vacuum, last_autovacuum
FROM pg_stat_user_tables
ORDER BY greatest(last_vacuum, last_autovacuum) NULLS FIRST
LIMIT 10;

Also check for long-running transactions blocking autovacuum:
SELECT pid, now() - xact_start AS age, query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY age DESC
LIMIT 5;
Warning Connections Connection utilization elevated (82.0%)

At 82.0% of 100 max connections. Spikes under load could push you over the limit.

What to do
Consider adding PgBouncer connection pooling before you hit 85%. Review your Rails database.yml pool size — it should not exceed max_connections / number of dynos/processes.
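The pool-size rule above is simple division; a minimal sketch with made-up process counts:

```ruby
# Every app process can hold up to `pool` connections, so keep
# pool * processes below max_connections (leave headroom for consoles and jobs).
max_connections  = 100
web_processes    = 8  # e.g. dynos * Puma workers
worker_processes = 4  # background job processes

safe_pool = max_connections / (web_processes + worker_processes)
puts safe_pool  # integer division: 8 connections per process at most
```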

N+1 queries (2)

Queries executing more than twice per request on average.

Critical dashboard#index ~47.2 calls/request (max 120, sampled 85 requests)
Query pattern
SELECT "notifications".* FROM "notifications" WHERE "notifications"."user_id" = $1 ORDER BY created_at DESC LIMIT $2
Example SQL
SELECT "notifications".* FROM "notifications" WHERE "notifications"."user_id" = 42 ORDER BY created_at DESC LIMIT 5
What to do
This query runs ~47.2x per request in dashboard#index.

This looks like an N+1 — loading notifications rows one at a time.
In your controller or model, add eager loading:

  # Before (N+1):
  @records = Parent.all
  # each record.notifications triggers a query

  # After (eager loaded):
  @records = Parent.includes(:notifications)

Note: `includes` fetches all associated rows, so the per-record `LIMIT 5` won't be applied; for top-N-per-record, consider a lateral join or caching the latest notifications.
Warning imports#show ~3.1 calls/request (max 8, sampled 44 requests)
Query pattern
SELECT "attachments".* FROM "attachments" WHERE "attachments"."record_type" = $1 AND "attachments"."record_id" = $2
Example SQL
SELECT "attachments".* FROM "attachments" WHERE "attachments"."record_type" = 'Import' AND "attachments"."record_id" = 73
What to do
This query runs ~3.1x per request in imports#show.

This looks like an N+1 — loading attachments rows one at a time.
In your controller or model, add eager loading:

  # Before (N+1):
  @records = Parent.all
  # each record.attachments triggers a query

  # After (eager loaded):
  @records = Parent.includes(:attachments)

Latest snapshot

Buffer hit ratio
88.30%
Index hit ratio
91.20%
Unused indexes
15 (2.1 GB)
Bloat ratio
35.2%
Long-running queries
3
Dead tuples
890,000
Connections
82 / 100
Snapshot taken
12m ago
Git commit
e91b4d7 — Add bulk import endpoint for CSV uploads
