├── LICENSE
├── README.md
├── bloat
│   ├── index_bloat_check.sql
│   └── table_bloat_check.sql
├── columns
│   ├── integer_usage.py
│   └── missing_PKs.sql
├── duplicate_indexes
│   ├── duplicate_indexes.sql
│   └── readme.md
├── indexes
│   ├── duplicate_indexes_fuzzy.sql
│   ├── fk_no_index.sql
│   ├── needed_indexes.sql
│   └── unused_indexes.sql
├── kill_idle
│   ├── kill_idle_93_table.sql
│   ├── kill_idle_xact_91.sh
│   └── kill_idle_xact_93.sh
├── locks
│   ├── README.rst
│   ├── lock_tables.sql
│   └── log_locks.sh
├── pgbouncer
│   └── log_pgbouncer_stats.sh
├── slony
│   ├── log_slony_status.sh
│   └── see_replication_config.sql
└── vacuum
    ├── database_xid_age.sql
    ├── last_autovacuum.sql
    ├── last_autovacuum_and_autoanalyze.sql
    ├── no_stats_table_check.sql
    └── table_xid_age.sql

/LICENSE: -------------------------------------------------------------------------------- 1 | Copyright (c) 2014, PostgreSQL Experts, Inc. 2 | and Additional Contributors (see README) 3 | All rights reserved. 4 | 5 | Redistribution and use in source and binary forms, with or without 6 | modification, are permitted provided that the following conditions are met: 7 | 8 | * Redistributions of source code must retain the above copyright notice, this 9 | list of conditions and the following disclaimer. 10 | 11 | * Redistributions in binary form must reproduce the above copyright notice, 12 | this list of conditions and the following disclaimer in the documentation 13 | and/or other materials provided with the distribution. 14 | 15 | * Neither the name of pgx_scripts nor the names of its 16 | contributors may be used to endorse or promote products derived from 17 | this software without specific prior written permission. 18 | 19 | THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" 20 | AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE 21 | IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE 22 | DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE 23 | FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 24 | DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR 25 | SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER 26 | CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, 27 | OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE 28 | OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 29 | 30 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | pgx_scripts 2 | =========== 3 | 4 | A collection of useful little scripts for database analysis and administration, created by our team at PostgreSQL Experts. 5 | 6 | bloat 7 | ===== 8 | 9 | Queries to estimate bloat in tables and indexes. 10 | 11 | index_bloat_check.sql 12 | --------------------- 13 | 14 | An overhauled index bloat check. Lists indexes which are likely to be bloated and estimates bloat amounts. Requires PostgreSQL > 8.4, superuser access, and a 64-bit compile. Only works for BTree indexes, not GIN, GiST, or more exotic indexes. Still needs cleanup. 15 | 16 | table_bloat_check.sql 17 | --------------------- 18 | 19 | An overhauled table bloat check. Lists tables which are likely to be bloated and estimates bloat amounts. Requires PostgreSQL >= 8.4 and a 64-bit compile. Cannot estimate bloat for tables containing types with no stats functions (such as original JSON). 20 | 21 | no_stats_table_check.sql 22 | ------------------------ 23 | 24 | Query to list all tables which have "no stats" columns and thus can't be estimated. 25 | 26 | 27 | kill_idle 28 | ========= 29 | 30 | kill_idle_91.sql 31 | ---------------- 32 | 33 | A stored procedure which kills idle transactions on PostgreSQL versions 8.3 to 9.1. 
Intended to be called by a cron job. Takes idle time, polling time, and exempted user list parameters. Outputs pipe-delimited text with the data about the sessions killed. 34 | 35 | kill_idle_93.sql 36 | ---------------- 37 | 38 | A stored procedure which kills idle transactions on PostgreSQL versions 9.2 and later. Intended to be called by a cron job. Takes idle time and exempted user list parameters. Outputs JSON with the data about the sessions killed. 39 | 40 | Indexes 41 | ======= 42 | 43 | Various queries to introspect index usage. 44 | 45 | fk_no_index.sql 46 | --------------- 47 | 48 | Queries for foreign keys with no index on the referencing side. Note that you don't always want indexes on the referencing side, but this helps you decide if you do. 49 | 50 | duplicate_indexes_fuzzy.sql 51 | --------------------------- 52 | 53 | Checks indexes and looks at whether they are potentially duplicates. It does this by checking the columns used by each index, so it reports lots of false duplicates for partial and functional indexes. 54 | 55 | needed_indexes.sql 56 | ------------------ 57 | 58 | Checks for tables which are getting too much sequential scan activity and might need additional indexing. Reports in four groups based on table size, number of scans, write activity, and number of existing indexes. 59 | 60 | unneeded_indexes.sql 61 | -------------------- 62 | 63 | Checks for indexes with relatively light usage stats, for possible removal. 64 | 65 | Locks 66 | ===== 67 | 68 | Tools and a set of queries to analyze lock-blocking. 69 | 70 | transaction_locks.sql 71 | --------------------- 72 | 73 | Requires: Postgres 9.2+ 74 | 75 | Lists waiting transaction locks and what they're waiting on, if possible. 76 | Includes relation and query information, but realistically needs to be 77 | accompanied by full query logging to be useful. Needs to be run 78 | per active database. 
79 | 80 | table_locks.sql 81 | --------------- 82 | 83 | Lists direct locks on tables which conflict with locks held by other sessions. Note that table 84 | locks are often short-lived, and as a result this query will often return zero rows. 85 | 86 | 87 | Additional Contributors 88 | ======================= 89 | 90 | In addition to the staff of PostgreSQL Experts, we are indebted 91 | to: 92 | 93 | * The authors of the check_postgres.pl script, especially 94 | Greg Sabino Mullane, for supplying the 95 | original bloat queries on which our bloat queries are based. 96 | * Andrew Gierth for help on various system queries. 97 | * ioguix for collaborating on bloat calculation math. -------------------------------------------------------------------------------- /bloat/index_bloat_check.sql: -------------------------------------------------------------------------------- 1 | -- btree index stats query 2 | -- estimates bloat for btree indexes 3 | WITH btree_index_atts AS ( 4 | SELECT nspname, 5 | indexclass.relname as index_name, 6 | indexclass.reltuples, 7 | indexclass.relpages, 8 | indrelid, indexrelid, 9 | indexclass.relam, 10 | tableclass.relname as tablename, 11 | regexp_split_to_table(indkey::text, ' ')::smallint AS attnum, 12 | indexrelid as index_oid 13 | FROM pg_index 14 | JOIN pg_class AS indexclass ON pg_index.indexrelid = indexclass.oid 15 | JOIN pg_class AS tableclass ON pg_index.indrelid = tableclass.oid 16 | JOIN pg_namespace ON pg_namespace.oid = indexclass.relnamespace 17 | JOIN pg_am ON indexclass.relam = pg_am.oid 18 | WHERE pg_am.amname = 'btree' and indexclass.relpages > 0 19 | AND nspname NOT IN ('pg_catalog','information_schema') 20 | ), 21 | index_item_sizes AS ( 22 | SELECT 23 | ind_atts.nspname, ind_atts.index_name, 24 | ind_atts.reltuples, ind_atts.relpages, ind_atts.relam, 25 | indrelid AS table_oid, index_oid, 26 | current_setting('block_size')::numeric AS bs, 27 | 8 AS maxalign, 28 | 24 AS pagehdr, 29 | CASE WHEN 
max(coalesce(pg_stats.null_frac,0)) = 0 30 | THEN 2 31 | ELSE 6 32 | END AS index_tuple_hdr, 33 | sum( (1-coalesce(pg_stats.null_frac, 0)) * coalesce(pg_stats.avg_width, 1024) ) AS nulldatawidth 34 | FROM pg_attribute 35 | JOIN btree_index_atts AS ind_atts ON pg_attribute.attrelid = ind_atts.indexrelid AND pg_attribute.attnum = ind_atts.attnum 36 | JOIN pg_stats ON pg_stats.schemaname = ind_atts.nspname 37 | -- stats for regular index columns 38 | AND ( (pg_stats.tablename = ind_atts.tablename AND pg_stats.attname = pg_catalog.pg_get_indexdef(pg_attribute.attrelid, pg_attribute.attnum, TRUE)) 39 | -- stats for functional indexes 40 | OR (pg_stats.tablename = ind_atts.index_name AND pg_stats.attname = pg_attribute.attname)) 41 | WHERE pg_attribute.attnum > 0 42 | GROUP BY 1, 2, 3, 4, 5, 6, 7, 8, 9 43 | ), 44 | index_aligned_est AS ( 45 | SELECT maxalign, bs, nspname, index_name, reltuples, 46 | relpages, relam, table_oid, index_oid, 47 | coalesce ( 48 | ceil ( 49 | reltuples * ( 6 50 | + maxalign 51 | - CASE 52 | WHEN index_tuple_hdr%maxalign = 0 THEN maxalign 53 | ELSE index_tuple_hdr%maxalign 54 | END 55 | + nulldatawidth 56 | + maxalign 57 | - CASE /* Add padding to the data to align on MAXALIGN */ 58 | WHEN nulldatawidth::integer%maxalign = 0 THEN maxalign 59 | ELSE nulldatawidth::integer%maxalign 60 | END 61 | )::numeric 62 | / ( bs - pagehdr::NUMERIC ) 63 | +1 ) 64 | , 0 ) 65 | as expected 66 | FROM index_item_sizes 67 | ), 68 | raw_bloat AS ( 69 | SELECT current_database() as dbname, nspname, pg_class.relname AS table_name, index_name, 70 | bs*(index_aligned_est.relpages)::bigint AS totalbytes, expected, 71 | CASE 72 | WHEN index_aligned_est.relpages <= expected 73 | THEN 0 74 | ELSE bs*(index_aligned_est.relpages-expected)::bigint 75 | END AS wastedbytes, 76 | CASE 77 | WHEN index_aligned_est.relpages <= expected 78 | THEN 0 79 | ELSE bs*(index_aligned_est.relpages-expected)::bigint * 100 / (bs*(index_aligned_est.relpages)::bigint) 80 | END AS realbloat, 81 
| pg_relation_size(index_aligned_est.table_oid) as table_bytes, 82 | stat.idx_scan as index_scans 83 | FROM index_aligned_est 84 | JOIN pg_class ON pg_class.oid=index_aligned_est.table_oid 85 | JOIN pg_stat_user_indexes AS stat ON index_aligned_est.index_oid = stat.indexrelid 86 | ), 87 | format_bloat AS ( 88 | SELECT dbname as database_name, nspname as schema_name, table_name, index_name, 89 | round(realbloat) as bloat_pct, round(wastedbytes/(1024^2)::NUMERIC) as bloat_mb, 90 | round(totalbytes/(1024^2)::NUMERIC,3) as index_mb, 91 | round(table_bytes/(1024^2)::NUMERIC,3) as table_mb, 92 | index_scans 93 | FROM raw_bloat 94 | ) 95 | -- final query outputting the bloated indexes 96 | -- change the where and order by to change 97 | -- what shows up as bloated 98 | SELECT * 99 | FROM format_bloat 100 | WHERE ( bloat_pct > 50 and bloat_mb > 10 ) 101 | ORDER BY bloat_pct DESC; 102 | -------------------------------------------------------------------------------- /bloat/table_bloat_check.sql: -------------------------------------------------------------------------------- 1 | -- new table bloat query 2 | -- still needs work; is often off by +/- 20% 3 | WITH constants AS ( 4 | -- define some constants for sizes of things 5 | -- for reference down the query and easy maintenance 6 | SELECT current_setting('block_size')::numeric AS bs, 23 AS hdr, 8 AS ma 7 | ), 8 | no_stats AS ( 9 | -- screen out tables which have attributes 10 | -- which don't have stats, such as JSON 11 | SELECT table_schema, table_name, 12 | n_live_tup::numeric as est_rows, 13 | pg_table_size(relid)::numeric as table_size 14 | FROM information_schema.columns 15 | JOIN pg_stat_user_tables as psut 16 | ON table_schema = psut.schemaname 17 | AND table_name = psut.relname 18 | LEFT OUTER JOIN pg_stats 19 | ON table_schema = pg_stats.schemaname 20 | AND table_name = pg_stats.tablename 21 | AND column_name = attname 22 | WHERE attname IS NULL 23 | AND table_schema NOT IN ('pg_catalog', 'information_schema') 24 | 
GROUP BY table_schema, table_name, relid, n_live_tup 25 | ), 26 | null_headers AS ( 27 | -- calculate null header sizes 28 | -- omitting tables which don't have complete stats 29 | -- and attributes which aren't visible 30 | SELECT 31 | hdr+1+(sum(case when null_frac <> 0 THEN 1 else 0 END)/8) as nullhdr, 32 | SUM((1-null_frac)*avg_width) as datawidth, 33 | MAX(null_frac) as maxfracsum, 34 | schemaname, 35 | tablename, 36 | hdr, ma, bs 37 | FROM pg_stats CROSS JOIN constants 38 | LEFT OUTER JOIN no_stats 39 | ON schemaname = no_stats.table_schema 40 | AND tablename = no_stats.table_name 41 | WHERE schemaname NOT IN ('pg_catalog', 'information_schema') 42 | AND no_stats.table_name IS NULL 43 | AND EXISTS ( SELECT 1 44 | FROM information_schema.columns 45 | WHERE schemaname = columns.table_schema 46 | AND tablename = columns.table_name ) 47 | GROUP BY schemaname, tablename, hdr, ma, bs 48 | ), 49 | data_headers AS ( 50 | -- estimate header and row size 51 | SELECT 52 | ma, bs, hdr, schemaname, tablename, 53 | (datawidth+(hdr+ma-(case when hdr%ma=0 THEN ma ELSE hdr%ma END)))::numeric AS datahdr, 54 | (maxfracsum*(nullhdr+ma-(case when nullhdr%ma=0 THEN ma ELSE nullhdr%ma END))) AS nullhdr2 55 | FROM null_headers 56 | ), 57 | table_estimates AS ( 58 | -- make estimates of how large the table should be 59 | -- based on row and page size 60 | SELECT schemaname, tablename, bs, 61 | reltuples::numeric as est_rows, relpages * bs as table_bytes, 62 | CEIL((reltuples* 63 | (datahdr + nullhdr2 + 4 + ma - 64 | (CASE WHEN datahdr%ma=0 65 | THEN ma ELSE datahdr%ma END) 66 | )/(bs-20))) * bs AS expected_bytes, 67 | reltoastrelid 68 | FROM data_headers 69 | JOIN pg_class ON tablename = relname 70 | JOIN pg_namespace ON relnamespace = pg_namespace.oid 71 | AND schemaname = nspname 72 | WHERE pg_class.relkind = 'r' 73 | ), 74 | estimates_with_toast AS ( 75 | -- add in estimated TOAST table sizes 76 | -- estimate based on 4 toast tuples per page because we don't have 77 | -- anything 
better. also append the no_stats tables 78 | SELECT schemaname, tablename, 79 | TRUE as can_estimate, 80 | est_rows, 81 | table_bytes + ( coalesce(toast.relpages, 0) * bs ) as table_bytes, 82 | expected_bytes + ( ceil( coalesce(toast.reltuples, 0) / 4 ) * bs ) as expected_bytes 83 | FROM table_estimates LEFT OUTER JOIN pg_class as toast 84 | ON table_estimates.reltoastrelid = toast.oid 85 | AND toast.relkind = 't' 86 | ), 87 | table_estimates_plus AS ( 88 | -- add some extra metadata to the table data 89 | -- and calculations to be reused 90 | -- including whether we can't estimate it 91 | -- or whether we think it might be compressed 92 | SELECT current_database() as databasename, 93 | schemaname, tablename, can_estimate, 94 | est_rows, 95 | CASE WHEN table_bytes > 0 96 | THEN table_bytes::NUMERIC 97 | ELSE NULL::NUMERIC END 98 | AS table_bytes, 99 | CASE WHEN expected_bytes > 0 100 | THEN expected_bytes::NUMERIC 101 | ELSE NULL::NUMERIC END 102 | AS expected_bytes, 103 | CASE WHEN expected_bytes > 0 AND table_bytes > 0 104 | AND expected_bytes <= table_bytes 105 | THEN (table_bytes - expected_bytes)::NUMERIC 106 | ELSE 0::NUMERIC END AS bloat_bytes 107 | FROM estimates_with_toast 108 | UNION ALL 109 | SELECT current_database() as databasename, 110 | table_schema, table_name, FALSE, 111 | est_rows, table_size, 112 | NULL::NUMERIC, NULL::NUMERIC 113 | FROM no_stats 114 | ), 115 | bloat_data AS ( 116 | -- do final math calculations and formatting 117 | select current_database() as databasename, 118 | schemaname, tablename, can_estimate, 119 | table_bytes, round(table_bytes/(1024^2)::NUMERIC,3) as table_mb, 120 | expected_bytes, round(expected_bytes/(1024^2)::NUMERIC,3) as expected_mb, 121 | round(bloat_bytes*100/table_bytes) as pct_bloat, 122 | round(bloat_bytes/(1024::NUMERIC^2),2) as mb_bloat, 123 | table_bytes, expected_bytes, est_rows 124 | FROM table_estimates_plus 125 | ) 126 | -- filter output for bloated tables 127 | SELECT databasename, schemaname, tablename, 
128 | can_estimate, 129 | est_rows, 130 | pct_bloat, mb_bloat, 131 | table_mb 132 | FROM bloat_data 133 | -- this where clause defines which tables actually appear 134 | -- in the bloat chart 135 | -- example below filters for tables which are either 50% 136 | -- bloated with more than 10MB of bloat, or more than 25% 137 | -- bloated with more than 1GB of bloat 138 | WHERE ( pct_bloat >= 50 AND mb_bloat >= 10 ) 139 | OR ( pct_bloat >= 25 AND mb_bloat >= 1000 ) 140 | ORDER BY mb_bloat DESC; 141 | -------------------------------------------------------------------------------- /columns/integer_usage.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | 3 | import psycopg2 4 | import sys 5 | from optparse import OptionParser 6 | from time import sleep 7 | 8 | parser = OptionParser() 9 | parser.add_option("-d", "--database", dest="dbname", 10 | help="database to be scanned (required)", metavar="DBNAME") 11 | parser.add_option("-m", "--minimum", dest="minimum", 12 | help="minimum percent used", metavar="PERCENT", 13 | default="0") 14 | parser.add_option("-U", "--user", dest="dbuser", 15 | help="database user", metavar="USERNAME", 16 | default="") 17 | parser.add_option("-H", "--host", dest="dbhost", 18 | help="database host system", metavar="HOST", 19 | default="") 20 | parser.add_option("-p", "--port", dest="dbport", 21 | help="database port", metavar="PORT", 22 | default="") 23 | parser.add_option("-w", "--password", dest="dbpass", 24 | help="database password", metavar="PWD", 25 | default="") 26 | 27 | (options, args) = parser.parse_args() 28 | 29 | def dq(val): 30 | return '"' + val + '"' 31 | 32 | if options.dbname: 33 | connect_string ="dbname=%s " % options.dbname 34 | else: 35 | print "ERROR: a target database is required." 
36 | sys.exit(-1) 37 | 38 | if options.dbhost: 39 | connect_string += " host=%s " % options.dbhost 40 | 41 | if options.dbuser: 42 | connect_string += " user=%s " % options.dbuser 43 | 44 | if options.dbpass: 45 | connect_string += " password=%s " % options.dbpass 46 | 47 | if options.dbport: 48 | connect_string += " port=%s " % options.dbport 49 | 50 | conn = psycopg2.connect( connect_string ) 51 | cur = conn.cursor() 52 | 53 | # get a list of all integer fields with a key constraint on them 54 | # or with an attached sequence 55 | 56 | cur.execute("""SELECT table_schema, table_name, column_name, data_type, 57 | 2::numeric ^ ( numeric_precision - 1 ) AS maxnum 58 | FROM information_schema.columns 59 | JOIN information_schema.key_column_usage USING ( table_schema, table_name, column_name ) 60 | WHERE data_type IN ( 'bigint', 'integer' ) 61 | AND key_column_usage.ordinal_position = 1 62 | UNION 63 | SELECT table_schema, table_name, column_name, data_type, 64 | 2::numeric ^ ( numeric_precision - 1 ) AS maxnum 65 | FROM information_schema.columns 66 | WHERE column_default ILIKE '%nextval%' 67 | ORDER BY table_schema, table_name, column_name""") 68 | collist = cur.fetchall() 69 | 70 | retval = 0 71 | rest = False 72 | if len(collist) > 25: 73 | rest = True 74 | # for each, fetch the current maximum value 75 | for col in collist: 76 | 77 | if rest: 78 | sleep(.2) 79 | 80 | cur.execute("""SELECT max("%s") FROM "%s"."%s" """ % (col[2], col[0], col[1],)) 81 | maxval = cur.fetchone() 82 | 83 | if maxval[0] and col[4]: 84 | 85 | pctused = (maxval[0]*100.0)/float(col[4]) 86 | 87 | if pctused >= float(options.minimum): 88 | print "Column '%s' Table '%s.%s' is %3.1f%% used" % (col[2],col[0],col[1],pctused,) 89 | retval = 1 90 | 91 | sys.exit(retval) 92 | -------------------------------------------------------------------------------- /columns/missing_PKs.sql: -------------------------------------------------------------------------------- 1 | SELECT table_catalog, table_schema, 
table_name 2 | FROM information_schema.tables 3 | WHERE (table_catalog, table_schema, table_name) 4 | NOT IN ( 5 | SELECT table_catalog, table_schema, table_name 6 | FROM information_schema.table_constraints 7 | WHERE constraint_type = 'PRIMARY KEY' 8 | ) 9 | AND table_schema 10 | NOT IN ( 11 | 'information_schema', 'pg_catalog' 12 | ) 13 | AND table_type <> 'VIEW'; 14 | -------------------------------------------------------------------------------- /duplicate_indexes/duplicate_indexes.sql: -------------------------------------------------------------------------------- 1 | SELECT ni.nspname || '.' || ct.relname AS "table", 2 | ci.relname AS "dup index", 3 | pg_get_indexdef(i.indexrelid) AS "dup index definition", 4 | i.indkey AS "dup index attributes", 5 | cii.relname AS "encompassing index", 6 | pg_get_indexdef(ii.indexrelid) AS "encompassing index definition", 7 | ii.indkey AS "enc index attributes" 8 | FROM pg_index i 9 | JOIN pg_class ct ON i.indrelid=ct.oid 10 | JOIN pg_class ci ON i.indexrelid=ci.oid 11 | JOIN pg_namespace ni ON ci.relnamespace=ni.oid 12 | JOIN pg_index ii ON ii.indrelid=i.indrelid AND 13 | ii.indexrelid != i.indexrelid AND 14 | (array_to_string(ii.indkey, ' ') || ' ') like (array_to_string(i.indkey, ' ') || ' %') AND 15 | (array_to_string(ii.indcollation, ' ') || ' ') like (array_to_string(i.indcollation, ' ') || ' %') AND 16 | (array_to_string(ii.indclass, ' ') || ' ') like (array_to_string(i.indclass, ' ') || ' %') AND 17 | (array_to_string(ii.indoption, ' ') || ' ') like (array_to_string(i.indoption, ' ') || ' %') AND 18 | NOT (ii.indkey::integer[] @> ARRAY[0]) AND -- Remove if you want expression indexes (you probably don't) 19 | NOT (i.indkey::integer[] @> ARRAY[0]) AND -- Remove if you want expression indexes (you probably don't) 20 | i.indpred IS NULL AND -- Remove if you want indexes with predicates 21 | ii.indpred IS NULL AND -- Remove if you want indexes with predicates 22 | CASE WHEN i.indisunique THEN ii.indisunique AND 23 | 
array_to_string(ii.indkey, ' ') = array_to_string(i.indkey, ' ') ELSE true END 24 | JOIN pg_class ctii ON ii.indrelid=ctii.oid 25 | JOIN pg_class cii ON ii.indexrelid=cii.oid 26 | WHERE ct.relname NOT LIKE 'pg_%' AND 27 | NOT i.indisprimary 28 | ORDER BY 1, 2, 3 29 | ; -------------------------------------------------------------------------------- /duplicate_indexes/readme.md: -------------------------------------------------------------------------------- 1 | # Check for Duplicate Indexes 2 | 3 | Unneeded indexes, such as duplicates, take up disk space, require time to vacuum, and slow down update and insert operations. 4 | 5 | This script checks for duplicate indexes in a more user-friendly fashion than our previous ones. Just run it in the database you want to check. 6 | 7 | Results are reported as a "duplicate index" and an "encompassing index." The theory is that you can drop the duplicate index in favor of the encompassing index. 8 | 9 | Do not just follow this blindly, though! For example, if you have two identical indexes, they'll appear in the report as a pair twice: once with one as the duplicate and the other as encompassing, and then the reverse. (If there are three, the report shows all possible pairs.) Be sure you leave one! 10 | 11 | Always review before dropping, and test in development or staging before dropping from your production environment. 12 | 13 | Some notes: 14 | 15 | * We recommend enabling extended output in psql with \\x for better readability. 16 | 17 | * This check skips: 18 | 19 | * `pg_*` tables 20 | * Any index that contains an expression, since those are hard to check for duplicates. Those should be checked manually. 21 | 22 | * A primary key index will never be marked as a "duplicate." 23 | 24 | * Review results for indexes on Foreign Keys and referenced columns. 
For referenced columns, dropping the duplicate index is not usually a problem, but for Foreign Keys you may have a tradeoff between query performance and INSERT, UPDATE, DELETE performance. 25 | 26 | * Your statistics may show that the "duplicate" index is being used; this is normal and not an argument to keep the duplicate. Postgres should switch to using the encompassing index once the duplicate is gone. 27 | 28 | * A lot of folks react to this report with "How could this happen!?!" This is not a personal failing; if you're using an ORM for schema management, that's probably the source of most if not all of the duplicates. You may have to do some manual wrangling with your ORM to prevent them from recurring. 29 | 30 | ## Example output 31 | 32 | ``` 33 | -[ RECORD 1 ]-----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 34 | table | public.foo 35 | dup index | index_foo_on_bar 36 | dup index definition | CREATE INDEX index_foo_on_bar ON public.foo USING btree (bar) 37 | dup index attributes | 2 38 | encompassing index | index_foo_on_bar_and_baz 39 | encompassing index definition | CREATE INDEX index_foo_on_bar_and_baz ON public.foo USING btree (bar, baz) 40 | enc index attributes | 2 3 41 | ``` 42 | 43 | Since the multi-column index `index_foo_on_bar_and_baz` can also be used for searches on just the `bar` column, we can drop the individual index `index_foo_on_bar`. 
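
Once you have reviewed a pair and confirmed the encompassing index really covers it, the drop itself is a one-liner. A minimal sketch, using the index name from the example above:

```sql
-- Drop the redundant single-column index without blocking concurrent writes.
-- Note: CONCURRENTLY cannot run inside a transaction block.
DROP INDEX CONCURRENTLY public.index_foo_on_bar;
```
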
44 | 45 | ``` 46 | -[ RECORD 2 ]-----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 47 | table | public.movies 48 | dup index | index_movies_on_title 49 | dup index definition | CREATE INDEX index_movies_on_title ON public.movies USING btree (title) 50 | dup index attributes | 2 51 | encompassing index | index_movies_on_title_and_studio 52 | encompassing index definition | CREATE UNIQUE INDEX index_movies_on_title_and_studio ON public.movies USING btree (title, studio) 53 | enc index attributes | 2 3 54 | ``` 55 | 56 | Same as example #1: the multi-column index `index_movies_on_title_and_studio` can also be used for searches on just the `title` column, so we can drop the individual index `index_movies_on_title`. 57 | 58 | This next example shows what happens with multiple duplicate indexes: 59 | 60 | ``` 61 | demo_db=# \d+ index_demo 62 | Table "public.index_demo" 63 | Column | Type | Collation | Nullable | Default | Storage | Stats target | Description 64 | --------+---------+-----------+----------+----------------------------------------+----------+--------------+------------- 65 | id | integer | | not null | nextval('index_demo_id_seq'::regclass) | plain | | 66 | name | text | | | | extended | | 67 | Indexes: 68 | "idx_demo_name_uniq" UNIQUE, btree (name) 69 | "unique_name" UNIQUE CONSTRAINT, btree (name) 70 | "idx_demo_name" btree (name) 71 | ``` 72 | 73 | ``` 74 | :::-->cat duplicate_indexes.txt 75 | -[ RECORD 1 ]-----------------+------------------------------------------------------------------------------- 76 | table | public.index_demo 77 | dup index | idx_demo_name 78 | dup index definition | CREATE INDEX idx_demo_name ON public.index_demo USING btree (name) 79 | dup index attributes | 2 80 | encompassing index | idx_demo_name_uniq 81 | encompassing index definition | CREATE UNIQUE INDEX 
idx_demo_name_uniq ON public.index_demo USING btree (name) 82 | enc index attributes | 2 83 | -[ RECORD 2 ]-----------------+------------------------------------------------------------------------------- 84 | table | public.index_demo 85 | dup index | idx_demo_name 86 | dup index definition | CREATE INDEX idx_demo_name ON public.index_demo USING btree (name) 87 | dup index attributes | 2 88 | encompassing index | unique_name 89 | encompassing index definition | CREATE UNIQUE INDEX unique_name ON public.index_demo USING btree (name) 90 | enc index attributes | 2 91 | -[ RECORD 3 ]-----------------+------------------------------------------------------------------------------- 92 | table | public.index_demo 93 | dup index | idx_demo_name_uniq 94 | dup index definition | CREATE UNIQUE INDEX idx_demo_name_uniq ON public.index_demo USING btree (name) 95 | dup index attributes | 2 96 | encompassing index | unique_name 97 | encompassing index definition | CREATE UNIQUE INDEX unique_name ON public.index_demo USING btree (name) 98 | enc index attributes | 2 99 | -[ RECORD 4 ]-----------------+------------------------------------------------------------------------------- 100 | table | public.index_demo 101 | dup index | unique_name 102 | dup index definition | CREATE UNIQUE INDEX unique_name ON public.index_demo USING btree (name) 103 | dup index attributes | 2 104 | encompassing index | idx_demo_name_uniq 105 | encompassing index definition | CREATE UNIQUE INDEX idx_demo_name_uniq ON public.index_demo USING btree (name) 106 | enc index attributes | 2 107 | ``` 108 | 109 | Note that the UNIQUE CONSTRAINT shows up as its underlying index. You only need to keep one of these three indexes; usually that's one of the UNIQUE options. 110 | -------------------------------------------------------------------------------- /indexes/duplicate_indexes_fuzzy.sql: -------------------------------------------------------------------------------- 1 | -- check for containment 2 | -- i.e. 
index A contains index B 3 | -- and both share the same first column 4 | -- but they are NOT identical 5 | 6 | WITH index_cols_ord as ( 7 | SELECT attrelid, attnum, attname 8 | FROM pg_attribute 9 | JOIN pg_index ON indexrelid = attrelid 10 | WHERE indkey[0] > 0 11 | ORDER BY attrelid, attnum 12 | ), 13 | index_col_list AS ( 14 | SELECT attrelid, 15 | array_agg(attname) as cols 16 | FROM index_cols_ord 17 | GROUP BY attrelid 18 | ), 19 | dup_natts AS ( 20 | SELECT indrelid, indexrelid 21 | FROM pg_index as ind 22 | WHERE EXISTS ( SELECT 1 23 | FROM pg_index as ind2 24 | WHERE ind.indrelid = ind2.indrelid 25 | AND ( ind.indkey @> ind2.indkey 26 | OR ind.indkey <@ ind2.indkey ) 27 | AND ind.indkey[0] = ind2.indkey[0] 28 | AND ind.indkey <> ind2.indkey 29 | AND ind.indexrelid <> ind2.indexrelid 30 | ) ) 31 | SELECT userdex.schemaname as schema_name, 32 | userdex.relname as table_name, 33 | userdex.indexrelname as index_name, 34 | array_to_string(cols, ', ') as index_cols, 35 | indexdef, 36 | idx_scan as index_scans 37 | FROM pg_stat_user_indexes as userdex 38 | JOIN index_col_list ON index_col_list.attrelid = userdex.indexrelid 39 | JOIN dup_natts ON userdex.indexrelid = dup_natts.indexrelid 40 | JOIN pg_indexes ON userdex.schemaname = pg_indexes.schemaname 41 | AND userdex.indexrelname = pg_indexes.indexname 42 | ORDER BY userdex.schemaname, userdex.relname, cols, userdex.indexrelname; -------------------------------------------------------------------------------- /indexes/fk_no_index.sql: -------------------------------------------------------------------------------- 1 | -- check for FKs where there is no matching index 2 | -- on the referencing side 3 | -- or a bad index 4 | 5 | WITH fk_actions ( code, action ) AS ( 6 | VALUES ( 'a', 'error' ), 7 | ( 'r', 'restrict' ), 8 | ( 'c', 'cascade' ), 9 | ( 'n', 'set null' ), 10 | ( 'd', 'set default' ) 11 | ), 12 | fk_list AS ( 13 | SELECT pg_constraint.oid as fkoid, conrelid, confrelid as parentid, 14 | conname, 
relname, nspname, 15 | fk_actions_update.action as update_action, 16 | fk_actions_delete.action as delete_action, 17 | conkey as key_cols 18 | FROM pg_constraint 19 | JOIN pg_class ON conrelid = pg_class.oid 20 | JOIN pg_namespace ON pg_class.relnamespace = pg_namespace.oid 21 | JOIN fk_actions AS fk_actions_update ON confupdtype = fk_actions_update.code 22 | JOIN fk_actions AS fk_actions_delete ON confdeltype = fk_actions_delete.code 23 | WHERE contype = 'f' 24 | ), 25 | fk_attributes AS ( 26 | SELECT fkoid, conrelid, attname, attnum 27 | FROM fk_list 28 | JOIN pg_attribute 29 | ON conrelid = attrelid 30 | AND attnum = ANY( key_cols ) 31 | ORDER BY fkoid, attnum 32 | ), 33 | fk_cols_list AS ( 34 | SELECT fkoid, array_agg(attname) as cols_list 35 | FROM fk_attributes 36 | GROUP BY fkoid 37 | ), 38 | index_list AS ( 39 | SELECT indexrelid as indexid, 40 | pg_class.relname as indexname, 41 | indrelid, 42 | indkey, 43 | indpred is not null as has_predicate, 44 | pg_get_indexdef(indexrelid) as indexdef 45 | FROM pg_index 46 | JOIN pg_class ON indexrelid = pg_class.oid 47 | WHERE indisvalid 48 | ), 49 | fk_index_match AS ( 50 | SELECT fk_list.*, 51 | indexid, 52 | indexname, 53 | indkey::int[] as indexatts, 54 | has_predicate, 55 | indexdef, 56 | array_length(key_cols, 1) as fk_colcount, 57 | array_length(indkey,1) as index_colcount, 58 | round(pg_relation_size(conrelid)/(1024^2)::numeric) as table_mb, 59 | cols_list 60 | FROM fk_list 61 | JOIN fk_cols_list USING (fkoid) 62 | LEFT OUTER JOIN index_list 63 | ON conrelid = indrelid 64 | AND (indkey::int2[])[0:(array_length(key_cols,1) -1)] @> key_cols 65 | 66 | ), 67 | fk_perfect_match AS ( 68 | SELECT fkoid 69 | FROM fk_index_match 70 | WHERE (index_colcount - 1) <= fk_colcount 71 | AND NOT has_predicate 72 | AND indexdef LIKE '%USING btree%' 73 | ), 74 | fk_index_check AS ( 75 | SELECT 'no index' as issue, *, 1 as issue_sort 76 | FROM fk_index_match 77 | WHERE indexid IS NULL 78 | UNION ALL 79 | SELECT 'questionable 
index' as issue, *, 2 80 | FROM fk_index_match 81 | WHERE indexid IS NOT NULL 82 | AND fkoid NOT IN ( 83 | SELECT fkoid 84 | FROM fk_perfect_match) 85 | ), 86 | parent_table_stats AS ( 87 | SELECT fkoid, tabstats.relname as parent_name, 88 | (n_tup_ins + n_tup_upd + n_tup_del + n_tup_hot_upd) as parent_writes, 89 | round(pg_relation_size(parentid)/(1024^2)::numeric) as parent_mb 90 | FROM pg_stat_user_tables AS tabstats 91 | JOIN fk_list 92 | ON relid = parentid 93 | ), 94 | fk_table_stats AS ( 95 | SELECT fkoid, 96 | (n_tup_ins + n_tup_upd + n_tup_del + n_tup_hot_upd) as writes, 97 | seq_scan as table_scans 98 | FROM pg_stat_user_tables AS tabstats 99 | JOIN fk_list 100 | ON relid = conrelid 101 | ) 102 | SELECT nspname as schema_name, 103 | relname as table_name, 104 | conname as fk_name, 105 | issue, 106 | table_mb, 107 | writes, 108 | table_scans, 109 | parent_name, 110 | parent_mb, 111 | parent_writes, 112 | cols_list, 113 | indexdef 114 | FROM fk_index_check 115 | JOIN parent_table_stats USING (fkoid) 116 | JOIN fk_table_stats USING (fkoid) 117 | WHERE table_mb > 9 118 | AND ( writes > 1000 119 | OR parent_writes > 1000 120 | OR parent_mb > 10 ) 121 | ORDER BY issue_sort, table_mb DESC, table_name, fk_name; 122 | -------------------------------------------------------------------------------- /indexes/needed_indexes.sql: -------------------------------------------------------------------------------- 1 | WITH 2 | index_usage AS ( 3 | SELECT sut.relid, 4 | current_database() AS database, 5 | sut.schemaname::text as schema_name, 6 | sut.relname::text AS table_name, 7 | sut.seq_scan as table_scans, 8 | sut.idx_scan as index_scans, 9 | pg_total_relation_size(relid) as table_bytes, 10 | round((sut.n_tup_ins + sut.n_tup_del + sut.n_tup_upd + sut.n_tup_hot_upd) / 11 | (seq_tup_read::NUMERIC + 2), 2) as writes_per_scan 12 | FROM pg_stat_user_tables sut 13 | ), 14 | index_counts AS ( 15 | SELECT sut.relid, 16 | count(*) as index_count 17 | FROM pg_stat_user_tables sut 
LEFT OUTER JOIN pg_indexes 18 | ON sut.schemaname = pg_indexes.schemaname AND 19 | sut.relname = pg_indexes.tablename 20 | GROUP BY relid 21 | ), 22 | too_many_tablescans AS ( 23 | SELECT 'many table scans'::TEXT as reason, 24 | database, schema_name, table_name, 25 | table_scans, pg_size_pretty(table_bytes) as table_size, 26 | writes_per_scan, index_count, table_bytes 27 | FROM index_usage JOIN index_counts USING ( relid ) 28 | WHERE table_scans > 1000 29 | AND table_scans > ( index_scans * 2 ) 30 | AND table_bytes > 32000000 31 | AND writes_per_scan < ( 1.0 ) 32 | ORDER BY table_scans DESC 33 | ), 34 | scans_no_index AS ( 35 | SELECT 'scans, few indexes'::TEXT as reason, 36 | database, schema_name, table_name, 37 | table_scans, pg_size_pretty(table_bytes) as table_size, 38 | writes_per_scan, index_count, table_bytes 39 | FROM index_usage JOIN index_counts USING ( relid ) 40 | WHERE table_scans > 100 41 | AND table_scans > ( index_scans ) 42 | AND index_count < 2 43 | AND table_bytes > 32000000 44 | AND writes_per_scan < ( 1.0 ) 45 | ORDER BY table_scans DESC 46 | ), 47 | big_tables_with_scans AS ( 48 | SELECT 'big table scans'::TEXT as reason, 49 | database, schema_name, table_name, 50 | table_scans, pg_size_pretty(table_bytes) as table_size, 51 | writes_per_scan, index_count, table_bytes 52 | FROM index_usage JOIN index_counts USING ( relid ) 53 | WHERE table_scans > 100 54 | AND table_scans > ( index_scans / 10 ) 55 | AND table_bytes > 1000000000 56 | AND writes_per_scan < ( 1.0 ) 57 | ORDER BY table_bytes DESC 58 | ), 59 | scans_no_writes AS ( 60 | SELECT 'scans, no writes'::TEXT as reason, 61 | database, schema_name, table_name, 62 | table_scans, pg_size_pretty(table_bytes) as table_size, 63 | writes_per_scan, index_count, table_bytes 64 | FROM index_usage JOIN index_counts USING ( relid ) 65 | WHERE table_scans > 100 66 | AND table_scans > ( index_scans / 4 ) 67 | AND table_bytes > 32000000 68 | AND writes_per_scan < ( 0.1 ) 69 | ORDER BY writes_per_scan ASC 
70 | ) 71 | SELECT reason, database, schema_name, table_name, table_scans, 72 | table_size, writes_per_scan, index_count 73 | FROM too_many_tablescans 74 | UNION ALL 75 | SELECT reason, database, schema_name, table_name, table_scans, 76 | table_size, writes_per_scan, index_count 77 | FROM scans_no_index 78 | UNION ALL 79 | SELECT reason, database, schema_name, table_name, table_scans, 80 | table_size, writes_per_scan, index_count 81 | FROM big_tables_with_scans 82 | UNION ALL 83 | SELECT reason, database, schema_name, table_name, table_scans, 84 | table_size, writes_per_scan, index_count 85 | FROM scans_no_writes; 86 | 87 | 88 | 89 | 90 | -------------------------------------------------------------------------------- /indexes/unused_indexes.sql: -------------------------------------------------------------------------------- 1 | WITH table_scans as ( 2 | SELECT relid, 3 | tables.idx_scan + tables.seq_scan as all_scans, 4 | ( tables.n_tup_ins + tables.n_tup_upd + tables.n_tup_del ) as writes, 5 | pg_relation_size(relid) as table_size 6 | FROM pg_stat_user_tables as tables 7 | ), 8 | all_writes as ( 9 | SELECT sum(writes) as total_writes 10 | FROM table_scans 11 | ), 12 | indexes as ( 13 | SELECT idx_stat.relid, idx_stat.indexrelid, 14 | idx_stat.schemaname, idx_stat.relname as tablename, 15 | idx_stat.indexrelname as indexname, 16 | idx_stat.idx_scan, 17 | pg_relation_size(idx_stat.indexrelid) as index_bytes, 18 | indexdef ~* 'USING btree' AS idx_is_btree 19 | FROM pg_stat_user_indexes as idx_stat 20 | JOIN pg_index 21 | USING (indexrelid) 22 | JOIN pg_indexes as indexes 23 | ON idx_stat.schemaname = indexes.schemaname 24 | AND idx_stat.relname = indexes.tablename 25 | AND idx_stat.indexrelname = indexes.indexname 26 | WHERE pg_index.indisunique = FALSE 27 | ), 28 | index_ratios AS ( 29 | SELECT schemaname, tablename, indexname, 30 | idx_scan, all_scans, 31 | round(( CASE WHEN all_scans = 0 THEN 0.0::NUMERIC 32 | ELSE idx_scan::NUMERIC/all_scans * 100 END),2) as 
index_scan_pct, 33 | writes, 34 | round((CASE WHEN writes = 0 THEN idx_scan::NUMERIC ELSE idx_scan::NUMERIC/writes END),2) 35 | as scans_per_write, 36 | pg_size_pretty(index_bytes) as index_size, 37 | pg_size_pretty(table_size) as table_size, 38 | idx_is_btree, index_bytes 39 | FROM indexes 40 | JOIN table_scans 41 | USING (relid) 42 | ), 43 | index_groups AS ( 44 | SELECT 'Never Used Indexes' as reason, *, 1 as grp 45 | FROM index_ratios 46 | WHERE 47 | idx_scan = 0 48 | and idx_is_btree 49 | UNION ALL 50 | SELECT 'Low Scans, High Writes' as reason, *, 2 as grp 51 | FROM index_ratios 52 | WHERE 53 | scans_per_write <= 1 54 | and index_scan_pct < 10 55 | and idx_scan > 0 56 | and writes > 100 57 | and idx_is_btree 58 | UNION ALL 59 | SELECT 'Seldom Used Large Indexes' as reason, *, 3 as grp 60 | FROM index_ratios 61 | WHERE 62 | index_scan_pct < 5 63 | and scans_per_write > 1 64 | and idx_scan > 0 65 | and idx_is_btree 66 | and index_bytes > 100000000 67 | UNION ALL 68 | SELECT 'High-Write Large Non-Btree' as reason, index_ratios.*, 4 as grp 69 | FROM index_ratios, all_writes 70 | WHERE 71 | ( writes::NUMERIC / ( total_writes + 1 ) ) > 0.02 72 | AND NOT idx_is_btree 73 | AND index_bytes > 100000000 74 | ORDER BY grp, index_bytes DESC ) 75 | SELECT reason, schemaname, tablename, indexname, 76 | index_scan_pct, scans_per_write, index_size, table_size 77 | FROM index_groups; 78 | -------------------------------------------------------------------------------- /kill_idle/kill_idle_93_table.sql: -------------------------------------------------------------------------------- 1 | create or replace function kill_idle_transactions ( 2 | timelimit INTERVAL DEFAULT '10 minutes', 3 | safe_users TEXT[] DEFAULT '{}') 4 | returns SETOF json 5 | language plpgsql 6 | as 7 | $f$ 8 | declare 9 | cancelled JSON; 10 | 11 | begin 12 | 13 | FOR cancelled IN 14 | WITH terminated AS ( 15 | SELECT pg_stat_activity.*, pg_terminate_backend(pid) 16 | FROM pg_stat_activity 17 | WHERE state = 
'idle in transaction' 18 | AND (now() - state_change) > timelimit 19 | AND ( usename != ANY(safe_users) 20 | OR safe_users = '{}' )), 21 | termformat AS ( 22 | SELECT now() as killtime, 23 | datname, pid, usename, application_name, 24 | client_addr, backend_start, xact_start, 25 | state_change, waiting, "query" 26 | FROM terminated ) 27 | SELECT row_to_json(termformat.*) 28 | FROM termformat 29 | LOOP 30 | 31 | RETURN NEXT cancelled; 32 | 33 | END LOOP; 34 | 35 | RETURN; 36 | 37 | end; $f$; 38 | 39 | -------------------------------------------------------------------------------- /kill_idle/kill_idle_xact_91.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # please configure by setting the following shell variables 4 | 5 | # REQUIRED: set location of log file for logging killed transactions 6 | LOGFILE=/var/log/postgresql/kill_idle.log 7 | 8 | # REQUIRED: set timelimit for oldest idle transaction, in minutes 9 | IDLETIME=15 10 | 11 | # REQUIRED: set timelimit for oldest non-idle long-running transaction 12 | # in minutes. Set to 1000 if you don't really want these cancelled 13 | XACTTIME=120 14 | 15 | # REQUIRED: set users whose idle transactions should not be killed; 16 | # generally you want to exempt the postgres superuser and the user 17 | # pg_dump runs as 18 | # if you have no users like this, just set both to XXXXX 19 | SUPERUSER=postgres 20 | BACKUPUSER=XXXXX 21 | 22 | # REQUIRED: path to psql, since cron often lacks search paths 23 | PSQL=/usr/lib/postgresql/9.1/bin/psql 24 | 25 | # OPTIONAL: set these connection variables.
if you are running as the 26 | # postgres user on the local machine with passwordless login, you will 27 | # not need to set any of these 28 | PGHOST= 29 | PGUSER= 30 | PGPORT= 31 | PGPASSWORD= 32 | 33 | # you should not need to change code below this line 34 | #################################################### 35 | 36 | export PGHOST 37 | export PGUSER 38 | export PGPORT 39 | export PGPASSWORD 40 | exec >> $LOGFILE 2>&1 41 | SAFELIST="ARRAY['${SUPERUSER}', '${BACKUPUSER}']" 42 | IDLEPARAM="'${IDLETIME} minutes'" 43 | XACTPARAM="'${XACTTIME} minutes'" 44 | 45 | KILLQUERY="WITH idles AS ( 46 | SELECT datname, procpid, usename, application_name, 47 | client_addr, backend_start, xact_start, query_start, 48 | waiting, pg_terminate_backend(procpid) 49 | FROM pg_stat_activity 50 | WHERE current_query = '<IDLE> in transaction' 51 | AND usename != '${SUPERUSER}' 52 | AND usename != '${BACKUPUSER}' 53 | AND ( ( now() - xact_start ) > '${XACTTIME} minutes' 54 | OR ( now() - query_start ) > '${IDLETIME} minutes' ) 55 | ) 56 | SELECT array_to_string(ARRAY[ now()::TEXT, 57 | idles.datname::TEXT, idles.procpid::TEXT, idles.usename::TEXT, 58 | idles.application_name, idles.client_addr::TEXT, 59 | idles.backend_start::TEXT, idles.xact_start::TEXT, 60 | idles.query_start::TEXT, idles.waiting::TEXT], '|') 61 | FROM idles 62 | ORDER BY xact_start;" 63 | 64 | $PSQL -q -t -c "${KILLQUERY}" 65 | 66 | exit 0 67 | 68 | -------------------------------------------------------------------------------- /kill_idle/kill_idle_xact_93.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # this version of kill idle works on versions 9.2 and later.
4 | 5 | # please configure by setting the following shell variables 6 | 7 | # REQUIRED: set location of log file for logging killed transactions 8 | LOGFILE=/var/log/postgresql/kill_idle.log 9 | 10 | # REQUIRED: set timelimit for oldest idle transaction, in minutes 11 | IDLETIME=15 12 | 13 | # REQUIRED: set timelimit for oldest non-idle long-running transaction 14 | # in minutes. Set to 1000 if you don't really want these cancelled 15 | XACTTIME=120 16 | 17 | # REQUIRED: set users whose idle transactions should not be killed; 18 | # generally you want to exempt the postgres superuser and the user 19 | # pg_dump runs as 20 | # if you have no users like this, just set both to XXXXX 21 | SUPERUSER=postgres 22 | BACKUPUSER=XXXXX 23 | 24 | # REQUIRED: path to psql, since cron often lacks search paths 25 | PSQL=/usr/lib/postgresql/9.3/bin/psql 26 | 27 | # OPTIONAL: set these connection variables. if you are running as the 28 | # postgres user on the local machine with passwordless login, you will 29 | # not need to set any of these 30 | PGHOST= 31 | PGUSER= 32 | PGPORT= 33 | PGPASSWORD= 34 | 35 | # you should not need to change code below this line 36 | #################################################### 37 | 38 | export PGHOST 39 | export PGUSER 40 | export PGPORT 41 | export PGPASSWORD 42 | exec >> $LOGFILE 2>&1 43 | SAFELIST="ARRAY['${SUPERUSER}', '${BACKUPUSER}']" 44 | IDLEPARAM="'${IDLETIME} minutes'" 45 | XACTPARAM="'${XACTTIME} minutes'" 46 | 47 | KILLQUERY="WITH idles AS ( 48 | SELECT now() as run_at, datname, pid, usename, application_name, 49 | client_addr, backend_start, xact_start, state_change, 50 | waiting, regexp_replace(substr(query, 1, 100), E'[\n\r]+', ' ', 'g' ) as query, 51 | pg_terminate_backend(pid) 52 | FROM pg_stat_activity 53 | WHERE state = 'idle in transaction' 54 | AND usename != '${SUPERUSER}' 55 | AND usename != '${BACKUPUSER}' 56 | AND ( ( now() - xact_start ) > '${XACTTIME} minutes' 57 | OR ( now() - state_change ) >
'${IDLETIME} minutes' ) 58 | ) 59 | SELECT row_to_json(idles.*) 60 | FROM idles 61 | ORDER BY xact_start;" 62 | 63 | $PSQL -q -t -c "${KILLQUERY}" 64 | 65 | exit 0 66 | 67 | -------------------------------------------------------------------------------- /locks/README.rst: -------------------------------------------------------------------------------- 1 | Lock-Logging Scripts 2 | ==================== 3 | 4 | Since the amount of information you can usefully get out of log_lock_waits is limited, these 5 | scripts add a way to log complex information to PostgreSQL tables. 6 | 7 | Run the script as follows: 8 | 9 | ./log_locks.sh INTERVAL XNUMBER 10 | 11 | INTERVAL: number of seconds at which to repeat the logging 12 | XNUMBER: number of times to poll the locks 13 | 14 | So for example, to log every 30 seconds for 1 hour, you'd do: 15 | 16 | ./log_locks.sh 30 120 17 | 18 | This script can be terminated using ctrl-C or kill without consequences. 19 | 20 | This produces two logs in the current directory, lock_table.log and lock_transaction.log. These 21 | logs can be loaded into a database using the table definitions in lock_tables.sql. If you run the 22 | script more than once, new output will be appended to those files. 23 | 24 | Note that, under pathological conditions, querying the locks tables can have significant overhead, 25 | and the lock logging query itself can become blocked or bog down. For that reason, we recommend 26 | discretion on how frequently you poll the locks. Polling locks more often than every 10 seconds 27 | is never recommended. 
28 | 29 | **Requires PostgreSQL 9.2 or Later** -------------------------------------------------------------------------------- /locks/lock_tables.sql: -------------------------------------------------------------------------------- 1 | CREATE TABLE table_locks ( 2 | lock_ts TIMESTAMPTZ, 3 | dbname TEXT, 4 | waiting_pid INT, 5 | wait_xid TEXT, 6 | locked_pid INT, 7 | locked_xid TEXT, 8 | locked_relation TEXT, 9 | waiting_type TEXT, 10 | waiting_mode TEXT, 11 | waiting_tuple TEXT, 12 | locked_type TEXT, 13 | locked_mode TEXT, 14 | locked_tuple TEXT, 15 | waiting_app TEXT, 16 | waiting_addr TEXT, 17 | waiting_xact_start TIMESTAMPTZ, 18 | waiting_query_start TIMESTAMPTZ, 19 | waiting_start TIMESTAMPTZ, 20 | waiting_query TEXT, 21 | locked_app TEXT, 22 | locked_addr TEXT, 23 | locked_xact_start TIMESTAMPTZ, 24 | locked_query_start TIMESTAMPTZ, 25 | locked_state TEXT, 26 | locked_state_start TIMESTAMPTZ, 27 | locked_last_query TEXT 28 | ); 29 | 30 | CREATE TABLE transaction_locks ( 31 | lock_ts TIMESTAMPTZ, 32 | dbname TEXT, 33 | waiting_pid INT, 34 | waiting_xid TEXT, 35 | locked_pid INT, 36 | waiting_app TEXT, 37 | waiting_addr TEXT, 38 | waiting_xact_start TIMESTAMPTZ, 39 | waiting_query_start TIMESTAMPTZ, 40 | waiting_start TIMESTAMPTZ, 41 | waiting_query TEXT, 42 | locked_app TEXT, 43 | locked_addr TEXT, 44 | locked_xact_start TIMESTAMPTZ, 45 | locked_query_start TIMESTAMPTZ, 46 | locked_state TEXT, 47 | locked_state_start TIMESTAMPTZ, 48 | locked_last_query TEXT, 49 | waiting_relations TEXT[], 50 | waiting_modes TEXT[], 51 | waiting_lock_types TEXT[], 52 | locked_relations TEXT[], 53 | locked_modes TEXT[], 54 | locked_lock_types TEXT[] 55 | ); 56 | 57 | 58 | -------------------------------------------------------------------------------- /locks/log_locks.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # simple script to log transaction locks to a table 4 | # assumes that you're running it as the postgres 
user and don't need a password 5 | 6 | # set interval and number of executions 7 | # interval is set in seconds 8 | # defaults to 1000 minutes once per minute 9 | # not recommended to run this more than 4X per minute 10 | INTERVAL=${1:-60} 11 | XNUMBER=${2:-1000} 12 | 13 | # change to path of psql 14 | PSQL='psql' 15 | 16 | # list all databases you're targeting, space separated 17 | DBNAMES='postgres flo' 18 | 19 | # modify if required 20 | # statement timeout is required to keep the lock query 21 | # from hanging 22 | DBPORT='' 23 | DBHOST='' 24 | STATTIMEOUT=2000 25 | SUPER="josh" 26 | 27 | # output files. you shouldn't need to modify these 28 | # unless you're doing something special 29 | TABLELOG='lock_table.log' 30 | XTNLOG='lock_transaction.log' 31 | 32 | # queries; you should not need to modify these 33 | TABLEQUERY="WITH table_locks AS ( 34 | select pid, 35 | relation::int as relation, 36 | (relation::regclass)::text as locked_relation, 37 | mode, 38 | page || ':' || tuple as locked_tuple, 39 | locktype, 40 | coalesce(transactionid::text, virtualxid) as lxid, 41 | granted 42 | from pg_locks 43 | join pg_database 44 | ON pg_locks.database = pg_database.oid 45 | where relation is not null 46 | and pg_database.datname = current_database() 47 | and locktype IN ( 'relation', 'extend', 'page', 'tuple' ) 48 | ), 49 | lock_granted AS ( 50 | select * from table_locks 51 | where granted 52 | ), 53 | lock_waiting AS ( 54 | select * from table_locks 55 | where not granted 56 | ) 57 | select now() as lock_ts, 58 | current_database() as dbname, 59 | lock_waiting.pid as waiting_pid, 60 | lock_waiting.lxid as wait_xid, 61 | lock_granted.pid as locked_pid, 62 | lock_granted.lxid as locked_xid, 63 | lock_granted.locked_relation, 64 | lock_waiting.locktype as waiting_type, 65 | lock_waiting.mode as waiting_mode, 66 | lock_waiting.locked_tuple as tuple_waiting, 67 | lock_granted.locktype as locked_type, 68 | lock_granted.mode as lock_mode, 69 | lock_granted.locked_tuple as 
tuple_locked, 70 | waiting_proc.application_name as waiting_app, 71 | waiting_proc.client_addr as waiting_addr, 72 | waiting_proc.xact_start as waiting_xact_start, 73 | waiting_proc.query_start as waiting_query_start, 74 | waiting_proc.state_change as waiting_start, 75 | waiting_proc.query as waiting_query, 76 | locked_proc.application_name as locked_app, 77 | locked_proc.client_addr as locked_addr, 78 | locked_proc.xact_start as locked_xact_start, 79 | locked_proc.query_start as locked_query_start, 80 | locked_proc.state as locked_state, 81 | locked_proc.state_change as locked_state_start, 82 | locked_proc.query as locked_last_query 83 | from lock_waiting 84 | JOIN pg_stat_activity as waiting_proc 85 | ON lock_waiting.pid = waiting_proc.pid 86 | LEFT OUTER JOIN lock_granted 87 | ON lock_waiting.relation = lock_granted.relation 88 | LEFT OUTER JOIN pg_stat_activity as locked_proc 89 | ON lock_granted.pid = locked_proc.pid 90 | order by locked_pid, locked_relation;" 91 | 92 | XTNQUERY="WITH mylocks AS ( 93 | SELECT * FROM pg_locks 94 | WHERE locktype IN ( 'transactionid', 'virtualxid' ) 95 | ), 96 | table_locks AS ( 97 | select pid, 98 | (relation::regclass)::TEXT as lockobj, 99 | case when page is not null and tuple is not null then 100 | mode || ' on ' || page::text || ':' || tuple::text 101 | else 102 | mode 103 | end as lock_mode, 104 | locktype 105 | from mylocks 106 | join pg_database 107 | ON mylocks.database = pg_database.oid 108 | where relation is not null 109 | and pg_database.datname = current_database() 110 | order by lockobj 111 | ), 112 | locked_list AS ( 113 | select pid, 114 | array_agg(lockobj) as lock_relations, 115 | array_agg(lock_mode) as lock_modes, 116 | array_agg(locktype) as lock_types 117 | from table_locks 118 | group by pid 119 | ), 120 | txn_locks AS ( 121 | select pid, transactionid::text as lxid, granted 122 | from mylocks 123 | where locktype = 'transactionid' 124 | union all 125 | select pid, virtualxid::text as lxid, granted 126 | 
from mylocks 127 | where locktype = 'virtualxid' 128 | ), 129 | txn_granted AS ( 130 | select pid, lxid from txn_locks 131 | where granted 132 | ), 133 | txn_waiting AS ( 134 | select pid, lxid from txn_locks 135 | where not granted 136 | ) 137 | select now() as lock_ts, 138 | current_database() AS dbname, 139 | txn_waiting.pid as waiting_pid, 140 | txn_waiting.lxid as wait_xid, 141 | txn_granted.pid as locked_pid, 142 | waiting_proc.application_name as waiting_app, 143 | waiting_proc.client_addr as waiting_addr, 144 | waiting_proc.xact_start as waiting_xact_start, 145 | waiting_proc.query_start as waiting_query_start, 146 | waiting_proc.state_change as waiting_start, 147 | waiting_proc.query as waiting_query, 148 | locked_proc.application_name as locked_app, 149 | locked_proc.client_addr as locked_addr, 150 | locked_proc.xact_start as locked_xact_start, 151 | locked_proc.query_start as locked_query_start, 152 | locked_proc.state as locked_state, 153 | locked_proc.state_change as locked_state_start, 154 | locked_proc.query as locked_last_query, 155 | waiting_locks.lock_relations as waiting_relations, 156 | waiting_locks.lock_modes as waiting_modes, 157 | waiting_locks.lock_types as waiting_lock_types, 158 | locked_locks.lock_relations as locked_relations, 159 | locked_locks.lock_modes as locked_modes, 160 | locked_locks.lock_types as locked_lock_types 161 | from txn_waiting 162 | JOIN pg_stat_activity as waiting_proc 163 | ON txn_waiting.pid = waiting_proc.pid 164 | LEFT OUTER JOIN txn_granted 165 | ON txn_waiting.lxid = txn_granted.lxid 166 | LEFT OUTER JOIN pg_stat_activity as locked_proc 167 | ON txn_granted.pid = locked_proc.pid 168 | LEFT OUTER JOIN locked_list AS waiting_locks 169 | ON txn_waiting.pid = waiting_locks.pid 170 | LEFT OUTER JOIN locked_list AS locked_locks 171 | ON txn_granted.pid = locked_locks.pid;" 172 | 173 | # write headers 174 | if [ ! 
-f $TABLELOG ]; then 175 | echo 'lock_ts|dbname|waiting_pid|wait_xid|locked_pid|locked_xid|locked_relation|waiting_type|waiting_mode|waiting_tuple|locked_type|locked_mode|locked_tuple|waiting_app|waiting_addr|waiting_xact_start|waiting_query_start|waiting_start|waiting_query|locked_app|locked_addr|locked_xact_start|locked_query_start|locked_state|locked_state_start|locked_last_query' > $TABLELOG 176 | fi 177 | 178 | if [ ! -f $XTNLOG ]; then 179 | echo 'lock_ts|dbname|waiting_pid|waiting_xid|locked_pid|waiting_app|waiting_addr|waiting_xact_start|waiting_query_start|waiting_start|waiting_query|locked_app|locked_addr|locked_xact_start|locked_query_start|locked_state|locked_state_start|locked_last_query|waiting_relations|waiting_modes|waiting_lock_types|locked_relations|locked_modes|locked_lock_types' > $XTNLOG 180 | fi 181 | 182 | for ((i=0; i<$XNUMBER; i++)); do 183 | 184 | for DBNAME in $DBNAMES; do 185 | $PSQL -A -q -t -c "SET STATEMENT_TIMEOUT=${STATTIMEOUT}; ${TABLEQUERY}" -U $SUPER $DBNAME $DBPORT $DBHOST >> $TABLELOG 186 | $PSQL -A -q -t -c "SET STATEMENT_TIMEOUT=${STATTIMEOUT}; ${XTNQUERY}" -U $SUPER $DBNAME $DBPORT $DBHOST >> $XTNLOG 187 | done 188 | 189 | if (($i%10==0)); then 190 | echo "locks polled $i times" 191 | fi 192 | 193 | sleep $INTERVAL 194 | 195 | done 196 | 197 | exit 0 198 | -------------------------------------------------------------------------------- /pgbouncer/log_pgbouncer_stats.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # take number of minutes as parameter; otherwise 4 | # run for 1 hour 5 | NUMMIN=${1:-60} 6 | NUMMIN=$(($NUMMIN * 2 + 10)) 7 | CURMIN=0 8 | 9 | # One typical use case is to run this script, plus other scripts that connect 10 | # to the target database, inside the same interactive shell process. 11 | # To make that work, you need to hard-code the PG* environment variables 12 | # for connecting to pgbouncer inside this script.
The defaults below probably 13 | # will not work for your setup (in particular, your pgbouncer PGPORT 14 | # is probably 6543, not 6542). 15 | # 16 | # If you need to give a password to get access to pgbouncer, use ~/.pgpass: 17 | # http://www.postgresql.org/docs/9.3/static/libpq-pgpass.html 18 | 19 | export PGUSER=pgbouncer 20 | export PGDATABASE=pgbouncer 21 | export PGPORT=6542 22 | ## export PGPASSWORD= 23 | 24 | PSFILE="ps-scratch.txt" 25 | STOPFILE="./stopfile" 26 | clean_up_and_exit() { 27 | rm $PSFILE 28 | rm -f "$STOPFILE" 29 | exit $? 30 | } 31 | trap "echo 'Caught interrupt; exiting'; clean_up_and_exit" SIGINT 32 | 33 | 34 | log_resource_usage() { 35 | ps -eo pid,pcpu,pmem,rss,vsz,args > $PSFILE 36 | grep '/usr/lib/postgresql/9.3/bin/postgres -D' $PSFILE | adddate >> ps.log 37 | grep '/usr/sbin/pgbouncer' $PSFILE | adddate >> ps.log 38 | } 39 | 40 | adddate() { 41 | DTSTAMP=$(date +"%Y-%m-%d %H:%M:%S %z") 42 | while IFS= read -r line; do 43 | echo "$DTSTAMP $line" 44 | done 45 | } 46 | 47 | while [ $CURMIN -lt $NUMMIN ] 48 | do 49 | 50 | psql -q -A -t -F " " -c "show pools" | adddate >> pools.log 51 | psql -q -A -t -F " " -c "show stats" | adddate >> stats.log 52 | psql -q -A -t -F " " -c "show clients" | adddate >> clients.log 53 | psql -q -A -t -F " " -U postgres -p 5432 -d justintv_prod -c "select * from pg_stat_activity" | adddate >> pg_stat_activity.log 54 | 55 | log_resource_usage 56 | 57 | if [[ -e "$STOPFILE" ]]; then 58 | echo "Exiting; detected '$STOPFILE'" 59 | clean_up_and_exit 60 | else 61 | #cycle twice per minute 62 | sleep 30 63 | let CURMIN=CURMIN+1 64 | fi 65 | done 66 | 67 | clean_up_and_exit 68 | -------------------------------------------------------------------------------- /slony/log_slony_status.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # simple script to log slony replication status to a file 4 | # assumes that you're running it as the postgres user and don't need a
password 5 | 6 | # set interval and number of executions 7 | # interval is set in seconds 8 | # defaults to polling once per minute, 1000 times 9 | # not recommended to run this more than 4X per minute 10 | INTERVAL=${1:-60} 11 | XNUMBER=${2:-1000} 12 | 13 | # change to path of psql 14 | PSQL='psql' 15 | 16 | # database and slony schema you're targeting 17 | DBNAME='postgres' 18 | SLSCHEMA='_slony' 19 | 20 | # modify if required 21 | # statement timeout is required to keep the status query 22 | # from hanging 23 | DBPORT='' 24 | DBHOST='' 25 | STATTIMEOUT=2000 26 | SUPER="postgres" 27 | 28 | # output files. you shouldn't need to modify these 29 | # unless you're doing something special 30 | LOG="slony_stats.log" 31 | 32 | # queries; you should not need to modify these 33 | SLQUERY=" 34 | SELECT now() as log_ts, 35 | current_database() as dbname, 36 | sl_status.*, 37 | (SELECT count(*) FROM ${SLSCHEMA}.sl_log_1) as log_1_count, 38 | (SELECT count(*) FROM ${SLSCHEMA}.sl_log_2) as log_2_count 39 | FROM ${SLSCHEMA}.sl_status; 40 | " 41 | 42 | # write headers 43 | if [ ! -f $LOG ]; then 44 | echo 'log_ts|dbname|st_origin|st_received|st_last_event|st_last_event_ts|st_last_received|st_last_received_ts|st_last_received_event_ts|st_lag_num_events|st_lag_time|log_1_count|log_2_count' > $LOG 45 | fi 46 | 47 | for ((i=0; i<$XNUMBER; i++)); do 48 | 49 | $PSQL -A -q -t -c "SET STATEMENT_TIMEOUT=${STATTIMEOUT}; ${SLQUERY}" -U $SUPER $DBNAME $DBPORT $DBHOST >> $LOG 50 | 51 | if (($i%10==0)); then 52 | echo "slony polled $i times" 53 | fi 54 | 55 | sleep $INTERVAL 56 | 57 | done 58 | 59 | exit 0 60 | -------------------------------------------------------------------------------- /slony/see_replication_config.sql: -------------------------------------------------------------------------------- 1 | select set_id, master.no_comment as master, 2 | array_to_string(array_agg(replicas.no_comment), ',') as replicas, 3 | set_comment as replication_desc 4 | from sl_set 5 | join sl_node as master on set_origin = no_id 6 | join sl_subscribe ON set_id = sub_set 7 | join sl_node as replicas ON sub_receiver = replicas.no_id 8 | where sub_active 9 | and replicas.no_active 10 | group by set_id, master.no_comment, set_comment 11 | order by set_id; -------------------------------------------------------------------------------- /vacuum/database_xid_age.sql: -------------------------------------------------------------------------------- 1 | with datage as ( 2 | select datname, age(datfrozenxid) as xid_age, 3 | round(pg_database_size(oid)/(1024^3)::numeric,1) as gb_size 4 | from pg_database 5 | where datname not in ('rdsadmin') -- no perms to examine this one (AWS) 6 | ), 7 | av_max_age as ( 8 | select setting::numeric as max_age from pg_settings where name = 'autovacuum_freeze_max_age' 9 | ), 10 | wrap_pct AS ( 11 | select datname, xid_age, 12 | round(xid_age*100::numeric/max_age, 1) as av_wrap_pct, 13 | round(xid_age*100::numeric/2200000000, 1) as shutdown_pct, 14 | gb_size 15 | from datage cross join av_max_age 16 | ) 17 | SELECT wrap_pct.* 18 | FROM
wrap_pct 19 | WHERE ( ( av_wrap_pct >= 75 or shutdown_pct > 50 ) 20 | and gb_size > 1 ) 21 | or (av_wrap_pct > 100 or shutdown_pct > 80) 22 | order by xid_age desc; 23 | -------------------------------------------------------------------------------- /vacuum/last_autovacuum.sql: -------------------------------------------------------------------------------- 1 | -- Show last vacuum or autovacuum by table size descending. 2 | SELECT pg_class.relname 3 | , pg_namespace.nspname 4 | , pg_size_pretty(pg_total_relation_size(pg_class.oid)) 5 | , 6 | CASE 7 | WHEN COALESCE(last_vacuum,'1/1/1000') > 8 | COALESCE(last_autovacuum,'1/1/1000') THEN 9 | pg_stat_all_tables.last_vacuum 10 | ELSE last_autovacuum 11 | END AS last_vacuumed 12 | , pg_relation_size(pg_class.oid) 13 | FROM pg_class 14 | JOIN pg_namespace 15 | ON pg_class.relnamespace = pg_namespace.oid 16 | JOIN pg_stat_all_tables 17 | ON ( 18 | pg_class.relname = pg_stat_all_tables.relname 19 | AND pg_namespace.nspname = pg_stat_all_tables.schemaname 20 | ) 21 | WHERE pg_namespace.nspname not in ('pg_toast') 22 | ORDER BY pg_relation_size(pg_class.oid) DESC 23 | ; 24 | -------------------------------------------------------------------------------- /vacuum/last_autovacuum_and_autoanalyze.sql: -------------------------------------------------------------------------------- 1 | -- Show last autovacuum and autoanalyze by table size descending.
2 | SELECT pg_class.relname 3 | , pg_namespace.nspname 4 | , pg_size_pretty(pg_total_relation_size(pg_class.oid)) 5 | , 6 | CASE 7 | WHEN COALESCE(last_vacuum,'1/1/1000') > 8 | COALESCE(last_autovacuum,'1/1/1000') THEN 9 | pg_stat_all_tables.last_vacuum 10 | ELSE last_autovacuum 11 | END AS last_vacuumed 12 | , 13 | CASE 14 | WHEN COALESCE(last_analyze,'1/1/1000') > 15 | COALESCE(last_autoanalyze,'1/1/1000') THEN 16 | pg_stat_all_tables.last_analyze 17 | ELSE last_autoanalyze 18 | END AS last_analyzed 19 | , pg_relation_size(pg_class.oid) 20 | FROM pg_class 21 | JOIN pg_namespace 22 | ON pg_class.relnamespace = pg_namespace.oid 23 | JOIN pg_stat_all_tables 24 | ON ( 25 | pg_class.relname = pg_stat_all_tables.relname 26 | AND pg_namespace.nspname = pg_stat_all_tables.schemaname 27 | ) 28 | WHERE pg_namespace.nspname NOT IN ('pg_toast') 29 | ORDER BY pg_relation_size(pg_class.oid) DESC ; 30 | -------------------------------------------------------------------------------- /vacuum/no_stats_table_check.sql: -------------------------------------------------------------------------------- 1 | -- no stats query 2 | -- to display tables/columns which are without stats 3 | -- so we can't estimate bloat 4 | SELECT table_schema, table_name, 5 | ( pg_class.relpages = 0 ) AS is_empty, 6 | ( psut.relname IS NULL OR ( psut.last_analyze IS NULL and psut.last_autoanalyze IS NULL ) ) AS never_analyzed, 7 | array_agg(column_name::TEXT) as no_stats_columns 8 | FROM information_schema.columns 9 | JOIN pg_class ON columns.table_name = pg_class.relname 10 | AND pg_class.relkind = 'r' 11 | JOIN pg_namespace ON pg_class.relnamespace = pg_namespace.oid 12 | AND nspname = table_schema 13 | LEFT OUTER JOIN pg_stats 14 | ON table_schema = pg_stats.schemaname 15 | AND table_name = pg_stats.tablename 16 | AND column_name = pg_stats.attname 17 | LEFT OUTER JOIN pg_stat_user_tables AS psut 18 | ON table_schema = psut.schemaname 19 | AND table_name = psut.relname 20 | WHERE pg_stats.attname IS NULL 
21 | AND table_schema NOT IN ('pg_catalog', 'information_schema') 22 | GROUP BY table_schema, table_name, relpages, psut.relname, last_analyze, last_autoanalyze 23 | ORDER BY table_schema, table_name 24 | ; 25 | -------------------------------------------------------------------------------- /vacuum/table_xid_age.sql: -------------------------------------------------------------------------------- 1 | with relage as ( 2 | select relname, age(relfrozenxid) as xid_age, 3 | round((relpages/128::numeric),1) as mb_size 4 | from pg_class 5 | where relkind IN ('r', 't') 6 | ), 7 | av_max_age as ( 8 | select setting::numeric as max_age from pg_settings where name = 'autovacuum_freeze_max_age' 9 | ), 10 | wrap_pct AS ( 11 | select relname, xid_age, 12 | round(xid_age*100::numeric/max_age, 1) as av_wrap_pct, 13 | round(xid_age*100::numeric/2200000000, 1) as shutdown_pct, 14 | mb_size 15 | from relage cross join av_max_age 16 | ) 17 | select wrap_pct.*, pgsa.pid 18 | from wrap_pct 19 | left outer join pg_stat_activity pgsa on (pgsa.query ilike '%autovacuum%' and pgsa.query ilike '%' || relname || '%') 20 | where ((av_wrap_pct >= 75 21 | or shutdown_pct >= 50) 22 | and mb_size > 1000) 23 | or 24 | (av_wrap_pct > 100 25 | or shutdown_pct > 80) 26 | order by xid_age desc; --------------------------------------------------------------------------------
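database_xid_age.sql and table_xid_age.sql above both report the same two percentages: XID age as a fraction of autovacuum_freeze_max_age, and XID age as a fraction of the roughly 2.2-billion-transaction hard limit the queries use as a shutdown reference. That arithmetic is easy to sanity-check outside the database; a minimal sketch, not part of the repository, using hypothetical ages and the default autovacuum_freeze_max_age of 200 million:

```python
# Mirrors the round(xid_age*100::numeric/..., 1) expressions from
# vacuum/database_xid_age.sql and vacuum/table_xid_age.sql.
# The values below are hypothetical; the queries read the real numbers
# from age(datfrozenxid)/age(relfrozenxid) and pg_settings.

SHUTDOWN_LIMIT = 2_200_000_000  # same constant the queries divide by

def wrap_pct(xid_age, autovacuum_freeze_max_age=200_000_000):
    """Return (av_wrap_pct, shutdown_pct), rounded to one decimal
    place exactly as the SQL does."""
    av_wrap_pct = round(xid_age * 100 / autovacuum_freeze_max_age, 1)
    shutdown_pct = round(xid_age * 100 / SHUTDOWN_LIMIT, 1)
    return av_wrap_pct, shutdown_pct

# A relation 150M transactions old with the default freeze age:
# av_wrap_pct reaches 75.0, the reporting threshold used in both queries.
print(wrap_pct(150_000_000))  # (75.0, 6.8)
```

In other words, a row only shows up in these reports once a database or table has burned through three quarters of its autovacuum freeze allowance, or a large share of the wraparound budget, which is why the result set is normally empty on a healthy cluster.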