├── doc ├── .gitignore ├── Makefile ├── style.css ├── release.rst └── pg_repack.rst ├── lib ├── .gitignore ├── pg_migrate.control.in ├── exports.txt ├── migrate.h ├── pgut │ ├── pgut-spi.h │ ├── pgut-be.h │ └── pgut-spi.c ├── Makefile └── pg_migrate.sql.in ├── bin ├── .gitignore ├── Makefile └── pgut │ ├── pgut-fe.h │ ├── pgut.h │ └── pgut-fe.c ├── regress ├── sql │ ├── init_extension.sql │ ├── after_schema.sql │ ├── nosuper.sql │ ├── migrate_run.sql │ ├── ordered_indexes.sql │ ├── tablespace.sql │ ├── migrate_setup.sql │ └── check.sql ├── expected │ ├── init_extension.out │ ├── nosuper.out │ ├── migrate_run.out │ ├── after_schema_1.out │ ├── ordered_indexes.out │ ├── after_schema.out │ ├── migrate_setup.out │ ├── tablespace_1.out │ ├── tablespace.out │ └── check.out ├── travis_test.sh ├── Makefile └── travis_prepare.sh ├── .gitignore ├── CHANGELOG.md ├── .travis.yml ├── nfpm.yaml.tmpl ├── META.json ├── Formula ├── pg_migrate_postgresql@10.rb ├── pg_migrate_postgresql@11.rb ├── pg_migrate_postgresql@12.rb ├── pg_migrate_postgresql@13.rb ├── pg_migrate_postgresql@14.rb └── pg_migrate_postgresql@9.6.rb ├── COPYRIGHT ├── ci └── key.gpg.pub ├── Makefile ├── SPECS └── pg_repack90.spec ├── README.md └── .github └── workflows └── test.yml /doc/.gitignore: -------------------------------------------------------------------------------- 1 | /*.html 2 | -------------------------------------------------------------------------------- /lib/.gitignore: -------------------------------------------------------------------------------- 1 | /.deps/ 2 | /pg_migrate.sql 3 | /pg_migrate--[0-9.]*.sql 4 | /pg_migrate.control 5 | -------------------------------------------------------------------------------- /bin/.gitignore: -------------------------------------------------------------------------------- 1 | /.deps/ 2 | /pg_migrate 3 | !/pg_migrate.* 4 | /pg_migrate.o 5 | /results/ 6 | /sql/init.sql 7 | /sql/init-*.*.sql 8 | 
-------------------------------------------------------------------------------- /regress/sql/init_extension.sql: -------------------------------------------------------------------------------- 1 | SET client_min_messages = warning; 2 | CREATE EXTENSION pg_migrate; 3 | RESET client_min_messages; 4 | -------------------------------------------------------------------------------- /regress/expected/init_extension.out: -------------------------------------------------------------------------------- 1 | SET client_min_messages = warning; 2 | CREATE EXTENSION pg_migrate; 3 | RESET client_min_messages; 4 | -------------------------------------------------------------------------------- /regress/travis_test.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | set -x 4 | 5 | export PG_REGRESS_DIFF_OPTS=-u 6 | 7 | if ! make installcheck; then 8 | cat regress/regression.diffs 9 | exit 1 10 | fi 11 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | # Global excludes across all subdirectories 2 | *.o 3 | *.so 4 | *.bc 5 | regress/regression.diffs 6 | regress/regression.out 7 | regress/results/ 8 | dist/*.zip 9 | lib/exports.list 10 | -------------------------------------------------------------------------------- /lib/pg_migrate.control.in: -------------------------------------------------------------------------------- 1 | # pg_migrate extension 2 | comment = 'Migrates a PostgreSQL table avoiding long locks' 3 | default_version = 'MIGRATE_VERSION' 4 | module_pathname = '$libdir/pg_migrate' 5 | relocatable = false 6 | -------------------------------------------------------------------------------- /regress/sql/after_schema.sql: -------------------------------------------------------------------------------- 1 | -- 2 | -- tables schema after running migrations 3 | -- 4 | 5 | \d tbl_cluster 6 | 
\d tbl_gistkey 7 | \d tbl_only_ckey 8 | \d tbl_only_pkey 9 | \d tbl_with_dropped_column 10 | \d tbl_with_dropped_toast 11 | \d tbl_idxopts 12 | -------------------------------------------------------------------------------- /CHANGELOG.md: -------------------------------------------------------------------------------- 1 | # Changelog 2 | ## Unreleased 3 | ### Changed 4 | 5 | ### Added 6 | 7 | ### Fixed 8 | 9 | ## [0.1.1] - 2022-03-30 10 | ### Fixed 11 | - [ci] Fix matrix build & release 12 | 13 | ## [0.1.0] - 2022-03-27 14 | ### Added 15 | - [cli] Initial release 16 | - [extension] Initial release 17 | -------------------------------------------------------------------------------- /doc/Makefile: -------------------------------------------------------------------------------- 1 | RST2HTML = $(shell which rst2html || which rst2html.py) 2 | RSTOPTS = --stylesheet-path=style.css,html4css1.css --initial-header-level=2 3 | 4 | HTML = $(patsubst %.rst,%.html,$(wildcard *.rst)) 5 | 6 | .PHONY: clean 7 | 8 | all : html 9 | 10 | html : $(HTML) 11 | 12 | %.html: %.rst style.css 13 | $(RST2HTML) $(RSTOPTS) $< $@ 14 | 15 | clean: 16 | rm -f $(HTML) 17 | -------------------------------------------------------------------------------- /.travis.yml: -------------------------------------------------------------------------------- 1 | # Travis CI configuration file for pg_migrate 2 | 3 | dist: focal 4 | sudo: required 5 | 6 | env: 7 | - PGVER=14 8 | - PGVER=13 9 | - PGVER=12 10 | - PGVER=11 11 | - PGVER=10 12 | # Disabled because packages broken at least on xenial 13 | # https://www.postgresql.org/message-id/CA%2Bmi_8a1oEnCzkt0CvqysgY4MQ6jEefjmS%3Dq_K-AvOx%3DF7m2%2BQ%40mail.gmail.com 14 | - PGVER=9.6 15 | - PGVER=9.5 16 | - PGVER=9.4 17 | 18 | install: 19 | - source regress/travis_prepare.sh 20 | - make 21 | - sudo make install 22 | 23 | script: 24 | - regress/travis_test.sh 25 | 26 | notifications: 27 | email: false 28 | 
-------------------------------------------------------------------------------- /regress/sql/nosuper.sql: -------------------------------------------------------------------------------- 1 | -- 2 | -- no superuser check 3 | -- 4 | SET client_min_messages = error; 5 | DROP ROLE IF EXISTS nosuper; 6 | SET client_min_messages = warning; 7 | CREATE ROLE nosuper WITH LOGIN; 8 | -- => OK 9 | \! pg_migrate --execute --alter='ADD COLUMN ns1 INT' --dbname=contrib_regression --table=tbl_cluster --no-superuser-check 10 | -- => ERROR 11 | \! pg_migrate --execute --alter='ADD COLUMN ns2 INT' --dbname=contrib_regression --table=tbl_cluster --username=nosuper 12 | -- => ERROR 13 | \! pg_migrate --execute --alter='ADD COLUMN ns3 INT' --dbname=contrib_regression --table=tbl_cluster --username=nosuper --no-superuser-check 14 | DROP ROLE IF EXISTS nosuper; 15 | -------------------------------------------------------------------------------- /nfpm.yaml.tmpl: -------------------------------------------------------------------------------- 1 | name: "postgresql-${PG}-pg_migrate" 2 | arch: "amd64" 3 | platform: "linux" 4 | version: "v${VERSION}" 5 | section: "default" 6 | priority: "extra" 7 | depends: 8 | - postgresql-${PG} 9 | maintainer: "phillbaker " 10 | vendor: phillbaker 11 | homepage: "http://example.com" 12 | license: "BSD" 13 | contents: 14 | - src: ./bin/pg_migrate 15 | dst: /usr/lib/postgresql/${PG}/bin/pg_migrate 16 | - src: ./lib/pg_migrate.so 17 | dst: /usr/lib/postgresql/${PG}/lib/pg_migrate.so 18 | - src: ./lib/pg_migrate--${VERSION}.sql 19 | dst: /usr/share/postgresql/${PG}/extension/pg_migrate--${VERSION}.sql 20 | - src: ./lib/pg_migrate.control 21 | dst: /usr/share/postgresql/${PG}/extension/pg_migrate.control 22 | -------------------------------------------------------------------------------- /regress/Makefile: -------------------------------------------------------------------------------- 1 | # 2 | # pg_repack: regress/Makefile 3 | # 4 | # Portions Copyright (c) 
2008-2012, NIPPON TELEGRAPH AND TELEPHONE CORPORATION 5 | # Portions Copyright (c) 2011, Itagaki Takahiro 6 | # Portions Copyright (c) 2012-2020, The Reorg Development Team 7 | # 8 | 9 | PG_CONFIG ?= pg_config 10 | 11 | # version as a number, e.g. 9.1.4 -> 901 12 | VERSION := $(shell $(PG_CONFIG) --version | sed 's/.* \([[:digit:].]\{1,\}\).*/\1/') 13 | INTVERSION := $(shell echo $$(($$(echo $(VERSION).0 | sed 's/\([[:digit:]]\{1,\}\)\.\([[:digit:]]\{1,\}\).*/\1*100+\2/')))) 14 | 15 | 16 | # 17 | # Test suite 18 | # 19 | 20 | REGRESS := init_extension migrate_setup migrate_run after_schema check nosuper tablespace ordered_indexes 21 | 22 | USE_PGXS = 1 # use pgxs if not in contrib directory 23 | PGXS := $(shell $(PG_CONFIG) --pgxs) 24 | include $(PGXS) 25 | -------------------------------------------------------------------------------- /regress/expected/nosuper.out: -------------------------------------------------------------------------------- 1 | -- 2 | -- no superuser check 3 | -- 4 | SET client_min_messages = error; 5 | DROP ROLE IF EXISTS nosuper; 6 | SET client_min_messages = warning; 7 | CREATE ROLE nosuper WITH LOGIN; 8 | -- => OK 9 | \! pg_migrate --execute --alter='ADD COLUMN ns1 INT' --dbname=contrib_regression --table=tbl_cluster --no-superuser-check 10 | INFO: migrating table "public.tbl_cluster" 11 | INFO: altering table with: ADD COLUMN ns1 INT 12 | -- => ERROR 13 | \! pg_migrate --execute --alter='ADD COLUMN ns2 INT' --dbname=contrib_regression --table=tbl_cluster --username=nosuper 14 | ERROR: pg_migrate failed with error: You must be a superuser to use pg_migrate 15 | -- => ERROR 16 | \! 
pg_migrate --execute --alter='ADD COLUMN ns3 INT' --dbname=contrib_regression --table=tbl_cluster --username=nosuper --no-superuser-check 17 | ERROR: pg_migrate failed with error: ERROR: permission denied for schema migrate 18 | LINE 1: select migrate.version(), migrate.version_sql() 19 | ^ 20 | DROP ROLE IF EXISTS nosuper; 21 | -------------------------------------------------------------------------------- /lib/exports.txt: -------------------------------------------------------------------------------- 1 | Pg_magic_func 1 2 | pg_finfo_migrate_apply 2 3 | pg_finfo_migrate_disable_autovacuum 3 4 | pg_finfo_migrate_drop 4 5 | pg_finfo_migrate_get_order_by 5 6 | pg_finfo_migrate_get_table_and_inheritors 6 7 | pg_finfo_migrate_indexdef 7 8 | pg_finfo_migrate_reset_autovacuum 8 9 | pg_finfo_migrate_swap 9 10 | pg_finfo_migrate_trigger 10 11 | pg_finfo_migrate_version 11 12 | pg_finfo_migrate_index_swap 12 13 | migrate_apply 13 14 | migrate_disable_autovacuum 14 15 | migrate_drop 15 16 | migrate_get_order_by 16 17 | migrate_get_table_and_inheritors 17 18 | migrate_index_swap 18 19 | migrate_indexdef 19 20 | migrate_reset_autovacuum 20 21 | migrate_swap 21 22 | migrate_trigger 22 23 | migrate_version 23 24 | -------------------------------------------------------------------------------- /lib/migrate.h: -------------------------------------------------------------------------------- 1 | #ifndef MIGRATE_H 2 | #define MIGRATE_H 3 | 4 | /* 5 | * Parsed CREATE INDEX statement. You can rebuild the SQL using 6 | * sprintf(buf, "%s %s ON %s USING %s (%s)%s", 7 | * create, index, table, type, columns, options) 8 | */ 9 | typedef struct IndexDef 10 | { 11 | char *create; /* CREATE INDEX or CREATE UNIQUE INDEX */ 12 | char *index; /* index name including schema */ 13 | char *table; /* table name including schema */ 14 | char *type; /* btree, hash, gist or gin */ 15 | char *columns; /* column definition */ 16 | char *options; /* options after columns, before TABLESPACE (e.g. 
COLLATE) */ 17 | char *tablespace; /* tablespace if specified */ 18 | char *where; /* WHERE content if specified */ 19 | } IndexDef; 20 | 21 | extern char *skip_const(const char *original_sql, char *sql, const char *arg1, const char *arg2); 22 | extern char *skip_ident(const char *original_sql, char *sql); 23 | extern char *parse_error(const char *original_sql); 24 | extern char *skip_until_const(const char *original_sql, char *sql, const char *what); 25 | extern char *skip_until(const char *original_sql, char *sql, char end); 26 | 27 | #endif 28 | -------------------------------------------------------------------------------- /lib/pgut/pgut-spi.h: -------------------------------------------------------------------------------- 1 | /*------------------------------------------------------------------------- 2 | * pgut-spi.h 3 | * 4 | * Portions Copyright (c) 2008-2011, NIPPON TELEGRAPH AND TELEPHONE CORPORATION 5 | * Portions Copyright (c) 2011, Itagaki Takahiro 6 | * Portions Copyright (c) 2012-2020, The Reorg Development Team 7 | *------------------------------------------------------------------------- 8 | */ 9 | 10 | #ifndef PGUT_SPI_H 11 | #define PGUT_SPI_H 12 | 13 | #include "executor/spi.h" 14 | 15 | #ifdef _MSC_VER 16 | #define __attribute__(x) 17 | #endif 18 | 19 | extern void execute(int expected, const char *sql); 20 | extern void execute_plan(int expected, SPIPlanPtr plan, Datum *values, const char *nulls); 21 | extern void execute_with_format(int expected, const char *format, ...) 22 | __attribute__((format(printf, 2, 3))); 23 | extern void execute_with_args(int expected, const char *src, int nargs, Oid argtypes[], Datum values[], const bool nulls[]); 24 | extern void execute_with_format_args(int expected, const char *format, int nargs, Oid argtypes[], Datum values[], const bool nulls[], ...) 
25 | __attribute__((format(printf, 2, 7))); 26 | 27 | #endif /* PGUT_SPI_H */ 28 | -------------------------------------------------------------------------------- /META.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "pg_migrate", 3 | "abstract": "Migrates a PostgreSQL table avoiding long locks", 4 | "description": "Reorganize tables in PostgreSQL databases with minimal locks", 5 | "version": "0.1.1", 6 | "maintainer": [], 7 | "tags": [ "maintenance", "locks", "migrations", "schema-change" ], 8 | "release_status": "unstable", 9 | "license": "bsd", 10 | "provides": { 11 | "pg_migrate": { 12 | "file": "lib/pg_migrate.sql", 13 | "version": "0.1.1", 14 | "abstract": "Migrates a PostgreSQL table avoiding long locks" 15 | } 16 | }, 17 | "prereqs": { 18 | "runtime": { 19 | "requires": { 20 | "PostgreSQL": "9.6.0" 21 | } 22 | } 23 | }, 24 | "resources": { 25 | "homepage": "https://github.com/phillbaker/pg_migrate", 26 | "bugtracker": { 27 | "web": "https://github.com/phillbaker/pg_migrate/issues" 28 | }, 29 | "repository": { 30 | "url": "https://github.com/phillbaker/pg_migrate.git", 31 | "web": "https://github.com/phillbaker/pg_migrate", 32 | "type": "git" 33 | } 34 | }, 35 | "meta-spec": { 36 | "version": "1.0.0", 37 | "url": "https://pgxn.org/meta/spec.txt" 38 | } 39 | } 40 | -------------------------------------------------------------------------------- /bin/Makefile: -------------------------------------------------------------------------------- 1 | # 2 | # pg_migrate: bin/Makefile 3 | # 4 | # Portions Copyright (c) 2008-2012, NIPPON TELEGRAPH AND TELEPHONE CORPORATION 5 | # Portions Copyright (c) 2011, Itagaki Takahiro 6 | # Portions Copyright (c) 2012-2020, The Reorg Development Team 7 | # 8 | 9 | PG_CONFIG ?= pg_config 10 | 11 | SRCS = pg_migrate.c pgut/pgut.c pgut/pgut-fe.c 12 | OBJS = $(SRCS:.c=.o) 13 | PROGRAM = pg_migrate 14 | 15 | 16 | # The version number of the program. 
It should be the same as the library's. 17 | MIGRATE_VERSION = $(shell grep '"version":' ../META.json | head -1 \ 18 | | sed -e 's/[ ]*"version":[ ]*"\(.*\)",/\1/') 19 | 20 | PG_CPPFLAGS = -I$(libpq_srcdir) -DMIGRATE_VERSION=$(MIGRATE_VERSION) 21 | 22 | ifdef DEBUG_MIGRATE 23 | PG_CPPFLAGS += -DDEBUG_MIGRATE 24 | endif 25 | 26 | PG_LIBS = $(libpq) 27 | 28 | # libs pgport, pgcommon moved somewhere else in some ubuntu version 29 | # see ticket #179 30 | PG_LIBS += -L$(shell $(PG_CONFIG) --pkglibdir) 31 | 32 | USE_PGXS = 1 # use pgxs if not in contrib directory 33 | PGXS := $(shell $(PG_CONFIG) --pgxs) 34 | include $(PGXS) 35 | 36 | # remove dependency on libxml2, libxslt, and libpam. 37 | # XXX: find a better way to make sure we are linking with libraries 38 | # from pg_config which we actually need. 39 | LIBS := $(filter-out -ledit -lgssapi_krb5 -lpam -lselinux -lxml2 -lxslt, $(LIBS)) 40 | -------------------------------------------------------------------------------- /regress/sql/migrate_run.sql: -------------------------------------------------------------------------------- 1 | -- 2 | -- do migration 3 | -- 4 | 5 | \! pg_migrate --dbname=contrib_regression --table=tbl_cluster --alter='ADD COLUMN a1 INT' --execute 6 | \! pg_migrate --dbname=contrib_regression --table=tbl_badindex --alter='ADD COLUMN a1 INT' --execute 7 | \! pg_migrate --dbname=contrib_regression --table=tbl_gistkey --alter='ADD COLUMN a1 INT' --execute 8 | \! pg_migrate --dbname=contrib_regression --table=tbl_only_ckey --alter='ADD COLUMN a1 INT' --execute 9 | \! pg_migrate --dbname=contrib_regression --table=tbl_idxopts --alter='ADD COLUMN a1 INT' --execute 10 | \! pg_migrate --dbname=contrib_regression --table=tbl_only_pkey --alter='ADD COLUMN a1 INT' --execute 11 | \! pg_migrate --dbname=contrib_regression --table=tbl_order --alter='ADD COLUMN a1 INT' --execute 12 | \! pg_migrate --dbname=contrib_regression --table=tbl_with_dropped_column --alter='ADD COLUMN a1 INT' --execute 13 | \! 
pg_migrate --dbname=contrib_regression --table=tbl_with_dropped_toast --alter='ADD COLUMN a1 INT' --execute 14 | \! pg_migrate --dbname=contrib_regression --table=tbl_with_view --alter='ADD COLUMN a1 INT' --execute 15 | \! pg_migrate --dbname=contrib_regression --table=tbl_with_mod_column_storage --alter='ADD COLUMN a1 INT' --execute 16 | \! pg_migrate --dbname=contrib_regression --table=tbl_with_toast --alter='ADD COLUMN a1 INT' --execute 17 | -------------------------------------------------------------------------------- /Formula/pg_migrate_postgresql@10.rb: -------------------------------------------------------------------------------- 1 | class PgMigratePostgresqlAT10 < Formula 2 | desc "PostgreSQL 10 extension and CLI to make schema changes with minimal locks" 3 | homepage "https://github.com/phillbaker/pg_migrate" 4 | url "https://github.com/phillbaker/pg_migrate/releases/download/v0.1.0/pg_migrate-0.1.0.zip" 5 | sha256 "7b4d7fe8d4cd47e235e3d689cc4cfd712ec3146e834958349715583f4bbe5784" 6 | license "BSD-3-Clause" 7 | head "https://github.com/phillbaker/pg_migrate", using: :git, branch: "master" 8 | 9 | depends_on "postgresql@10" 10 | 11 | def install 12 | ENV.prepend "LDFLAGS", "-L#{Formula["postgresql@10"].opt_lib}" 13 | ENV.prepend "CPPFLAGS", "-I#{Formula["postgresql@10"].opt_include}" 14 | ENV.prepend "PKG_CONFIG_PATH", "-I#{Formula["postgresql@10"].opt_lib}/pkgconfig" 15 | 16 | system "make" 17 | 18 | bin.install "./bin/pg_migrate" 19 | end 20 | 21 | def caveats 22 | <<~EOS 23 | To use this on a locally running version of postgres, please install the extension files: 24 | 25 | cp #{lib}/pg_migrate.so #{Formula["postgresql@10"].lib} 26 | cp #{lib}/pg_migrate--#{version}.sql #{Formula["postgresql@10"].share}/postgresql@10/extension/ 27 | cp #{lib}/pg_migrate.control #{Formula["postgresql@10"].share}/postgresql@10/extension/ 28 | 29 | Then run: 30 | `psql -c "DROP EXTENSION IF EXISTS pg_migrate cascade; CREATE EXTENSION pg_migrate" -d postgres` 31 | EOS 32 
| end 33 | 34 | test do 35 | system bin/"pg_migrate", "--version" 36 | end 37 | end 38 | -------------------------------------------------------------------------------- /Formula/pg_migrate_postgresql@11.rb: -------------------------------------------------------------------------------- 1 | class PgMigratePostgresqlAT11 < Formula 2 | desc "PostgreSQL 11 extension and CLI to make schema changes with minimal locks" 3 | homepage "https://github.com/phillbaker/pg_migrate" 4 | url "https://github.com/phillbaker/pg_migrate/releases/download/v0.1.0/pg_migrate-0.1.0.zip" 5 | sha256 "7b4d7fe8d4cd47e235e3d689cc4cfd712ec3146e834958349715583f4bbe5784" 6 | license "BSD-3-Clause" 7 | head "https://github.com/phillbaker/pg_migrate", using: :git, branch: "master" 8 | 9 | depends_on "postgresql@11" 10 | 11 | def install 12 | ENV.prepend "LDFLAGS", "-L#{Formula["postgresql@11"].opt_lib}" 13 | ENV.prepend "CPPFLAGS", "-I#{Formula["postgresql@11"].opt_include}" 14 | ENV.prepend "PKG_CONFIG_PATH", "-I#{Formula["postgresql@11"].opt_lib}/pkgconfig" 15 | 16 | system "make" 17 | 18 | bin.install "./bin/pg_migrate" 19 | end 20 | 21 | def caveats 22 | <<~EOS 23 | To use this on a locally running version of postgres, please install the extension files: 24 | 25 | cp #{lib}/pg_migrate.so #{Formula["postgresql@11"].lib} 26 | cp #{lib}/pg_migrate--#{version}.sql #{Formula["postgresql@11"].share}/postgresql@11/extension/ 27 | cp #{lib}/pg_migrate.control #{Formula["postgresql@11"].share}/postgresql@11/extension/ 28 | 29 | Then run: 30 | `psql -c "DROP EXTENSION IF EXISTS pg_migrate cascade; CREATE EXTENSION pg_migrate" -d postgres` 31 | EOS 32 | end 33 | 34 | test do 35 | system bin/"pg_migrate", "--version" 36 | end 37 | end 38 | -------------------------------------------------------------------------------- /Formula/pg_migrate_postgresql@12.rb: -------------------------------------------------------------------------------- 1 | class PgMigratePostgresqlAT12 < Formula 2 | desc "PostgreSQL 12 
extension and CLI to make schema changes with minimal locks" 3 | homepage "https://github.com/phillbaker/pg_migrate" 4 | url "https://github.com/phillbaker/pg_migrate/releases/download/v0.1.0/pg_migrate-0.1.0.zip" 5 | sha256 "7b4d7fe8d4cd47e235e3d689cc4cfd712ec3146e834958349715583f4bbe5784" 6 | license "BSD-3-Clause" 7 | head "https://github.com/phillbaker/pg_migrate", using: :git, branch: "master" 8 | 9 | depends_on "postgresql@12" 10 | 11 | def install 12 | ENV.prepend "LDFLAGS", "-L#{Formula["postgresql@12"].opt_lib}" 13 | ENV.prepend "CPPFLAGS", "-I#{Formula["postgresql@12"].opt_include}" 14 | ENV.prepend "PKG_CONFIG_PATH", "-I#{Formula["postgresql@12"].opt_lib}/pkgconfig" 15 | 16 | system "make" 17 | 18 | bin.install "./bin/pg_migrate" 19 | end 20 | 21 | def caveats 22 | <<~EOS 23 | To use this on a locally running version of postgres, please install the extension files: 24 | 25 | cp #{lib}/pg_migrate.so #{Formula["postgresql@12"].lib} 26 | cp #{lib}/pg_migrate--#{version}.sql #{Formula["postgresql@12"].share}/postgresql@12/extension/ 27 | cp #{lib}/pg_migrate.control #{Formula["postgresql@12"].share}/postgresql@12/extension/ 28 | 29 | Then run: 30 | `psql -c "DROP EXTENSION IF EXISTS pg_migrate cascade; CREATE EXTENSION pg_migrate" -d postgres` 31 | EOS 32 | end 33 | 34 | test do 35 | system bin/"pg_migrate", "--version" 36 | end 37 | end 38 | -------------------------------------------------------------------------------- /Formula/pg_migrate_postgresql@13.rb: -------------------------------------------------------------------------------- 1 | class PgMigratePostgresqlAT13 < Formula 2 | desc "PostgreSQL 13 extension and CLI to make schema changes with minimal locks" 3 | homepage "https://github.com/phillbaker/pg_migrate" 4 | url "https://github.com/phillbaker/pg_migrate/releases/download/v0.1.0/pg_migrate-0.1.0.zip" 5 | sha256 "7b4d7fe8d4cd47e235e3d689cc4cfd712ec3146e834958349715583f4bbe5784" 6 | license "BSD-3-Clause" 7 | head 
"https://github.com/phillbaker/pg_migrate", using: :git, branch: "master" 8 | 9 | depends_on "postgresql@13" 10 | 11 | def install 12 | ENV.prepend "LDFLAGS", "-L#{Formula["postgresql@13"].opt_lib}" 13 | ENV.prepend "CPPFLAGS", "-I#{Formula["postgresql@13"].opt_include}" 14 | ENV.prepend "PKG_CONFIG_PATH", "-I#{Formula["postgresql@13"].opt_lib}/pkgconfig" 15 | 16 | system "make" 17 | 18 | bin.install "./bin/pg_migrate" 19 | end 20 | 21 | def caveats 22 | <<~EOS 23 | To use this on a locally running version of postgres, please install the extension files: 24 | 25 | cp #{lib}/pg_migrate.so #{Formula["postgresql@13"].lib} 26 | cp #{lib}/pg_migrate--#{version}.sql #{Formula["postgresql@13"].share}/postgresql@13/extension/ 27 | cp #{lib}/pg_migrate.control #{Formula["postgresql@13"].share}/postgresql@13/extension/ 28 | 29 | Then run: 30 | `psql -c "DROP EXTENSION IF EXISTS pg_migrate cascade; CREATE EXTENSION pg_migrate" -d postgres` 31 | EOS 32 | end 33 | 34 | test do 35 | system bin/"pg_migrate", "--version" 36 | end 37 | end 38 | -------------------------------------------------------------------------------- /Formula/pg_migrate_postgresql@14.rb: -------------------------------------------------------------------------------- 1 | class PgMigratePostgresqlAT14 < Formula 2 | desc "PostgreSQL 14 extension and CLI to make schema changes with minimal locks" 3 | homepage "https://github.com/phillbaker/pg_migrate" 4 | url "https://github.com/phillbaker/pg_migrate/releases/download/v0.1.0/pg_migrate-0.1.0.zip" 5 | sha256 "7b4d7fe8d4cd47e235e3d689cc4cfd712ec3146e834958349715583f4bbe5784" 6 | license "BSD-3-Clause" 7 | head "https://github.com/phillbaker/pg_migrate", using: :git, branch: "master" 8 | 9 | depends_on "postgresql@14" 10 | 11 | def install 12 | ENV.prepend "LDFLAGS", "-L#{Formula["postgresql@14"].opt_lib}" 13 | ENV.prepend "CPPFLAGS", "-I#{Formula["postgresql@14"].opt_include}" 14 | ENV.prepend "PKG_CONFIG_PATH", "-I#{Formula["postgresql@14"].opt_lib}/pkgconfig" 
15 | 16 | system "make" 17 | 18 | bin.install "./bin/pg_migrate" 19 | end 20 | 21 | def caveats 22 | <<~EOS 23 | To use this on a locally running version of postgres, please install the extension files: 24 | 25 | cp #{lib}/pg_migrate.so #{Formula["postgresql@14"].lib} 26 | cp #{lib}/pg_migrate--#{version}.sql #{Formula["postgresql@14"].share}/postgresql@14/extension/ 27 | cp #{lib}/pg_migrate.control #{Formula["postgresql@14"].share}/postgresql@14/extension/ 28 | 29 | Then run: 30 | `psql -c "DROP EXTENSION IF EXISTS pg_migrate cascade; CREATE EXTENSION pg_migrate" -d postgres` 31 | EOS 32 | end 33 | 34 | test do 35 | system bin/"pg_migrate", "--version" 36 | end 37 | end 38 | -------------------------------------------------------------------------------- /Formula/pg_migrate_postgresql@9.6.rb: -------------------------------------------------------------------------------- 1 | class PgMigratePostgresqlAT96 < Formula 2 | desc "PostgreSQL 9.6 extension and CLI to make schema changes with minimal locks" 3 | homepage "https://github.com/phillbaker/pg_migrate" 4 | url "https://github.com/phillbaker/pg_migrate/releases/download/v0.1.0/pg_migrate-0.1.0.zip" 5 | sha256 "7b4d7fe8d4cd47e235e3d689cc4cfd712ec3146e834958349715583f4bbe5784" 6 | license "BSD-3-Clause" 7 | head "https://github.com/phillbaker/pg_migrate", using: :git, branch: "master" 8 | 9 | depends_on "postgresql@9.6" 10 | 11 | def install 12 | ENV.prepend "LDFLAGS", "-L#{Formula["postgresql@9.6"].opt_lib}" 13 | ENV.prepend "CPPFLAGS", "-I#{Formula["postgresql@9.6"].opt_include}" 14 | ENV.prepend "PKG_CONFIG_PATH", "-I#{Formula["postgresql@9.6"].opt_lib}/pkgconfig" 15 | 16 | system "make" 17 | 18 | bin.install "./bin/pg_migrate" 19 | end 20 | 21 | def caveats 22 | <<~EOS 23 | To use this on a locally running version of postgres, please install the extension files: 24 | 25 | cp #{lib}/pg_migrate.so #{Formula["postgresql@9.6"].lib} 26 | cp #{lib}/pg_migrate--#{version}.sql 
#{Formula["postgresql@9.6"].share}/postgresql@9.6/extension/ 27 | cp #{lib}/pg_migrate.control #{Formula["postgresql@9.6"].share}/postgresql@9.6/extension/ 28 | 29 | Then run: 30 | `psql -c "DROP EXTENSION IF EXISTS pg_migrate cascade; CREATE EXTENSION pg_migrate" -d postgres` 31 | EOS 32 | end 33 | 34 | test do 35 | system bin/"pg_migrate", "--version" 36 | end 37 | end 38 | -------------------------------------------------------------------------------- /lib/pgut/pgut-be.h: -------------------------------------------------------------------------------- 1 | /*------------------------------------------------------------------------- 2 | * 3 | * pgut-be.h 4 | * 5 | * Copyright (c) 2009-2011, NIPPON TELEGRAPH AND TELEPHONE CORPORATION 6 | * Portions Copyright (c) 2012-2020, The Reorg Development Team 7 | * 8 | *------------------------------------------------------------------------- 9 | */ 10 | 11 | #ifndef PGUT_BE_H 12 | #define PGUT_BE_H 13 | 14 | #include "fmgr.h" 15 | #include "utils/tuplestore.h" 16 | 17 | #ifndef WIN32 18 | 19 | #define PGUT_EXPORT 20 | 21 | #else 22 | 23 | #define PGUT_EXPORT __declspec(dllexport) 24 | 25 | /* 26 | * The PG_MODULE_MAGIC and PG_FUNCTION_INFO_V1 macros seem to be broken. 27 | * They use PGDLLIMPORT, but these objects are not imported from postgres; 28 | * they are exported from the user module. So, they should always be dllexported. 
29 | */ 30 | 31 | #undef PG_MODULE_MAGIC 32 | #define PG_MODULE_MAGIC \ 33 | extern PGUT_EXPORT const Pg_magic_struct *PG_MAGIC_FUNCTION_NAME(void); \ 34 | const Pg_magic_struct * \ 35 | PG_MAGIC_FUNCTION_NAME(void) \ 36 | { \ 37 | static const Pg_magic_struct Pg_magic_data = PG_MODULE_MAGIC_DATA; \ 38 | return &Pg_magic_data; \ 39 | } \ 40 | extern int no_such_variable 41 | 42 | #undef PG_FUNCTION_INFO_V1 43 | #define PG_FUNCTION_INFO_V1(funcname) \ 44 | extern PGUT_EXPORT const Pg_finfo_record * CppConcat(pg_finfo_,funcname)(void); \ 45 | const Pg_finfo_record * \ 46 | CppConcat(pg_finfo_,funcname) (void) \ 47 | { \ 48 | static const Pg_finfo_record my_finfo = { 1 }; \ 49 | return &my_finfo; \ 50 | } \ 51 | extern int no_such_variable 52 | 53 | #endif 54 | 55 | #endif /* PGUT_BE_H */ 56 | -------------------------------------------------------------------------------- /COPYRIGHT: -------------------------------------------------------------------------------- 1 | Portions Copyright (c) 2008-2011, NIPPON TELEGRAPH AND TELEPHONE CORPORATION 2 | Portions Copyright (c) 2011, Itagaki Takahiro 3 | Portions Copyright (c) 2012-2020, The Reorg Development Team 4 | All rights reserved. 5 | 6 | Redistribution and use in source and binary forms, with or without 7 | modification, are permitted provided that the following conditions are met: 8 | 9 | * Redistributions of source code must retain the above copyright notice, 10 | this list of conditions and the following disclaimer. 11 | * Redistributions in binary form must reproduce the above copyright 12 | notice, this list of conditions and the following disclaimer in the 13 | documentation and/or other materials provided with the distribution. 14 | * Neither the name of the authors nor the names of its contributors may 15 | be used to endorse or promote products derived from this software 16 | without specific prior written permission. 
17 | 18 | THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" 19 | AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE 20 | IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE 21 | DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE 22 | FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 23 | DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR 24 | SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER 25 | CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, 26 | OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE 27 | OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 28 | -------------------------------------------------------------------------------- /ci/key.gpg.pub: -------------------------------------------------------------------------------- 1 | -----BEGIN PGP PUBLIC KEY BLOCK----- 2 | 3 | mQINBGI/cKABEADGV1XdA1zvjCaSrmkWpIFQgcUnR07xv1b6LQFsdAAH0IETUxGp 4 | t789tglJMcLw6fR82wS9iKEVqMPJmOn5RSt5q2dyxoWzN5C2RO9pkdDGICBeDvIx 5 | v8GoGk+Xv3rF9FQcLYkhoDpBM36gH5KsOfLl3vWvsWZN466D6X7pblCep/zYQnfp 6 | Yd0X3ftIqALr1JJy2foBZbU7Dt0L8H/52sjHl0z8B+oJ71JPllMDO0jo4t8HS/TO 7 | fsfd4UVFANANo2nyASgr9cYZTa1jMTwuWAreBaCIJj7592lSabhVlPbqfAPATyP+ 8 | txDP5LRF9h6Fza4hA7a0Hg/XYJIh590vLhytKHr1LFu5hVuthp5zAqnunXAolOBg 9 | ZryOOCAKYfs/Uu+4ZgzZvtd/NvwR1m8BTCa7eAsUB3KKV+bqbDmwNuX1ZPLs5RR7 10 | 4ISywd9/flD0VK2cYetbxKz15Pav7WovFFVrquLgTxmdgW2jrUWJY5F3bKrSBpKK 11 | WCbRnFlHc1Bjq/W+4efj6Wq1IGVR3aUWa3JOV/DRAjto6+9tQQqBkvM+OVRuZmBC 12 | buQez1DTw+D7fqMX712bL0c8CukRXIkxfoVTtPAT/2QUBk7js8d/7x4gNuJqMvMM 13 | Hm0E046Ei2FvT41ZUZSJZ6rhtL15gY9/MSPAKGgIthbvjH9L0Ho56nBhgwARAQAB 14 | tChQaGlsbGlwIEJha2VyIDxwaGlsbGJha2VyQHJldHJvZGljdC5jb20+iQJYBBMB 15 | CABCFiEEFPssJFhKc6farbhaA7D2/FGMEHIFAmI/cKACGwMFCQPCZwAFCwkIBwID 16 | 
IgIBBhUKCQgLAgQWAgMBAh4HAheAAAoJEAOw9vxRjBByrB4P/jJziqxj23CZs3ko 17 | vl4nUBZrlIbx2JNvX4HBtti6gVbxVTMRmIS9Duw5p1o2Z9t1A4u/D5gPX6NiJZZ0 18 | blPa7iNZRhGzA7Zf008wW/RVLg186scqvFMlBMzDXtd9xgM4yVbONpn7O+ubCoL3 19 | zjYj2uPnc+xt4Ru1IAlqSZbKUsGLUE2AvU067yHRn7FtDmyk26YQ89VG3MDfGdLb 20 | 63xXAimF2XAPREjv683KJ8g5umWdpeUbGFCGWh4ednW6EKl1TtNyaN9NoF54bPgg 21 | iATljWHJlmEpvmgthaVLIcjBf0xmneRKCE/kGrGtzdp0cmUTapqzHpu9/JTvXSKu 22 | tEJSdXRubwB9r5irKwNq5GH13oa36SlD/Qmaoi3huKTydJJ+20vcwsr5kzCKdk4w 23 | K3jabfTBLngwiUbni/qZ+fQx22dkJ6AGk05Ez8vAorhAbR4+35MOco9ZxafU9Nwl 24 | uGfI2WxHnz+Goq6jJDAuEqHv6V6NDwooXb51JbSlcmBnaPySIpSmllmRF+BN0ZGS 25 | OGskIUsFqXQFF8Ru3CcTwHd4bkiA+bmwfMU7vUQ6X3Hy+4kYMMM6PGTyg5GJd3vv 26 | QogfDyO3ysgF8tZLrGdVfu6BUTSpjifc+4bxDaD2lYeMzP1Nl4gHAZ/SSkeN1j64 27 | aRmFyyuLA3GEkgSX+hvCsbaqD2YL 28 | =kQO/ 29 | -----END PGP PUBLIC KEY BLOCK----- 30 | -------------------------------------------------------------------------------- /regress/sql/ordered_indexes.sql: -------------------------------------------------------------------------------- 1 | -- 2 | -- pg_repack issue #3 3 | -- 4 | CREATE TABLE issue3_1 (col1 int NOT NULL, col2 text NOT NULL); 5 | CREATE UNIQUE INDEX issue3_1_idx ON issue3_1 (col1, col2 DESC); 6 | SELECT migrate.get_order_by('issue3_1_idx'::regclass::oid, 'issue3_1'::regclass::oid); 7 | \! pg_migrate --dbname=contrib_regression --table=issue3_1 --alter='ADD COLUMN c1 INT' --execute 8 | 9 | CREATE TABLE issue3_2 (col1 int NOT NULL, col2 text NOT NULL); 10 | CREATE UNIQUE INDEX issue3_2_idx ON issue3_2 (col1 DESC, col2 text_pattern_ops); 11 | SELECT migrate.get_order_by('issue3_2_idx'::regclass::oid, 'issue3_2'::regclass::oid); 12 | \! 
pg_migrate --dbname=contrib_regression --table=issue3_2 --alter='ADD COLUMN c1 INT' --execute 13 | 14 | CREATE TABLE issue3_3 (col1 int NOT NULL, col2 text NOT NULL); 15 | CREATE UNIQUE INDEX issue3_3_idx ON issue3_3 (col1 DESC, col2 DESC); 16 | SELECT migrate.get_order_by('issue3_3_idx'::regclass::oid, 'issue3_3'::regclass::oid); 17 | \! pg_migrate --dbname=contrib_regression --table=issue3_3 --alter='ADD COLUMN c1 INT' --execute 18 | 19 | CREATE TABLE issue3_4 (col1 int NOT NULL, col2 text NOT NULL); 20 | CREATE UNIQUE INDEX issue3_4_idx ON issue3_4 (col1 NULLS FIRST, col2 text_pattern_ops DESC NULLS LAST); 21 | SELECT migrate.get_order_by('issue3_4_idx'::regclass::oid, 'issue3_4'::regclass::oid); 22 | \! pg_migrate --dbname=contrib_regression --table=issue3_4 --alter='ADD COLUMN c1 INT' --execute 23 | 24 | CREATE TABLE issue3_5 (col1 int NOT NULL, col2 text NOT NULL); 25 | CREATE UNIQUE INDEX issue3_5_idx ON issue3_5 (col1 DESC NULLS FIRST, col2 COLLATE "POSIX" DESC); 26 | SELECT migrate.get_order_by('issue3_5_idx'::regclass::oid, 'issue3_5'::regclass::oid); 27 | \! pg_migrate --dbname=contrib_regression --table=issue3_5 --alter='ADD COLUMN c1 INT' --execute 28 | -------------------------------------------------------------------------------- /Makefile: -------------------------------------------------------------------------------- 1 | # 2 | # pg_migrate: Makefile 3 | # 4 | # Portions Copyright (c) 2008-2011, NIPPON TELEGRAPH AND TELEPHONE CORPORATION 5 | # Portions Copyright (c) 2011, Itagaki Takahiro 6 | # Portions Copyright (c) 2012-2020, The Reorg Development Team 7 | # 8 | 9 | PG_CONFIG ?= pg_config 10 | EXTENSION = pg_migrate 11 | 12 | .PHONY: dist/$(EXTENSION)-$(EXTVERSION).zip 13 | 14 | # Pull out PostgreSQL version number from pg_config 15 | VERSION := $(shell $(PG_CONFIG) --version | sed 's/.* \([[:digit:].]\{1,\}\).*/\1/') 16 | ifeq ("$(VERSION)","") 17 | $(error pg_config not found) 18 | endif 19 | 20 | # PostgreSQL version as a number, e.g. 
9.1.4 -> 901 21 | INTVERSION := $(shell echo $$(($$(echo $(VERSION).0 | sed 's/\([[:digit:]]\{1,\}\)\.\([[:digit:]]\{1,\}\).*/\1*100+\2/')))) 22 | 23 | # The version number of the library 24 | EXTVERSION = $(shell grep '"version":' META.json | head -1 \ 25 | | sed -e 's/[ ]*"version":[ ]*"\(.*\)",/\1/') 26 | 27 | # NOTE: keep consistent with META.json 28 | ifeq ($(shell echo $$(($(INTVERSION) < 904))),1) 29 | $(error $(EXTENSION) requires PostgreSQL 9.4 or later. This is $(VERSION)) 30 | endif 31 | 32 | SUBDIRS = bin lib regress 33 | 34 | all install installdirs uninstall distprep clean distclean maintainer-clean debug: 35 | @for dir in $(SUBDIRS); do \ 36 | $(MAKE) -C $$dir $@ || exit; \ 37 | done 38 | 39 | # We'd like check operations to run all the subtests before failing. 40 | check installcheck: 41 | @CHECKERR=0; for dir in $(SUBDIRS); do \ 42 | $(MAKE) -C $$dir $@ || CHECKERR=$$?; \ 43 | done; \ 44 | exit $$CHECKERR 45 | 46 | # Prepare the package for PGXN submission 47 | package: dist dist/$(EXTENSION)-$(EXTVERSION).zip 48 | 49 | dist: 50 | mkdir -p dist 51 | 52 | dist/$(EXTENSION)-$(EXTVERSION).zip: 53 | git archive --format zip --prefix=$(EXTENSION)-$(EXTVERSION)/ --output $@ master 54 | -------------------------------------------------------------------------------- /doc/style.css: -------------------------------------------------------------------------------- 1 | body { 2 | font-family: 3 | Lucida Grande, Verdana, Arial, Helvetica, 4 | 'メイリオ', 5 | 'Meiryo', 6 | 'ヒラギノ角ゴ Pro W3', 7 | 'Hiragino Kaku Gothic Pro', 8 | 'Osaka', 9 | 'MS Pゴシック', 10 | sans-serif; 11 | color: #202020; 12 | } 13 | 14 | h2, h3, h4, h5, h6 { 15 | color: Black; 16 | background: none; 17 | padding-top: 0.5em; 18 | padding-bottom: 0.17em; 19 | border-bottom: 1px solid #aaaaaa; 20 | } 21 | H1 { font-size: x-large; font-family: Lucida Grande,verdana,arial,helvetica,sans-serif; } 22 | H2 { font-size: large; font-family: Lucida Grande,verdana,arial,helvetica,sans-serif; } 23 | H3 { 
padding-left: 1em; font-size: medium; font-family: Lucida Grande,verdana,arial,helvetica,sans-serif; } 24 | H4 { padding-left: 2em; font-size: small; font-family: Lucida Grande,verdana,arial,helvetica,sans-serif; } 25 | H5 { padding-left: 3em; font-size: x-small; font-family: Lucida Grande,verdana,arial,helvetica,sans-serif; } 26 | H6 { padding-left: 4em; font-size: xx-small; font-family: Lucida Grande,verdana,arial,helvetica,sans-serif; } 27 | 28 | pre { 29 | font-family: courier,sans-serif; 30 | background-color: #FBFBFD; 31 | border: 1px dashed #7E7ECB; 32 | color: black; 33 | line-height: 1.1em; padding: 0.5em; 34 | overflow: auto; 35 | } 36 | 37 | li { 38 | line-height: 1.4em; 39 | } 40 | 41 | div.contents { 42 | float:right; 43 | border:thin solid black; 44 | background-color: white; 45 | padding-top: 0.2em; 46 | padding-bottom: 0.2em; 47 | padding-left: 1em; 48 | padding-right: 1em; 49 | margin-left: 0.5em; 50 | margin-top: 2.5em !important; 51 | } 52 | 53 | div.contents ul { 54 | padding-left: 1em; 55 | } 56 | 57 | dl.diag dt, 58 | dl.ddl dt { 59 | font-weight: bold; 60 | margin-top: 1em; 61 | margin-bottom: 0.5em; 62 | } 63 | -------------------------------------------------------------------------------- /lib/Makefile: -------------------------------------------------------------------------------- 1 | # 2 | # pg_migrate: lib/Makefile 3 | # 4 | # Portions Copyright (c) 2008-2012, NIPPON TELEGRAPH AND TELEPHONE CORPORATION 5 | # Portions Copyright (c) 2011, Itagaki Takahiro 6 | # Portions Copyright (c) 2012-2020, The Reorg Development Team 7 | # 8 | 9 | PG_CONFIG ?= pg_config 10 | 11 | # version as a number, e.g. 
9.1.4 -> 901 12 | VERSION := $(shell $(PG_CONFIG) --version | sed 's/.* \([[:digit:].]\{1,\}\).*/\1/') 13 | INTVERSION := $(shell echo $$(($$(echo $(VERSION).0 | sed 's/\([[:digit:]]\{1,\}\)\.\([[:digit:]]\{1,\}\).*/\1*100+\2/')))) 14 | 15 | EXTENSION = pg_migrate 16 | MODULE_big = $(EXTENSION) 17 | 18 | OBJS = migrate.o pgut/pgut-spi.o 19 | 20 | SHLIB_EXPORTS = exports.txt 21 | 22 | 23 | # It is not possible to create tables with OIDs on PostgreSQL 12 or later 24 | ifeq ($(shell echo $$(($(INTVERSION) < 1200))),1) 25 | RELHASOIDS := relhasoids 26 | else 27 | RELHASOIDS := false 28 | endif 29 | 30 | # The version number of the program. It should be the same of the library. 31 | MIGRATE_VERSION = $(shell grep '"version":' ../META.json | head -1 \ 32 | | sed -e 's/[ ]*"version":[ ]*"\(.*\)",/\1/') 33 | 34 | PG_CPPFLAGS = -DMIGRATE_VERSION=$(MIGRATE_VERSION) 35 | 36 | DATA_built = pg_migrate--$(MIGRATE_VERSION).sql pg_migrate.control 37 | 38 | USE_PGXS = 1 39 | PGXS := $(shell $(PG_CONFIG) --pgxs) 40 | include $(PGXS) 41 | 42 | # remove dependency on libxml2, libxslt, and libpam. 43 | # XXX: find a better way to make sure we are linking with libraries 44 | # from pg_config which we actually need. 
45 | LIBS := $(filter-out -lpam -lxml2 -lxslt, $(LIBS)) 46 | 47 | pg_migrate.sql: pg_migrate.sql.in 48 | echo "BEGIN;" > $@; \ 49 | sed 's,MODULE_PATHNAME,$$libdir/$(MODULE_big),g' $< \ 50 | | sed 's,MIGRATE_VERSION,$(MIGRATE_VERSION),g' >> $@; \ 51 | echo "COMMIT;" >> $@; 52 | 53 | pg_migrate--$(MIGRATE_VERSION).sql: pg_migrate.sql.in 54 | sed 's,MIGRATE_VERSION,$(MIGRATE_VERSION),g' $< \ 55 | | sed 's,relhasoids,$(RELHASOIDS),g'> $@; 56 | 57 | pg_migrate.control: pg_migrate.control.in 58 | sed 's,MIGRATE_VERSION,$(MIGRATE_VERSION),g' $< > $@ 59 | -------------------------------------------------------------------------------- /doc/release.rst: 1 | What to do to release pg_repack 2 | =============================== 3 | 4 | This document lists the operations needed to release a new pg_repack 5 | version. The version number in this document is indicated by ``$VER``: it 6 | should be a dot-separated version number, optionally followed by a 7 | pre-release string: ``1.2.0``, ``1.2.1``, ``1.2-dev0``, ``1.2.0-beta1`` are 8 | valid version numbers. 9 | 10 | In order to release the package you will need accounts on GitHub and PGXN 11 | with the right privileges: contact Daniele Varrazzo to obtain them. 12 | 13 | - Set the right version number in ``META.json`` (note: it's in two different 14 | places). 15 | - Set the right release_status in ``META.json``: ``testing`` or ``stable``. 16 | - Commit the above metadata changes. 17 | 18 | - Create a package running ``make package``. The package will be called 19 | ``dist/pg_repack-$VER.zip``. 20 | 21 | - Verify the package installs and passes tests with `pgxn client`__:: 22 | 23 | pgxn install --sudo -- dist/pg_repack-$VER.zip 24 | pgxn check dist/pg_repack-$VER.zip 25 | 26 | (note that ``check`` may require the Postgres bin directory to be added to 27 | the path, e.g.
``PATH=$(pg_config --bindir):$PATH``; check the ``install`` 28 | log to see where ``pg_repack`` executable was installed). 29 | 30 | .. __: https://pgxn.github.io/pgxnclient/ 31 | 32 | - Push the code changes on github:: 33 | 34 | git push 35 | 36 | - Upload the package on http://manager.pgxn.org/. 37 | 38 | - Check the uploaded package works as expected; if not fix and push more:: 39 | 40 | pgxn install --sudo -- pg_repack 41 | pgxn check pg_repack 42 | 43 | - Create a tag, signed if possible:: 44 | 45 | git tag -a -s ver_$VER 46 | 47 | - Push the new tag on github:: 48 | 49 | git push --tags 50 | 51 | - Upload the docs by pushing in the repos at 52 | http://reorg.github.io/pg_repack/. The operations are roughly:: 53 | 54 | git clone --recursive git@github.com:reorg/reorg.github.com.git 55 | cd reorg.github.com 56 | make sm 57 | make 58 | git commit -a -m "Docs upload for release $VER" 59 | git push 60 | 61 | - Check the page http://reorg.github.io/pg_repack/ is right. 62 | 63 | - Announce the package on pgsql-announce@postgresql.org. 64 | -------------------------------------------------------------------------------- /SPECS/pg_repack90.spec: -------------------------------------------------------------------------------- 1 | # SPEC file for pg_repack 2 | # Copyright(C) 2009-2010 NIPPON TELEGRAPH AND TELEPHONE CORPORATION 3 | %define sname pg_repack 4 | 5 | %define _pgdir /usr/pgsql-9.0 6 | %define _bindir %{_pgdir}/bin 7 | %define _libdir %{_pgdir}/lib 8 | %define _datadir %{_pgdir}/share 9 | 10 | Summary: Reorganize tables in PostgreSQL databases without any locks. 
11 | Name: %{sname} 12 | Version: 1.1.5 13 | Release: 1%{?dist} 14 | License: BSD 15 | Group: Applications/Databases 16 | Source0: %{sname}-%{version}.tar.gz 17 | URL: https://reorg.github.io/%{sname}/ 18 | BuildRoot: %{_tmppath}/%{name}-%{version}-%{release}-%(%{__id_u} -n) 19 | 20 | BuildRequires: postgresql90-devel, postgresql90 21 | Requires: postgresql90, postgresql90-libs 22 | 23 | %description 24 | pg_repack can re-organize tables in a PostgreSQL database without any locks so that 25 | you can retrieve or update rows in tables being reorganized. 26 | The module is developed to be a better alternative to CLUSTER and VACUUM FULL. 27 | 28 | %prep 29 | %setup -q -n %{sname}-%{version} 30 | 31 | %build 32 | USE_PGXS=1 make %{?_smp_mflags} 33 | 34 | %install 35 | rm -rf %{buildroot} 36 | USE_PGXS=1 make DESTDIR=%{buildroot} 37 | 38 | install -d %{buildroot}%{_libdir} 39 | install -d %{buildroot}%{_bindir} 40 | install -d %{buildroot}%{_datadir}/contrib 41 | 42 | install -m 755 bin/pg_repack %{buildroot}%{_bindir}/pg_repack 43 | install -m 755 lib/pg_repack.so %{buildroot}%{_libdir}/pg_repack.so 44 | install -m 644 lib/pg_repack.sql %{buildroot}%{_datadir}/contrib/pg_repack.sql 45 | install -m 644 lib/uninstall_pg_repack.sql %{buildroot}%{_datadir}/contrib/uninstall_pg_repack.sql 46 | 47 | %define pg_sharedir 48 | 49 | %files 50 | %defattr(755,root,root,755) 51 | %{_bindir}/pg_repack 52 | %{_libdir}/pg_repack.so 53 | %defattr(644,root,root,755) 54 | %{_datadir}/contrib/pg_repack.sql 55 | %{_datadir}/contrib/uninstall_pg_repack.sql 56 | 57 | %clean 58 | rm -rf %{buildroot} 59 | 60 | %changelog 61 | * Thu Oct 21 2010 - NTT OSS Center 1.1.5-1 62 | * Wed Sep 22 2010 - NTT OSS Center 1.1.4-1 63 | * Thu Apr 22 2010 - NTT OSS Center 1.1.2-1 64 | * Mon Jan 15 2010 - Toru SHIMOGAKI 1.0.8-1 65 | * Tue Sep 08 2009 - Toru SHIMOGAKI 1.0.6-1 66 | * Fri May 15 2009 - Toru SHIMOGAKI 1.0.4-1 67 | - Initial packaging 68 | 
-------------------------------------------------------------------------------- /regress/travis_prepare.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | set -e -x 4 | 5 | export PACKAGE="$PGVER" 6 | export PGDIR="/usr/lib/postgresql/$PACKAGE" 7 | export CONFDIR="/etc/postgresql/$PACKAGE/main" 8 | export DATADIR="/var/lib/postgresql/$PACKAGE/main" 9 | export PGBIN="$PGDIR/bin" 10 | export PATH="$PGBIN:$PATH" 11 | 12 | # This also stops the server currently running on port 5432 13 | sudo apt-get remove -y libpq5 14 | 15 | # Match libpq and server-dev packages 16 | # See https://github.com/reorg/pg_repack/issues/63 17 | sudo sh -c 'echo "deb [arch=amd64] http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main ${PGVER}" > /etc/apt/sources.list.d/pgdg.list' 18 | 19 | # Import the repository signing key: 20 | wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add - 21 | 22 | sudo apt-get update 23 | 24 | # This might be a moving target, but it currently fails. 13 could start 25 | # failing in the future instead. 26 | # Some versions break if this is not specified (9.4 for sure, maybe 9.6) 27 | if [[ "$PGVER" = "9.4" ]]; then 28 | sudo apt-get install -y "libpq5=${PGVER}*" "libpq-dev=${PGVER}*" 29 | sudo apt-mark hold libpq5 30 | fi 31 | 32 | # missing build dependency by postgresql-server-dev 33 | if [[ "$PGVER" -ge "14" ]]; then 34 | sudo apt-get install -y liblz4-dev 35 | fi 36 | 37 | if ! 
sudo apt-get install -y \ 38 | postgresql-$PGVER \ 39 | postgresql-client-$PGVER \ 40 | postgresql-server-dev-$PGVER 41 | then 42 | sudo systemctl status postgresql.service -l 43 | exit 1 44 | fi 45 | 46 | # ensure PostgreSQL is running on port 5432 with proper auth 47 | sudo sed -i \ 48 | 's/\(^local[[:space:]]\+all[[:space:]]\+all[[:space:]]\+\).*/\1trust/' \ 49 | "$CONFDIR/pg_hba.conf" 50 | sudo bash -c "echo 'port=5432' >> $CONFDIR/postgresql.conf" 51 | sudo service postgresql restart $PGVER 52 | 53 | # ensure the travis user exists. It may be missing if the database was not provided by Travis 54 | userexists=`sudo -u postgres "$PGBIN/psql" -tc "select count(*) from pg_catalog.pg_user where usename='travis';"` 55 | if [ ${userexists} -eq 0 ]; then 56 | sudo -u postgres "$PGBIN/psql" -c "create user travis superuser" 57 | fi 58 | 59 | # Go somewhere else or sudo will fail 60 | cd / 61 | 62 | # Already started because of installing postgresql-$PGVER 63 | # sudo -u postgres "$PGBIN/pg_ctl" -w -l /dev/null -D "$CONFDIR" start 64 | sudo -u postgres mkdir -p /var/lib/postgresql/testts 65 | sudo -u postgres "$PGBIN/psql" \ 66 | -c "create tablespace testts location '/var/lib/postgresql/testts'" 67 | 68 | # Go back to the build dir 69 | cd - 70 | -------------------------------------------------------------------------------- /regress/expected/migrate_run.out: 1 | -- 2 | -- do migration 3 | -- 4 | \! pg_migrate --dbname=contrib_regression --table=tbl_cluster --alter='ADD COLUMN a1 INT' --execute 5 | INFO: migrating table "public.tbl_cluster" 6 | INFO: altering table with: ADD COLUMN a1 INT 7 | \!
pg_migrate --dbname=contrib_regression --table=tbl_badindex --alter='ADD COLUMN a1 INT' --execute 8 | INFO: migrating table "public.tbl_badindex" 9 | WARNING: skipping invalid index: CREATE UNIQUE INDEX idx_badindex_n ON public.tbl_badindex USING btree (n) 10 | INFO: altering table with: ADD COLUMN a1 INT 11 | \! pg_migrate --dbname=contrib_regression --table=tbl_gistkey --alter='ADD COLUMN a1 INT' --execute 12 | INFO: migrating table "public.tbl_gistkey" 13 | INFO: altering table with: ADD COLUMN a1 INT 14 | \! pg_migrate --dbname=contrib_regression --table=tbl_only_ckey --alter='ADD COLUMN a1 INT' --execute 15 | WARNING: relation "public.tbl_only_ckey" must have a primary key or not-null unique keys 16 | \! pg_migrate --dbname=contrib_regression --table=tbl_idxopts --alter='ADD COLUMN a1 INT' --execute 17 | INFO: migrating table "public.tbl_idxopts" 18 | INFO: altering table with: ADD COLUMN a1 INT 19 | \! pg_migrate --dbname=contrib_regression --table=tbl_only_pkey --alter='ADD COLUMN a1 INT' --execute 20 | INFO: migrating table "public.tbl_only_pkey" 21 | INFO: altering table with: ADD COLUMN a1 INT 22 | \! pg_migrate --dbname=contrib_regression --table=tbl_order --alter='ADD COLUMN a1 INT' --execute 23 | INFO: migrating table "public.tbl_order" 24 | INFO: altering table with: ADD COLUMN a1 INT 25 | \! pg_migrate --dbname=contrib_regression --table=tbl_with_dropped_column --alter='ADD COLUMN a1 INT' --execute 26 | INFO: migrating table "public.tbl_with_dropped_column" 27 | INFO: altering table with: ADD COLUMN a1 INT 28 | \! pg_migrate --dbname=contrib_regression --table=tbl_with_dropped_toast --alter='ADD COLUMN a1 INT' --execute 29 | INFO: migrating table "public.tbl_with_dropped_toast" 30 | INFO: altering table with: ADD COLUMN a1 INT 31 | \! pg_migrate --dbname=contrib_regression --table=tbl_with_view --alter='ADD COLUMN a1 INT' --execute 32 | WARNING: the table "public.tbl_with_view" has 1 views depending on it. 
this tool does not currently support migrating tables with dependent views. 33 | \! pg_migrate --dbname=contrib_regression --table=tbl_with_mod_column_storage --alter='ADD COLUMN a1 INT' --execute 34 | INFO: migrating table "public.tbl_with_mod_column_storage" 35 | INFO: altering table with: ADD COLUMN a1 INT 36 | \! pg_migrate --dbname=contrib_regression --table=tbl_with_toast --alter='ADD COLUMN a1 INT' --execute 37 | INFO: migrating table "public.tbl_with_toast" 38 | INFO: altering table with: ADD COLUMN a1 INT 39 | -------------------------------------------------------------------------------- /regress/expected/after_schema_1.out: -------------------------------------------------------------------------------- 1 | -- 2 | -- tables schema after running migrations 3 | -- 4 | \d tbl_cluster 5 | Table "public.tbl_cluster" 6 | Column | Type | Modifiers 7 | --------+-----------------------------+----------- 8 | col1 | integer | not null 9 | time | timestamp without time zone | 10 | ,") | text | not null 11 | a1 | integer | 12 | Indexes: 13 | "tbl_cluster_pkey_f6d2e" PRIMARY KEY, btree (","")", col1) WITH (fillfactor='75') 14 | ",") cluster_f6d2e" btree ("time", length(","")"), ","")" text_pattern_ops) WITH (fillfactor='75') 15 | 16 | \d tbl_gistkey 17 | Table "public.tbl_gistkey" 18 | Column | Type | Modifiers 19 | --------+---------+----------- 20 | id | integer | not null 21 | c | circle | 22 | a1 | integer | 23 | Indexes: 24 | "tbl_gistkey_pkey_b253a" PRIMARY KEY, btree (id) 25 | "cidx_circle_b253a" gist (c) 26 | 27 | \d tbl_only_ckey 28 | Table "public.tbl_only_ckey" 29 | Column | Type | Modifiers 30 | --------+-----------------------------+----------- 31 | col1 | integer | 32 | col2 | timestamp without time zone | 33 | ,") | text | 34 | Indexes: 35 | "cidx_only_ckey" btree (col2, ","")") CLUSTER 36 | 37 | \d tbl_only_pkey 38 | Table "public.tbl_only_pkey" 39 | Column | Type | Modifiers 40 | --------+---------+----------- 41 | col1 | integer | not null 42 | ,") 
| text | 43 | a1 | integer | 44 | Indexes: 45 | "tbl_only_pkey_pkey_5e978" PRIMARY KEY, btree (col1) 46 | 47 | \d tbl_with_dropped_column 48 | Table "public.tbl_with_dropped_column" 49 | Column | Type | Modifiers 50 | --------+---------+----------- 51 | c1 | text | 52 | id | integer | not null 53 | c2 | text | 54 | c3 | text | 55 | a1 | integer | 56 | Indexes: 57 | "tbl_with_dropped_column_pkey_f15cf" PRIMARY KEY, btree (id) WITH (fillfactor='75') 58 | "idx_c1c2_f15cf" btree (c1, c2) WITH (fillfactor='75') 59 | "idx_c2c1_f15cf" btree (c2, c1) 60 | 61 | \d tbl_with_dropped_toast 62 | Table "public.tbl_with_dropped_toast" 63 | Column | Type | Modifiers 64 | --------+---------+----------- 65 | i | integer | not null 66 | j | integer | not null 67 | a1 | integer | 68 | Indexes: 69 | "tbl_with_dropped_toast_pkey_9550c" PRIMARY KEY, btree (i, j) 70 | 71 | \d tbl_idxopts 72 | Table "public.tbl_idxopts" 73 | Column | Type | Modifiers 74 | --------+---------+----------- 75 | i | integer | not null 76 | t | text | 77 | a1 | integer | 78 | Indexes: 79 | "tbl_idxopts_pkey_0a789" PRIMARY KEY, btree (i) 80 | "idxopts_t_0a789" btree (t DESC NULLS LAST) WHERE t <> 'aaa'::text 81 | 82 | -------------------------------------------------------------------------------- /bin/pgut/pgut-fe.h: -------------------------------------------------------------------------------- 1 | /*------------------------------------------------------------------------- 2 | * pgut-fe.h 3 | * 4 | * Portions Copyright (c) 2008-2011, NIPPON TELEGRAPH AND TELEPHONE CORPORATION 5 | * Portions Copyright (c) 2011, Itagaki Takahiro 6 | * Portions Copyright (c) 2012-2020, The Reorg Development Team 7 | *------------------------------------------------------------------------- 8 | */ 9 | 10 | #ifndef PGUT_FE_H 11 | #define PGUT_FE_H 12 | 13 | #include "pgut.h" 14 | 15 | typedef enum pgut_optsrc 16 | { 17 | SOURCE_DEFAULT, 18 | SOURCE_ENV, 19 | SOURCE_FILE, 20 | SOURCE_CMDLINE, 21 | SOURCE_CONST 22 | } pgut_optsrc; 23 
| 24 | /* 25 | * type: 26 | * b: bool (true) 27 | * B: bool (false) 28 | * f: pgut_optfn 29 | * i: 32bit signed integer 30 | * l: StringList 31 | * u: 32bit unsigned integer 32 | * I: 64bit signed integer 33 | * U: 64bit unsigned integer 34 | * s: string 35 | * t: time_t 36 | * y: YesNo (YES) 37 | * Y: YesNo (NO) 38 | */ 39 | typedef struct pgut_option 40 | { 41 | char type; 42 | char sname; /* short name */ 43 | const char *lname; /* long name */ 44 | void *var; /* pointer to variable */ 45 | pgut_optsrc allowed; /* allowed source */ 46 | pgut_optsrc source; /* actual source */ 47 | } pgut_option; 48 | 49 | typedef void (*pgut_optfn) (pgut_option *opt, const char *arg); 50 | 51 | typedef struct worker_conns 52 | { 53 | int max_num_workers; 54 | int num_workers; 55 | PGconn **conns; 56 | } worker_conns; 57 | 58 | 59 | 60 | extern char *dbname; 61 | extern char *host; 62 | extern char *port; 63 | extern char *username; 64 | extern char *password; 65 | extern YesNo prompt_password; 66 | 67 | extern PGconn *connection; 68 | extern PGconn *conn2; 69 | extern worker_conns workers; 70 | 71 | extern void pgut_help(bool details); 72 | extern void help(bool details); 73 | 74 | extern void disconnect(void); 75 | extern void reconnect(int elevel); 76 | extern void setup_workers(int num_workers); 77 | extern void disconnect_workers(void); 78 | extern PGresult *execute(const char *query, int nParams, const char **params); 79 | extern PGresult *execute_elevel(const char *query, int nParams, const char **params, int elevel); 80 | extern ExecStatusType command(const char *query, int nParams, const char **params); 81 | 82 | extern int pgut_getopt(int argc, char **argv, pgut_option options[]); 83 | extern void pgut_readopt(const char *path, pgut_option options[], int elevel); 84 | extern void pgut_setopt(pgut_option *opt, const char *optarg, pgut_optsrc src); 85 | extern bool pgut_keyeq(const char *lhs, const char *rhs); 86 | 87 | /* So we don't need to fret over multiple calls to 
PQclear(), e.g. 88 | * in cleanup labels. 89 | */ 90 | #define CLEARPGRES(pgres) do { PQclear(pgres); pgres = NULL; } while (0) 91 | 92 | #endif /* PGUT_FE_H */ 93 | -------------------------------------------------------------------------------- /regress/expected/ordered_indexes.out: -------------------------------------------------------------------------------- 1 | -- 2 | -- pg_repack issue #3 3 | -- 4 | CREATE TABLE issue3_1 (col1 int NOT NULL, col2 text NOT NULL); 5 | CREATE UNIQUE INDEX issue3_1_idx ON issue3_1 (col1, col2 DESC); 6 | SELECT migrate.get_order_by('issue3_1_idx'::regclass::oid, 'issue3_1'::regclass::oid); 7 | get_order_by 8 | ----------------- 9 | col1, col2 DESC 10 | (1 row) 11 | 12 | \! pg_migrate --dbname=contrib_regression --table=issue3_1 --alter='ADD COLUMN c1 INT' --execute 13 | INFO: migrating table "public.issue3_1" 14 | INFO: altering table with: ADD COLUMN c1 INT 15 | CREATE TABLE issue3_2 (col1 int NOT NULL, col2 text NOT NULL); 16 | CREATE UNIQUE INDEX issue3_2_idx ON issue3_2 (col1 DESC, col2 text_pattern_ops); 17 | SELECT migrate.get_order_by('issue3_2_idx'::regclass::oid, 'issue3_2'::regclass::oid); 18 | get_order_by 19 | --------------------------- 20 | col1 DESC, col2 USING ~<~ 21 | (1 row) 22 | 23 | \! pg_migrate --dbname=contrib_regression --table=issue3_2 --alter='ADD COLUMN c1 INT' --execute 24 | INFO: migrating table "public.issue3_2" 25 | INFO: altering table with: ADD COLUMN c1 INT 26 | CREATE TABLE issue3_3 (col1 int NOT NULL, col2 text NOT NULL); 27 | CREATE UNIQUE INDEX issue3_3_idx ON issue3_3 (col1 DESC, col2 DESC); 28 | SELECT migrate.get_order_by('issue3_3_idx'::regclass::oid, 'issue3_3'::regclass::oid); 29 | get_order_by 30 | ---------------------- 31 | col1 DESC, col2 DESC 32 | (1 row) 33 | 34 | \! 
pg_migrate --dbname=contrib_regression --table=issue3_3 --alter='ADD COLUMN c1 INT' --execute 35 | INFO: migrating table "public.issue3_3" 36 | INFO: altering table with: ADD COLUMN c1 INT 37 | CREATE TABLE issue3_4 (col1 int NOT NULL, col2 text NOT NULL); 38 | CREATE UNIQUE INDEX issue3_4_idx ON issue3_4 (col1 NULLS FIRST, col2 text_pattern_ops DESC NULLS LAST); 39 | SELECT migrate.get_order_by('issue3_4_idx'::regclass::oid, 'issue3_4'::regclass::oid); 40 | get_order_by 41 | -------------------------------------------------- 42 | col1 NULLS FIRST, col2 DESC USING ~<~ NULLS LAST 43 | (1 row) 44 | 45 | \! pg_migrate --dbname=contrib_regression --table=issue3_4 --alter='ADD COLUMN c1 INT' --execute 46 | INFO: migrating table "public.issue3_4" 47 | INFO: altering table with: ADD COLUMN c1 INT 48 | CREATE TABLE issue3_5 (col1 int NOT NULL, col2 text NOT NULL); 49 | CREATE UNIQUE INDEX issue3_5_idx ON issue3_5 (col1 DESC NULLS FIRST, col2 COLLATE "POSIX" DESC); 50 | SELECT migrate.get_order_by('issue3_5_idx'::regclass::oid, 'issue3_5'::regclass::oid); 51 | get_order_by 52 | -------------------------------------- 53 | col1 DESC, col2 COLLATE "POSIX" DESC 54 | (1 row) 55 | 56 | \! pg_migrate --dbname=contrib_regression --table=issue3_5 --alter='ADD COLUMN c1 INT' --execute 57 | INFO: migrating table "public.issue3_5" 58 | INFO: altering table with: ADD COLUMN c1 INT 59 | -------------------------------------------------------------------------------- /regress/sql/tablespace.sql: -------------------------------------------------------------------------------- 1 | SET client_min_messages = warning; 2 | 3 | -- 4 | -- Tablespace features tests 5 | -- 6 | 7 | DROP TABLESPACE IF EXISTS testts; 8 | \! 
mkdir -p /tmp/pg-migrate-tablespace 9 | CREATE TABLESPACE testts LOCATION '/tmp/pg-migrate-tablespace'; 10 | 11 | SELECT spcname FROM pg_tablespace WHERE spcname = 'testts'; 12 | -- If the query above failed you must create the 'testts' tablespace; 13 | 14 | CREATE TABLE testts1 (id serial primary key, data text); 15 | CREATE INDEX testts1_partial_idx on testts1 (id) where (id > 0); 16 | CREATE INDEX testts1_with_idx on testts1 (id) with (fillfactor=80); 17 | INSERT INTO testts1 (data) values ('a'); 18 | INSERT INTO testts1 (data) values ('b'); 19 | INSERT INTO testts1 (data) values ('c'); 20 | 21 | -- check the indexes definitions 22 | SELECT regexp_replace( 23 | migrate.migrate_indexdef(indexrelid, 'testts1'::regclass, NULL, false, 'hash'), 24 | '_[0-9]+', '_OID', 'g') 25 | FROM pg_index i join pg_class c ON c.oid = indexrelid 26 | WHERE indrelid = 'testts1'::regclass ORDER BY relname; 27 | 28 | SELECT regexp_replace( 29 | migrate.migrate_indexdef(indexrelid, 'testts1'::regclass, 'foo', false, 'hash'), 30 | '_[0-9]+', '_OID', 'g') 31 | FROM pg_index i join pg_class c ON c.oid = indexrelid 32 | WHERE indrelid = 'testts1'::regclass ORDER BY relname; 33 | 34 | SELECT regexp_replace( 35 | migrate.migrate_indexdef(indexrelid, 'testts1'::regclass, NULL, true, 'hash'), 36 | '_[0-9]+', '_OID', 'g') 37 | FROM pg_index i join pg_class c ON c.oid = indexrelid 38 | WHERE indrelid = 'testts1'::regclass ORDER BY relname; 39 | 40 | SELECT regexp_replace( 41 | migrate.migrate_indexdef(indexrelid, 'testts1'::regclass, 'foo', true, 'hash'), 42 | '_[0-9]+', '_OID', 'g') 43 | FROM pg_index i join pg_class c ON c.oid = indexrelid 44 | WHERE indrelid = 'testts1'::regclass ORDER BY relname; 45 | 46 | -- can specify the tablespace, other than default 47 | \! 
pg_migrate --dbname=contrib_regression --table=testts1 --tablespace testts --alter='ADD COLUMN a1 INT' --execute 48 | 49 | SELECT relname, spcname 50 | FROM pg_class JOIN pg_tablespace ts ON ts.oid = reltablespace 51 | WHERE relname ~ '^testts1' AND NOT relname ~ '^testts1_pre_migrate' 52 | ORDER BY relname; 53 | 54 | SELECT * from testts1 order by id; 55 | 56 | -- tablespace stays where it is 57 | \! pg_migrate --dbname=contrib_regression --table=testts1 --alter='ADD COLUMN a2 INT' --execute 58 | 59 | SELECT relname, spcname 60 | FROM pg_class JOIN pg_tablespace ts ON ts.oid = reltablespace 61 | WHERE relname ~ '^testts1' AND NOT relname ~ '^testts1_pre_migrate' 62 | ORDER BY relname; 63 | 64 | -- can move the tablespace back to default 65 | \! pg_migrate --dbname=contrib_regression --table=testts1 -s pg_default --alter='ADD COLUMN a3 INT' --execute 66 | 67 | SELECT relname, spcname 68 | FROM pg_class JOIN pg_tablespace ts ON ts.oid = reltablespace 69 | WHERE relname ~ '^testts1' AND NOT relname ~ '^testts1_pre_migrate' 70 | ORDER BY relname; 71 | 72 | -- can move the table together with the indexes 73 | \! 
pg_migrate --dbname=contrib_regression --table=testts1 --tablespace testts --alter='ADD COLUMN a4 INT' --execute 74 | 75 | SELECT relname, spcname 76 | FROM pg_class JOIN pg_tablespace ts ON ts.oid = reltablespace 77 | WHERE relname ~ '^testts1' AND NOT relname ~ '^testts1_pre_migrate' 78 | ORDER BY relname; 79 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # pg_migrate -- Perform schema changes in PostgreSQL with minimal locks 2 | 3 | - Download: https://github.com/phillbaker/pg_migrate/releases 4 | - Development: https://github.com/phillbaker/pg_migrate 5 | - Bug Reports: https://github.com/phillbaker/pg_migrate/issues 6 | 7 | 8 | 9 | ## About 10 | 11 | pg_migrate is a PostgreSQL extension and CLI which lets you make schema 12 | changes to tables and indexes. Unlike `ALTER TABLE` it works online, without 13 | holding a long lived exclusive lock on the processed tables during the 14 | migration. It builds a copy of the target table and swaps them. 15 | 16 | Please check the documentation (in the ``doc`` directory or online) for 17 | installation and usage instructions. 18 | 19 | Forked from the excellent pg_repack project (https://reorg.github.io/pg_repack). 20 | 21 | ## Supported Postgres Versions 22 | 23 | Postgres >= 9.6 24 | 25 | ## Installation 26 | 27 | ### Ubuntu/Debian 28 | 29 | Use `apt-get` to install the package matching the Postgres version (`postgresql--pg_migrate`) being run from [this repo's APT repository](https://github.com/phillbaker/pg_migrate/releases/tag/apt-release-amd64). 
30 | 31 | To add this repository to your `/etc/apt/sources.list.d`, add the signing GPG key, and update the package DB, run: 32 | 33 | ``` 34 | curl -L https://github.com/phillbaker/pg_migrate/releases/download/apt-release-amd64/apt-add-repo | sh 35 | ``` 36 | 37 | Then, to install the package for Postgres 10, for example: 38 | ``` 39 | apt-get install -y postgresql-10-pg_migrate 40 | ``` 41 | 42 | Load the pg_migrate Postgres extension in the database you want to work on: 43 | ``` 44 | psql -c "DROP EXTENSION IF EXISTS pg_migrate cascade; CREATE EXTENSION pg_migrate" -d postgres 45 | ``` 46 | 47 | ### Mac 48 | 49 | Use `homebrew` to install the package matching the Postgres version being used. 50 | 51 | ``` 52 | brew tap phillbaker/pg_migrate https://github.com/phillbaker/pg_migrate 53 | brew install pg_migrate_postgresql@10 54 | # follow the post-install instructions if you're running Postgres on your local machine 55 | ``` 56 | 57 | ## Examples 58 | 59 | ### Change the type of a column 60 | 61 | ``` 62 | pg_migrate --table=my_table --alter='ALTER COLUMN id TYPE bigint' # Add --execute to run 63 | ``` 64 | 65 | ### Add a column with a default (non-nullable) 66 | 67 | ``` 68 | pg_migrate --table=my_table --alter='ADD COLUMN foo integer NOT NULL DEFAULT 42' # Add --execute to run 69 | ``` 70 | 71 | ## Known Limitations 72 | 73 | * Unique constraints are converted into unique indexes; [the two are equivalent in Postgres](https://stackoverflow.com/questions/23542794/postgres-unique-constraint-vs-index), but the conversion may be an unexpected change. 74 | * Index names on the target table and foreign key constraints are changed during the migration. 75 | * If the generated names exceed 63 characters (Postgres's identifier length limit), this will likely break. 76 | * If the target table is used in views, those objects will continue to reference the original table; this is not currently supported. 
77 | * If the target table is used in stored procedures, those functions are stored as text, so they are not linked through object IDs and will continue to reference the migrated table by name. 78 | * DDL to drop columns or add columns without a default is not currently supported. 79 | * Hosted PG databases (RDS, Cloud SQL) are not supported because they do not allow installing custom extensions. 80 | 81 | -------------------------------------------------------------------------------- /lib/pgut/pgut-spi.c: -------------------------------------------------------------------------------- 1 | /*------------------------------------------------------------------------- 2 | * pgut-spi.c 3 | * 4 | * Portions Copyright (c) 2008-2011, NIPPON TELEGRAPH AND TELEPHONE CORPORATION 5 | * Portions Copyright (c) 2011, Itagaki Takahiro 6 | * Portions Copyright (c) 2012-2020, The Reorg Development Team 7 | *------------------------------------------------------------------------- 8 | */ 9 | 10 | #include "postgres.h" 11 | #include "pgut-spi.h" 12 | #include "lib/stringinfo.h" 13 | 14 | #define EXEC_FAILED(ret, expected) \ 15 | (((expected) > 0 && (ret) != (expected)) || (ret) < 0) 16 | 17 | static void 18 | appendStringInfoVA_s(StringInfo str, const char *fmt, va_list args) 19 | __attribute__((format(printf, 2, 0))); 20 | 21 | static void 22 | termStringInfo(StringInfo str) 23 | { 24 | if (str && str->data) 25 | pfree(str->data); 26 | } 27 | 28 | /* appendStringInfoVA + automatic buffer extension */ 29 | static void 30 | appendStringInfoVA_s(StringInfo str, const char *fmt, va_list args) 31 | { 32 | int needed; 33 | while ((needed = appendStringInfoVA(str, fmt, args)) > 0) 34 | { 35 | /* Double the buffer size and try again. 
*/ 36 | enlargeStringInfo(str, needed); 37 | } 38 | } 39 | 40 | /* simple execute */ 41 | void 42 | execute(int expected, const char *sql) 43 | { 44 | int ret = SPI_execute(sql, false, 0); 45 | if EXEC_FAILED(ret, expected) 46 | elog(ERROR, "query failed: (sql=%s, code=%d, expected=%d)", sql, ret, expected); 47 | } 48 | 49 | /* execute prepared plan */ 50 | void 51 | execute_plan(int expected, SPIPlanPtr plan, Datum *values, const char *nulls) 52 | { 53 | int ret = SPI_execute_plan(plan, values, nulls, false, 0); 54 | if EXEC_FAILED(ret, expected) 55 | elog(ERROR, "query failed: (code=%d, expected=%d)", ret, expected); 56 | } 57 | 58 | /* execute sql with format */ 59 | void 60 | execute_with_format(int expected, const char *format, ...) 61 | { 62 | va_list ap; 63 | StringInfoData sql; 64 | int ret; 65 | 66 | initStringInfo(&sql); 67 | va_start(ap, format); 68 | appendStringInfoVA_s(&sql, format, ap); 69 | va_end(ap); 70 | 71 | if (sql.len == 0) 72 | elog(WARNING, "execute_with_format(%s)", format); 73 | ret = SPI_exec(sql.data, 0); 74 | if EXEC_FAILED(ret, expected) 75 | elog(ERROR, "query failed: (sql=%s, code=%d, expected=%d)", sql.data, ret, expected); 76 | 77 | termStringInfo(&sql); 78 | } 79 | 80 | void 81 | execute_with_args(int expected, const char *src, int nargs, Oid argtypes[], Datum values[], const bool nulls[]) 82 | { 83 | int ret; 84 | int i; 85 | char c_nulls[FUNC_MAX_ARGS]; 86 | 87 | for (i = 0; i < nargs; i++) 88 | c_nulls[i] = (nulls[i] ? 'n' : ' '); 89 | 90 | ret = SPI_execute_with_args(src, nargs, argtypes, values, c_nulls, false, 0); 91 | if EXEC_FAILED(ret, expected) 92 | elog(ERROR, "query failed: (sql=%s, code=%d, expected=%d)", src, ret, expected); 93 | } 94 | 95 | void 96 | execute_with_format_args(int expected, const char *format, int nargs, Oid argtypes[], Datum values[], const bool nulls[], ...) 
97 | { 98 | va_list ap; 99 | StringInfoData sql; 100 | 101 | initStringInfo(&sql); 102 | va_start(ap, nulls); 103 | appendStringInfoVA_s(&sql, format, ap); 104 | va_end(ap); 105 | 106 | execute_with_args(expected, sql.data, nargs, argtypes, values, nulls); 107 | 108 | termStringInfo(&sql); 109 | } 110 | -------------------------------------------------------------------------------- /regress/expected/after_schema.out: -------------------------------------------------------------------------------- 1 | -- 2 | -- tables schema after running migrations 3 | -- 4 | \d tbl_cluster 5 | Table "public.tbl_cluster" 6 | Column | Type | Collation | Nullable | Default 7 | --------+-----------------------------+-----------+----------+--------- 8 | col1 | integer | | not null | 9 | time | timestamp without time zone | | | 10 | ,") | text | | not null | 11 | a1 | integer | | | 12 | Indexes: 13 | "tbl_cluster_pkey_f6d2e" PRIMARY KEY, btree (","")", col1) WITH (fillfactor='75') 14 | ",") cluster_f6d2e" btree ("time", length(","")"), ","")" text_pattern_ops) WITH (fillfactor='75') 15 | 16 | \d tbl_gistkey 17 | Table "public.tbl_gistkey" 18 | Column | Type | Collation | Nullable | Default 19 | --------+---------+-----------+----------+--------- 20 | id | integer | | not null | 21 | c | circle | | | 22 | a1 | integer | | | 23 | Indexes: 24 | "tbl_gistkey_pkey_b253a" PRIMARY KEY, btree (id) 25 | "cidx_circle_b253a" gist (c) 26 | 27 | \d tbl_only_ckey 28 | Table "public.tbl_only_ckey" 29 | Column | Type | Collation | Nullable | Default 30 | --------+-----------------------------+-----------+----------+--------- 31 | col1 | integer | | | 32 | col2 | timestamp without time zone | | | 33 | ,") | text | | | 34 | Indexes: 35 | "cidx_only_ckey" btree (col2, ","")") CLUSTER 36 | 37 | \d tbl_only_pkey 38 | Table "public.tbl_only_pkey" 39 | Column | Type | Collation | Nullable | Default 40 | --------+---------+-----------+----------+--------- 41 | col1 | integer | | not null | 42 | ,") | text | | 
| 43 | a1 | integer | | | 44 | Indexes: 45 | "tbl_only_pkey_pkey_5e978" PRIMARY KEY, btree (col1) 46 | 47 | \d tbl_with_dropped_column 48 | Table "public.tbl_with_dropped_column" 49 | Column | Type | Collation | Nullable | Default 50 | --------+---------+-----------+----------+--------- 51 | c1 | text | | | 52 | id | integer | | not null | 53 | c2 | text | | | 54 | c3 | text | | | 55 | a1 | integer | | | 56 | Indexes: 57 | "tbl_with_dropped_column_pkey_f15cf" PRIMARY KEY, btree (id) WITH (fillfactor='75') 58 | "idx_c1c2_f15cf" btree (c1, c2) WITH (fillfactor='75') 59 | "idx_c2c1_f15cf" btree (c2, c1) 60 | 61 | \d tbl_with_dropped_toast 62 | Table "public.tbl_with_dropped_toast" 63 | Column | Type | Collation | Nullable | Default 64 | --------+---------+-----------+----------+--------- 65 | i | integer | | not null | 66 | j | integer | | not null | 67 | a1 | integer | | | 68 | Indexes: 69 | "tbl_with_dropped_toast_pkey_9550c" PRIMARY KEY, btree (i, j) 70 | 71 | \d tbl_idxopts 72 | Table "public.tbl_idxopts" 73 | Column | Type | Collation | Nullable | Default 74 | --------+---------+-----------+----------+--------- 75 | i | integer | | not null | 76 | t | text | | | 77 | a1 | integer | | | 78 | Indexes: 79 | "tbl_idxopts_pkey_0a789" PRIMARY KEY, btree (i) 80 | "idxopts_t_0a789" btree (t DESC NULLS LAST) WHERE t <> 'aaa'::text 81 | 82 | -------------------------------------------------------------------------------- /regress/sql/migrate_setup.sql: -------------------------------------------------------------------------------- 1 | SET client_min_messages = warning; 2 | 3 | -- 4 | -- create tables 5 | -- 6 | 7 | CREATE TABLE tbl_cluster ( 8 | col1 int, 9 | "time" timestamp, 10 | ","")" text, 11 | PRIMARY KEY (","")", col1) WITH (fillfactor = 75) 12 | ) WITH (fillfactor = 70); 13 | 14 | CREATE INDEX ","") cluster" ON tbl_cluster ("time", length(","")"), ","")" text_pattern_ops) WITH (fillfactor = 75); 15 | ALTER TABLE tbl_cluster CLUSTER ON ","") cluster"; 16 | 17 | 
CREATE TABLE tbl_only_pkey ( 18 | col1 int PRIMARY KEY, 19 | ","")" text 20 | ); 21 | 22 | CREATE TABLE tbl_only_ckey ( 23 | col1 int, 24 | col2 timestamp, 25 | ","")" text 26 | ) WITH (fillfactor = 70); 27 | 28 | CREATE INDEX cidx_only_ckey ON tbl_only_ckey (col2, ","")"); 29 | ALTER TABLE tbl_only_ckey CLUSTER ON cidx_only_ckey; 30 | 31 | CREATE TABLE tbl_gistkey ( 32 | id integer PRIMARY KEY, 33 | c circle 34 | ); 35 | 36 | CREATE INDEX cidx_circle ON tbl_gistkey USING gist (c); 37 | ALTER TABLE tbl_gistkey CLUSTER ON cidx_circle; 38 | 39 | CREATE TABLE tbl_with_dropped_column ( 40 | d1 text, 41 | c1 text, 42 | id integer PRIMARY KEY, 43 | d2 text, 44 | c2 text, 45 | d3 text 46 | ); 47 | ALTER INDEX tbl_with_dropped_column_pkey SET (fillfactor = 75); 48 | ALTER TABLE tbl_with_dropped_column CLUSTER ON tbl_with_dropped_column_pkey; 49 | CREATE INDEX idx_c1c2 ON tbl_with_dropped_column (c1, c2) WITH (fillfactor = 75); 50 | CREATE INDEX idx_c2c1 ON tbl_with_dropped_column (c2, c1); 51 | 52 | CREATE TABLE tbl_with_view ( 53 | c1 text, 54 | id integer PRIMARY KEY, 55 | c2 text 56 | ); 57 | 58 | CREATE TABLE tbl_with_dropped_toast ( 59 | i integer, 60 | j integer, 61 | t text, 62 | PRIMARY KEY (i, j) 63 | ); 64 | ALTER TABLE tbl_with_dropped_toast CLUSTER ON tbl_with_dropped_toast_pkey; 65 | 66 | CREATE TABLE tbl_badindex ( 67 | id integer PRIMARY KEY, 68 | n integer 69 | ); 70 | 71 | CREATE TABLE tbl_idxopts ( 72 | i integer PRIMARY KEY, 73 | t text 74 | ); 75 | CREATE INDEX idxopts_t ON tbl_idxopts (t DESC NULLS LAST) WHERE (t != 'aaa'); 76 | 77 | -- Use this table to play with attribute options too 78 | ALTER TABLE tbl_idxopts ALTER i SET STATISTICS 1; 79 | ALTER TABLE tbl_idxopts ALTER t SET (n_distinct = -0.5); 80 | CREATE TABLE tbl_with_toast ( 81 | i integer PRIMARY KEY, 82 | c text 83 | ); 84 | ALTER TABLE tbl_with_toast SET (AUTOVACUUM_VACUUM_SCALE_FACTOR = 30, AUTOVACUUM_VACUUM_THRESHOLD = 300); 85 | ALTER TABLE tbl_with_toast SET 
(TOAST.AUTOVACUUM_VACUUM_SCALE_FACTOR = 40, TOAST.AUTOVACUUM_VACUUM_THRESHOLD = 400); 86 | CREATE TABLE tbl_with_mod_column_storage ( 87 | id integer PRIMARY KEY, 88 | c text 89 | ); 90 | ALTER TABLE tbl_with_mod_column_storage ALTER c SET STORAGE MAIN; 91 | 92 | CREATE TABLE tbl_order (c int primary key); 93 | -- 94 | -- insert data 95 | -- 96 | 97 | INSERT INTO tbl_cluster VALUES(1, '2008-12-31 10:00:00', 'admin'); 98 | INSERT INTO tbl_cluster VALUES(2, '2008-01-01 00:00:00', 'king'); 99 | INSERT INTO tbl_cluster VALUES(3, '2008-03-04 12:00:00', 'joker'); 100 | INSERT INTO tbl_cluster VALUES(4, '2008-03-05 15:00:00', 'queen'); 101 | INSERT INTO tbl_cluster VALUES(5, '2008-01-01 00:30:00', sqrt(2::numeric(1000,999))::text || sqrt(3::numeric(1000,999))::text); 102 | 103 | INSERT INTO tbl_only_pkey VALUES(1, 'abc'); 104 | INSERT INTO tbl_only_pkey VALUES(2, 'def'); 105 | 106 | INSERT INTO tbl_only_ckey VALUES(1, '2008-01-01 00:00:00', 'abc'); 107 | INSERT INTO tbl_only_ckey VALUES(2, '2008-02-01 00:00:00', 'def'); 108 | 109 | INSERT INTO tbl_gistkey VALUES(1, '<(1,2),3>'); 110 | INSERT INTO tbl_gistkey VALUES(2, '<(4,5),6>'); 111 | 112 | INSERT INTO tbl_with_dropped_column VALUES('d1', 'c1', 2, 'd2', 'c2', 'd3'); 113 | INSERT INTO tbl_with_dropped_column VALUES('d1', 'c1', 1, 'd2', 'c2', 'd3'); 114 | ALTER TABLE tbl_with_dropped_column DROP COLUMN d1; 115 | ALTER TABLE tbl_with_dropped_column DROP COLUMN d2; 116 | ALTER TABLE tbl_with_dropped_column DROP COLUMN d3; 117 | ALTER TABLE tbl_with_dropped_column ADD COLUMN c3 text; 118 | 119 | INSERT INTO tbl_with_view VALUES('c1', 2, 'c2'); 120 | INSERT INTO tbl_with_view VALUES('c1', 1, 'c2'); 121 | CREATE VIEW view_for_tbl AS 122 | SELECT * FROM tbl_with_view; 123 | 124 | INSERT INTO tbl_with_dropped_toast VALUES(1, 10, 'abc'); 125 | INSERT INTO tbl_with_dropped_toast VALUES(2, 20, sqrt(2::numeric(1000,999))::text || sqrt(3::numeric(1000,999))::text); 126 | ALTER TABLE tbl_with_dropped_toast DROP COLUMN t; 127 | 128 | 
INSERT INTO tbl_badindex VALUES(1, 10); 129 | INSERT INTO tbl_badindex VALUES(2, 10); 130 | 131 | -- insert data that is always stored into the toast table if column type is extended. 132 | SELECT setseed(0); INSERT INTO tbl_with_mod_column_storage SELECT 1, array_to_string(ARRAY(SELECT chr((random() * (127 - 32) + 32)::int) FROM generate_series(1, 3 * 1024) code), ''); 133 | 134 | --- This will fail 135 | \set VERBOSITY terse 136 | CREATE UNIQUE INDEX CONCURRENTLY idx_badindex_n ON tbl_badindex (n); 137 | 138 | INSERT INTO tbl_idxopts VALUES (0, 'abc'), (1, 'aaa'), (2, NULL), (3, 'bbb'); 139 | 140 | -- Insert no-ordered data 141 | INSERT INTO tbl_order SELECT generate_series(100, 51, -1); 142 | CLUSTER tbl_order USING tbl_order_pkey; 143 | INSERT INTO tbl_order SELECT generate_series(50, 1, -1); 144 | 145 | -- 146 | -- before 147 | -- 148 | 149 | SELECT * FROM tbl_with_dropped_column; 150 | SELECT * FROM view_for_tbl; 151 | SELECT * FROM tbl_with_dropped_toast; 152 | -------------------------------------------------------------------------------- /regress/expected/migrate_setup.out: -------------------------------------------------------------------------------- 1 | SET client_min_messages = warning; 2 | -- 3 | -- create tables 4 | -- 5 | CREATE TABLE tbl_cluster ( 6 | col1 int, 7 | "time" timestamp, 8 | ","")" text, 9 | PRIMARY KEY (","")", col1) WITH (fillfactor = 75) 10 | ) WITH (fillfactor = 70); 11 | CREATE INDEX ","") cluster" ON tbl_cluster ("time", length(","")"), ","")" text_pattern_ops) WITH (fillfactor = 75); 12 | ALTER TABLE tbl_cluster CLUSTER ON ","") cluster"; 13 | CREATE TABLE tbl_only_pkey ( 14 | col1 int PRIMARY KEY, 15 | ","")" text 16 | ); 17 | CREATE TABLE tbl_only_ckey ( 18 | col1 int, 19 | col2 timestamp, 20 | ","")" text 21 | ) WITH (fillfactor = 70); 22 | CREATE INDEX cidx_only_ckey ON tbl_only_ckey (col2, ","")"); 23 | ALTER TABLE tbl_only_ckey CLUSTER ON cidx_only_ckey; 24 | CREATE TABLE tbl_gistkey ( 25 | id integer PRIMARY KEY, 26 | c 
circle 27 | ); 28 | CREATE INDEX cidx_circle ON tbl_gistkey USING gist (c); 29 | ALTER TABLE tbl_gistkey CLUSTER ON cidx_circle; 30 | CREATE TABLE tbl_with_dropped_column ( 31 | d1 text, 32 | c1 text, 33 | id integer PRIMARY KEY, 34 | d2 text, 35 | c2 text, 36 | d3 text 37 | ); 38 | ALTER INDEX tbl_with_dropped_column_pkey SET (fillfactor = 75); 39 | ALTER TABLE tbl_with_dropped_column CLUSTER ON tbl_with_dropped_column_pkey; 40 | CREATE INDEX idx_c1c2 ON tbl_with_dropped_column (c1, c2) WITH (fillfactor = 75); 41 | CREATE INDEX idx_c2c1 ON tbl_with_dropped_column (c2, c1); 42 | CREATE TABLE tbl_with_view ( 43 | c1 text, 44 | id integer PRIMARY KEY, 45 | c2 text 46 | ); 47 | CREATE TABLE tbl_with_dropped_toast ( 48 | i integer, 49 | j integer, 50 | t text, 51 | PRIMARY KEY (i, j) 52 | ); 53 | ALTER TABLE tbl_with_dropped_toast CLUSTER ON tbl_with_dropped_toast_pkey; 54 | CREATE TABLE tbl_badindex ( 55 | id integer PRIMARY KEY, 56 | n integer 57 | ); 58 | CREATE TABLE tbl_idxopts ( 59 | i integer PRIMARY KEY, 60 | t text 61 | ); 62 | CREATE INDEX idxopts_t ON tbl_idxopts (t DESC NULLS LAST) WHERE (t != 'aaa'); 63 | -- Use this table to play with attribute options too 64 | ALTER TABLE tbl_idxopts ALTER i SET STATISTICS 1; 65 | ALTER TABLE tbl_idxopts ALTER t SET (n_distinct = -0.5); 66 | CREATE TABLE tbl_with_toast ( 67 | i integer PRIMARY KEY, 68 | c text 69 | ); 70 | ALTER TABLE tbl_with_toast SET (AUTOVACUUM_VACUUM_SCALE_FACTOR = 30, AUTOVACUUM_VACUUM_THRESHOLD = 300); 71 | ALTER TABLE tbl_with_toast SET (TOAST.AUTOVACUUM_VACUUM_SCALE_FACTOR = 40, TOAST.AUTOVACUUM_VACUUM_THRESHOLD = 400); 72 | CREATE TABLE tbl_with_mod_column_storage ( 73 | id integer PRIMARY KEY, 74 | c text 75 | ); 76 | ALTER TABLE tbl_with_mod_column_storage ALTER c SET STORAGE MAIN; 77 | CREATE TABLE tbl_order (c int primary key); 78 | -- 79 | -- insert data 80 | -- 81 | INSERT INTO tbl_cluster VALUES(1, '2008-12-31 10:00:00', 'admin'); 82 | INSERT INTO tbl_cluster VALUES(2, '2008-01-01 
00:00:00', 'king'); 83 | INSERT INTO tbl_cluster VALUES(3, '2008-03-04 12:00:00', 'joker'); 84 | INSERT INTO tbl_cluster VALUES(4, '2008-03-05 15:00:00', 'queen'); 85 | INSERT INTO tbl_cluster VALUES(5, '2008-01-01 00:30:00', sqrt(2::numeric(1000,999))::text || sqrt(3::numeric(1000,999))::text); 86 | INSERT INTO tbl_only_pkey VALUES(1, 'abc'); 87 | INSERT INTO tbl_only_pkey VALUES(2, 'def'); 88 | INSERT INTO tbl_only_ckey VALUES(1, '2008-01-01 00:00:00', 'abc'); 89 | INSERT INTO tbl_only_ckey VALUES(2, '2008-02-01 00:00:00', 'def'); 90 | INSERT INTO tbl_gistkey VALUES(1, '<(1,2),3>'); 91 | INSERT INTO tbl_gistkey VALUES(2, '<(4,5),6>'); 92 | INSERT INTO tbl_with_dropped_column VALUES('d1', 'c1', 2, 'd2', 'c2', 'd3'); 93 | INSERT INTO tbl_with_dropped_column VALUES('d1', 'c1', 1, 'd2', 'c2', 'd3'); 94 | ALTER TABLE tbl_with_dropped_column DROP COLUMN d1; 95 | ALTER TABLE tbl_with_dropped_column DROP COLUMN d2; 96 | ALTER TABLE tbl_with_dropped_column DROP COLUMN d3; 97 | ALTER TABLE tbl_with_dropped_column ADD COLUMN c3 text; 98 | INSERT INTO tbl_with_view VALUES('c1', 2, 'c2'); 99 | INSERT INTO tbl_with_view VALUES('c1', 1, 'c2'); 100 | CREATE VIEW view_for_tbl AS 101 | SELECT * FROM tbl_with_view; 102 | INSERT INTO tbl_with_dropped_toast VALUES(1, 10, 'abc'); 103 | INSERT INTO tbl_with_dropped_toast VALUES(2, 20, sqrt(2::numeric(1000,999))::text || sqrt(3::numeric(1000,999))::text); 104 | ALTER TABLE tbl_with_dropped_toast DROP COLUMN t; 105 | INSERT INTO tbl_badindex VALUES(1, 10); 106 | INSERT INTO tbl_badindex VALUES(2, 10); 107 | -- insert data that is always stored into the toast table if column type is extended. 
108 | SELECT setseed(0); INSERT INTO tbl_with_mod_column_storage SELECT 1, array_to_string(ARRAY(SELECT chr((random() * (127 - 32) + 32)::int) FROM generate_series(1, 3 * 1024) code), ''); 109 | setseed 110 | --------- 111 | 112 | (1 row) 113 | 114 | --- This will fail 115 | \set VERBOSITY terse 116 | CREATE UNIQUE INDEX CONCURRENTLY idx_badindex_n ON tbl_badindex (n); 117 | ERROR: could not create unique index "idx_badindex_n" 118 | INSERT INTO tbl_idxopts VALUES (0, 'abc'), (1, 'aaa'), (2, NULL), (3, 'bbb'); 119 | -- Insert no-ordered data 120 | INSERT INTO tbl_order SELECT generate_series(100, 51, -1); 121 | CLUSTER tbl_order USING tbl_order_pkey; 122 | INSERT INTO tbl_order SELECT generate_series(50, 1, -1); 123 | -- 124 | -- before 125 | -- 126 | SELECT * FROM tbl_with_dropped_column; 127 | c1 | id | c2 | c3 128 | ----+----+----+---- 129 | c1 | 2 | c2 | 130 | c1 | 1 | c2 | 131 | (2 rows) 132 | 133 | SELECT * FROM view_for_tbl; 134 | c1 | id | c2 135 | ----+----+---- 136 | c1 | 2 | c2 137 | c1 | 1 | c2 138 | (2 rows) 139 | 140 | SELECT * FROM tbl_with_dropped_toast; 141 | i | j 142 | ---+---- 143 | 1 | 10 144 | 2 | 20 145 | (2 rows) 146 | 147 | -------------------------------------------------------------------------------- /regress/sql/check.sql: -------------------------------------------------------------------------------- 1 | SET client_min_messages = warning; 2 | 3 | SELECT col1, to_char("time", 'YYYY-MM-DD HH24:MI:SS'), ","")" FROM tbl_cluster ORDER BY 1, 2; 4 | SELECT * FROM tbl_only_ckey ORDER BY 1; 5 | SELECT * FROM tbl_only_pkey ORDER BY 1; 6 | SELECT * FROM tbl_gistkey ORDER BY 1; 7 | 8 | SET enable_seqscan = on; 9 | SET enable_indexscan = off; 10 | SELECT * FROM tbl_with_dropped_column ; 11 | SELECT * FROM view_for_tbl ORDER BY 1, 2; 12 | SELECT * FROM tbl_with_dropped_toast; 13 | SET enable_seqscan = off; 14 | SET enable_indexscan = on; 15 | SELECT * FROM tbl_with_dropped_column ORDER BY 1, 2; 16 | SELECT * FROM view_for_tbl; 17 | SELECT * FROM 
tbl_with_dropped_toast; 18 | RESET enable_seqscan; 19 | RESET enable_indexscan; 20 | -- check if storage option for both table and TOAST table didn't go away. 21 | SELECT CASE relkind 22 | WHEN 'r' THEN relname 23 | WHEN 't' THEN 'toast_table' 24 | END as table, 25 | reloptions 26 | FROM pg_class 27 | WHERE relname = 'tbl_with_toast' OR relname = 'pg_toast_' || 'tbl_with_toast'::regclass::oid 28 | ORDER BY 1; 29 | SELECT pg_relation_size(reltoastrelid) = 0 as check_toast_rel_size FROM pg_class WHERE relname = 'tbl_with_mod_column_storage'; 30 | 31 | -- 32 | -- check broken links or orphan toast relations 33 | -- 34 | SELECT oid, relname 35 | FROM pg_class 36 | WHERE relkind = 't' 37 | AND oid NOT IN (SELECT reltoastrelid FROM pg_class WHERE relkind = 'r'); 38 | 39 | SELECT oid, relname 40 | FROM pg_class 41 | WHERE relkind = 'r' 42 | AND reltoastrelid <> 0 43 | AND reltoastrelid NOT IN (SELECT oid FROM pg_class WHERE relkind = 't'); 44 | 45 | -- check columns options 46 | SELECT attname, attstattarget, attoptions 47 | FROM pg_attribute 48 | WHERE attrelid = 'tbl_idxopts'::regclass 49 | AND attnum > 0 50 | ORDER BY attnum; 51 | 52 | -- 53 | -- NOT NULL UNIQUE 54 | -- 55 | CREATE TABLE tbl_nn (col1 int NOT NULL, col2 int NOT NULL); 56 | CREATE TABLE tbl_uk (col1 int NOT NULL, col2 int , UNIQUE(col1, col2)); 57 | CREATE TABLE tbl_nn_uk (col1 int NOT NULL, col2 int NOT NULL, UNIQUE(col1, col2)); 58 | CREATE TABLE tbl_pk_uk (col1 int NOT NULL, col2 int NOT NULL, PRIMARY KEY(col1, col2), UNIQUE(col2, col1)); 59 | CREATE TABLE tbl_nn_puk (col1 int NOT NULL, col2 int NOT NULL); 60 | CREATE UNIQUE INDEX tbl_nn_puk_pcol1_idx ON tbl_nn_puk(col1) WHERE col1 < 10; 61 | \! pg_migrate --execute --alter='ADD COLUMN a1 INT' --dbname=contrib_regression --table=tbl_nn 62 | -- => WARNING 63 | \! pg_migrate --execute --alter='ADD COLUMN a1 INT' --dbname=contrib_regression --table=tbl_uk 64 | -- => WARNING 65 | \! 
pg_migrate --execute --alter='ADD COLUMN a1 INT' --dbname=contrib_regression --table=tbl_nn_uk 66 | -- => OK 67 | \! pg_migrate --execute --alter='ADD COLUMN a1 INT' --dbname=contrib_regression --table=tbl_pk_uk 68 | -- => OK 69 | \! pg_migrate --execute --alter='ADD COLUMN a1 INT' --dbname=contrib_regression --table=tbl_nn_puk 70 | -- => WARNING 71 | 72 | -- 73 | -- Triggers handling 74 | -- 75 | CREATE FUNCTION trgtest() RETURNS trigger AS 76 | $$BEGIN RETURN NEW; END$$ 77 | LANGUAGE plpgsql; 78 | CREATE TABLE trg1 (id integer PRIMARY KEY); 79 | CREATE TRIGGER repack_trigger_1 AFTER UPDATE ON trg1 FOR EACH ROW EXECUTE PROCEDURE trgtest(); 80 | \! pg_migrate --execute --alter='ADD COLUMN a1 INT' --dbname=contrib_regression --table=trg1 81 | CREATE TABLE trg2 (id integer PRIMARY KEY); 82 | CREATE TRIGGER repack_trigger AFTER UPDATE ON trg2 FOR EACH ROW EXECUTE PROCEDURE trgtest(); 83 | \! pg_migrate --execute --alter='ADD COLUMN a1 INT' --dbname=contrib_regression --table=trg2 84 | CREATE TABLE trg3 (id integer PRIMARY KEY); 85 | CREATE TRIGGER repack_trigger_1 BEFORE UPDATE ON trg3 FOR EACH ROW EXECUTE PROCEDURE trgtest(); 86 | \! pg_migrate --execute --alter='ADD COLUMN a1 INT' --dbname=contrib_regression --table=trg3 87 | 88 | 89 | -- 90 | -- Dry run 91 | -- 92 | \! pg_migrate --alter='ADD COLUMN a1 INT' --dbname=contrib_regression --table=tbl_cluster 93 | 94 | 95 | -- Test --schema 96 | -- 97 | CREATE SCHEMA test_schema1; 98 | CREATE TABLE test_schema1.tbl1 (id INTEGER PRIMARY KEY); 99 | CREATE TABLE test_schema1.tbl2 (id INTEGER PRIMARY KEY); 100 | CREATE SCHEMA test_schema2; 101 | CREATE TABLE test_schema2.tbl1 (id INTEGER PRIMARY KEY); 102 | CREATE TABLE test_schema2.tbl2 (id INTEGER PRIMARY KEY); 103 | -- => OK 104 | \! pg_migrate --execute --alter='ADD COLUMN a1 INT' --dbname=contrib_regression --table=test_schema1.tbl1 105 | 106 | 107 | -- 108 | -- don't kill backend 109 | -- 110 | \! 
pg_migrate --execute --alter='ADD COLUMN dkb1 INT' --dbname=contrib_regression --table=tbl_cluster --no-kill-backend 111 | 112 | 113 | -- 114 | -- table inheritance check 115 | -- 116 | CREATE TABLE parent_a(val integer primary key); 117 | CREATE TABLE child_a_1(val integer primary key) INHERITS(parent_a); 118 | CREATE TABLE child_a_2(val integer primary key) INHERITS(parent_a); 119 | CREATE TABLE parent_b(val integer primary key, i1 int NOT NULL); 120 | CREATE TABLE child_b_1(val integer primary key) INHERITS(parent_b); 121 | CREATE TABLE child_b_2(val integer primary key) INHERITS(parent_b); 122 | -- => OK 123 | \! pg_migrate --execute --alter='ADD COLUMN a1 INT' --dbname=contrib_regression --table=parent_a 124 | -- => OK 125 | \! pg_migrate --execute --alter='ADD COLUMN a1 TEXT' --dbname=contrib_regression --table=child_a_1 126 | -- => ERROR 127 | -- TODO non deterministic output \! pg_migrate --execute --alter='NO INHERIT parent_a' --dbname=contrib_regression --table=child_a_2 128 | -- => ERROR 129 | -- TODO non deterministic output \! 
pg_migrate --execute --alter='ADD COLUMN i1 TEXT' --dbname=contrib_regression --table=child_b_1 130 | -------------------------------------------------------------------------------- /.github/workflows/test.yml: -------------------------------------------------------------------------------- 1 | name: CI 2 | on: [push, pull_request] 3 | jobs: 4 | test: 5 | strategy: 6 | matrix: 7 | pg: 8 | # - 15 9 | - 14 10 | - 13 11 | - 12 12 | - 11 13 | - 10 14 | - 9.6 15 | name: 🐘 PostgreSQL ${{ matrix.pg }} 16 | runs-on: ubuntu-latest 17 | # container: pgxn/pgxn-tools 18 | container: ghcr.io/phillbaker/docker-pgxn-tools:latest 19 | env: 20 | PROFILE: "-std=c99" 21 | TEST_PATH: "./bin" 22 | steps: 23 | - name: Start PostgreSQL ${{ matrix.pg }} 24 | run: pg-start ${{ matrix.pg }} 25 | - name: Check out the repo 26 | uses: actions/checkout@v2 27 | - name: Add local bin to path 28 | run: | 29 | echo "$GITHUB_WORKSPACE/bin" >> $GITHUB_PATH 30 | - name: Set tablespace to pg user 31 | run: | 32 | mkdir -p /tmp/pg-migrate-tablespace 33 | # if running as root, change the owner to postgres user 34 | if [ "$(id -u)" = "0" ]; then 35 | chown postgres:postgres /tmp/pg-migrate-tablespace 36 | fi 37 | - name: Test on PostgreSQL ${{ matrix.pg }} 38 | run: pg-build-test 39 | release: 40 | if: github.event_name == 'push' && startsWith(github.ref, 'refs/tags/v') 41 | needs: [test] 42 | name: Release on GitHub and PGXN 43 | runs-on: ubuntu-latest 44 | # container: pgxn/pgxn-tools 45 | container: ghcr.io/phillbaker/docker-pgxn-tools:latest 46 | env: 47 | GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} 48 | outputs: 49 | upload_url: ${{ steps.release.outputs.upload_url }} 50 | steps: 51 | - name: Check out the repo 52 | uses: actions/checkout@v2 53 | - name: Bundle the Release 54 | id: bundle 55 | run: pgxn-bundle 56 | - name: Release on PGXN 57 | env: 58 | PGXN_USERNAME: ${{ secrets.PGXN_USERNAME }} 59 | PGXN_PASSWORD: ${{ secrets.PGXN_PASSWORD }} 60 | run: pgxn-release 61 | - name: Create GitHub 
Release 62 | id: release 63 | uses: actions/create-release@v1 64 | with: 65 | tag_name: ${{ github.ref }} 66 | release_name: ${{ github.ref }} 67 | # draft: true 68 | # prerelease: false 69 | - name: Upload Release Asset 70 | uses: actions/upload-release-asset@v1 71 | with: 72 | # Reference the upload URL and bundle name from previous steps. 73 | upload_url: ${{ steps.release.outputs.upload_url }} 74 | asset_path: ./${{ steps.bundle.outputs.bundle }} 75 | asset_name: ${{ steps.bundle.outputs.bundle }} 76 | asset_content_type: application/zip 77 | package: 78 | if: github.event_name == 'push' && startsWith(github.ref, 'refs/tags/v') 79 | runs-on: ubuntu-latest 80 | needs: [release] 81 | name: Build Linux Packages 82 | strategy: 83 | matrix: 84 | os: [linux] 85 | arch: [x86_64] 86 | format: [deb] # rpm 87 | pg: 88 | - 14 89 | - 13 90 | - 12 91 | - 11 92 | - 10 93 | - 9.6 94 | container: ghcr.io/phillbaker/docker-pgxn-tools:latest 95 | env: 96 | PG: ${{ matrix.pg }} 97 | VERSION: ${{ github.ref }} 98 | ARCH: ${{ matrix.arch }} 99 | OS: ${{ matrix.os }} 100 | PROFILE: "-std=c99" 101 | steps: 102 | - id: get_version 103 | uses: battila7/get-version-action@v2 104 | - run: echo "VERSION=${{ steps.get_version.outputs.version-without-v }}" >> $GITHUB_ENV 105 | - name: Install PostgreSQL ${{ matrix.pg }} 106 | run: pg-start ${{ matrix.pg }} 107 | - name: Check out the repo 108 | uses: actions/checkout@v2 109 | - name: Install nfpm 110 | run: | 111 | echo 'deb [trusted=yes] https://repo.goreleaser.com/apt/ /' | sudo tee /etc/apt/sources.list.d/goreleaser.list 112 | sudo apt update 113 | sudo apt-get -y install nfpm 114 | - name: Build the project 115 | run: | 116 | make all 117 | - name: Build nfpm config file 118 | run: | 119 | sudo apt-get -y install --no-install-recommends gettext-base 120 | envsubst < nfpm.yaml.tmpl > nfpm.yaml 121 | - name: Build packages 122 | run: | 123 | nfpm pkg --packager deb 124 | mkdir -p deb 125 | mv *.deb deb/ 126 | - name: Upload deb 127 | 
uses: actions/upload-artifact@v3 128 | with: 129 | name: deb-${{ github.ref_name }} 130 | path: deb/*.deb 131 | package_release: 132 | if: github.event_name == 'push' && startsWith(github.ref, 'refs/tags/v') 133 | runs-on: ubuntu-latest 134 | needs: [package] 135 | name: Release Linux Packages 136 | container: ghcr.io/phillbaker/docker-pgxn-tools:latest 137 | steps: 138 | - name: Check out the repo 139 | uses: actions/checkout@v2 140 | - name: Pull 141 | uses: actions/download-artifact@v3 142 | with: 143 | name: deb-${{ github.ref_name }} 144 | path: deb 145 | - name: Install apt tools 146 | run: | 147 | sudo apt update 148 | sudo apt-get -y install --no-install-recommends python3-pip python3-apt gnupg apt-utils 149 | - name: Import GPG key 150 | id: import_gpg 151 | uses: crazy-max/ghaction-import-gpg@v4 152 | with: 153 | gpg_private_key: ${{ secrets.GPG_PRIVATE_KEY }} 154 | - name: Push to apt repo 155 | run: | 156 | mkdir -p ~/.gnupg 157 | # use fork with fixes 158 | pip install git+https://github.com/X4BNet/github-apt-repos.git 159 | github-apt-repos \ 160 | --github-token ${{ secrets.GITHUB_TOKEN }} \ 161 | --github-apt-repo phillbaker/pg_migrate \ 162 | --gpg-pub-key ./ci/key.gpg.pub \ 163 | --deb-dir=deb \ 164 | --github-release-prefix=release 165 | -------------------------------------------------------------------------------- /bin/pgut/pgut.h: -------------------------------------------------------------------------------- 1 | /*------------------------------------------------------------------------- 2 | * pgut.h 3 | * 4 | * Portions Copyright (c) 2008-2011, NIPPON TELEGRAPH AND TELEPHONE CORPORATION 5 | * Portions Copyright (c) 2011, Itagaki Takahiro 6 | * Portions Copyright (c) 2012-2020, The Reorg Development Team 7 | *------------------------------------------------------------------------- 8 | */ 9 | 10 | #ifndef PGUT_H 11 | #define PGUT_H 12 | 13 | #include "c.h" 14 | #include <assert.h> 15 | 16 | #ifndef WIN32 17 | #include <sys/time.h> 18 | #include <unistd.h> 19 | #endif 20 | 21 
| #include "libpq-fe.h" 22 | #include "pqexpbuffer.h" 23 | #include "utils/elog.h" 24 | 25 | #define INFINITE_STR "INFINITE" 26 | 27 | #ifdef _MSC_VER 28 | #define __attribute__(x) 29 | #endif 30 | 31 | typedef enum YesNo 32 | { 33 | DEFAULT, 34 | NO, 35 | YES 36 | } YesNo; 37 | 38 | typedef void (*pgut_atexit_callback)(bool fatal, void *userdata); 39 | 40 | /* 41 | * pgut client variables and functions 42 | */ 43 | extern const char *PROGRAM_NAME; 44 | extern const char *PROGRAM_VERSION; 45 | extern const char *PROGRAM_URL; 46 | extern const char *PROGRAM_ISSUES; 47 | 48 | /* 49 | * pgut framework variables and functions 50 | */ 51 | extern bool interrupted; 52 | extern int pgut_log_level; 53 | extern int pgut_abort_level; 54 | extern bool pgut_echo; 55 | 56 | extern void pgut_init(int argc, char **argv); 57 | extern void pgut_atexit_push(pgut_atexit_callback callback, void *userdata); 58 | extern void pgut_atexit_pop(pgut_atexit_callback callback, void *userdata); 59 | extern void pgut_putenv(const char *key, const char *value); 60 | 61 | /* 62 | * Database connections 63 | */ 64 | extern PGconn *pgut_connect(const char *info, YesNo prompt, int elevel); 65 | extern void pgut_disconnect(PGconn *conn); 66 | extern void pgut_disconnect_all(void); 67 | extern PGresult *pgut_execute(PGconn* conn, const char *query, int nParams, const char **params); 68 | PGresult *pgut_execute_elevel(PGconn* conn, const char *query, int nParams, const char **params, int elevel); 69 | extern ExecStatusType pgut_command(PGconn* conn, const char *query, int nParams, const char **params); 70 | extern bool pgut_commit(PGconn *conn); 71 | extern void pgut_rollback(PGconn *conn); 72 | extern bool pgut_send(PGconn* conn, const char *query, int nParams, const char **params); 73 | extern int pgut_wait(int num, PGconn *connections[], struct timeval *timeout); 74 | 75 | /* 76 | * memory allocators 77 | */ 78 | extern void *pgut_malloc(size_t size); 79 | extern void *pgut_realloc(void *p, size_t 
size); 80 | extern char *pgut_strdup(const char *str); 81 | extern char *strdup_with_len(const char *str, size_t len); 82 | extern char *strdup_trim(const char *str); 83 | 84 | #define pgut_new(type) ((type *) pgut_malloc(sizeof(type))) 85 | #define pgut_newarray(type, n) ((type *) pgut_malloc(sizeof(type) * (n))) 86 | #define pgut_newvar(type, m, n) ((type *) pgut_malloc(offsetof(type, m) + (n))) 87 | 88 | /* 89 | * file operations 90 | */ 91 | extern FILE *pgut_fopen(const char *path, const char *mode); 92 | extern bool pgut_mkdir(const char *path); 93 | 94 | /* 95 | * elog 96 | */ 97 | #define E_PG_CONNECT (-1) /* PostgreSQL connection error */ 98 | #define E_PG_COMMAND (-2) /* PostgreSQL query or command error */ 99 | 100 | #undef elog 101 | #undef ereport 102 | #define ereport(elevel, rest) \ 103 | (pgut_errstart(elevel) ? (pgut_errfinish rest) : (void) 0) 104 | 105 | extern void elog(int elevel, const char *fmt, ...) 106 | __attribute__((format(printf, 2, 3))); 107 | extern const char *format_elevel(int elevel); 108 | extern int parse_elevel(const char *value); 109 | extern int errcode_errno(void); 110 | extern bool log_required(int elevel, int log_min_level); 111 | extern bool pgut_errstart(int elevel); 112 | extern void pgut_errfinish(int dummy, ...); 113 | extern void pgut_error(int elevel, int code, const char *msg, const char *detail); 114 | 115 | /* 116 | * CHECK_FOR_INTERRUPTS 117 | */ 118 | #undef CHECK_FOR_INTERRUPTS 119 | extern void CHECK_FOR_INTERRUPTS(void); 120 | 121 | /* 122 | * Assert 123 | */ 124 | #undef Assert 125 | #undef AssertArg 126 | #undef AssertMacro 127 | 128 | #ifdef USE_ASSERT_CHECKING 129 | #define Assert(x) assert(x) 130 | #define AssertArg(x) assert(x) 131 | #define AssertMacro(x) assert(x) 132 | #else 133 | #define Assert(x) ((void) 0) 134 | #define AssertArg(x) ((void) 0) 135 | #define AssertMacro(x) ((void) 0) 136 | #endif 137 | 138 | /* 139 | * StringInfo and string operations 140 | */ 141 | #define STRINGINFO_H 142 | 143 | 
#define StringInfoData PQExpBufferData 144 | #define StringInfo PQExpBuffer 145 | #define makeStringInfo createPQExpBuffer 146 | #define initStringInfo initPQExpBuffer 147 | #define freeStringInfo destroyPQExpBuffer 148 | #define termStringInfo termPQExpBuffer 149 | #define resetStringInfo resetPQExpBuffer 150 | #define enlargeStringInfo enlargePQExpBuffer 151 | #define printfStringInfo printfPQExpBuffer /* reset + append */ 152 | #define appendStringInfo appendPQExpBuffer 153 | #define appendStringInfoString appendPQExpBufferStr 154 | #define appendStringInfoChar appendPQExpBufferChar 155 | #define appendBinaryStringInfo appendBinaryPQExpBuffer 156 | 157 | extern bool pgut_appendStringInfoVA(StringInfo str, const char *fmt, va_list args) 158 | __attribute__((format(printf, 2, 0))); 159 | extern int appendStringInfoFile(StringInfo str, FILE *fp); 160 | extern int appendStringInfoFd(StringInfo str, int fd); 161 | 162 | extern bool parse_bool(const char *value, bool *result); 163 | extern bool parse_bool_with_len(const char *value, size_t len, bool *result); 164 | extern bool parse_int32(const char *value, int32 *result); 165 | extern bool parse_uint32(const char *value, uint32 *result); 166 | extern bool parse_int64(const char *value, int64 *result); 167 | extern bool parse_uint64(const char *value, uint64 *result); 168 | extern bool parse_time(const char *value, time_t *time); 169 | 170 | #define IsSpace(c) (isspace((unsigned char)(c))) 171 | #define IsAlpha(c) (isalpha((unsigned char)(c))) 172 | #define IsAlnum(c) (isalnum((unsigned char)(c))) 173 | #define IsIdentHead(c) (IsAlpha(c) || (c) == '_') 174 | #define IsIdentBody(c) (IsAlnum(c) || (c) == '_') 175 | #define ToLower(c) (tolower((unsigned char)(c))) 176 | #define ToUpper(c) (toupper((unsigned char)(c))) 177 | 178 | /* linked list of string values and helper functions, stolen from pg_dump. 
*/ 179 | typedef struct SimpleStringListCell 180 | { 181 | struct SimpleStringListCell *next; 182 | char val[1]; /* VARIABLE LENGTH FIELD */ 183 | } SimpleStringListCell; 184 | 185 | typedef struct SimpleStringList 186 | { 187 | SimpleStringListCell *head; 188 | SimpleStringListCell *tail; 189 | } SimpleStringList; 190 | 191 | extern void simple_string_list_append(SimpleStringList *list, const char *val); 192 | extern bool simple_string_list_member(SimpleStringList *list, const char *val); 193 | extern size_t simple_string_list_size(SimpleStringList list); 194 | 195 | 196 | /* 197 | * socket operations 198 | */ 199 | extern int wait_for_socket(int sock, struct timeval *timeout); 200 | extern int wait_for_sockets(int nfds, fd_set *fds, struct timeval *timeout); 201 | 202 | #ifdef WIN32 203 | extern int sleep(unsigned int seconds); 204 | extern int usleep(unsigned int usec); 205 | #endif 206 | 207 | #endif /* PGUT_H */ 208 | -------------------------------------------------------------------------------- /regress/expected/tablespace_1.out: -------------------------------------------------------------------------------- 1 | SET client_min_messages = warning; 2 | -- 3 | -- Tablespace features tests 4 | -- 5 | DROP TABLESPACE IF EXISTS testts; 6 | \! 
mkdir -p /tmp/pg-migrate-tablespace 7 | CREATE TABLESPACE testts LOCATION '/tmp/pg-migrate-tablespace'; 8 | SELECT spcname FROM pg_tablespace WHERE spcname = 'testts'; 9 | spcname 10 | --------- 11 | testts 12 | (1 row) 13 | 14 | -- If the query above failed you must create the 'testts' tablespace; 15 | CREATE TABLE testts1 (id serial primary key, data text); 16 | CREATE INDEX testts1_partial_idx on testts1 (id) where (id > 0); 17 | CREATE INDEX testts1_with_idx on testts1 (id) with (fillfactor=80); 18 | INSERT INTO testts1 (data) values ('a'); 19 | INSERT INTO testts1 (data) values ('b'); 20 | INSERT INTO testts1 (data) values ('c'); 21 | -- check the indexes definitions 22 | SELECT regexp_replace( 23 | migrate.migrate_indexdef(indexrelid, 'testts1'::regclass, NULL, false, 'hash'), 24 | '_[0-9]+', '_OID', 'g') 25 | FROM pg_index i join pg_class c ON c.oid = indexrelid 26 | WHERE indrelid = 'testts1'::regclass ORDER BY relname; 27 | regexp_replace 28 | ------------------------------------------------------------------------------------------------- 29 | CREATE INDEX testts1_partial_idx_hash ON migrate.table_OID USING btree (id) WHERE (id > 0) 30 | CREATE UNIQUE INDEX testts1_pkey_hash ON migrate.table_OID USING btree (id) 31 | CREATE INDEX testts1_with_idx_hash ON migrate.table_OID USING btree (id) WITH (fillfactor='80') 32 | (3 rows) 33 | 34 | SELECT regexp_replace( 35 | migrate.migrate_indexdef(indexrelid, 'testts1'::regclass, 'foo', false, 'hash'), 36 | '_[0-9]+', '_OID', 'g') 37 | FROM pg_index i join pg_class c ON c.oid = indexrelid 38 | WHERE indrelid = 'testts1'::regclass ORDER BY relname; 39 | regexp_replace 40 | ---------------------------------------------------------------------------------------------------------------- 41 | CREATE INDEX testts1_partial_idx_hash ON migrate.table_OID USING btree (id) TABLESPACE foo WHERE (id > 0) 42 | CREATE UNIQUE INDEX testts1_pkey_hash ON migrate.table_OID USING btree (id) TABLESPACE foo 43 | CREATE INDEX 
testts1_with_idx_hash ON migrate.table_OID USING btree (id) WITH (fillfactor='80') TABLESPACE foo 44 | (3 rows) 45 | 46 | SELECT regexp_replace( 47 | migrate.migrate_indexdef(indexrelid, 'testts1'::regclass, NULL, true, 'hash'), 48 | '_[0-9]+', '_OID', 'g') 49 | FROM pg_index i join pg_class c ON c.oid = indexrelid 50 | WHERE indrelid = 'testts1'::regclass ORDER BY relname; 51 | regexp_replace 52 | ----------------------------------------------------------------------------------------------------------- 53 | CREATE INDEX CONCURRENTLY testts1_partial_idx_hash ON public.testts1 USING btree (id) WHERE (id > 0) 54 | CREATE UNIQUE INDEX CONCURRENTLY testts1_pkey_hash ON public.testts1 USING btree (id) 55 | CREATE INDEX CONCURRENTLY testts1_with_idx_hash ON public.testts1 USING btree (id) WITH (fillfactor='80') 56 | (3 rows) 57 | 58 | SELECT regexp_replace( 59 | migrate.migrate_indexdef(indexrelid, 'testts1'::regclass, 'foo', true, 'hash'), 60 | '_[0-9]+', '_OID', 'g') 61 | FROM pg_index i join pg_class c ON c.oid = indexrelid 62 | WHERE indrelid = 'testts1'::regclass ORDER BY relname; 63 | regexp_replace 64 | -------------------------------------------------------------------------------------------------------------------------- 65 | CREATE INDEX CONCURRENTLY testts1_partial_idx_hash ON public.testts1 USING btree (id) TABLESPACE foo WHERE (id > 0) 66 | CREATE UNIQUE INDEX CONCURRENTLY testts1_pkey_hash ON public.testts1 USING btree (id) TABLESPACE foo 67 | CREATE INDEX CONCURRENTLY testts1_with_idx_hash ON public.testts1 USING btree (id) WITH (fillfactor='80') TABLESPACE foo 68 | (3 rows) 69 | 70 | -- can specify the tablespace, other than default 71 | \! 
pg_migrate --dbname=contrib_regression --table=testts1 --tablespace testts --alter='ADD COLUMN a1 INT' --execute 72 | INFO: migrating table "public.testts1" 73 | INFO: altering table with: ADD COLUMN a1 INT 74 | SELECT relname, spcname 75 | FROM pg_class JOIN pg_tablespace ts ON ts.oid = reltablespace 76 | WHERE relname ~ '^testts1' AND NOT relname ~ '^testts1_pre_migrate' 77 | ORDER BY relname; 78 | relname | spcname 79 | ---------------------------+--------- 80 | testts1 | testts 81 | testts1_partial_idx_adf63 | testts 82 | testts1_pkey_adf63 | testts 83 | testts1_with_idx_adf63 | testts 84 | (4 rows) 85 | 86 | SELECT * from testts1 order by id; 87 | id | data | a1 88 | ----+------+---- 89 | 1 | a | 90 | 2 | b | 91 | 3 | c | 92 | (3 rows) 93 | 94 | -- tablespace stays where it is 95 | \! pg_migrate --dbname=contrib_regression --table=testts1 --alter='ADD COLUMN a2 INT' --execute 96 | INFO: migrating table "public.testts1" 97 | INFO: altering table with: ADD COLUMN a2 INT 98 | SELECT relname, spcname 99 | FROM pg_class JOIN pg_tablespace ts ON ts.oid = reltablespace 100 | WHERE relname ~ '^testts1' AND NOT relname ~ '^testts1_pre_migrate' 101 | ORDER BY relname; 102 | relname | spcname 103 | ---------------------------------+--------- 104 | testts1 | testts 105 | testts1_partial_idx_adf63 | testts 106 | testts1_partial_idx_adf63_adf63 | testts 107 | testts1_pkey_adf63 | testts 108 | testts1_pkey_adf63_adf63 | testts 109 | testts1_with_idx_adf63 | testts 110 | testts1_with_idx_adf63_adf63 | testts 111 | (7 rows) 112 | 113 | -- can move the tablespace back to default 114 | \! 
pg_migrate --dbname=contrib_regression --table=testts1 -s pg_default --alter='ADD COLUMN a3 INT' --execute 115 | INFO: migrating table "public.testts1" 116 | INFO: altering table with: ADD COLUMN a3 INT 117 | SELECT relname, spcname 118 | FROM pg_class JOIN pg_tablespace ts ON ts.oid = reltablespace 119 | WHERE relname ~ '^testts1' AND NOT relname ~ '^testts1_pre_migrate' 120 | ORDER BY relname; 121 | relname | spcname 122 | ---------------------------------+--------- 123 | testts1_partial_idx_adf63 | testts 124 | testts1_partial_idx_adf63_adf63 | testts 125 | testts1_pkey_adf63 | testts 126 | testts1_pkey_adf63_adf63 | testts 127 | testts1_with_idx_adf63 | testts 128 | testts1_with_idx_adf63_adf63 | testts 129 | (6 rows) 130 | 131 | -- can move the table together with the indexes 132 | \! pg_migrate --dbname=contrib_regression --table=testts1 --tablespace testts --alter='ADD COLUMN a4 INT' --execute 133 | INFO: migrating table "public.testts1" 134 | INFO: altering table with: ADD COLUMN a4 INT 135 | SELECT relname, spcname 136 | FROM pg_class JOIN pg_tablespace ts ON ts.oid = reltablespace 137 | WHERE relname ~ '^testts1' AND NOT relname ~ '^testts1_pre_migrate' 138 | ORDER BY relname; 139 | relname | spcname 140 | ---------------------------------------------+--------- 141 | testts1 | testts 142 | testts1_partial_idx_adf63 | testts 143 | testts1_partial_idx_adf63_adf63 | testts 144 | testts1_partial_idx_adf63_adf63_adf63_adf63 | testts 145 | testts1_pkey_adf63 | testts 146 | testts1_pkey_adf63_adf63 | testts 147 | testts1_pkey_adf63_adf63_adf63_adf63 | testts 148 | testts1_with_idx_adf63 | testts 149 | testts1_with_idx_adf63_adf63 | testts 150 | testts1_with_idx_adf63_adf63_adf63_adf63 | testts 151 | (10 rows) 152 | 153 | -------------------------------------------------------------------------------- /regress/expected/tablespace.out: -------------------------------------------------------------------------------- 1 | SET client_min_messages = warning; 2 | -- 3 | 
-- Tablespace features tests 4 | -- 5 | DROP TABLESPACE IF EXISTS testts; 6 | \! mkdir -p /tmp/pg-migrate-tablespace 7 | CREATE TABLESPACE testts LOCATION '/tmp/pg-migrate-tablespace'; 8 | SELECT spcname FROM pg_tablespace WHERE spcname = 'testts'; 9 | spcname 10 | --------- 11 | testts 12 | (1 row) 13 | 14 | -- If the query above failed you must create the 'testts' tablespace; 15 | CREATE TABLE testts1 (id serial primary key, data text); 16 | CREATE INDEX testts1_partial_idx on testts1 (id) where (id > 0); 17 | CREATE INDEX testts1_with_idx on testts1 (id) with (fillfactor=80); 18 | INSERT INTO testts1 (data) values ('a'); 19 | INSERT INTO testts1 (data) values ('b'); 20 | INSERT INTO testts1 (data) values ('c'); 21 | -- check the indexes definitions 22 | SELECT regexp_replace( 23 | migrate.migrate_indexdef(indexrelid, 'testts1'::regclass, NULL, false, 'hash'), 24 | '_[0-9]+', '_OID', 'g') 25 | FROM pg_index i join pg_class c ON c.oid = indexrelid 26 | WHERE indrelid = 'testts1'::regclass ORDER BY relname; 27 | regexp_replace 28 | ----------------------------------------------------------------------------------------------------------------------- 29 | CREATE INDEX testts1_partial_idx_hash ON migrate.table_OID USING btree (id) TABLESPACE pg_default WHERE (id > 0) 30 | CREATE UNIQUE INDEX testts1_pkey_hash ON migrate.table_OID USING btree (id) TABLESPACE pg_default 31 | CREATE INDEX testts1_with_idx_hash ON migrate.table_OID USING btree (id) WITH (fillfactor='80') TABLESPACE pg_default 32 | (3 rows) 33 | 34 | SELECT regexp_replace( 35 | migrate.migrate_indexdef(indexrelid, 'testts1'::regclass, 'foo', false, 'hash'), 36 | '_[0-9]+', '_OID', 'g') 37 | FROM pg_index i join pg_class c ON c.oid = indexrelid 38 | WHERE indrelid = 'testts1'::regclass ORDER BY relname; 39 | regexp_replace 40 | ---------------------------------------------------------------------------------------------------------------- 41 | CREATE INDEX testts1_partial_idx_hash ON migrate.table_OID 
USING btree (id) TABLESPACE foo WHERE (id > 0) 42 | CREATE UNIQUE INDEX testts1_pkey_hash ON migrate.table_OID USING btree (id) TABLESPACE foo 43 | CREATE INDEX testts1_with_idx_hash ON migrate.table_OID USING btree (id) WITH (fillfactor='80') TABLESPACE foo 44 | (3 rows) 45 | 46 | SELECT regexp_replace( 47 | migrate.migrate_indexdef(indexrelid, 'testts1'::regclass, NULL, true, 'hash'), 48 | '_[0-9]+', '_OID', 'g') 49 | FROM pg_index i join pg_class c ON c.oid = indexrelid 50 | WHERE indrelid = 'testts1'::regclass ORDER BY relname; 51 | regexp_replace 52 | --------------------------------------------------------------------------------------------------------------------------------- 53 | CREATE INDEX CONCURRENTLY testts1_partial_idx_hash ON public.testts1 USING btree (id) TABLESPACE pg_default WHERE (id > 0) 54 | CREATE UNIQUE INDEX CONCURRENTLY testts1_pkey_hash ON public.testts1 USING btree (id) TABLESPACE pg_default 55 | CREATE INDEX CONCURRENTLY testts1_with_idx_hash ON public.testts1 USING btree (id) WITH (fillfactor='80') TABLESPACE pg_default 56 | (3 rows) 57 | 58 | SELECT regexp_replace( 59 | migrate.migrate_indexdef(indexrelid, 'testts1'::regclass, 'foo', true, 'hash'), 60 | '_[0-9]+', '_OID', 'g') 61 | FROM pg_index i join pg_class c ON c.oid = indexrelid 62 | WHERE indrelid = 'testts1'::regclass ORDER BY relname; 63 | regexp_replace 64 | -------------------------------------------------------------------------------------------------------------------------- 65 | CREATE INDEX CONCURRENTLY testts1_partial_idx_hash ON public.testts1 USING btree (id) TABLESPACE foo WHERE (id > 0) 66 | CREATE UNIQUE INDEX CONCURRENTLY testts1_pkey_hash ON public.testts1 USING btree (id) TABLESPACE foo 67 | CREATE INDEX CONCURRENTLY testts1_with_idx_hash ON public.testts1 USING btree (id) WITH (fillfactor='80') TABLESPACE foo 68 | (3 rows) 69 | 70 | -- can specify the tablespace, other than default 71 | \! 
pg_migrate --dbname=contrib_regression --table=testts1 --tablespace testts --alter='ADD COLUMN a1 INT' --execute 72 | INFO: migrating table "public.testts1" 73 | INFO: altering table with: ADD COLUMN a1 INT 74 | SELECT relname, spcname 75 | FROM pg_class JOIN pg_tablespace ts ON ts.oid = reltablespace 76 | WHERE relname ~ '^testts1' AND NOT relname ~ '^testts1_pre_migrate' 77 | ORDER BY relname; 78 | relname | spcname 79 | ---------------------------+--------- 80 | testts1 | testts 81 | testts1_partial_idx_adf63 | testts 82 | testts1_pkey_adf63 | testts 83 | testts1_with_idx_adf63 | testts 84 | (4 rows) 85 | 86 | SELECT * from testts1 order by id; 87 | id | data | a1 88 | ----+------+---- 89 | 1 | a | 90 | 2 | b | 91 | 3 | c | 92 | (3 rows) 93 | 94 | -- tablespace stays where it is 95 | \! pg_migrate --dbname=contrib_regression --table=testts1 --alter='ADD COLUMN a2 INT' --execute 96 | INFO: migrating table "public.testts1" 97 | INFO: altering table with: ADD COLUMN a2 INT 98 | SELECT relname, spcname 99 | FROM pg_class JOIN pg_tablespace ts ON ts.oid = reltablespace 100 | WHERE relname ~ '^testts1' AND NOT relname ~ '^testts1_pre_migrate' 101 | ORDER BY relname; 102 | relname | spcname 103 | ---------------------------------+--------- 104 | testts1 | testts 105 | testts1_partial_idx_adf63 | testts 106 | testts1_partial_idx_adf63_adf63 | testts 107 | testts1_pkey_adf63 | testts 108 | testts1_pkey_adf63_adf63 | testts 109 | testts1_with_idx_adf63 | testts 110 | testts1_with_idx_adf63_adf63 | testts 111 | (7 rows) 112 | 113 | -- can move the tablespace back to default 114 | \! 
pg_migrate --dbname=contrib_regression --table=testts1 -s pg_default --alter='ADD COLUMN a3 INT' --execute 115 | INFO: migrating table "public.testts1" 116 | INFO: altering table with: ADD COLUMN a3 INT 117 | SELECT relname, spcname 118 | FROM pg_class JOIN pg_tablespace ts ON ts.oid = reltablespace 119 | WHERE relname ~ '^testts1' AND NOT relname ~ '^testts1_pre_migrate' 120 | ORDER BY relname; 121 | relname | spcname 122 | ---------------------------------+--------- 123 | testts1_partial_idx_adf63 | testts 124 | testts1_partial_idx_adf63_adf63 | testts 125 | testts1_pkey_adf63 | testts 126 | testts1_pkey_adf63_adf63 | testts 127 | testts1_with_idx_adf63 | testts 128 | testts1_with_idx_adf63_adf63 | testts 129 | (6 rows) 130 | 131 | -- can move the table together with the indexes 132 | \! pg_migrate --dbname=contrib_regression --table=testts1 --tablespace testts --alter='ADD COLUMN a4 INT' --execute 133 | INFO: migrating table "public.testts1" 134 | INFO: altering table with: ADD COLUMN a4 INT 135 | SELECT relname, spcname 136 | FROM pg_class JOIN pg_tablespace ts ON ts.oid = reltablespace 137 | WHERE relname ~ '^testts1' AND NOT relname ~ '^testts1_pre_migrate' 138 | ORDER BY relname; 139 | relname | spcname 140 | ---------------------------------------------+--------- 141 | testts1 | testts 142 | testts1_partial_idx_adf63 | testts 143 | testts1_partial_idx_adf63_adf63 | testts 144 | testts1_partial_idx_adf63_adf63_adf63_adf63 | testts 145 | testts1_pkey_adf63 | testts 146 | testts1_pkey_adf63_adf63 | testts 147 | testts1_pkey_adf63_adf63_adf63_adf63 | testts 148 | testts1_with_idx_adf63 | testts 149 | testts1_with_idx_adf63_adf63 | testts 150 | testts1_with_idx_adf63_adf63_adf63_adf63 | testts 151 | (10 rows) 152 | 153 | -------------------------------------------------------------------------------- /regress/expected/check.out: -------------------------------------------------------------------------------- 1 | SET client_min_messages = warning; 2 | SELECT 
col1, to_char("time", 'YYYY-MM-DD HH24:MI:SS'), ","")" FROM tbl_cluster ORDER BY 1, 2; 3 | col1 | to_char | ,") 4 | ------+---------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------------------------------------------------------------- 5 | 1 | 2008-12-31 10:00:00 | admin 6 | 2 | 2008-01-01 00:00:00 | king 7 | 3 | 2008-03-04 12:00:00 | joker 8 | 4 | 2008-03-05 15:00:00 | queen 9 | 5 | 2008-01-01 00:30:00 | 1.4142135623730950488016887242096980785696718753769480731766797379907324784621070388503875343276415727350138462309122970249248360558507372126441214970999358314132226659275055927557999505011527820605714701095599716059702745345968620147285174186408891986095523292304843087143214508397626036279952514079896872533965463318088296406206152583523950547457502877599617298355752203375318570113543746034084988471603868999706990048150305440277903164542478230684929369186215805784631115966687130130156185689872372352885092648612494977154218334204285686060146824720771435854874155657069677653720226485447015858801620758474922657226002085584466521458398893944370926591800311388246468157082630100594858704003186480342194897278290641045072636881313739855256117322040245091227700226941127573627280495738108967504018369868368450725799364729060762996941380475654823728997180326802474420629269124859052181004459842150591120249441341728531478105803603371077309182869314710171111683916581726889419758716582152128229518488471.73205080756887729352744634150587236694280525381038062805580697945193301690880003708114618675724857567562614141540670302996994509499895247881165551209437364852809323190230558206797482010108467492326501531234326690332288665067225466892183797122704713166036786158801904998653737985938946765034750657605075661834812960610094760218719032508314582952395983299778982450828871446383291734722416398458785539766795806381835366611084317378089437831610208830552490167002352071114428869599095636579708716849807289949329648428302078640860398873869753758231731783139599298300783870287705391336956331210370726401924910676823119928837564114142201674275210237299427083105989845947598766428889
7796147837958390228854852903576033852808064381972344661059689722872865264153822664698420021195484155278441181286534507035191650016689294415480846071277143999762926834629577438361895110127148638746976545982451788550975379013880664961911962222957110555242923723192197738262561631468842032853716682938649611917049738836395495938 10 | (5 rows) 11 | 12 | SELECT * FROM tbl_only_ckey ORDER BY 1; 13 | col1 | col2 | ,") 14 | ------+--------------------------+----- 15 | 1 | Tue Jan 01 00:00:00 2008 | abc 16 | 2 | Fri Feb 01 00:00:00 2008 | def 17 | (2 rows) 18 | 19 | SELECT * FROM tbl_only_pkey ORDER BY 1; 20 | col1 | ,") | a1 21 | ------+-----+---- 22 | 1 | abc | 23 | 2 | def | 24 | (2 rows) 25 | 26 | SELECT * FROM tbl_gistkey ORDER BY 1; 27 | id | c | a1 28 | ----+-----------+---- 29 | 1 | <(1,2),3> | 30 | 2 | <(4,5),6> | 31 | (2 rows) 32 | 33 | SET enable_seqscan = on; 34 | SET enable_indexscan = off; 35 | SELECT * FROM tbl_with_dropped_column ; 36 | c1 | id | c2 | c3 | a1 37 | ----+----+----+----+---- 38 | c1 | 1 | c2 | | 39 | c1 | 2 | c2 | | 40 | (2 rows) 41 | 42 | SELECT * FROM view_for_tbl ORDER BY 1, 2; 43 | c1 | id | c2 44 | ----+----+---- 45 | c1 | 1 | c2 46 | c1 | 2 | c2 47 | (2 rows) 48 | 49 | SELECT * FROM tbl_with_dropped_toast; 50 | i | j | a1 51 | ---+----+---- 52 | 1 | 10 | 53 | 2 | 20 | 54 | (2 rows) 55 | 56 | SET enable_seqscan = off; 57 | SET enable_indexscan = on; 58 | SELECT * FROM tbl_with_dropped_column ORDER BY 1, 2; 59 | c1 | id | c2 | c3 | a1 60 | ----+----+----+----+---- 61 | c1 | 1 | c2 | | 62 | c1 | 2 | c2 | | 63 | (2 rows) 64 | 65 | SELECT * FROM view_for_tbl; 66 | c1 | id | c2 67 | ----+----+---- 68 | c1 | 2 | c2 69 | c1 | 1 | c2 70 | (2 rows) 71 | 72 | SELECT * FROM tbl_with_dropped_toast; 73 | i | j | a1 74 | ---+----+---- 75 | 1 | 10 | 76 | 2 | 20 | 77 | (2 rows) 78 | 79 | RESET enable_seqscan; 80 | RESET enable_indexscan; 81 | -- check if storage option for both table and TOAST table didn't go away. 
82 | SELECT CASE relkind 83 | WHEN 'r' THEN relname 84 | WHEN 't' THEN 'toast_table' 85 | END as table, 86 | reloptions 87 | FROM pg_class 88 | WHERE relname = 'tbl_with_toast' OR relname = 'pg_toast_' || 'tbl_with_toast'::regclass::oid 89 | ORDER BY 1; 90 | table | reloptions 91 | ----------------+--------------------------------------------------------------------- 92 | tbl_with_toast | {autovacuum_vacuum_scale_factor=30,autovacuum_vacuum_threshold=300} 93 | toast_table | {autovacuum_vacuum_scale_factor=40,autovacuum_vacuum_threshold=400} 94 | (2 rows) 95 | 96 | SELECT pg_relation_size(reltoastrelid) = 0 as check_toast_rel_size FROM pg_class WHERE relname = 'tbl_with_mod_column_storage'; 97 | check_toast_rel_size 98 | ---------------------- 99 | t 100 | (1 row) 101 | 102 | -- 103 | -- check broken links or orphan toast relations 104 | -- 105 | SELECT oid, relname 106 | FROM pg_class 107 | WHERE relkind = 't' 108 | AND oid NOT IN (SELECT reltoastrelid FROM pg_class WHERE relkind = 'r'); 109 | oid | relname 110 | -----+--------- 111 | (0 rows) 112 | 113 | SELECT oid, relname 114 | FROM pg_class 115 | WHERE relkind = 'r' 116 | AND reltoastrelid <> 0 117 | AND reltoastrelid NOT IN (SELECT oid FROM pg_class WHERE relkind = 't'); 118 | oid | relname 119 | -----+--------- 120 | (0 rows) 121 | 122 | -- check columns options 123 | SELECT attname, attstattarget, attoptions 124 | FROM pg_attribute 125 | WHERE attrelid = 'tbl_idxopts'::regclass 126 | AND attnum > 0 127 | ORDER BY attnum; 128 | attname | attstattarget | attoptions 129 | ---------+---------------+------------------- 130 | i | -1 | 131 | t | -1 | {n_distinct=-0.5} 132 | a1 | -1 | 133 | (3 rows) 134 | 135 | -- 136 | -- NOT NULL UNIQUE 137 | -- 138 | CREATE TABLE tbl_nn (col1 int NOT NULL, col2 int NOT NULL); 139 | CREATE TABLE tbl_uk (col1 int NOT NULL, col2 int , UNIQUE(col1, col2)); 140 | CREATE TABLE tbl_nn_uk (col1 int NOT NULL, col2 int NOT NULL, UNIQUE(col1, col2)); 141 | CREATE TABLE tbl_pk_uk (col1 int 
NOT NULL, col2 int NOT NULL, PRIMARY KEY(col1, col2), UNIQUE(col2, col1)); 142 | CREATE TABLE tbl_nn_puk (col1 int NOT NULL, col2 int NOT NULL); 143 | CREATE UNIQUE INDEX tbl_nn_puk_pcol1_idx ON tbl_nn_puk(col1) WHERE col1 < 10; 144 | \! pg_migrate --execute --alter='ADD COLUMN a1 INT' --dbname=contrib_regression --table=tbl_nn 145 | WARNING: relation "public.tbl_nn" must have a primary key or not-null unique keys 146 | -- => WARNING 147 | \! pg_migrate --execute --alter='ADD COLUMN a1 INT' --dbname=contrib_regression --table=tbl_uk 148 | WARNING: relation "public.tbl_uk" must have a primary key or not-null unique keys 149 | -- => WARNING 150 | \! pg_migrate --execute --alter='ADD COLUMN a1 INT' --dbname=contrib_regression --table=tbl_nn_uk 151 | INFO: migrating table "public.tbl_nn_uk" 152 | INFO: altering table with: ADD COLUMN a1 INT 153 | -- => OK 154 | \! pg_migrate --execute --alter='ADD COLUMN a1 INT' --dbname=contrib_regression --table=tbl_pk_uk 155 | INFO: migrating table "public.tbl_pk_uk" 156 | INFO: altering table with: ADD COLUMN a1 INT 157 | -- => OK 158 | \! pg_migrate --execute --alter='ADD COLUMN a1 INT' --dbname=contrib_regression --table=tbl_nn_puk 159 | WARNING: relation "public.tbl_nn_puk" must have a primary key or not-null unique keys 160 | -- => WARNING 161 | -- 162 | -- Triggers handling 163 | -- 164 | CREATE FUNCTION trgtest() RETURNS trigger AS 165 | $$BEGIN RETURN NEW; END$$ 166 | LANGUAGE plpgsql; 167 | CREATE TABLE trg1 (id integer PRIMARY KEY); 168 | CREATE TRIGGER repack_trigger_1 AFTER UPDATE ON trg1 FOR EACH ROW EXECUTE PROCEDURE trgtest(); 169 | \! pg_migrate --execute --alter='ADD COLUMN a1 INT' --dbname=contrib_regression --table=trg1 170 | INFO: migrating table "public.trg1" 171 | INFO: altering table with: ADD COLUMN a1 INT 172 | CREATE TABLE trg2 (id integer PRIMARY KEY); 173 | CREATE TRIGGER repack_trigger AFTER UPDATE ON trg2 FOR EACH ROW EXECUTE PROCEDURE trgtest(); 174 | \! 
pg_migrate --execute --alter='ADD COLUMN a1 INT' --dbname=contrib_regression --table=trg2 175 | INFO: migrating table "public.trg2" 176 | INFO: altering table with: ADD COLUMN a1 INT 177 | CREATE TABLE trg3 (id integer PRIMARY KEY); 178 | CREATE TRIGGER repack_trigger_1 BEFORE UPDATE ON trg3 FOR EACH ROW EXECUTE PROCEDURE trgtest(); 179 | \! pg_migrate --execute --alter='ADD COLUMN a1 INT' --dbname=contrib_regression --table=trg3 180 | INFO: migrating table "public.trg3" 181 | INFO: altering table with: ADD COLUMN a1 INT 182 | -- 183 | -- Dry run 184 | -- 185 | \! pg_migrate --alter='ADD COLUMN a1 INT' --dbname=contrib_regression --table=tbl_cluster 186 | INFO: Dry run enabled, not executing migration, run with --execute to process. 187 | INFO: migrating table "public.tbl_cluster" 188 | -- Test --schema 189 | -- 190 | CREATE SCHEMA test_schema1; 191 | CREATE TABLE test_schema1.tbl1 (id INTEGER PRIMARY KEY); 192 | CREATE TABLE test_schema1.tbl2 (id INTEGER PRIMARY KEY); 193 | CREATE SCHEMA test_schema2; 194 | CREATE TABLE test_schema2.tbl1 (id INTEGER PRIMARY KEY); 195 | CREATE TABLE test_schema2.tbl2 (id INTEGER PRIMARY KEY); 196 | -- => OK 197 | \! pg_migrate --execute --alter='ADD COLUMN a1 INT' --dbname=contrib_regression --table=test_schema1.tbl1 198 | INFO: migrating table "test_schema1.tbl1" 199 | ERROR: query failed: command string is a null pointer 200 | DETAIL: query was: (null) 201 | -- 202 | -- don't kill backend 203 | -- 204 | \! 
pg_migrate --execute --alter='ADD COLUMN dkb1 INT' --dbname=contrib_regression --table=tbl_cluster --no-kill-backend 205 | INFO: migrating table "public.tbl_cluster" 206 | INFO: altering table with: ADD COLUMN dkb1 INT 207 | -- 208 | -- table inheritance check 209 | -- 210 | CREATE TABLE parent_a(val integer primary key); 211 | CREATE TABLE child_a_1(val integer primary key) INHERITS(parent_a); 212 | CREATE TABLE child_a_2(val integer primary key) INHERITS(parent_a); 213 | CREATE TABLE parent_b(val integer primary key, i1 int NOT NULL); 214 | CREATE TABLE child_b_1(val integer primary key) INHERITS(parent_b); 215 | CREATE TABLE child_b_2(val integer primary key) INHERITS(parent_b); 216 | -- => OK 217 | \! pg_migrate --execute --alter='ADD COLUMN a1 INT' --dbname=contrib_regression --table=parent_a 218 | INFO: migrating table "public.parent_a" 219 | INFO: altering table with: ADD COLUMN a1 INT 220 | -- => OK 221 | \! pg_migrate --execute --alter='ADD COLUMN a1 TEXT' --dbname=contrib_regression --table=child_a_1 222 | INFO: migrating table "public.child_a_1" 223 | INFO: altering table with: ADD COLUMN a1 TEXT 224 | -- => ERROR 225 | -- TODO non deterministic output \! pg_migrate --execute --alter='NO INHERIT parent_a' --dbname=contrib_regression --table=child_a_2 226 | -- => ERROR 227 | -- TODO non deterministic output \! 
pg_migrate --execute --alter='ADD COLUMN i1 TEXT' --dbname=contrib_regression --table=child_b_1 228 | -------------------------------------------------------------------------------- /lib/pg_migrate.sql.in: -------------------------------------------------------------------------------- 1 | /* 2 | * pg_migrate: lib/pg_migrate.sql.in 3 | * 4 | * Portions Copyright (c) 2008-2011, NIPPON TELEGRAPH AND TELEPHONE CORPORATION 5 | * Portions Copyright (c) 2011, Itagaki Takahiro 6 | * Portions Copyright (c) 2012-2020, The Reorg Development Team 7 | */ 8 | 9 | CREATE SCHEMA migrate; 10 | 11 | CREATE FUNCTION migrate.version() RETURNS text AS 12 | 'MODULE_PATHNAME', 'migrate_version' 13 | LANGUAGE C IMMUTABLE STRICT; 14 | 15 | CREATE FUNCTION migrate.version_sql() RETURNS text AS 16 | $$SELECT 'pg_migrate MIGRATE_VERSION'::text$$ 17 | LANGUAGE SQL IMMUTABLE STRICT; 18 | 19 | -- Always specify search_path to 'pg_catalog' so that we 20 | -- always can get schema-qualified relation name 21 | CREATE FUNCTION migrate.oid2text(oid) RETURNS text AS 22 | $$ 23 | SELECT textin(regclassout($1)); 24 | $$ 25 | LANGUAGE sql STABLE STRICT SET search_path to 'pg_catalog'; 26 | 27 | CREATE FUNCTION migrate.get_index_columns(oid, text) RETURNS text AS 28 | $$ 29 | SELECT coalesce(string_agg(quote_ident(attname), $2), '') 30 | FROM pg_attribute, 31 | (SELECT indrelid, 32 | indkey, 33 | generate_series(0, indnatts-1) AS i 34 | FROM pg_index 35 | WHERE indexrelid = $1 36 | ) AS keys 37 | WHERE attrelid = indrelid 38 | AND attnum = indkey[i]; 39 | $$ 40 | LANGUAGE sql STABLE STRICT; 41 | 42 | CREATE FUNCTION migrate.get_order_by(oid, oid) RETURNS text AS 43 | 'MODULE_PATHNAME', 'migrate_get_order_by' 44 | LANGUAGE C STABLE STRICT; 45 | 46 | CREATE FUNCTION migrate.get_create_index_type(oid, name) RETURNS text AS 47 | $$ 48 | SELECT 'CREATE TYPE ' || $2 || ' AS (' || 49 | coalesce(string_agg(quote_ident(attname) || ' ' || 50 | pg_catalog.format_type(atttypid, atttypmod), ', '), '') || ')' 51 | 
FROM pg_attribute, 52 | (SELECT indrelid, 53 | indkey, 54 | generate_series(0, indnatts-1) AS i 55 | FROM pg_index 56 | WHERE indexrelid = $1 57 | ) AS keys 58 | WHERE attrelid = indrelid 59 | AND attnum = indkey[i]; 60 | $$ 61 | LANGUAGE sql STABLE STRICT; 62 | 63 | CREATE FUNCTION migrate.get_create_trigger(relid oid, pkid oid) 64 | RETURNS text AS 65 | $$ 66 | SELECT 'CREATE TRIGGER migrate_trigger' || 67 | ' AFTER INSERT OR DELETE OR UPDATE ON ' || migrate.oid2text($1) || 68 | ' FOR EACH ROW EXECUTE PROCEDURE migrate.migrate_trigger(' || 69 | '''INSERT INTO migrate.log_' || $1 || '(pk, row) VALUES(' || 70 | ' CASE WHEN $1 IS NULL THEN NULL ELSE (ROW($1.' || 71 | migrate.get_index_columns($2, ', $1.') || ')::migrate.pk_' || 72 | $1 || ') END, $2)'')'; 73 | $$ 74 | LANGUAGE sql STABLE STRICT; 75 | 76 | CREATE FUNCTION migrate.get_enable_trigger(relid oid) 77 | RETURNS text AS 78 | $$ 79 | SELECT 'ALTER TABLE ' || migrate.oid2text($1) || 80 | ' ENABLE ALWAYS TRIGGER migrate_trigger'; 81 | $$ 82 | LANGUAGE sql STABLE STRICT; 83 | 84 | CREATE FUNCTION migrate.get_assign(oid, text) RETURNS text AS 85 | $$ 86 | SELECT '(' || coalesce(string_agg(quote_ident(attname), ', '), '') || 87 | ') = (' || $2 || '.' || 88 | coalesce(string_agg(quote_ident(attname), ', ' || $2 || '.'), '') || ')' 89 | FROM (SELECT attname FROM pg_attribute 90 | WHERE attrelid = $1 AND attnum > 0 AND NOT attisdropped 91 | ORDER BY attnum) tmp; 92 | $$ 93 | LANGUAGE sql STABLE STRICT; 94 | 95 | CREATE FUNCTION migrate.get_compare_pkey(oid, text) 96 | RETURNS text AS 97 | $$ 98 | SELECT '(' || coalesce(string_agg(quote_ident(attname), ', '), '') || 99 | ') = (' || $2 || '.' 
|| 100 | coalesce(string_agg(quote_ident(attname), ', ' || $2 || '.'), '') || ')' 101 | FROM pg_attribute, 102 | (SELECT indrelid, 103 | indkey, 104 | generate_series(0, indnatts-1) AS i 105 | FROM pg_index 106 | WHERE indexrelid = $1 107 | ) AS keys 108 | WHERE attrelid = indrelid 109 | AND attnum = indkey[i]; 110 | $$ 111 | LANGUAGE sql STABLE STRICT; 112 | 113 | -- Get a column list for SELECTing all columns, including dropped ones. 114 | -- We use NULLs of integer types for dropped columns (types are not important). 115 | CREATE FUNCTION migrate.get_columns_for_create_as(oid) 116 | RETURNS text AS 117 | $$ 118 | SELECT coalesce(string_agg(c, ','), '') FROM (SELECT 119 | CASE WHEN attisdropped 120 | THEN 'NULL::integer AS ' || quote_ident(attname) 121 | ELSE quote_ident(attname) 122 | END AS c 123 | FROM pg_attribute 124 | WHERE attrelid = $1 AND attnum > 0 ORDER BY attnum 125 | ) AS COL 126 | $$ 127 | LANGUAGE sql STABLE STRICT; 128 | 129 | -- Get a column list for SELECTing all columns, excluding dropped ones. 130 | CREATE FUNCTION migrate.get_columns_for_insert(oid) 131 | RETURNS text AS 132 | $$ 133 | SELECT coalesce(string_agg(c, ','), '') FROM (SELECT 134 | CASE WHEN attisdropped 135 | THEN NULL -- note: string_agg ignores NULLs 136 | ELSE quote_ident(attname) 137 | END AS c 138 | FROM pg_attribute 139 | WHERE attrelid = $1 AND attnum > 0 ORDER BY attnum 140 | ) AS COL 141 | $$ 142 | LANGUAGE sql STABLE STRICT; 143 | 144 | -- Get a SQL text to DROP dropped columns for the table, 145 | -- or NULL if it has no dropped columns. 
146 | CREATE FUNCTION migrate.get_drop_columns(oid, text) 147 | RETURNS text AS 148 | $$ 149 | SELECT 150 | 'ALTER TABLE ' || $2 || ' ' || array_to_string(dropped_columns, ', ') 151 | FROM ( 152 | SELECT 153 | array_agg('DROP COLUMN ' || quote_ident(attname)) AS dropped_columns 154 | FROM ( 155 | SELECT * FROM pg_attribute 156 | WHERE attrelid = $1 AND attnum > 0 AND attisdropped 157 | ORDER BY attnum 158 | ) T 159 | ) T 160 | WHERE 161 | array_upper(dropped_columns, 1) > 0 162 | $$ 163 | LANGUAGE sql STABLE STRICT; 164 | 165 | -- Get a comma-separated list of storage parameters for the table, 166 | -- including parameters for the corresponding TOAST table. 167 | -- Note that since the oids setting is never NULL, this function 168 | -- never returns NULL. 169 | CREATE FUNCTION migrate.get_storage_param(oid) 170 | RETURNS TEXT AS 171 | $$ 172 | SELECT string_agg(param, ', ') 173 | FROM ( 174 | -- table storage parameter 175 | SELECT unnest(reloptions) as param 176 | FROM pg_class 177 | WHERE oid = $1 178 | UNION ALL 179 | -- TOAST table storage parameter 180 | SELECT ('toast.' || unnest(reloptions)) as param 181 | FROM ( 182 | SELECT reltoastrelid from pg_class where oid = $1 183 | ) as t, 184 | pg_class as c 185 | WHERE c.oid = t.reltoastrelid 186 | UNION ALL 187 | -- table oid 188 | SELECT 'oids = ' || 189 | CASE WHEN relhasoids 190 | THEN 'true' 191 | ELSE 'false' 192 | END 193 | FROM pg_class 194 | WHERE oid = $1 195 | 196 | ) as t 197 | $$ 198 | LANGUAGE sql STABLE STRICT; 199 | 200 | -- Get the SQL text to set column storage options for the table. 
201 | CREATE FUNCTION migrate.get_alter_col_storage(oid) 202 | RETURNS text AS 203 | $$ 204 | SELECT 'ALTER TABLE migrate.table_' || $1 || array_to_string(column_storage, ',') 205 | FROM ( 206 | SELECT 207 | array_agg(' ALTER ' || quote_ident(attname) || 208 | CASE attstorage 209 | WHEN 'p' THEN ' SET STORAGE PLAIN' 210 | WHEN 'm' THEN ' SET STORAGE MAIN' 211 | WHEN 'e' THEN ' SET STORAGE EXTERNAL' 212 | WHEN 'x' THEN ' SET STORAGE EXTENDED' 213 | END) AS column_storage 214 | FROM ( 215 | SELECT * 216 | FROM pg_attribute a 217 | JOIN pg_type t on t.oid = atttypid 218 | JOIN pg_class r on r.oid = a.attrelid 219 | JOIN pg_namespace s on s.oid = r.relnamespace 220 | WHERE typstorage <> attstorage 221 | AND attrelid = $1 222 | AND attnum > 0 223 | AND NOT attisdropped 224 | ORDER BY attnum 225 | ) T 226 | ) T 227 | WHERE array_upper(column_storage, 1) > 0 228 | $$ 229 | LANGUAGE sql STABLE STRICT; 230 | 231 | -- Get the SQL text to create a table with defaults and NOT NULL settings. 232 | -- new_table_name can include a schema prefix separated by a period. 
233 | CREATE FUNCTION migrate.get_create_table_statement(target_schema varchar, target_table_name varchar, new_table_name varchar, tablespace varchar) 234 | RETURNS text AS 235 | $$ 236 | DECLARE 237 | table_ddl text; 238 | target_oid oid; 239 | column_record record; 240 | BEGIN 241 | FOR column_record IN 242 | SELECT 243 | row_number() OVER () as row_number, 244 | b.oid as table_oid, 245 | quote_ident(b.nspname) as schema_name, 246 | quote_ident(b.relname) as table_name, 247 | quote_ident(a.attname) as column_name, 248 | pg_catalog.format_type(a.atttypid, a.atttypmod) as column_type, 249 | CASE WHEN 250 | (SELECT substring(pg_catalog.pg_get_expr(d.adbin, d.adrelid) for 128) 251 | FROM pg_catalog.pg_attrdef d 252 | WHERE d.adrelid = a.attrelid AND d.adnum = a.attnum AND a.atthasdef) IS NOT NULL THEN 253 | 'DEFAULT ' || (SELECT substring(pg_catalog.pg_get_expr(d.adbin, d.adrelid) for 128) 254 | FROM pg_catalog.pg_attrdef d 255 | WHERE d.adrelid = a.attrelid AND d.adnum = a.attnum AND a.atthasdef) 256 | ELSE 257 | '' 258 | END as column_default_value, 259 | CASE WHEN a.attnotnull = true THEN 260 | 'NOT NULL' 261 | ELSE 262 | 'NULL' 263 | END as column_not_null, 264 | a.attnum as attnum, 265 | e.max_attnum as max_attnum 266 | FROM 267 | pg_catalog.pg_attribute a 268 | INNER JOIN 269 | (SELECT c.oid, 270 | n.nspname, 271 | c.relname 272 | FROM pg_catalog.pg_class c 273 | LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace 274 | WHERE c.relname ~ ('^(' || target_table_name || ')$') 275 | AND n.nspname = target_schema 276 | AND pg_catalog.pg_table_is_visible(c.oid) 277 | ORDER BY 2, 3) b 278 | ON a.attrelid = b.oid 279 | INNER JOIN 280 | (SELECT 281 | a.attrelid, 282 | max(a.attnum) as max_attnum 283 | FROM pg_catalog.pg_attribute a 284 | WHERE a.attnum > 0 285 | AND NOT a.attisdropped 286 | GROUP BY a.attrelid) e 287 | ON a.attrelid=e.attrelid 288 | WHERE a.attnum > 0 289 | AND NOT a.attisdropped 290 | ORDER BY a.attnum 291 | LOOP 292 | IF 
column_record.row_number = 1 THEN 293 | target_oid := column_record.table_oid; 294 | table_ddl := 'CREATE TABLE ' || new_table_name || ' ('; 295 | ELSE 296 | table_ddl := table_ddl || ','; 297 | END IF; 298 | 299 | IF column_record.attnum <= column_record.max_attnum THEN 300 | table_ddl := table_ddl || chr(10) || 301 | ' ' || column_record.column_name || ' ' || column_record.column_type || ' ' || column_record.column_default_value || ' ' || column_record.column_not_null; 302 | END IF; 303 | END LOOP; 304 | 305 | table_ddl := table_ddl || ') ' || ' WITH (' || migrate.get_storage_param(target_oid) || ') TABLESPACE ' || tablespace || ';'; 306 | RETURN table_ddl; 307 | END; 308 | $$ 309 | LANGUAGE 'plpgsql' COST 100.0 SECURITY INVOKER; 310 | 311 | -- includes not only PRIMARY KEYS but also UNIQUE NOT NULL keys 312 | CREATE VIEW migrate.primary_keys AS 313 | SELECT indrelid, min(indexrelid) AS indexrelid 314 | FROM (SELECT indrelid, indexrelid FROM pg_index 315 | WHERE indisunique 316 | AND indisvalid 317 | AND indpred IS NULL 318 | AND 0 <> ALL(indkey) 319 | AND NOT EXISTS( 320 | SELECT 1 FROM pg_attribute 321 | WHERE attrelid = indrelid 322 | AND attnum = ANY(indkey) 323 | AND NOT attnotnull) 324 | ORDER BY indrelid, indisprimary DESC, indnatts, indkey) tmp 325 | GROUP BY indrelid; 326 | 327 | CREATE VIEW migrate.tables AS 328 | SELECT migrate.oid2text(R.oid) AS relname, 329 | R.oid AS relid, 330 | R.reltoastrelid AS reltoastrelid, 331 | CASE WHEN R.reltoastrelid = 0 THEN 0 ELSE ( 332 | SELECT indexrelid FROM pg_index 333 | WHERE indrelid = R.reltoastrelid 334 | AND indisvalid) END AS reltoastidxid, 335 | N.nspname AS schemaname, 336 | PK.indexrelid AS pkid, 337 | CK.indexrelid AS ckid, 338 | migrate.get_create_index_type(PK.indexrelid, 'migrate.pk_' || R.oid) AS create_pktype, 339 | 'CREATE TABLE migrate.log_' || R.oid || ' (id bigserial PRIMARY KEY, pk migrate.pk_' || R.oid || ', row ' || migrate.oid2text(R.oid) || ')' AS create_log, 340 | 
migrate.get_create_trigger(R.oid, PK.indexrelid) AS create_trigger, 341 | migrate.get_enable_trigger(R.oid) as enable_trigger, 342 | 'CREATE TABLE migrate.table_' || R.oid || ' WITH (' || migrate.get_storage_param(R.oid) || ') TABLESPACE ' AS create_table_1, 343 | coalesce(quote_ident(S.spcname), 'pg_default') as tablespace_orig, 344 | ' AS SELECT ' || migrate.get_columns_for_create_as(R.oid) || ' FROM ONLY ' || migrate.oid2text(R.oid) AS create_table_2, 345 | 'INSERT INTO migrate.table_' || R.oid || ' SELECT ' || migrate.get_columns_for_insert(R.oid) || ' FROM ONLY ' || migrate.oid2text(R.oid) AS copy_data, 346 | migrate.get_alter_col_storage(R.oid) AS alter_col_storage, 347 | migrate.get_drop_columns(R.oid, 'migrate.table_' || R.oid) AS drop_columns, 348 | 'DELETE FROM migrate.log_' || R.oid AS delete_log, 349 | 'LOCK TABLE ' || migrate.oid2text(R.oid) || ' IN ACCESS EXCLUSIVE MODE' AS lock_table, 350 | migrate.get_order_by(CK.indexrelid, R.oid) AS ckey, 351 | 'SELECT * FROM migrate.log_' || R.oid || ' ORDER BY id LIMIT $1' AS sql_peek, 352 | 'INSERT INTO migrate.table_' || R.oid || ' VALUES ($1.*)' AS sql_insert, 353 | 'DELETE FROM migrate.table_' || R.oid || ' WHERE ' || migrate.get_compare_pkey(PK.indexrelid, '$1') AS sql_delete, 354 | 'UPDATE migrate.table_' || R.oid || ' SET ' || migrate.get_assign(R.oid, '$2') || ' WHERE ' || migrate.get_compare_pkey(PK.indexrelid, '$1') AS sql_update, 355 | 'DELETE FROM migrate.log_' || R.oid || ' WHERE id IN (' AS sql_pop 356 | FROM pg_class R 357 | LEFT JOIN pg_class T ON R.reltoastrelid = T.oid 358 | LEFT JOIN migrate.primary_keys PK 359 | ON R.oid = PK.indrelid 360 | LEFT JOIN (SELECT CKI.* FROM pg_index CKI, pg_class CKT 361 | WHERE CKI.indisvalid 362 | AND CKI.indexrelid = CKT.oid 363 | AND CKI.indisclustered 364 | AND CKT.relam = 403) CK 365 | ON R.oid = CK.indrelid 366 | LEFT JOIN pg_namespace N ON N.oid = R.relnamespace 367 | LEFT JOIN pg_tablespace S ON S.oid = R.reltablespace 368 | WHERE R.relkind = 'r' 369 | 
AND R.relpersistence = 'p' 370 | AND N.nspname NOT IN ('pg_catalog', 'information_schema') 371 | AND N.nspname NOT LIKE E'pg\\_temp\\_%'; 372 | 373 | CREATE FUNCTION migrate.migrate_indexdef(oid, oid, name, bool, text) RETURNS text AS 374 | 'MODULE_PATHNAME', 'migrate_indexdef' 375 | LANGUAGE C STABLE; 376 | 377 | CREATE FUNCTION migrate.migrate_trigger() RETURNS trigger AS 378 | 'MODULE_PATHNAME', 'migrate_trigger' 379 | LANGUAGE C VOLATILE STRICT SECURITY DEFINER; 380 | 381 | CREATE FUNCTION migrate.conflicted_triggers(oid) RETURNS SETOF name AS 382 | $$ 383 | SELECT tgname FROM pg_trigger 384 | WHERE tgrelid = $1 AND tgname = 'migrate_trigger' 385 | ORDER BY tgname; 386 | $$ 387 | LANGUAGE sql STABLE STRICT; 388 | 389 | CREATE FUNCTION migrate.disable_autovacuum(regclass) RETURNS void AS 390 | 'MODULE_PATHNAME', 'migrate_disable_autovacuum' 391 | LANGUAGE C VOLATILE STRICT; 392 | 393 | CREATE FUNCTION migrate.reset_autovacuum(regclass) RETURNS void AS 394 | 'MODULE_PATHNAME', 'migrate_reset_autovacuum' 395 | LANGUAGE C VOLATILE STRICT; 396 | 397 | CREATE FUNCTION migrate.migrate_apply( 398 | sql_peek cstring, 399 | sql_insert cstring, 400 | sql_delete cstring, 401 | sql_update cstring, 402 | sql_pop cstring, 403 | count integer) 404 | RETURNS integer AS 405 | 'MODULE_PATHNAME', 'migrate_apply' 406 | LANGUAGE C VOLATILE; 407 | 408 | CREATE FUNCTION migrate.migrate_swap(oid) RETURNS void AS 409 | 'MODULE_PATHNAME', 'migrate_swap' 410 | LANGUAGE C VOLATILE STRICT; 411 | 412 | CREATE FUNCTION migrate.migrate_drop(oid, int) RETURNS void AS 413 | 'MODULE_PATHNAME', 'migrate_drop' 414 | LANGUAGE C VOLATILE STRICT; 415 | 416 | CREATE FUNCTION migrate.migrate_index_swap(oid) RETURNS void AS 417 | 'MODULE_PATHNAME', 'migrate_index_swap' 418 | LANGUAGE C STABLE STRICT; 419 | 420 | CREATE FUNCTION migrate.get_table_and_inheritors(regclass) RETURNS regclass[] AS 421 | 'MODULE_PATHNAME', 'migrate_get_table_and_inheritors' 422 | LANGUAGE C STABLE STRICT; 423 | 
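Taken together, the functions above implement a trigger-plus-log replay scheme: the trigger built by migrate.get_create_trigger() records every INSERT/UPDATE/DELETE into migrate.log_<oid>, and migrate.migrate_apply() later replays those entries against the shadow table using the sql_peek/sql_insert/sql_delete/sql_update/sql_pop statements from the migrate.tables view. As a rough illustration of the replay rule only (not the actual C implementation), here is a toy Python sketch in which the shadow table is a dict keyed by primary key:

```python
# Toy model of the replay loop driven by the migrate.tables view: each log
# entry carries the old primary key ("pk") and the new row ("row"); pk is
# NULL for INSERTs and row is NULL for DELETEs, mirroring the trigger text
# built by migrate.get_create_trigger(). This is an illustrative sketch,
# not the actual C implementation of migrate_apply().

def apply_log(shadow, log):
    """Replay queued changes onto the shadow table; return processed log ids."""
    applied = []
    for entry_id, pk, row in sorted(log, key=lambda e: e[0]):  # sql_peek: ORDER BY id
        if pk is None:                  # INSERT: no old key
            shadow[row["id"]] = row     # sql_insert
        elif row is None:               # DELETE: no new row
            shadow.pop(pk, None)        # sql_delete
        else:                           # UPDATE, modeled as delete + re-insert
            shadow.pop(pk, None)        #  (the real sql_update is a single UPDATE)
            shadow[row["id"]] = row
        applied.append(entry_id)        # sql_pop: processed ids leave the log
    return applied

if __name__ == "__main__":
    shadow = {1: {"id": 1, "v": "a"}}
    log = [
        (1, None, {"id": 2, "v": "b"}),  # INSERT id=2
        (2, 1, {"id": 1, "v": "a2"}),    # UPDATE id=1
        (3, 2, None),                    # DELETE id=2
    ]
    assert apply_log(shadow, log) == [1, 2, 3]
    assert shadow == {1: {"id": 1, "v": "a2"}}
```

In the real extension the ordering guarantee comes from the log table's bigserial id, and the loop runs in batches (the `count` argument to migrate_apply) while new changes keep accumulating in the log.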
-------------------------------------------------------------------------------- /bin/pgut/pgut-fe.c: -------------------------------------------------------------------------------- 1 | /*------------------------------------------------------------------------- 2 | * pgut-fe.c 3 | * 4 | * Portions Copyright (c) 2008-2011, NIPPON TELEGRAPH AND TELEPHONE CORPORATION 5 | * Portions Copyright (c) 2011, Itagaki Takahiro 6 | * Portions Copyright (c) 2012-2020, The Reorg Development Team 7 | *------------------------------------------------------------------------- 8 | */ 9 | 10 | #define FRONTEND 11 | #include "pgut-fe.h" 12 | 13 | #ifdef HAVE_GETOPT_H 14 | #include 15 | #else 16 | #include 17 | #endif 18 | 19 | #ifndef WIN32 20 | #include 21 | #include 22 | #endif 23 | 24 | char *dbname = NULL; 25 | char *host = NULL; 26 | char *port = NULL; 27 | char *username = NULL; 28 | char *password = NULL; 29 | YesNo prompt_password = DEFAULT; 30 | 31 | PGconn *connection = NULL; 32 | PGconn *conn2 = NULL; 33 | 34 | worker_conns workers = { 35 | .num_workers = 0, 36 | .conns = NULL 37 | }; 38 | 39 | 40 | static bool parse_pair(const char buffer[], char key[], char value[]); 41 | static char *get_username(void); 42 | 43 | 44 | /* 45 | * Set up worker conns which will be used for concurrent index rebuilds. 46 | * 'num_workers' is the desired number of worker connections, i.e. from 47 | * --jobs flag. Due to max_connections we might not actually be able to 48 | * set up that many workers, but don't treat that as a fatal error. 
49 | */ 50 | void 51 | setup_workers(int num_workers) 52 | { 53 | StringInfoData buf; 54 | int i; 55 | PGconn *conn; 56 | 57 | elog(DEBUG2, "In setup_workers(), target num_workers = %d", num_workers); 58 | 59 | if (num_workers > 1 && num_workers > workers.num_workers) 60 | { 61 | initStringInfo(&buf); 62 | if (dbname && dbname[0]) 63 | appendStringInfo(&buf, "dbname=%s ", dbname); 64 | if (host && host[0]) 65 | appendStringInfo(&buf, "host=%s ", host); 66 | if (port && port[0]) 67 | appendStringInfo(&buf, "port=%s ", port); 68 | if (username && username[0]) 69 | appendStringInfo(&buf, "user=%s ", username); 70 | if (password && password[0]) 71 | appendStringInfo(&buf, "password=%s ", password); 72 | 73 | if (workers.conns == NULL) 74 | { 75 | elog(NOTICE, "Setting up workers.conns"); 76 | workers.conns = (PGconn **) pgut_malloc(sizeof(PGconn *) * num_workers); 77 | } 78 | else 79 | { 80 | elog(ERROR, "TODO: Implement pool resizing."); 81 | } 82 | 83 | for (i = 0; i < num_workers; i++) 84 | { 85 | /* Don't prompt for password again; we should have gotten 86 | * it already from reconnect(). 87 | */ 88 | elog(DEBUG2, "Setting up worker conn %d", i); 89 | 90 | /* Don't confuse pgut_connections by using pgut_connect() 91 | * 92 | * XXX: could use PQconnectStart() and PQconnectPoll() to 93 | * open these connections in non-blocking manner. 94 | */ 95 | conn = PQconnectdb(buf.data); 96 | if (PQstatus(conn) == CONNECTION_OK) 97 | { 98 | workers.conns[i] = conn; 99 | } 100 | else 101 | { 102 | elog(WARNING, "Unable to set up worker conn #%d: %s", i, 103 | PQerrorMessage(conn)); 104 | break; 105 | } 106 | 107 | /* Hardcode a search path to avoid injections into public or pg_temp */ 108 | pgut_command(conn, "SET search_path TO pg_catalog, pg_temp, public", 0, NULL); 109 | 110 | /* Make sure each worker connection can work in non-blocking 111 | * mode. 
112 | */ 113 | if (PQsetnonblocking(workers.conns[i], 1)) 114 | { 115 | elog(ERROR, "Unable to set worker connection %d " 116 | "non-blocking.", i); 117 | } 118 | } 119 | /* In case we bailed out of setting up all workers, record 120 | * how many successful worker conns we actually have. 121 | */ 122 | workers.num_workers = i; 123 | 124 | termStringInfo(&buf); 125 | } 126 | } 127 | 128 | /* Disconnect all our worker conns. */ 129 | void disconnect_workers(void) 130 | { 131 | int i; 132 | 133 | if (!(workers.num_workers)) 134 | elog(DEBUG2, "No workers to disconnect."); 135 | else 136 | { 137 | for (i = 0; i < workers.num_workers; i++) 138 | { 139 | if (workers.conns[i]) 140 | { 141 | elog(DEBUG2, "Disconnecting worker %d.", i); 142 | PQfinish(workers.conns[i]); 143 | workers.conns[i] = NULL; 144 | } 145 | else 146 | { 147 | elog(NOTICE, "Worker %d already disconnected?", i); 148 | } 149 | } 150 | workers.num_workers = 0; 151 | free(workers.conns); 152 | workers.conns = NULL; 153 | } 154 | } 155 | 156 | 157 | /* 158 | * the result is also available with the global variable 'connection'. 
159 | */ 160 | void 161 | reconnect(int elevel) 162 | { 163 | StringInfoData buf; 164 | char *new_password; 165 | 166 | disconnect(); 167 | initStringInfo(&buf); 168 | if (dbname && dbname[0]) 169 | appendStringInfo(&buf, "dbname=%s ", dbname); 170 | if (host && host[0]) 171 | appendStringInfo(&buf, "host=%s ", host); 172 | if (port && port[0]) 173 | appendStringInfo(&buf, "port=%s ", port); 174 | if (username && username[0]) 175 | appendStringInfo(&buf, "user=%s ", username); 176 | if (password && password[0]) 177 | appendStringInfo(&buf, "password=%s ", password); 178 | 179 | connection = pgut_connect(buf.data, prompt_password, elevel); 180 | conn2 = pgut_connect(buf.data, prompt_password, elevel); 181 | 182 | /* update password */ 183 | if (connection) 184 | { 185 | new_password = PQpass(connection); 186 | if (new_password && new_password[0] && 187 | (password == NULL || strcmp(new_password, password) != 0)) 188 | { 189 | free(password); 190 | password = pgut_strdup(new_password); 191 | } 192 | } 193 | 194 | termStringInfo(&buf); 195 | } 196 | 197 | void 198 | disconnect(void) 199 | { 200 | if (connection) 201 | { 202 | pgut_disconnect(connection); 203 | connection = NULL; 204 | } 205 | if (conn2) 206 | { 207 | pgut_disconnect(conn2); 208 | conn2 = NULL; 209 | } 210 | disconnect_workers(); 211 | } 212 | 213 | static void 214 | option_from_env(pgut_option options[]) 215 | { 216 | size_t i; 217 | 218 | for (i = 0; options && options[i].type; i++) 219 | { 220 | pgut_option *opt = &options[i]; 221 | char name[256]; 222 | size_t j; 223 | const char *s; 224 | const char *value; 225 | 226 | if (opt->source > SOURCE_ENV || 227 | opt->allowed == SOURCE_DEFAULT || opt->allowed > SOURCE_ENV) 228 | continue; 229 | 230 | for (s = opt->lname, j = 0; *s && j < lengthof(name) - 1; s++, j++) 231 | { 232 | if (strchr("-_ ", *s)) 233 | name[j] = '_'; /* - to _ */ 234 | else 235 | name[j] = toupper(*s); 236 | } 237 | name[j] = '\0'; 238 | 239 | if ((value = getenv(name)) != NULL) 
240 | pgut_setopt(opt, value, SOURCE_ENV); 241 | } 242 | } 243 | 244 | /* Compare two strings, ignoring case; '-', '_' and ' ' are treated as interchangeable. */ 245 | bool 246 | pgut_keyeq(const char *lhs, const char *rhs) 247 | { 248 | for (; *lhs && *rhs; lhs++, rhs++) 249 | { 250 | if (strchr("-_ ", *lhs)) 251 | { 252 | if (!strchr("-_ ", *rhs)) 253 | return false; 254 | } 255 | else if (ToLower(*lhs) != ToLower(*rhs)) 256 | return false; 257 | } 258 | 259 | return *lhs == '\0' && *rhs == '\0'; 260 | } 261 | 262 | void 263 | pgut_setopt(pgut_option *opt, const char *optarg, pgut_optsrc src) 264 | { 265 | const char *message; 266 | 267 | if (opt == NULL) 268 | { 269 | fprintf(stderr, "Try \"%s --help\" for more information.\n", PROGRAM_NAME); 270 | exit(EINVAL); 271 | } 272 | 273 | if (opt->source > src) 274 | { 275 | /* a higher-priority value has already been set. */ 276 | return; 277 | } 278 | else if (src >= SOURCE_CMDLINE && opt->source >= src && opt->type != 'l') 279 | { 280 | /* duplicated option on the command line -- don't worry if the option 281 | * type is 'l', i.e. SimpleStringList, since we are allowed to have 282 | * multiples of these. 
283 | */ 284 | message = "specified only once"; 285 | } 286 | else 287 | { 288 | /* can be overwritten if non-command line source */ 289 | opt->source = src; 290 | 291 | switch (opt->type) 292 | { 293 | case 'b': 294 | case 'B': 295 | if (optarg == NULL) 296 | { 297 | *((bool *) opt->var) = (opt->type == 'b'); 298 | return; 299 | } 300 | else if (parse_bool(optarg, (bool *) opt->var)) 301 | { 302 | return; 303 | } 304 | message = "a boolean"; 305 | break; 306 | case 'f': 307 | ((pgut_optfn) opt->var)(opt, optarg); 308 | return; 309 | case 'i': 310 | if (parse_int32(optarg, opt->var)) 311 | return; 312 | message = "a 32bit signed integer"; 313 | break; 314 | case 'l': 315 | message = "a List"; 316 | simple_string_list_append(opt->var, optarg); 317 | return; 318 | case 'u': 319 | if (parse_uint32(optarg, opt->var)) 320 | return; 321 | message = "a 32bit unsigned integer"; 322 | break; 323 | case 'I': 324 | if (parse_int64(optarg, opt->var)) 325 | return; 326 | message = "a 64bit signed integer"; 327 | break; 328 | case 'U': 329 | if (parse_uint64(optarg, opt->var)) 330 | return; 331 | message = "a 64bit unsigned integer"; 332 | break; 333 | case 's': 334 | if (opt->source != SOURCE_DEFAULT) 335 | free(*(char **) opt->var); 336 | *(char **) opt->var = pgut_strdup(optarg); 337 | return; 338 | case 't': 339 | if (parse_time(optarg, opt->var)) 340 | return; 341 | message = "a time"; 342 | break; 343 | case 'y': 344 | case 'Y': 345 | if (optarg == NULL) 346 | { 347 | *(YesNo *) opt->var = (opt->type == 'y' ? YES : NO); 348 | return; 349 | } 350 | else 351 | { 352 | bool value; 353 | if (parse_bool(optarg, &value)) 354 | { 355 | *(YesNo *) opt->var = (value ? 
YES : NO); 356 | return; 357 | } 358 | } 359 | message = "a boolean"; 360 | break; 361 | default: 362 | ereport(ERROR, 363 | (errcode(EINVAL), 364 | errmsg("invalid option type: %c", opt->type))); 365 | return; /* keep compiler quiet */ 366 | } 367 | } 368 | 369 | if (isprint(opt->sname)) 370 | ereport(ERROR, 371 | (errcode(EINVAL), 372 | errmsg("option -%c, --%s should be %s: '%s'", 373 | opt->sname, opt->lname, message, optarg))); 374 | else 375 | ereport(ERROR, 376 | (errcode(EINVAL), 377 | errmsg("option --%s should be %s: '%s'", 378 | opt->lname, message, optarg))); 379 | } 380 | 381 | /* 382 | * Get configuration from configuration file. 383 | */ 384 | void 385 | pgut_readopt(const char *path, pgut_option options[], int elevel) 386 | { 387 | FILE *fp; 388 | char buf[1024]; 389 | char key[1024]; 390 | char value[1024]; 391 | 392 | if (!options) 393 | return; 394 | 395 | if ((fp = pgut_fopen(path, "Rt")) == NULL) 396 | return; 397 | 398 | while (fgets(buf, lengthof(buf), fp)) 399 | { 400 | size_t i; 401 | 402 | for (i = strlen(buf); i > 0 && IsSpace(buf[i - 1]); i--) 403 | buf[i - 1] = '\0'; 404 | 405 | if (parse_pair(buf, key, value)) 406 | { 407 | for (i = 0; options[i].type; i++) 408 | { 409 | pgut_option *opt = &options[i]; 410 | 411 | if (pgut_keyeq(key, opt->lname)) 412 | { 413 | if (opt->allowed == SOURCE_DEFAULT || 414 | opt->allowed > SOURCE_FILE) 415 | elog(elevel, "option %s cannot specified in file", opt->lname); 416 | else if (opt->source <= SOURCE_FILE) 417 | pgut_setopt(opt, value, SOURCE_FILE); 418 | break; 419 | } 420 | } 421 | if (!options[i].type) 422 | elog(elevel, "invalid option \"%s\"", key); 423 | } 424 | } 425 | 426 | fclose(fp); 427 | } 428 | 429 | static const char * 430 | skip_space(const char *str, const char *line) 431 | { 432 | while (IsSpace(*str)) { str++; } 433 | return str; 434 | } 435 | 436 | static const char * 437 | get_next_token(const char *src, char *dst, const char *line) 438 | { 439 | const char *s; 440 | size_t i; 441 
| size_t j; 442 | 443 | if ((s = skip_space(src, line)) == NULL) 444 | return NULL; 445 | 446 | /* parse quoted string */ 447 | if (*s == '\'') 448 | { 449 | s++; 450 | for (i = 0, j = 0; s[i] != '\0'; i++) 451 | { 452 | if (s[i] == '\\') 453 | { 454 | i++; 455 | switch (s[i]) 456 | { 457 | case 'b': 458 | dst[j] = '\b'; 459 | break; 460 | case 'f': 461 | dst[j] = '\f'; 462 | break; 463 | case 'n': 464 | dst[j] = '\n'; 465 | break; 466 | case 'r': 467 | dst[j] = '\r'; 468 | break; 469 | case 't': 470 | dst[j] = '\t'; 471 | break; 472 | case '0': 473 | case '1': 474 | case '2': 475 | case '3': 476 | case '4': 477 | case '5': 478 | case '6': 479 | case '7': 480 | { 481 | int k; 482 | long octVal = 0; 483 | 484 | for (k = 0; 485 | s[i + k] >= '0' && s[i + k] <= '7' && k < 3; 486 | k++) 487 | octVal = (octVal << 3) + (s[i + k] - '0'); 488 | i += k - 1; 489 | dst[j] = ((char) octVal); 490 | } 491 | break; 492 | default: 493 | dst[j] = s[i]; 494 | break; 495 | } 496 | } 497 | else if (s[i] == '\'') 498 | { 499 | i++; 500 | /* doubled quote becomes just one quote */ 501 | if (s[i] == '\'') 502 | dst[j] = s[i]; 503 | else 504 | break; 505 | } 506 | else 507 | dst[j] = s[i]; 508 | j++; 509 | } 510 | } 511 | else 512 | { 513 | i = j = strcspn(s, "# \n\r\t\v"); 514 | memcpy(dst, s, j); 515 | } 516 | 517 | dst[j] = '\0'; 518 | return s + i; 519 | } 520 | 521 | static bool 522 | parse_pair(const char buffer[], char key[], char value[]) 523 | { 524 | const char *start; 525 | const char *end; 526 | 527 | key[0] = value[0] = '\0'; 528 | 529 | /* 530 | * parse key 531 | */ 532 | start = buffer; 533 | if ((start = skip_space(start, buffer)) == NULL) 534 | return false; 535 | 536 | end = start + strcspn(start, "=# \n\r\t\v"); 537 | 538 | /* skip blank buffer */ 539 | if (end - start <= 0) 540 | { 541 | if (*start == '=') 542 | elog(WARNING, "syntax error in \"%s\"", buffer); 543 | return false; 544 | } 545 | 546 | /* key found */ 547 | strncpy(key, start, end - start); 548 | key[end 
- start] = '\0'; 549 | 550 | /* find key and value split char */ 551 | if ((start = skip_space(end, buffer)) == NULL) 552 | return false; 553 | 554 | if (*start != '=') 555 | { 556 | elog(WARNING, "syntax error in \"%s\"", buffer); 557 | return false; 558 | } 559 | 560 | start++; 561 | 562 | /* 563 | * parse value 564 | */ 565 | if ((end = get_next_token(start, value, buffer)) == NULL) 566 | return false; 567 | 568 | if ((start = skip_space(end, buffer)) == NULL) 569 | return false; 570 | 571 | if (*start != '\0' && *start != '#') 572 | { 573 | elog(WARNING, "syntax error in \"%s\"", buffer); 574 | return false; 575 | } 576 | 577 | return true; 578 | } 579 | 580 | /* 581 | * execute - Execute a SQL and return the result. 582 | */ 583 | PGresult * 584 | execute(const char *query, int nParams, const char **params) 585 | { 586 | return pgut_execute(connection, query, nParams, params); 587 | } 588 | 589 | PGresult * 590 | execute_elevel(const char *query, int nParams, const char **params, int elevel) 591 | { 592 | return pgut_execute_elevel(connection, query, nParams, params, elevel); 593 | } 594 | 595 | /* 596 | * command - Execute a SQL and discard the result. 
597 | */ 598 | ExecStatusType 599 | command(const char *query, int nParams, const char **params) 600 | { 601 | return pgut_command(connection, query, nParams, params); 602 | } 603 | 604 | static void 605 | set_elevel(pgut_option *opt, const char *arg) 606 | { 607 | pgut_log_level = parse_elevel(arg); 608 | } 609 | 610 | static pgut_option default_options[] = 611 | { 612 | { 'b', 'e', "echo" , &pgut_echo }, 613 | { 'f', 'E', "elevel" , set_elevel }, 614 | { 's', 'd', "dbname" , &dbname }, 615 | { 's', 'h', "host" , &host }, 616 | { 's', 'p', "port" , &port }, 617 | { 's', 'U', "username" , &username }, 618 | { 'Y', 'w', "no-password" , &prompt_password }, 619 | { 'y', 'W', "password" , &prompt_password }, 620 | { 0 } 621 | }; 622 | 623 | static size_t 624 | option_length(const pgut_option opts[]) 625 | { 626 | size_t len; 627 | for (len = 0; opts && opts[len].type; len++) { } 628 | return len; 629 | } 630 | 631 | static pgut_option * 632 | option_find(int c, pgut_option opts1[], pgut_option opts2[]) 633 | { 634 | size_t i; 635 | 636 | for (i = 0; opts1 && opts1[i].type; i++) 637 | if (opts1[i].sname == c) 638 | return &opts1[i]; 639 | for (i = 0; opts2 && opts2[i].type; i++) 640 | if (opts2[i].sname == c) 641 | return &opts2[i]; 642 | 643 | return NULL; /* not found */ 644 | } 645 | 646 | /* 647 | * Returns the current user name. 648 | */ 649 | static char * 650 | get_username(void) 651 | { 652 | char *ret; 653 | 654 | #ifndef WIN32 655 | struct passwd *pw; 656 | 657 | pw = getpwuid(geteuid()); 658 | ret = (pw ? 
pw->pw_name : NULL); 659 | #else 660 | static char username[128]; /* remains after function execute */ 661 | DWORD len = sizeof(username) - 1; 662 | 663 | if (GetUserNameA(username, &len)) 664 | ret = username; 665 | else 666 | { 667 | _dosmaperr(GetLastError()); 668 | ret = NULL; 669 | } 670 | #endif 671 | 672 | if (ret == NULL) 673 | ereport(ERROR, 674 | (errcode_errno(), 675 | errmsg("could not get current user name: "))); 676 | return ret; 677 | } 678 | 679 | static int 680 | option_has_arg(char type) 681 | { 682 | switch (type) 683 | { 684 | case 'b': 685 | case 'B': 686 | case 'y': 687 | case 'Y': 688 | return no_argument; 689 | default: 690 | return required_argument; 691 | } 692 | } 693 | 694 | static void 695 | option_copy(struct option dst[], const pgut_option opts[], size_t len) 696 | { 697 | size_t i; 698 | 699 | for (i = 0; i < len; i++) 700 | { 701 | dst[i].name = opts[i].lname; 702 | dst[i].has_arg = option_has_arg(opts[i].type); 703 | dst[i].flag = NULL; 704 | dst[i].val = opts[i].sname; 705 | } 706 | } 707 | 708 | static struct option * 709 | option_merge(const pgut_option opts1[], const pgut_option opts2[]) 710 | { 711 | struct option *result; 712 | size_t len1 = option_length(opts1); 713 | size_t len2 = option_length(opts2); 714 | size_t n = len1 + len2; 715 | 716 | result = pgut_newarray(struct option, n + 1); 717 | option_copy(result, opts1, len1); 718 | option_copy(result + len1, opts2, len2); 719 | memset(&result[n], 0, sizeof(pgut_option)); 720 | 721 | return result; 722 | } 723 | 724 | static char * 725 | longopts_to_optstring(const struct option opts[]) 726 | { 727 | size_t len; 728 | char *result; 729 | char *s; 730 | 731 | for (len = 0; opts[len].name; len++) { } 732 | result = pgut_malloc(len * 2 + 1); 733 | 734 | s = result; 735 | for (len = 0; opts[len].name; len++) 736 | { 737 | if (!isprint(opts[len].val)) 738 | continue; 739 | *s++ = opts[len].val; 740 | if (opts[len].has_arg != no_argument) 741 | *s++ = ':'; 742 | } 743 | *s = 
'\0'; 744 | 745 | return result; 746 | } 747 | 748 | int 749 | pgut_getopt(int argc, char **argv, pgut_option options[]) 750 | { 751 | int c; 752 | int optindex = 0; 753 | char *optstring; 754 | struct option *longopts; 755 | pgut_option *opt; 756 | 757 | pgut_init(argc, argv); 758 | 759 | /* Help message and version are handled at first. */ 760 | if (argc > 1) 761 | { 762 | if (strcmp(argv[1], "--help") == 0 || strcmp(argv[1], "-?") == 0) 763 | { 764 | help(true); 765 | exit(0); 766 | } 767 | if (strcmp(argv[1], "--version") == 0 || strcmp(argv[1], "-V") == 0) 768 | { 769 | printf("%s %s\n", PROGRAM_NAME, PROGRAM_VERSION); 770 | exit(0); 771 | } 772 | if (strcmp(argv[1], "--configuration") == 0) 773 | { 774 | printf("%s\n", PG_VERSION_STR); 775 | exit(0); 776 | } 777 | } 778 | 779 | /* Merge default and user options. */ 780 | longopts = option_merge(default_options, options); 781 | optstring = longopts_to_optstring(longopts); 782 | 783 | /* Assign named options */ 784 | while ((c = getopt_long(argc, argv, optstring, longopts, &optindex)) != -1) 785 | { 786 | opt = option_find(c, default_options, options); 787 | pgut_setopt(opt, optarg, SOURCE_CMDLINE); 788 | } 789 | 790 | /* Read environment variables */ 791 | option_from_env(options); 792 | (void) (dbname || 793 | (dbname = getenv("PGDATABASE")) || 794 | (dbname = getenv("PGUSER")) || 795 | (dbname = get_username())); 796 | 797 | return optind; 798 | } 799 | 800 | void 801 | help(bool details) 802 | { 803 | pgut_help(details); 804 | 805 | if (details) 806 | { 807 | printf("\nConnection options:\n"); 808 | printf(" -d, --dbname=DBNAME database to connect\n"); 809 | printf(" -h, --host=HOSTNAME database server host or socket directory\n"); 810 | printf(" -p, --port=PORT database server port\n"); 811 | printf(" -U, --username=USERNAME user name to connect as\n"); 812 | printf(" -w, --no-password never prompt for password\n"); 813 | printf(" -W, --password force password prompt\n"); 814 | } 815 | 816 | 
printf("\nGeneric options:\n"); 817 | if (details) 818 | { 819 | printf(" -e, --echo echo queries\n"); 820 | printf(" -E, --elevel=LEVEL set output message level\n"); 821 | } 822 | printf(" --help show this help, then exit\n"); 823 | printf(" --version output version information, then exit\n"); 824 | 825 | if (details && (PROGRAM_URL || PROGRAM_ISSUES)) 826 | { 827 | printf("\n"); 828 | if (PROGRAM_URL) 829 | printf("Read the website for details: <%s>.\n", PROGRAM_URL); 830 | if (PROGRAM_ISSUES) 831 | printf("Report bugs to <%s>.\n", PROGRAM_ISSUES); 832 | } 833 | } 834 | -------------------------------------------------------------------------------- /doc/pg_repack.rst: -------------------------------------------------------------------------------- 1 | pg_repack -- Reorganize tables in PostgreSQL databases with minimal locks 2 | ========================================================================= 3 | 4 | .. contents:: 5 | :depth: 1 6 | :backlinks: none 7 | 8 | pg_repack_ is a PostgreSQL extension which lets you remove bloat from 9 | tables and indexes, and optionally restore the physical order of clustered 10 | indexes. Unlike CLUSTER_ and `VACUUM FULL`_ it works online, without 11 | holding an exclusive lock on the processed tables during processing. 12 | pg_repack is efficient to boot, with performance comparable to using 13 | CLUSTER directly. 14 | 15 | pg_repack is a fork of the previous pg_reorg_ project. Please check the 16 | `project page`_ for bug report and development information. 17 | 18 | You can choose one of the following methods to reorganize: 19 | 20 | * Online CLUSTER (ordered by cluster index) 21 | * Ordered by specified columns 22 | * Online VACUUM FULL (packing rows only) 23 | * Rebuild or relocate only the indexes of a table 24 | 25 | NOTICE: 26 | 27 | * Only superusers can use the utility. 28 | * Target table must have a PRIMARY KEY, or at least a UNIQUE total index on a 29 | NOT NULL column. 30 | 31 | .. 
_pg_repack: https://reorg.github.io/pg_repack 32 | .. _CLUSTER: http://www.postgresql.org/docs/current/static/sql-cluster.html 33 | .. _VACUUM FULL: VACUUM_ 34 | .. _VACUUM: http://www.postgresql.org/docs/current/static/sql-vacuum.html 35 | .. _project page: https://github.com/reorg/pg_repack 36 | .. _pg_reorg: https://github.com/reorg/pg_reorg 37 | 38 | 39 | Requirements 40 | ------------ 41 | 42 | PostgreSQL versions 43 | PostgreSQL 9.4, 9.5, 9.6, 10, 11, 12, 13, 14 44 | 45 | Disks 46 | Performing a full-table repack requires free disk space about twice as 47 | large as the target table(s) and its indexes. For example, if the total 48 | size of the tables and indexes to be reorganized is 1GB, an additional 2GB 49 | of disk space is required. 50 | 51 | 52 | Download 53 | -------- 54 | 55 | You can `download pg_repack`__ from the PGXN website. Unpack the archive and 56 | follow the installation_ instructions. 57 | 58 | .. __: http://pgxn.org/dist/pg_repack/ 59 | 60 | Alternatively you can use the `PGXN Client`_ to download, compile and install 61 | the package; use:: 62 | 63 | $ pgxn install pg_repack 64 | 65 | Check the `pgxn install documentation`__ for the options available. 66 | 67 | .. _PGXN Client: https://pgxn.github.io/pgxnclient/ 68 | .. __: https://pgxn.github.io/pgxnclient/usage.html#pgxn-install 69 | 70 | 71 | Installation 72 | ------------ 73 | 74 | pg_repack can be built with ``make`` on UNIX or Linux. The PGXS build 75 | framework is used automatically. Before building, you might need to install 76 | the PostgreSQL development packages (``postgresql-devel``, etc.) and add the 77 | directory containing ``pg_config`` to your ``$PATH``. Then you can run:: 78 | 79 | $ cd pg_repack 80 | $ make 81 | $ sudo make install 82 | 83 | You can also use Microsoft Visual C++ 2010 to build the program on Windows. 84 | There are project files in the ``msvc`` folder. 85 | 86 | After installation, load the pg_repack extension in the database you want to 87 | process. 
pg_repack is packaged as an extension, so you can execute:: 88 | 89 | $ psql -c "CREATE EXTENSION pg_repack" -d your_database 90 | 91 | You can remove pg_repack using ``DROP EXTENSION pg_repack`` or just dropping 92 | the ``repack`` schema. 93 | 94 | If you are upgrading from a previous version of pg_repack or pg_reorg, just 95 | drop the old version from the database as explained above and install the new 96 | version. 97 | 98 | 99 | Usage 100 | ----- 101 | 102 | :: 103 | 104 | pg_repack [OPTION]... [DBNAME] 105 | 106 | The following options can be specified in ``OPTIONS``. 107 | 108 | Options: 109 | -a, --alter=ALTERSQL 110 | --execute defaults to false 111 | --max-lag 112 | --max-load 113 | --alter-foreign-keys-method=[rebuild_constraints,drop_swap,none] 114 | --skip-check-alter 115 | ... 116 | 117 | -a, --all repack all databases 118 | -t, --table=TABLE repack specific table only 119 | -I, --parent-table=TABLE repack specific parent table and its inheritors 120 | -c, --schema=SCHEMA repack tables in specific schema only 121 | -s, --tablespace=TBLSPC move repacked tables to a new tablespace 122 | -S, --moveidx move repacked indexes to *TBLSPC* too 123 | -o, --order-by=COLUMNS order by columns instead of cluster keys 124 | -n, --no-order do vacuum full instead of cluster 125 | -N, --dry-run print what would have been repacked and exit 126 | -j, --jobs=NUM use this many parallel jobs for each table 127 | -i, --index=INDEX move only the specified index 128 | -x, --only-indexes move only indexes of the specified table 129 | -T, --wait-timeout=SECS timeout to cancel other backends on conflict 130 | -D, --no-kill-backend don't kill other backends when timed out 131 | -Z, --no-analyze don't analyze at end 132 | -k, --no-superuser-check skip superuser checks in client 133 | -C, --exclude-extension don't repack tables which belong to specific extension 134 | 135 | 136 | 137 | 138 | 139 | 140 | 141 | Connection options: 142 | -d, --dbname=DBNAME database to connect 143 | -h, --host=HOSTNAME database server host or socket directory 
144 | -p, --port=PORT database server port 145 | -U, --username=USERNAME user name to connect as 146 | -w, --no-password never prompt for password 147 | -W, --password force password prompt 148 | 149 | Generic options: 150 | -e, --echo echo queries 151 | -E, --elevel=LEVEL set output message level 152 | --help show this help, then exit 153 | --version output version information, then exit 154 | 155 | 156 | Reorg Options 157 | ^^^^^^^^^^^^^ 158 | 159 | ``-a``, ``--all`` 160 | Attempt to repack all the databases of the cluster. Databases where the 161 | ``pg_repack`` extension is not installed will be skipped. 162 | 163 | ``-t TABLE``, ``--table=TABLE`` 164 | Reorganize the specified table(s) only. Multiple tables may be 165 | reorganized by writing multiple ``-t`` switches. By default, all eligible 166 | tables in the target databases are reorganized. 167 | 168 | ``-I TABLE``, ``--parent-table=TABLE`` 169 | Reorganize both the specified table(s) and its inheritors. Multiple 170 | table hierarchies may be reorganized by writing multiple ``-I`` switches. 171 | 172 | ``-c``, ``--schema`` 173 | Repack the tables in the specified schema(s) only. Multiple schemas may 174 | be repacked by writing multiple ``-c`` switches. May be used in 175 | conjunction with ``--tablespace`` to move tables to a different tablespace. 176 | 177 | ``-o COLUMNS [,...]``, ``--order-by=COLUMNS [,...]`` 178 | Perform an online CLUSTER ordered by the specified columns. 179 | 180 | ``-n``, ``--no-order`` 181 | Perform an online VACUUM FULL. Since version 1.2 this is the default for 182 | non-clustered tables. 183 | 184 | ``-N``, ``--dry-run`` 185 | List what would be repacked and exit. 186 | 187 | ``-j``, ``--jobs`` 188 | Create the specified number of extra connections to PostgreSQL, and 189 | use these extra connections to parallelize the rebuild of indexes 190 | on each table. 
Parallel index builds are only supported for full-table 191 | repacks, not with the ``--index`` or ``--only-indexes`` options. If your 192 | PostgreSQL server has extra cores and disk I/O available, this can be a 193 | useful way to speed up pg_repack. 194 | 195 | ``-s TBLSPC``, ``--tablespace=TBLSPC`` 196 | Move the repacked tables to the specified tablespace: essentially an 197 | online version of ``ALTER TABLE ... SET TABLESPACE``. The tables' indexes 198 | are left in the original tablespace unless ``--moveidx`` is specified too. 199 | 200 | ``-S``, ``--moveidx`` 201 | Also move the indexes of the repacked tables to the tablespace specified 202 | by the ``--tablespace`` option. 203 | 204 | ``-i``, ``--index`` 205 | Repack the specified index(es) only. Multiple indexes may be repacked 206 | by writing multiple ``-i`` switches. May be used in conjunction with 207 | ``--tablespace`` to move the index to a different tablespace. 208 | 209 | ``-x``, ``--only-indexes`` 210 | Repack only the indexes of the specified table(s), which must be specified 211 | with the ``--table`` or ``--parent-table`` options. 212 | 213 | ``-T SECS``, ``--wait-timeout=SECS`` 214 | pg_repack needs to take an exclusive lock at the end of the 215 | reorganization. This setting controls how many seconds pg_repack will 216 | wait to acquire this lock. If the lock cannot be taken within this duration 217 | and the ``--no-kill-backend`` option is not specified, pg_repack will forcibly 218 | cancel the conflicting queries. It will then fall back to using 219 | pg_terminate_backend() to disconnect any remaining backends after twice 220 | this timeout has passed. 221 | The default is 60 seconds. 222 | 223 | ``-D``, ``--no-kill-backend`` 224 | Skip the table if the lock cannot be taken within the duration specified 225 | by ``--wait-timeout``, instead of cancelling the conflicting queries. The default 226 | is false. 
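    For example, to leave conflicting queries untouched and simply skip any
    table whose lock cannot be taken within 10 seconds (both options are
    documented above; ``test`` is the example database used elsewhere in this
    manual), one might run::

        $ pg_repack --no-kill-backend --wait-timeout 10 test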
227 | 228 | ``-Z``, ``--no-analyze`` 229 | Disable ANALYZE after a full-table reorganization. If not specified, run 230 | ANALYZE after the reorganization. 231 | 232 | ``-k``, ``--no-superuser-check`` 233 | Skip the superuser checks in the client. This is useful on platforms 234 | that support running pg_repack as a non-superuser. 235 | 236 | ``-C``, ``--exclude-extension`` 237 | Skip tables that belong to the specified extension(s). Some extensions 238 | may depend heavily on such tables, e.g. at planning time. 239 | 240 | Connection Options 241 | ^^^^^^^^^^^^^^^^^^ 242 | 243 | Options to connect to servers. ``--all`` cannot be combined with 244 | ``--dbname``, ``--table``, or ``--parent-table``. 245 | 246 | ``-a``, ``--all`` 247 | Reorganize all databases. 248 | 249 | ``-d DBNAME``, ``--dbname=DBNAME`` 250 | Specifies the name of the database to be reorganized. If this is not 251 | specified and ``-a`` (or ``--all``) is not used, the database name is read 252 | from the environment variable PGDATABASE. If that is not set, the user 253 | name specified for the connection is used. 254 | 255 | ``-h HOSTNAME``, ``--host=HOSTNAME`` 256 | Specifies the host name of the machine on which the server is running. If 257 | the value begins with a slash, it is used as the directory for the Unix 258 | domain socket. 259 | 260 | ``-p PORT``, ``--port=PORT`` 261 | Specifies the TCP port or local Unix domain socket file extension on which 262 | the server is listening for connections. 263 | 264 | ``-U USERNAME``, ``--username=USERNAME`` 265 | User name to connect as. 266 | 267 | ``-w``, ``--no-password`` 268 | Never issue a password prompt. If the server requires password 269 | authentication and a password is not available by other means such as a 270 | ``.pgpass`` file, the connection attempt will fail. This option can be 271 | useful in batch jobs and scripts where no user is present to enter a 272 | password. 
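    For example, a batch job could combine ``--no-password`` with a
    ``~/.pgpass`` file; a ``.pgpass`` line has the standard libpq format
    ``hostname:port:database:username:password``, e.g. (all values below
    are illustrative)::

        localhost:5432:test:repack_user:secret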
273 | 274 | ``-W``, ``--password`` 275 | Force the program to prompt for a password before connecting to a 276 | database. 277 | 278 | This option is never essential, since the program will automatically 279 | prompt for a password if the server demands password authentication. 280 | However, pg_repack will waste a connection attempt finding out that the 281 | server wants a password. In some cases it is worth typing ``-W`` to avoid 282 | the extra connection attempt. 283 | 284 | 285 | Generic Options 286 | ^^^^^^^^^^^^^^^ 287 | 288 | ``-e``, ``--echo`` 289 | Echo commands sent to server. 290 | 291 | ``-E LEVEL``, ``--elevel=LEVEL`` 292 | Choose the output message level from ``DEBUG``, ``INFO``, ``NOTICE``, 293 | ``WARNING``, ``ERROR``, ``LOG``, ``FATAL``, and ``PANIC``. The default is 294 | ``INFO``. 295 | 296 | ``--help`` 297 | Show usage of the program. 298 | 299 | ``--version`` 300 | Show the version number of the program. 301 | 302 | 303 | Environment 304 | ----------- 305 | 306 | ``PGDATABASE``, ``PGHOST``, ``PGPORT``, ``PGUSER`` 307 | Default connection parameters 308 | 309 | This utility, like most other PostgreSQL utilities, also uses the 310 | environment variables supported by libpq (see `Environment Variables`__). 311 | 312 | .. 
__: http://www.postgresql.org/docs/current/static/libpq-envars.html 313 | 314 | 315 | Examples 316 | -------- 317 | 318 | Perform an online CLUSTER of all the clustered tables in the database 319 | ``test``, and perform an online VACUUM FULL of all the non-clustered tables:: 320 | 321 | $ pg_repack test 322 | 323 | Perform an online VACUUM FULL on the tables ``foo`` and ``bar`` in the 324 | database ``test`` (any cluster index is ignored):: 325 | 326 | $ pg_repack --no-order --table foo --table bar test 327 | 328 | Move all indexes of table ``foo`` to tablespace ``tbs``:: 329 | 330 | $ pg_repack -d test --table foo --only-indexes --tablespace tbs 331 | 332 | Move the specified index to tablespace ``tbs``:: 333 | 334 | $ pg_repack -d test --index idx --tablespace tbs 335 | 336 | 337 | Diagnostics 338 | ----------- 339 | 340 | Error messages are reported when pg_repack fails. The following list shows the 341 | causes of these errors. 342 | 343 | You need to clean up by hand after fatal errors. To clean up, just remove 344 | pg_repack from the database and install it again: for PostgreSQL 9.1 and 345 | later, execute ``DROP EXTENSION pg_repack CASCADE`` in the database where 346 | the error occurred, followed by ``CREATE EXTENSION pg_repack``; for previous 347 | versions, load the script ``$SHAREDIR/contrib/uninstall_pg_repack.sql`` into the 348 | database where the error occurred and then load 349 | ``$SHAREDIR/contrib/pg_repack.sql`` again. 350 | 351 | .. class:: diag 352 | 353 | INFO: database "db" skipped: pg_repack VER is not installed in the database 354 | pg_repack is not installed in the database when the ``--all`` option is 355 | specified. 356 | 357 | Create the pg_repack extension in the database. 358 | 359 | ERROR: pg_repack VER is not installed in the database 360 | pg_repack is not installed in the database specified by ``--dbname``. 361 | 362 | Create the pg_repack extension in the database. 
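    This is the same command shown in the installation_ section::

        $ psql -c "CREATE EXTENSION pg_repack" -d your_database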
363 | 364 | ERROR: program 'pg_repack V1' does not match database library 'pg_repack V2' 365 | There is a mismatch between the ``pg_repack`` binary and the database 366 | library (``.so`` or ``.dll``). 367 | 368 | The mismatch could be due to the wrong binary in the ``$PATH`` or the 369 | wrong database being addressed. Check the program directory and the 370 | database; if they are what you expected, you may need to repeat the 371 | pg_repack installation. 372 | 373 | ERROR: extension 'pg_repack V1' required, found extension 'pg_repack V2' 374 | The SQL extension found in the database does not match the version 375 | required by the pg_repack program. 376 | 377 | You should drop the extension from the database and reload it as described 378 | in the installation_ section. 379 | 380 | ERROR: relation "table" must have a primary key or not-null unique keys 381 | The target table doesn't have a PRIMARY KEY or any UNIQUE constraints 382 | defined. 383 | 384 | Define a PRIMARY KEY or a UNIQUE constraint on the table. 385 | 386 | ERROR: query failed: ERROR: column "col" does not exist 387 | The target table doesn't have the columns specified by the ``--order-by`` option. 388 | 389 | Specify existing columns. 390 | 391 | WARNING: the table "tbl" already has a trigger called repack_trigger 392 | The trigger was probably installed during a previous attempt to run 393 | pg_repack on the table which was interrupted and for some reason failed 394 | to clean up the temporary objects. 395 | 396 | You can remove all the temporary objects by dropping and re-creating the 397 | extension: see the installation_ section for the details. 398 | 399 | ERROR: Another pg_repack command may be running on the table. Please try again later. 400 | Two concurrent pg_repack commands run on the same table may deadlock, 401 | so run the command again later. 
402 | 403 | WARNING: Cannot create index "schema"."index_xxxxx", already exists 404 | DETAIL: An invalid index may have been left behind by a previous pg_repack 405 | on the table which was interrupted. Please use DROP INDEX 406 | "schema"."index_xxxxx" to remove this index and try again. 407 | 408 | A temporary index apparently created by pg_repack has been left behind, and 409 | we do not want to risk dropping this index ourselves. If the index was in 410 | fact created by an old pg_repack job which didn't get cleaned up, you 411 | should just use DROP INDEX and try the repack command again. 412 | 413 | 414 | Restrictions 415 | ------------ 416 | 417 | pg_repack comes with the following restrictions. 418 | 419 | Temp tables 420 | ^^^^^^^^^^^ 421 | 422 | pg_repack cannot reorganize temp tables. 423 | 424 | GiST indexes 425 | ^^^^^^^^^^^^ 426 | 427 | pg_repack cannot cluster tables by GiST indexes. 428 | 429 | DDL commands 430 | ^^^^^^^^^^^^ 431 | 432 | You will not be able to perform DDL commands on the target table(s) **except** 433 | VACUUM or ANALYZE while pg_repack is working. pg_repack will hold an 434 | ACCESS SHARE lock on the target table during a full-table repack, to enforce 435 | this restriction. 436 | 437 | If you are using version 1.1.8 or earlier, you must not attempt to perform any 438 | DDL commands on the target table(s) while pg_repack is running. In many cases 439 | pg_repack would fail and roll back correctly, but there were some cases in these 440 | earlier versions which could result in data corruption. 441 | 442 | 443 | Details 444 | ------- 445 | 446 | Full Table Repacks 447 | ^^^^^^^^^^^^^^^^^^ 448 | 449 | To perform a full-table repack, pg_repack will: 450 | 451 | 1. create a log table to record changes made to the original table 452 | 2. add a trigger onto the original table, logging INSERTs, UPDATEs and DELETEs into our log table 453 | 3. create a new table containing all the rows in the old table 454 | 4. 
build indexes on this new table 455 | 5. apply all changes which have accrued in the log table to the new table 456 | 6. swap the tables, including indexes and toast tables, using the system catalogs 457 | 7. drop the original table 458 | 459 | pg_repack will only hold an ACCESS EXCLUSIVE lock for a short period during 460 | initial setup (steps 1 and 2 above) and during the final swap-and-drop phase 461 | (steps 6 and 7). For the rest of its time, pg_repack only needs 462 | to hold an ACCESS SHARE lock on the original table, meaning INSERTs, UPDATEs, 463 | and DELETEs may proceed as usual. 464 | 465 | 466 | Index Only Repacks 467 | ^^^^^^^^^^^^^^^^^^ 468 | 469 | To perform an index-only repack, pg_repack will: 470 | 471 | 1. create new indexes on the table using CONCURRENTLY matching the definitions of the old indexes 472 | 2. swap out the old for the new indexes in the catalogs 473 | 3. drop the old indexes 474 | 475 | Creating indexes concurrently comes with a few caveats, please see `the documentation`__ for details. 476 | 477 | .. 
__: http://www.postgresql.org/docs/current/static/sql-createindex.html#SQL-CREATEINDEX-CONCURRENTLY 478 | 479 | 480 | Releases 481 | -------- 482 | 483 | * pg_repack 1.4.7 484 | 485 | * Added support for PostgreSQL 14 486 | 487 | * pg_repack 1.4.6 488 | 489 | * Added support for PostgreSQL 13 490 | * Dropped support for PostgreSQL before 9.4 491 | 492 | * pg_repack 1.4.5 493 | 494 | * Added support for PostgreSQL 12 495 | * Fixed parallel processing for indexes with operators from public schema 496 | 497 | * pg_repack 1.4.4 498 | 499 | * Added support for PostgreSQL 11 (issue #181) 500 | * Remove duplicate password prompt (issue #184) 501 | 502 | * pg_repack 1.4.3 503 | 504 | * Fixed possible CVE-2018-1058 attack paths (issue #168) 505 | * Fixed "unexpected index definition" after CVE-2018-1058 changes in 506 | PostgreSQL (issue #169) 507 | * Fixed build with recent Ubuntu packages (issue #179) 508 | 509 | * pg_repack 1.4.2 510 | 511 | * added PostgreSQL 10 support (issue #120) 512 | * fixed error DROP INDEX CONCURRENTLY cannot run inside a transaction block 513 | (issue #129) 514 | 515 | * pg_repack 1.4.1 516 | 517 | * fixed broken ``--order-by`` option (issue #138) 518 | 519 | * pg_repack 1.4 520 | 521 | * added support for PostgreSQL 9.6, dropped support for versions before 9.1 522 | * use ``AFTER`` trigger to solve concurrency problems with ``INSERT 523 | CONFLICT`` (issue #106) 524 | * added ``--no-kill-backend`` option (issue #108) 525 | * added ``--no-superuser-check`` option (issue #114) 526 | * added ``--exclude-extension`` option (#97) 527 | * added ``--parent-table`` option (#117) 528 | * restore TOAST storage parameters on repacked tables (issue #10) 529 | * restore columns storage types in repacked tables (issue #94) 530 | 531 | * pg_repack 1.3.4 532 | 533 | * grab exclusive lock before dropping original table (issue #81) 534 | * do not attempt to repack unlogged tables (issue #71) 535 | 536 | * pg_repack 1.3.3 537 | 538 | * Added support for 
PostgreSQL 9.5 539 | * Fixed possible deadlock when pg_repack command is interrupted (issue #55) 540 | * Fixed exit code for when pg_repack is invoked with ``--help`` and 541 | ``--version`` 542 | * Added Japanese language user manual 543 | 544 | * pg_repack 1.3.2 545 | 546 | * Fixed to clean up temporary objects when pg_repack command is interrupted. 547 | * Fixed possible crash when pg_repack shared library is loaded alongside 548 | pg_statsinfo (issue #43). 549 | 550 | * pg_repack 1.3.1 551 | 552 | * Added support for PostgreSQL 9.4. 553 | 554 | * pg_repack 1.3 555 | 556 | * Added ``--schema`` to repack only the specified schema (issue #20). 557 | * Added ``--dry-run`` to do a dry run (issue #21). 558 | * Fixed advisory locking for >2B OID values (issue #30). 559 | * Avoid possible deadlock when other sessions lock a to-be-repacked 560 | table (issue #32). 561 | * Performance improvement for performing sql_pop DELETEs many-at-a-time. 562 | * Attempt to avoid pg_repack taking forever when dealing with a 563 | constant heavy stream of changes to a table. 564 | 565 | * pg_repack 1.2 566 | 567 | * Support PostgreSQL 9.3. 568 | * Added ``--tablespace`` and ``--moveidx`` options to perform online 569 | SET TABLESPACE. 570 | * Added ``--index`` to repack the specified index only. 571 | * Added ``--only-indexes`` to repack only the indexes of the specified table 572 | * Added ``--jobs`` option for parallel operation. 573 | * Don't require ``--no-order`` to perform a VACUUM FULL on non-clustered 574 | tables (pg_repack issue #6). 575 | * Don't wait for locks held in other databases (pg_repack issue #11). 576 | * Bugfix: correctly handle key indexes with options such as DESC, NULL 577 | FIRST/LAST, COLLATE (pg_repack issue #3). 578 | * Fixed data corruption bug on delete (pg_repack issue #23). 579 | * More helpful program output and error messages. 580 | 581 | * pg_repack 1.1.8 582 | 583 | * Added support for PostgreSQL 9.2. 
584 | * Added support for CREATE EXTENSION on PostgreSQL 9.1 and following. 585 | * Give user feedback while waiting for transactions to finish (pg_reorg 586 | issue #5). 587 | * Bugfix: Allow running on newly promoted streaming replication slaves 588 | (pg_reorg issue #1). 589 | * Bugfix: Fix interaction between pg_repack and Slony 2.0/2.1 (pg_reorg 590 | issue #4) 591 | * Bugfix: Properly escape column names (pg_reorg issue #6). 592 | * Bugfix: Avoid recreating invalid indexes, or choosing them as key 593 | (pg_reorg issue #9). 594 | * Bugfix: Never choose a partial index as primary key (pg_reorg issue #22). 595 | 596 | * pg_reorg 1.1.7 (2011-08-07) 597 | 598 | * Bugfix: VIEWs and FUNCTIONs could be corrupted that used a reorganized 599 | table which has a dropped column. 600 | * Supports PostgreSQL 9.1 and 9.2dev. (but EXTENSION is not yet) 601 | 602 | 603 | See Also 604 | -------- 605 | 606 | * `clusterdb `__ 607 | * `vacuumdb `__ 608 | --------------------------------------------------------------------------------