
How to Upgrade PostgreSQL 10 to PostgreSQL 11 with Zero Downtime
Sebastian Insausti
December 07, 2018


Historically, the hardest task when working with PostgreSQL has been dealing with upgrades. The most intuitive upgrade method you can think of is to create a replica on the new version and fail the application over to it. With PostgreSQL, this was simply not possible natively. To upgrade, you needed to consider other approaches, such as using pg_upgrade, dumping and restoring, or using third-party tools like Slony or Bucardo, each with its own caveats.

Why was this? Because of the way PostgreSQL implements replication.

PostgreSQL's built-in streaming replication is what is called physical: it replicates the changes at a byte-by-byte level, creating an identical copy of the database on another server. This method has many limitations when thinking about an upgrade, as you simply cannot create a replica on a different server version or even a different architecture.

So, here is where PostgreSQL 10 becomes a game changer. Starting with version 10, PostgreSQL implements built-in logical replication which, in contrast to physical replication, lets you replicate between different major versions of PostgreSQL. This, of course, opens a new door for upgrade strategies.

In this blog, let's see how we can upgrade our PostgreSQL 10 to PostgreSQL 11 with zero downtime using
logical replication. First of all, let's go through an introduction to logical replication.

What is logical replication?


Logical replication is a method of replicating data objects and their changes, based upon their replication identity (usually a primary key). It is based on a publish-and-subscribe model, where one or more subscribers subscribe to one or more publications on a publisher node.

A publication is a set of changes generated from a table or a group of tables (also referred to as replication
set). The node where a publication is defined is referred to as publisher. A subscription is the downstream
side of logical replication. The node where a subscription is defined is referred to as the subscriber, and it
defines the connection to another database and set of publications (one or more) to which it wants to
subscribe. Subscribers pull data from the publications they subscribe to.

Logical replication is built with an architecture similar to physical streaming replication. It is implemented by
"walsender" and "apply" processes. The walsender process starts logical decoding of the WAL and loads the
standard logical decoding plugin. The plugin transforms the changes read from WAL to the logical replication
protocol and filters the data according to the publication specification. The data is then continuously
transferred using the streaming replication protocol to the apply worker, which maps the data to local tables
and applies the individual changes as they are received, in a correct transactional order.
[Figure: Logical Replication Diagram]

Logical replication starts by taking a snapshot of the data on the publisher database and copying that to the
subscriber. The initial data in the existing subscribed tables are snapshotted and copied in a parallel instance
of a special kind of apply process. This process will create its own temporary replication slot and copy the
existing data. Once the existing data is copied, the worker enters synchronization mode, which ensures that
the table is brought up to a synchronized state with the main apply process by streaming any changes that
happened during the initial data copy using standard logical replication. Once the synchronization is done, the
control of the replication of the table is given back to the main apply process where the replication continues
as normal. The changes on the publisher are sent to the subscriber as they occur in real-time.

You can find more about logical replication in the following blogs:

 An Overview of Logical Replication in PostgreSQL
 PostgreSQL Streaming Replication vs Logical Replication

How to upgrade PostgreSQL 10 to PostgreSQL 11 using logical replication
So, now that we know what this new feature is about, we can think about how we can use it to solve the
upgrade issue.

We are going to configure logical replication between two different major versions of PostgreSQL (10 and 11),
and of course, after you have this working, it is only a matter of performing an application failover into the
database with the newer version.

We are going to perform the following steps to put logical replication to work:

 Configure the publisher node
 Configure the subscriber node
 Create the subscriber user
 Create a publication
 Create the table structure in the subscriber
 Create the subscription
 Check the replication status

So let’s start.

On the publisher side, we are going to configure the following parameters in the postgresql.conf file:

 listen_addresses: What IP address(es) to listen on. We'll use '*' for all.
 wal_level: Determines how much information is written to the WAL. We are going to set it to logical.
 max_replication_slots: Specifies the maximum number of replication slots that the server can
support. It must be set to at least the number of subscriptions expected to connect, plus some reserve for
table synchronization.
 max_wal_senders: Specifies the maximum number of concurrent connections from standby servers or
streaming base backup clients. It should be set to at least the same as max_replication_slots plus the
number of physical replicas that are connected at the same time.
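The two sizing rules above can be sketched as simple arithmetic. The counts below are illustrative assumptions (one subscription, a reserve of three slots for table synchronization, four physical standbys), not measurements from this setup:

```shell
# Illustrative sizing (all numbers are assumptions for this sketch).
subscriptions=1        # logical subscriptions expected to connect
sync_reserve=3         # reserve for initial table synchronization
physical_replicas=4    # physical standbys connected at the same time

max_replication_slots=$((subscriptions + sync_reserve))
max_wal_senders=$((max_replication_slots + physical_replicas))

echo "max_replication_slots = $max_replication_slots"   # 4
echo "max_wal_senders = $max_wal_senders"               # 8
```

With those assumed counts the result happens to match the postgresql.conf values shown below.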

Keep in mind that some of these parameters require a restart of the PostgreSQL service to take effect.

The pg_hba.conf file also needs to be adjusted to allow replication. We need to allow the replication user to
connect to the database.
So based on this, let’s configure our publisher (in this case our PostgreSQL 10 server) as follows:

 postgresql.conf:

listen_addresses = '*'
wal_level = logical
max_wal_senders = 8
max_replication_slots = 4

 pg_hba.conf:

# TYPE  DATABASE  USER  ADDRESS             METHOD
host    all       rep   192.168.100.144/32  md5

Here, replace the user (in our example, rep) with the one that will be used for replication, and the IP address 192.168.100.144/32 with the one that corresponds to our PostgreSQL 11 server.

The subscriber side also requires max_replication_slots to be set. In this case, it should be set to at least the number of subscriptions that will be added to the subscriber.

The other parameters that also need to be set here are:

 max_logical_replication_workers: Specifies the maximum number of logical replication workers. This includes both apply workers and table synchronization workers. Logical replication workers are taken from the pool defined by max_worker_processes. It must be set to at least the number of subscriptions, again plus some reserve for table synchronization.
 max_worker_processes: Sets the maximum number of background processes that the system can support. It may need to be adjusted to accommodate replication workers, at least max_logical_replication_workers + 1. This parameter requires a PostgreSQL restart.
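As on the publisher, these lower bounds can be sketched with assumed counts (the same illustrative numbers as before; the actual values you set may be higher):

```shell
# Illustrative subscriber-side sizing (assumed counts).
subscriptions=1
sync_reserve=3

max_logical_replication_workers=$((subscriptions + sync_reserve))
min_worker_processes=$((max_logical_replication_workers + 1))

echo "max_logical_replication_workers = $max_logical_replication_workers"  # 4
echo "max_worker_processes >= $min_worker_processes"                       # at least 5
```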

So, we must configure our subscriber (in this case our PostgreSQL 11 server) as follows:

 postgresql.conf:

listen_addresses = '*'
max_replication_slots = 4
max_logical_replication_workers = 4
max_worker_processes = 8

As this PostgreSQL 11 server will soon be our new master, we should consider adding the wal_level and archive_mode parameters in this step, to avoid another restart of the service later:

wal_level = logical
archive_mode = on

These parameters will be useful if we want to add a new replication slave or for using PITR backups.

In the publisher, we must create the user with which our subscriber will connect:

world=# CREATE ROLE rep WITH LOGIN PASSWORD '*****' REPLICATION;
CREATE ROLE

The role used for the replication connection must have the REPLICATION attribute. Access for the role must
be configured in pg_hba.conf and it must have the LOGIN attribute.

In order to be able to copy the initial data, the role used for the replication connection must have
the SELECT privilege on a published table.

world=# GRANT SELECT ON ALL TABLES IN SCHEMA public to rep;
GRANT

We'll create the pub1 publication on the publisher node, for all tables:

world=# CREATE PUBLICATION pub1 FOR ALL TABLES;
CREATE PUBLICATION

The user that will create a publication must have the CREATE privilege in the database, but to create a
publication that publishes all tables automatically, the user must be a superuser.
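If running as a superuser is not an option, a possible alternative is to publish an explicit list of tables instead. This is only a sketch: the table names below come from the sample world database used later in this post, and the publishing user must own the listed tables:

```sql
CREATE PUBLICATION pub1 FOR TABLE city, country, countrylanguage;
```

Note that, unlike FOR ALL TABLES, tables created later are not added to such a publication automatically.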

To confirm the publication created we are going to use the pg_publication catalog. This catalog contains
information about all publications created in the database.

world=# SELECT * FROM pg_publication;
-[ RECORD 1 ]+------
pubname      | pub1
pubowner     | 16384
puballtables | t
pubinsert    | t
pubupdate    | t
pubdelete    | t

Column descriptions:

 pubname: Name of the publication.
 pubowner: Owner of the publication.
 puballtables: If true, this publication automatically includes all tables in the database, including any that will be created in the future.
 pubinsert: If true, INSERT operations are replicated for tables in the publication.
 pubupdate: If true, UPDATE operations are replicated for tables in the publication.
 pubdelete: If true, DELETE operations are replicated for tables in the publication.

As the schema is not replicated, we must take a backup in PostgreSQL 10 and restore it in our PostgreSQL
11. The backup will only be taken for the schema, since the information will be replicated in the initial transfer.

In PostgreSQL 10:

$ pg_dumpall -s > schema.sql

In PostgreSQL 11:

$ psql -d postgres -f schema.sql

Once we have our schema in PostgreSQL 11, we create the subscription, replacing the values
of host, dbname, user, and password with those that correspond to our environment.

PostgreSQL 11:

world=# CREATE SUBSCRIPTION sub1 CONNECTION 'host=192.168.100.143 dbname=world user=rep password=*****' PUBLICATION pub1;
NOTICE: created replication slot "sub1" on publisher
CREATE SUBSCRIPTION

The above will start the replication process, which synchronizes the initial table contents of the tables in the
publication and then starts replicating incremental changes to those tables.
The user creating a subscription must be a superuser. The subscription apply process will run in the local
database with the privileges of a superuser.

To verify the created subscription we can use the pg_stat_subscription catalog. This view will contain
one row per subscription for the main worker (with null PID if the worker is not running), and additional rows
for workers handling the initial data copy of the subscribed tables.

world=# SELECT * FROM pg_stat_subscription;
-[ RECORD 1 ]---------+------------------------------
subid                 | 16428
subname               | sub1
pid                   | 1111
relid                 |
received_lsn          | 0/172AF90
last_msg_send_time    | 2018-12-05 22:11:45.195963+00
last_msg_receipt_time | 2018-12-05 22:11:45.196065+00
latest_end_lsn        | 0/172AF90
latest_end_time       | 2018-12-05 22:11:45.195963+00

Column descriptions:

 subid: OID of the subscription.
 subname: Name of the subscription.
 pid: Process ID of the subscription worker process.
 relid: OID of the relation that the worker is synchronizing; null for the main apply worker.
 received_lsn: Last write-ahead log location received, the initial value of this field being 0.
 last_msg_send_time: Send time of last message received from origin WAL sender.
 last_msg_receipt_time: Receipt time of last message received from origin WAL sender.
 latest_end_lsn: Last write-ahead log location reported to origin WAL sender.
 latest_end_time: Time of last write-ahead log location reported to origin WAL sender.

To verify the status of replication in the master we can use pg_stat_replication:

world=# SELECT * FROM pg_stat_replication;
-[ RECORD 1 ]----+------------------------------
pid              | 1178
usesysid         | 16427
usename          | rep
application_name | sub1
client_addr      | 192.168.100.144
client_hostname  |
client_port      | 58270
backend_start    | 2018-12-05 22:11:45.097539+00
backend_xmin     |
state            | streaming
sent_lsn         | 0/172AF90
write_lsn        | 0/172AF90
flush_lsn        | 0/172AF90
replay_lsn       | 0/172AF90
write_lag        |
flush_lag        |
replay_lag       |
sync_priority    | 0
sync_state       | async

Column descriptions:

 pid: Process ID of a WAL sender process.
 usesysid: OID of the user logged into this WAL sender process.
 usename: Name of the user logged into this WAL sender process.
 application_name: Name of the application that is connected to this WAL sender.
 client_addr: IP address of the client connected to this WAL sender. If this field is null, it indicates that
the client is connected via a Unix socket on the server machine.
 client_hostname: Hostname of the connected client, as reported by a reverse DNS lookup of
client_addr. This field will only be non-null for IP connections, and only when log_hostname is enabled.
 client_port: TCP port number that the client is using for communication with this WAL sender, or -1 if a
Unix socket is used.
 backend_start: Time when this process was started.
 backend_xmin: This standby's xmin horizon reported by hot_standby_feedback.
 state: Current WAL sender state. The possible values are: startup, catchup, streaming, backup and
stopping.
 sent_lsn: Last write-ahead log location sent on this connection.
 write_lsn: Last write-ahead log location written to disk by this standby server.
 flush_lsn: Last write-ahead log location flushed to disk by this standby server.
 replay_lsn: Last write-ahead log location replayed into the database on this standby server.
 write_lag: Time elapsed between flushing recent WAL locally and receiving notification that this standby
server has written it (but not yet flushed it or applied it).
 flush_lag: Time elapsed between flushing recent WAL locally and receiving notification that this standby
server has written and flushed it (but not yet applied it).
 replay_lag: Time elapsed between flushing recent WAL locally and receiving notification that this
standby server has written, flushed and applied it.
 sync_priority: Priority of this standby server for being chosen as the synchronous standby in a priority-
based synchronous replication.
 sync_state: Synchronous state of this standby server. The possible values are async, potential, sync,
quorum.

To verify when the initial transfer is finished we can see the PostgreSQL log on the subscriber:

2018-12-05 22:11:45.096 UTC [1111] LOG: logical replication apply worker for subscription "sub1" has started
2018-12-05 22:11:45.103 UTC [1112] LOG: logical replication table synchronization worker for subscription "sub1", table "…" has started
2018-12-05 22:11:45.114 UTC [1113] LOG: logical replication table synchronization worker for subscription "sub1", table "…" has started
2018-12-05 22:11:45.156 UTC [1112] LOG: logical replication table synchronization worker for subscription "sub1", table "…" has finished
2018-12-05 22:11:45.162 UTC [1114] LOG: logical replication table synchronization worker for subscription "sub1", table "…" has started
2018-12-05 22:11:45.168 UTC [1113] LOG: logical replication table synchronization worker for subscription "sub1", table "…" has finished
2018-12-05 22:11:45.206 UTC [1114] LOG: logical replication table synchronization worker for subscription "sub1", table "…" has finished

Or by checking the srsubstate column in the pg_subscription_rel catalog. This catalog contains the state for each replicated relation in each subscription.

world=# SELECT * FROM pg_subscription_rel;
-[ RECORD 1 ]---------
srsubid    | 16428
srrelid    | 16387
srsubstate | r
srsublsn   | 0/172AF20
-[ RECORD 2 ]---------
srsubid    | 16428
srrelid    | 16393
srsubstate | r
srsublsn   | 0/172AF58
-[ RECORD 3 ]---------
srsubid    | 16428
srrelid    | 16400
srsubstate | r
srsublsn   | 0/172AF90

Column descriptions:

 srsubid: Reference to subscription.
 srrelid: Reference to relation.
 srsubstate: State code: i = initialize, d = data is being copied, s = synchronized, r = ready (normal
replication).
 srsublsn: End LSN for s and r states.
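That readiness check can be scripted. In this sketch the states are fed from a sample string; in practice you would populate them with something like psql -At -d world -c "SELECT srsubstate FROM pg_subscription_rel" (the connection details being assumptions for your environment):

```shell
# Sample srsubstate values, one per replicated table; in practice,
# populate this variable from the pg_subscription_rel query.
states="r
r
r"

# Count tables whose state is not yet 'r' (ready).
pending=$(printf '%s\n' "$states" | grep -cv '^r$' || true)

if [ "$pending" -eq 0 ]; then
  echo "initial sync complete"
else
  echo "$pending table(s) still syncing"
fi
```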

We can insert some test records in our PostgreSQL 10 and validate that we have them in our PostgreSQL 11:

PostgreSQL 10:

world=# INSERT INTO city (id,name,countrycode,district,population) VALUES (5001,'city1','USA','District1',10000);
INSERT 0 1
world=# INSERT INTO city (id,name,countrycode,district,population) VALUES (5002,'city2','ITA','District2',20000);
INSERT 0 1
world=# INSERT INTO city (id,name,countrycode,district,population) VALUES (5003,'city3','CHN','District3',30000);
INSERT 0 1

PostgreSQL 11:

world=# SELECT * FROM city WHERE id>5000;
  id  | name  | countrycode | district  | population
------+-------+-------------+-----------+------------
 5001 | city1 | USA         | District1 |      10000
 5002 | city2 | ITA         | District2 |      20000
 5003 | city3 | CHN         | District3 |      30000
(3 rows)

At this point, we have everything ready to point our application to our PostgreSQL 11.

For this, first of all, we need to confirm that we don't have replication lag.

On the master:

world=# SELECT application_name, pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) lag FROM pg_stat_replication;
-[ RECORD 1 ]----+-----
application_name | sub1
lag              | 0
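pg_wal_lsn_diff is plain subtraction once each LSN (of the form HI/LO, both halves hexadecimal) is read as a 64-bit byte position. A local sketch of the same arithmetic, reusing two LSN values seen in the outputs earlier (so the nonzero result is purely illustrative):

```shell
# Convert an LSN of the form HI/LO (hex halves) to an absolute byte offset.
lsn_to_bytes() {
  hi=${1%/*}
  lo=${1#*/}
  echo $(( 0x$hi * 4294967296 + 0x$lo ))
}

sent=0/172AF90      # from pg_stat_replication above
replay=0/172AF20    # an earlier LSN, from pg_subscription_rel
lag=$(( $(lsn_to_bytes "$sent") - $(lsn_to_bytes "$replay") ))
echo "lag: $lag bytes"   # 0x70 = 112 bytes
```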

And now, we only need to change our endpoint from our application or load balancer (if we have one) to the
new PostgreSQL 11 server.

If we have a load balancer like HAProxy, we can configure it using the PostgreSQL 10 as active and the
PostgreSQL 11 as backup, in this way:
[Figure: HAProxy Status View]

So, if you just shut down the master (PostgreSQL 10), the backup server (in this case PostgreSQL 11) starts receiving the traffic transparently for the user/application.
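A minimal sketch of such an HAProxy configuration, assuming the IP addresses used earlier in this post; the bind port and health-check settings are assumptions you would tune for your environment:

```
listen postgresql
    bind *:5432
    mode tcp
    option tcp-check
    server pg10 192.168.100.143:5432 check
    server pg11 192.168.100.144:5432 check backup
```

With pg11 marked as backup, traffic only moves to it once pg10 stops answering health checks.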

At the end of the migration, we can delete the subscription in our new master in PostgreSQL 11:

world=# DROP SUBSCRIPTION sub1;
NOTICE: dropped replication slot "sub1" on publisher
DROP SUBSCRIPTION

And verify that it is removed correctly:


world=# SELECT * FROM pg_subscription_rel;
(0 rows)

world=# SELECT * FROM pg_stat_subscription;
(0 rows)


Limitations

Before using logical replication, please keep in mind the following limitations:

 The database schema and DDL commands are not replicated. The initial schema can be copied using
pg_dump --schema-only.
 Sequence data is not replicated. The data in serial or identity columns backed by sequences will be
replicated as part of the table, but the sequence itself would still show the start value on the subscriber.
 Replication of TRUNCATE commands is supported, but some care must be taken when truncating groups
of tables connected by foreign keys. When replicating a truncate action, the subscriber will truncate the
same group of tables that was truncated on the publisher, either explicitly specified or implicitly collected
via CASCADE, minus tables that are not part of the subscription. This will work correctly if all affected
tables are part of the same subscription. But if some tables to be truncated on the subscriber have foreign-
key links to tables that are not part of the same (or any) subscription, then the application of the truncate
action on the subscriber will fail.
 Large objects are not replicated. There is no workaround for that, other than storing data in normal tables.
 Replication is only possible from base tables to base tables. That is, the tables on the publication and on
the subscription side must be normal tables, not views, materialized views, partition root tables, or foreign
tables. In the case of partitions, you can replicate a partition hierarchy one-to-one, but you cannot currently
replicate to a differently partitioned setup.
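For the sequence limitation in particular, after the failover you can reset each sequence from the table's current maximum. A sketch that builds the statement for one table/column pair (the names are examples from the world schema; you would pipe the result to psql on the new primary, and pg_get_serial_sequence returns NULL for non-serial columns, in which case setval is a no-op):

```shell
# Build a setval statement for a serial/identity column (example names).
make_setval() {
  printf "SELECT setval(pg_get_serial_sequence('%s', '%s'), COALESCE(max(%s), 1)) FROM %s;\n" \
    "$1" "$2" "$2" "$1"
}

stmt=$(make_setval city id)
echo "$stmt"
# On the new primary you might then run:  make_setval city id | psql -d world
```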

Conclusion
Keeping your PostgreSQL server up to date by performing regular upgrades had been a necessary but difficult task until version 10.
In this blog we gave a brief introduction to logical replication, a PostgreSQL feature introduced natively in version 10, and we have shown how it can help you accomplish this challenge with a zero-downtime strategy.

Upgrading Postgres to the latest version on CentOS 7 Server

October 21, 2016, by Tim Norton

I recently upgraded postgres to the latest version and I think the steps can help someone else too, so here they are. This assumes a CentOS 7 server is installed.

First, take a backup of the database


First SSH into the server.

Now connect to postgres:

sudo -i -u postgres

Now do a dump of the entire install:

pg_dumpall > outputfile

I usually use all_db.sql as the filename, but you can use whatever.

Since the file gets dropped into the postgres user's home directory (/var/lib/pgsql), it's good to move it out of there into your own home directory, since we are removing old versions of postgres. Use Ctrl+D to get out of the postgres session.

Then do a mv:

sudo mv /var/lib/pgsql/all_db.sql ~/all_db.sql

Make sure the file is in your home folder:

ls

Does the file get listed? Good if yes. If not then you didn’t follow the above exactly.
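One way to make that check harder to skip is a small guard function; the filename is just the example used above:

```shell
# Refuse to continue if the dump file is missing or empty.
check_dump() {
  if [ -s "$1" ]; then
    echo "ok"
  else
    echo "missing-or-empty"
  fi
}

tmp=$(mktemp)
echo "-- pretend dump" > "$tmp"
check_dump "$tmp"                   # a non-empty file passes
check_dump /nonexistent/all_db.sql  # a missing file does not
```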

Next, remove postgres


Next remove all installs of postgres:
sudo yum -y remove postgres\*

Now Install PG latest version


Right now postgres 9.6 is the latest version, but in the future you can follow the same steps
replacing 96 or 9.6 with 97 or 9.7 etc.

First add the latest version to your rpm for install using yum. Find latest packages here:

https://yum.postgresql.org/repopackages.php

Copy the link for the OS you have.


…in this case I have CentOS 7, so I copied this link:

https://download.postgresql.org/pub/repos/yum/9.6/redhat/rhel-7-x86_64/pgdg-centos96-9.6-3.noarch.rpm

Now build the rpm command by changing https://download.postgresql.org/pub/repos/yum to http://yum.postgresql.org, and run it:

sudo rpm -Uvh http://yum.postgresql.org/9.6/redhat/rhel-7-x86_64/pgdg-centos96-9.6-3.noarch.rpm
Then run the install command:
sudo yum install postgresql96-server postgresql96
Initialize with this:
sudo /usr/pgsql-9.6/bin/postgresql96-setup initdb
Start postgres:
sudo systemctl start postgresql-9.6
sudo systemctl enable postgresql-9.6

Change Configuration Files


If this install of postgres needs to be accessible to the outside world, you can open it up. We’re
using firewall rules on the server which opens port 5432 only to certain IP addresses. So without
that additional step, you would want to be careful with the below.

Find the configuration file.


sudo -i -u postgres
psql
SHOW config_file;

The location of the configuration file is listed now. So open it for editing. (use your location, the
below is mine)
sudo vim /var/lib/pgsql/9.6/data/postgresql.conf
Scroll to “CONNECTIONS AND AUTHENTICATION” section and find this line:
#listen_addresses = 'localhost' # what IP address(es) to listen on;
Edit this line as follows:
listen_addresses = '*' # what IP address(es) to listen on;
Exit the file saving changes.
ESC
:wq
Next edit the pg_hba.conf file, in the same folder:
sudo vim /var/lib/pgsql/9.6/data/pg_hba.conf
Scroll to the bottom of the file and add these lines if they don’t already exist:
#IPv4 remote connections (all users and IP addresses):
host all all 0.0.0.0/0 md5

On the second line beginning with ‘host’ make sure there is no # added. You want it to read as
above.

Exit saving changes.

Now restart postgres.


sudo service postgresql-9.6 restart

Add users
If you had different users setup in postgres, add them again.
CREATE USER user_name WITH PASSWORD 'pass_word';

Replace user_name and pass_word with yours.

Make the user a superuser if that’s what you want.


ALTER USER user_name WITH SUPERUSER;

All done!

Mentions: Thanks to http://tecadmin.net/install-postgresql-9-5-on-centos/ for help with the rpm and install steps.

How to Install PostgreSQL 11 on CentOS/RHEL 7/6

Written by Rahul, updated on April 12, 2019


PostgreSQL 11 has been released. It is an open-source, object-relational, highly scalable, SQL-compliant database management system, originally developed at the University of California, Berkeley Computer Science Department. This article will help you install PostgreSQL 11 on a CentOS/RHEL 7/6 system.
This article has been tested on CentOS Linux release 7.5.
Step 1 – Configure Yum Repository

Firstly you need to configure the PostgreSQL repository in your system. Use
one of the below commands as per your operating system version.

rpm -Uvh https://yum.postgresql.org/11/redhat/rhel-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm

rpm -Uvh https://yum.postgresql.org/11/redhat/rhel-6-x86_64/pgdg-redhat-repo-latest.noarch.rpm

For more details visit PostgreSQL repositories link page where you can get
repository package rpm for various operating systems.

Step 2 – Install PostgreSQL 11 on CentOS

After enabling the PostgreSQL yum repository on your system, use the following command to install PostgreSQL 11 with the yum package manager.

yum install postgresql11-server

This will also install some additional required packages on your system. Enter
y to confirm and complete the installation process.
Step 3 – Initialize PGDATA

After that, you need to initialize the PostgreSQL instance. This will create the data directory and other configuration files on your system. To initialize the database, use the command below.

/usr/pgsql-11/bin/postgresql-11-setup initdb

The above command will take some time to initialize PostgreSQL for the first time. The PGDATA environment variable contains the path of the data directory. The PostgreSQL 11 default data directory location is /var/lib/pgsql/11/data.

Step 4 – Start PostgreSQL Server

Start the PostgreSQL service using the following commands as per your operating system. Also, enable the PostgreSQL service to autostart on system boot.

CentOS/RHEL – 7
systemctl enable postgresql-11.service

systemctl start postgresql-11.service

CentOS/RHEL – 6

service postgresql-11 start

chkconfig postgresql-11 on

Step 5 – Verify PostgreSQL Installation

After completing all the above steps, your PostgreSQL 11 server is ready to use. Log in to the postgres instance to verify the connection.

su - postgres -c "psql"

psql (11.0)

Type "help" for help.

postgres=#

You may create a password for the postgres user for security purposes.

postgres=# \password postgres

In conclusion, you have successfully installed the PostgreSQL database server on a CentOS/RHEL 7/6 system.

UPGRADE POSTGRESQL 10 To 11
Posted on October 30, 2018 by Engin Yilmaz
We will upgrade PostgreSQL 10 to 11 on CentOS in this article. Before I start the upgrade process, I want to make the following critical warning:

You will need to re-configure your postgresql.conf and pg_hba.conf files, because the new cluster starts with fresh default copies of these files after the upgrade.
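A sketch of saving those two files aside before starting; the paths are the CentOS defaults used later in this article, and the script simply skips files that are not present:

```shell
# Copy the old cluster's config files to a scratch backup directory.
backup_dir=$(mktemp -d)
olddata=/var/lib/pgsql/10/data

for f in postgresql.conf pg_hba.conf; do
  if [ -f "$olddata/$f" ]; then
    cp "$olddata/$f" "$backup_dir/"
  fi
done

echo "configs saved to $backup_dir"
```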

Install PostgreSQL 11
We are installing PostgreSQL 11 with the following commands:

[root@postgres eng]# yum install https://download.postgresql.org/pub/repos/yum/11/redhat/rhel-7-x86_64/pgdg-centos11-11-2.noarch.rpm
[root@postgres eng]# yum install postgresql11
[root@postgres eng]# yum install postgresql11-server

initdb
We perform the initdb operation for the new PostgreSQL 11 cluster:

[root@postgres eng]# /usr/pgsql-11/bin/postgresql-11-setup initdb
Initializing database … OK

Checking Whether the Upgrade is Applicable


We check the applicability of the upgrade with the command below (as the postgres user):

-bash-4.2$ /usr/pgsql-11/bin/pg_upgrade --old-bindir=/usr/pgsql-10/bin/ --new-bindir=/usr/pgsql-11/bin/ --old-datadir=/var/lib/pgsql/10/data --new-datadir=/var/lib/pgsql/11/data --check
Performing Consistency Checks on Old Live Server
------------------------------------------------
Checking cluster versions                                   ok
Checking database user is the install user                  ok
Checking database connection settings                       ok
Checking for prepared transactions                          ok
Checking for reg* data types in user tables                 ok
Checking for contrib/isn with bigint-passing mismatch       ok
Checking for presence of required libraries                 ok
Checking database user is the install user                  ok
Checking for prepared transactions                          ok

*Clusters are compatible*

Stop PostgreSQL 10
If we perform the upgrade without stopping PostgreSQL 10, we get the following error:

-bash-4.2$ /usr/pgsql-11/bin/pg_upgrade --old-bindir=/usr/pgsql-10/bin/ --new-bindir=/usr/pgsql-11/bin/ --old-datadir=/var/lib/pgsql/10/data --new-datadir=/var/lib/pgsql/11/data

There seems to be a postmaster servicing the old cluster.
Please shutdown that postmaster and try again.
Failure, exiting

So we stop PostgreSQL 10 (as the root user):

[root@postgres eng]# systemctl stop postgresql-10.service

Upgrade
Then we switch to the postgres user and run the upgrade command:

-bash-4.2$ /usr/pgsql-11/bin/pg_upgrade --old-bindir=/usr/pgsql-10/bin/ --new-bindir=/usr/pgsql-11/bin/ --old-datadir=/var/lib/pgsql/10/data --new-datadir=/var/lib/pgsql/11/data
Performing Consistency Checks
-----------------------------
Checking cluster versions                                   ok
Checking database user is the install user                  ok
Checking database connection settings                       ok
Checking for prepared transactions                          ok
Checking for reg* data types in user tables                 ok
Checking for contrib/isn with bigint-passing mismatch       ok
Creating dump of global objects                             ok
Creating dump of database schemas                           ok
Checking for presence of required libraries                 ok
Checking database user is the install user                  ok
Checking for prepared transactions                          ok

If pg_upgrade fails after this point, you must re-initdb the
new cluster before continuing.

Performing Upgrade
------------------
Analyzing all rows in the new cluster                       ok
Freezing all rows in the new cluster                        ok
Deleting files from new pg_xact                             ok
Copying old pg_xact to new server                           ok
Setting next transaction ID and epoch for new cluster       ok
Deleting files from new pg_multixact/offsets                ok
Copying old pg_multixact/offsets to new server              ok
Deleting files from new pg_multixact/members                ok
Copying old pg_multixact/members to new server              ok
Setting next multixact ID and offset for new cluster        ok
Resetting WAL archives                                      ok
Setting frozenxid and minmxid counters in new cluster       ok
Restoring global objects in the new cluster                 ok
Restoring database schemas in the new cluster               ok
Copying user relation files                                 ok
Setting next OID for new cluster                            ok
Sync data directory to disk                                 ok
Creating script to analyze new cluster                      ok
Creating script to delete old cluster                       ok

Upgrade Complete
----------------
Optimizer statistics are not transferred by pg_upgrade so,
once you start the new server, consider running:
    ./analyze_new_cluster.sh

Running this script will delete the old cluster's data files:
    ./delete_old_cluster.sh
-bash-4.2$

Before executing the scripts that pg_upgrade generated, we enable and start the new service:

[root@postgres eng]# systemctl enable postgresql-11.service
Created symlink from /etc/systemd/system/multi-user.target.wants/postgresql-11.service to /usr/lib/systemd/system/postgresql-11.service.
[root@postgres eng]# systemctl start postgresql-11.service

Then we run the commands requested from us.

1 -bash-4.2$ ./analyze_new_cluster.sh
2 This script will generate minimal optimizer statistics rapidly
3 so your system is usable, and then gather statistics twice more
4 with increasing accuracy. When it is done, your system will
5 have the default level of optimizer statistics.
6
7 If you have used ALTER TABLE to modify the statistics target for
8 any tables, you might want to remove them and restore them after
9 running this script because they will delay fast statistics generation.
10
11 If you would like default statistics as quickly as possible, cancel
12 this script and run:
13 “/usr/pgsql-11/bin/vacuumdb” –all –analyze-only
14
15 vacuumdb: processing database “postgres”: Generating minimal optimizer statistics (1 target)
16 vacuumdb: processing database “template1”: Generating minimal optimizer statistics (1 target)
17 vacuumdb: processing database “postgres”: Generating medium optimizer statistics (10 targets)
18 vacuumdb: processing database “template1”: Generating medium optimizer statistics (10 targets)
19 vacuumdb: processing database “postgres”: Generating default (full) optimizer statistics
20 vacuumdb: processing database “template1”: Generating default (full) optimizer statistics
21
22 Done

Delete Old Cluster

Then run the command below. Be aware that this command deletes the old cluster, so make sure that PostgreSQL 11 is working correctly before you delete it.

-bash-4.2$ ./delete_old_cluster.sh

We remove all packages related to PostgreSQL 10 (as the root user):

[root@postgres eng]# rpm -qa | grep postgresql

yum remove postgresql10-10.5-1PGDG.rhel7.x86_64
yum remove postgresql10-libs-10.5-1PGDG.rhel7.x86_64

The upgrade is complete. You can connect to the database and see version 11.

-bash-4.2$ psql
psql (11.0)
Type "help" for help.
postgres=#



3 thoughts on “UPGRADE POSTGRESQL 10 To 11”

1. Jim says:

February 15, 2019 at 7:20 am

There are some typos, it should have double dashes "--":

/usr/pgsql-10/bin/pg_upgrade --old-bindir=/usr/pgsql-10/bin/ --new-bindir=/usr/pgsql-11/bin/ --old-datadir=/var/lib/pgsql/10/data --new-datadir=/var/lib/pgsql/11/data --check

also I’m getting this error after executing that:


could not write to log file "pg_upgrade_internal.log"
Failure, exiting

Any ideas on how to workaround that?

Reply

2. Engin Yilmaz says:

February 15, 2019 at 8:53 am

Hi Jim,
Could you please write operating system name and version?
Also, which user did you run the upgrade check command as, postgres or another one?
Best Regards,

Reply

3. dbtut says:

March 1, 2019 at 12:51 pm

You should run the command from /tmp. That is, it should be as follows:

su - postgres

cd /tmp

then execute the command:

/usr/pgsql-10/bin/pg_upgrade --old-bindir=/usr/pgsql-10/bin/ --new-bindir=/usr/pgsql-11/bin/ --old-datadir=/var/lib/pgsql/10/data --new-datadir=/var/lib/pgsql/11/data --check

Reply

Upgrade PostgreSQL from 9.3 to 11

Posted on 12. 2. 2019 by Vratislav Hutsky. Posted in Operating systems

I had to upgrade an already unsupported PostgreSQL instance running version 9.3 to the latest version 11. Luckily, this process is quite straightforward and all the heavy lifting is done by the pg_upgrade tool.

The only problem was that the database in question had bloated over the years to such an extent that it was not possible to keep two separate data directories for both versions, 9.3 and 11, which is the normal way to go about this: you install the newer version, run the pg_upgrade tool against the old data directory, and it takes the data, converts it to the new format, and saves it in a new directory belonging to the newly installed PostgreSQL version. This is usually safer, as you can go back to the older PostgreSQL version if something goes wrong; the old data directory is still intact. If that's not possible, however, because you don't have enough space on your system, you can use the --link parameter, which makes the pg_upgrade tool reclaim the old directory as its own.

Here are the steps that I had to take:

1. enable repository with the new version, as I was on Centos, I followed the instructions from
here https://yum.postgresql.org/
2. install new packages

yum install postgresql11 postgresql11-contrib postgresql11-devel postgresql11-libs postgresql11-server

3. stop and disable the older version service


4. initialize the new data directory, note that you need to do this as the postgres user:

/usr/pgsql-11/bin/initdb -D /var/lib/pgsql/11/data
5. check consistency between the two versions (again, as postgres user):

/usr/pgsql-11/bin/pg_upgrade --old-bindir=/usr/pgsql-9.3/bin/ --new-bindir=/usr/pgsql-11/bin/ --old-datadir=/var/lib/pgsql/9.3/data/ --new-datadir=/var/lib/pgsql/11/data/ --check

6. and finally run the upgrade itself (again, as postgres user):

/usr/pgsql-11/bin/pg_upgrade --old-bindir=/usr/pgsql-9.3/bin/ --new-bindir=/usr/pgsql-11/bin/ --old-datadir=/var/lib/pgsql/9.3/data/ --new-datadir=/var/lib/pgsql/11/data/ --link

7. run analyze and delete scripts

Needless to say, it's a very good idea to back up your database before the upgrade and to have an emergency plan in case things go wrong.

Upgrading to PostgreSQL 11 with Logical Replication

September 6, 2018, in 2ndQuadrant, Eisentraut's PlanetPostgreSQL, PostgreSQL, by Peter Eisentraut

It’s time.

About a year ago, we published PostgreSQL 10 with support for native logical replication. One of the
uses of logical replication is to allow low- or no-downtime upgrading between PostgreSQL major
versions. Until now, PostgreSQL 10 was the only PostgreSQL release with native logical replication, so
there weren’t many opportunities for upgrading in this way. (Logical replication can also be used for
moving data between instances on different operating systems or CPU architectures or with different low-
level configuration settings such as block size or locale — sidegrading if you will.) Now that PostgreSQL
11 is near, there will be more reasons to make use of this functionality.

Let’s first compare the three main ways to upgrade a PostgreSQL installation:
 pg_dump and restore
 pg_upgrade
 logical replication

We can compare these methods in terms of robustness, speed, required downtime, and restrictions (and
more, but we have to stop somewhere for this article).

pg_dump and restore is arguably the most robust method, since it’s the most tested and has been in use
for decades. It also has very few restrictions in terms of what it can handle. It is possible to construct
databases that cannot be dumped and restored, mostly involving particular object dependency
relationships, but those are rare and usually involve discouraged practices.
The problem with the dump and restore method is of course that it effectively requires downtime for the
whole time the dump and restore operations run. While the source database is still readable and writable
while the process runs, any updates to the source database after the start of the dump will be lost.
pg_upgrade improves on the pg_dump process by moving over the data files directly without having to
dump them out into a logical textual form. Note that pg_upgrade still uses pg_dump internally to copy the
schema, but not the data. When pg_upgrade was new, its robustness was questioned, and it did upgrade
some databases incorrectly. But pg_upgrade is now quite mature and well tested, so one does not need to
hesitate about using it for that reason anymore. While pg_upgrade runs, the database system is down. But
one can make a choice about how long pg_upgrade runs. In the default copy mode, the total run time is
composed of the time to dump and restore the schema (which is usually very fast, unless one has
thousands of tables or other objects) plus the time to copy the data files, which depends on how big the
database is (and the I/O system, file system, etc.).

In the optional link mode, the data files are instead hard-linked to the new data directory, so that the time
is merely the time to perform a short kernel operation per file instead of copying every byte. The
drawback is that if anything goes wrong with the upgrade or you need to fall back to the old installation,
this operation will have destroyed your old database. (I’m working on a best-of-both-worlds solution for
PostgreSQL 12 using reflinks or file clone operations on supported file systems.)

Logical replication is the newest of the bunch here, so it will probably take some time to work out the
kinks. If you don’t have time to explore and investigate, this might not be the way to go right now. (Of
course, people have been using other non-core logical replication solutions such as Slony, Londiste, and
pglogical for upgrading PostgreSQL for many years, so there is a lot of experience with the principles, if
not with the particulars.)
The advantage of using logical replication to upgrade is that the application can continue to run against
the old instance while the data synchronization happens. There only needs to be a small outage while the
client connections are switched over. So while an upgrade using logical replication is probably slower
start to end than using pg_upgrade in copy mode (and definitely slower than using hardlink mode), it
doesn’t matter very much since the actual downtime can be much shorter.

Note that logical replication currently doesn’t replicate schema changes. In this proposed upgrade
procedure, the schema is still copied over via pg_dump, but subsequent schema changes are not carried
over. Upgrading with logical replication also has a few other restrictions. Certain operations are not
captured by logical replication: large objects, TRUNCATE, sequence changes. We will discuss
workarounds for these issues later.

If you have any physical standbys (and if not, why don’t you?), there are also some differences to
consider between the methods. With either method, you need to build new physical standbys for the
upgraded instance. With dump and restore as well as with logical replication, they can be put in place
before the upgrade starts so that the standby will be mostly ready once the restore or logical replication
initial sync is complete, subject to replication delay.

With pg_upgrade, the new standbys have to be created after the upgrade of the primary is complete. (The
pg_upgrade documentation describes this in further detail.) If you rely on physical standbys for high-
availability, the standbys ought to be in place before you switch to the new instance, so the setup of the
standbys could affect your overall timing calculations.

But back to logical replication. Here is how upgrading with logical replication can be done:

0. The old instance must be prepared for logical replication. This requires some configuration settings as described under http://www.postgresql.org/docs/10/static/logical-replication-config.html (mainly wal_level = logical). If it turns out you need to make those changes, they will require a server restart. So check this well ahead of time. Also check that pg_hba.conf on the old instance is set up to accept connections from the new instance. (Changing that only requires a reload.)
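As a sketch, the settings involved might look like this. The numbers, user name, and address are illustrative placeholders, not recommendations:

```
# postgresql.conf on the OLD (v10) instance -- changing wal_level requires a restart
wal_level = logical
max_wal_senders = 10        # must cover one connection per subscription, plus other uses
max_replication_slots = 10  # each subscription creates one slot on the publisher

# pg_hba.conf on the old instance -- a reload is enough after editing this
host    all    migration_user    newhost_ip/32    md5
```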

1. Install the new PostgreSQL version. You need at least the server package and the client package that
contains pg_dump. Many packagings now allow installing multiple versions side by side. If you are
running virtual machines or cloud instances, it’s worth considering installing the new instance on a new
host.
2. Set up a new instance, that is, run initdb. The new instance can have different settings than the old one,
for example locale, WAL segment size, or checksumming. (Why not use this opportunity to turn on data
checksums?)

3. Before you start the new instance, you might need to change some configuration settings. If the
instance runs on the same host as the old instance, you need to set a different port number. Also, carry
over any custom changes you have made in postgresql.conf on your old instance, such as memory
settings, max_connections, etc. Similarly, make pg_hba.conf settings appropriate to your environment.
You can usually start by copying over the pg_hba.conf file from the old instance. If you want to use
SSL, set that up now.
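For instance, if the new cluster shares a host with the old one, the overrides in its postgresql.conf might look like this (the values are placeholders standing in for whatever your old configuration uses):

```
# postgresql.conf on the NEW (v11) instance
port = 5433                 # the old instance keeps 5432 until the switchover
max_connections = 200       # carried over from the old instance
shared_buffers = 4GB        # likewise for the memory settings
```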

4. Start the new (empty) instance and check that it works to your satisfaction. If you set up the new
instance on a new host, check at this point that you can make a database connection (using psql) from the
new host to the old database instance. We will need that in the subsequent steps.

5. Copy over the schema definitions with pg_dumpall. (Or you can do it with pg_dump for each database
separately, but then don’t forget global objects such as roles.)

pg_dumpall -s >schemadump.sql

psql -d postgres -f schemadump.sql

Any schema changes after this point will not be migrated. You would have to manage those yourself. In
many cases, you can just apply the changing DDL on both hosts, but running commands that change the
table structure during an upgrade is probably a challenge too far.

6. In each database in the source instance, create a publication that captures all tables:

CREATE PUBLICATION p_upgrade FOR ALL TABLES;

Logical replication works separately in each database, so this needs to be repeated in each database. On
the other hand, you don’t have to upgrade all databases at once, so you can do this one database at a time
or even not upgrade some databases.

7. In each database in the target instance, create a subscription that subscribes to the just-created
publication. Be sure to match the source and target databases correctly.

CREATE SUBSCRIPTION s_upgrade CONNECTION 'host=oldhost port=oldport dbname=dbname ...'

PUBLICATION p_upgrade;

Set the connection parameters as appropriate.


8. Now you wait until the subscriptions have copied over the initial data and have fully caught up with the
publisher. You can check the initial sync status of each table in a subscription in the system
catalog pg_subscription_rel (look for r = ready in column srsubstate). The overall status of the
replication can be checked in pg_stat_replication on the sending side
and pg_stat_subscription on the receiving side.
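These checks can be run as plain SQL; the following is a sketch built on the catalog views just mentioned:

```sql
-- On the subscriber: initial-sync state per table
-- ('i' = initializing, 'd' = copying data, 's' = synchronized, 'r' = ready)
SELECT srrelid::regclass AS table_name, srsubstate
FROM pg_subscription_rel
ORDER BY 1;

-- On the publisher: confirmed progress of each subscriber
SELECT application_name, state,
       pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes
FROM pg_stat_replication;
```

Once every table reports r and the replay lag stays near zero, the subscriber has caught up.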
9. As mentioned above, sequence changes are not replicated. One possible workaround for this is to copy over the sequence values using pg_dump. You can get a dump of the current sequence values using something like this:

pg_dump -d dbname --data-only -t '*_seq' >seq-data.sql

(This assumes that the sequence names all match *_seq and no tables match that name. In more complicated cases you could also go the route of creating a full dump and extracting the sequence data from the dump's table of contents.)

Since the sequences might advance as you do this, perhaps munge the seq-data.sql file to add a bit of slack to the numbers.

Then restore that file to the new database using psql.

10. Showtime: Switch the applications to the new instances. This requires some thinking ahead of time. In
the simplest scenario, you stop your application programs, change the connection settings, restart. If you
use a connection proxy, you can switch over the connection there. You can also switch client applications
one by one, perhaps to test things out a bit or ease the load on the new system. This will work as long as
the applications still pointing to the old server and those pointing to the new server don’t make conflicting
writes. (In that case you would be running a multimaster system, at least for a short time, and that is
another order of complexity.)

11. When the upgrade is complete, you can tear down the replication setup. In each database on the new
instance, run

DROP SUBSCRIPTION s_upgrade;

If you have already shut down the old instance, this will fail because it won’t be able to reach the remote
server to drop the replication slot. See the DROP SUBSCRIPTION man page for how to proceed in this
situation.

You can also drop the publications on the source instance, but that is not necessary since a publication does not retain any resources.
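If the old instance is already unreachable, the procedure described in the DROP SUBSCRIPTION man page is, in outline:

```sql
-- Detach the subscription from its remote replication slot first,
-- then the DROP no longer needs to contact the (gone) publisher.
ALTER SUBSCRIPTION s_upgrade DISABLE;
ALTER SUBSCRIPTION s_upgrade SET (slot_name = NONE);
DROP SUBSCRIPTION s_upgrade;
-- If the old server still exists, drop the orphaned slot there manually,
-- e.g. SELECT pg_drop_replication_slot('s_upgrade');
```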

12. Finally, remove the old instances if you don’t need them any longer.

Some additional comments on workarounds for things that logical replication does not support. If you are
using large objects, you can move them over using pg_dump, of course as long as they don’t change
during the upgrade process. This is a significant limitation, so if you are a heavy user of large objects,
then this method might not be for you. If your application issues TRUNCATE during the upgrade
process, those actions will not be replicated. Perhaps you can tweak your application to prevent it from
doing that for the time of the upgrade, or you can substitute a DELETE instead. PostgreSQL 11 will
support replicating TRUNCATE, but that will only work if both the source and the destination instance
are PostgreSQL 11 or newer.

Some closing comments that really apply to all upgrade undertakings:


 Applications and all database client programs should be tested against a new major PostgreSQL version
before being put into production.
 To that end, you should also test the upgrade procedure before executing it in the production
environment.
 Write things down or better script and automate as much as possible.
 Make sure your backup setup, monitoring systems, and any maintenance tools and scripts are adjusted
appropriately during the upgrade procedure. Ideally, these should be in place and verified before the
switchover is done.

With that in mind, good luck and please share your experiences.

5 Replies

1.

Bruce Momjian says:


September 7, 2018 at 7:26 pm

Just to clarify, pg_upgrade modifies the old cluster only if link mode is used _and_ you start the new server. If pg_upgrade
fails while it is running, it does not modify the old server, even in link mode.

Reply

2.

Sergei Lavrov says:


September 8, 2018 at 6:47 pm

"With pg_upgrade, the new standbys have to be created after the upgrade of the primary is complete." You can convert streaming replication standbys with pg_upgrade too. It's not necessary to recreate the standbys from scratch, so the downtime will be similar to when you pg_upgrade the master.
Reply

Peter Eisentraut says:
September 11, 2018 at 10:24 pm

The documentation for this states that the standbys are to be upgraded after the main part of the upgrade of the
primary is done. So it will take a bit longer than just doing the primary. It depends on the individual
circumstances, of course.

Reply

3.

Telford Tendys says:


November 28, 2018 at 7:39 am

I tested an upgrade with 8.1G of data in /var/lib/pgsql/10/data/base/ and two versions of PostgreSQL running on the same
server, different ports (postgresql10-server-10.6-1PGDG.rhel7.x86_64 upgrading to postgresql11-server-11.1-1PGDG.rhel7.x86_64). Using the simplistic command "pg_dumpall | psql --port=5444" gets the job done in less than 30
minutes. As you already mention above, this requires application downtime since changes made during the dump cannot
reliably be synchronized afterwards.

I also tested using the logical replication PUBLICATION / SUBSCRIPTION method, getting only the schema from
“pg_dumpall -s” and found this method was working, but very slow. After 24 hours running (on exactly the same server as
previous test) with intense I/O activity, some of the data was visible on the receiver but a lot was still missing and I gave up
and shut down the test. What’s more the size of /var/lib/pgsql/11/data/base/ had been growing, it had reached more than 21G in
size. Although the source server was still operational, trying to use our application was very sluggish during replication.

Note that pg_dumpall is smart enough to move all the CREATE INDEX commands to after row data has been copied, while
your logical replication method described above attempts to first install the full schema (including CREATE INDEX) and then
afterwards push through the row data. I’m not sure that’s the only source of inefficiency but most people avoid doing it that
way. It would be possible to hand edit the schema as a work around, or if all else fails use perl.

The best solution I can imagine would be if the pg_dumpall established a checkpoint and simultaneously created a named slot
aligned with that checkpoint. After that, it would dump everything up to the checkpoint and then walk away, leaving the slot
sitting there ready to use for logical replication. On the new database you would use CREATE SUBSCRIPTION and grab the
existing slot, bringing the replication up to date. Look at the way pg_basebackup works when doing a physical level replication
onto a secondary server and then provide equivalent features in the pg_dumpall to get the equivalent results happening at the
logical level replication. That’s my suggestion.

Reply

4.

Steven Winfield says:


December 3, 2018 at 9:11 pm

Thanks for this post. I’ve just been through an upgrade of our main production servers using this method and thought I’d share
some findings. The cluster that was upgraded is around 1TB.

1. In the initial loading phase there’s no real harm in setting “fsync” and “synchronous_commit” to false for a bit of a speed up,
so long as you’re willing to run the risk of having to start from scratch should you suffer a power outage.

2. If you want to change the locale of your cluster then you will need to edit the "CREATE DATABASE" statements in the sql dumps made in step #5 above, even if you are specifying "--locale" to initdb in step 2.
3. For another (potentially huge) speed up: don’t create any unnecessary constraints or indexes before firing up logical
replication – all you need are the correct columns (i.e. CREATE TABLE statements) and primary keys to be in place. Luckily
this can be done with pg_dump(all) and grep:

a. Use "pg_dumpall --globals-only > globals.sql" to dump global objects.

b. Use "pg_dump --create --schema-only --section=pre-data -d dbname > dbname_predata.sql" for each database to dump just the table (and custom type) definitions.

c. Use "pg_dump --schema-only --section=post-data -d dbname > dbname_postdata.sql" for each database to dump the constraints, triggers, and indexes.

d. Use "grep -B 1 'PRIMARY KEY' dbname_postdata.sql > dbname_pkeys.sql" to extract the primary key definitions from the *_postdata.sql files.

e. In the new cluster execute globals.sql, all dbname_predata.sql, and all dbname_pkeys.sql; the latter will require switching to the correct database.

For reference, only the table definitions are needed for the “initialization” phase for each table (“i” in pg_subscription_rel,
when the backend is executing “COPY FROM table TO STDOUT”), but without the primary key you will get replication
errors about the new incoming rows having no identity when the COPY finishes and real-time replication begins.

4. After all data has been copied you can then run the *_postdata.sql scripts via psql. Don’t worry about the duplicate primary
key definitions – psql will print an error message but will happily carry on with the rest of the DDL. You can run the scripts for
all dbs at the same time, but for a given db the indexes, constraints, and triggers will be built sequentially – thankfully,
CREATE INDEX has been parallelized in v11 so you may want to tweak your parallel worker settings to make best use of this.

5. If your original cluster is doing binary replication to a slave – especially one in a different geographical location – it’s a
good idea to set up binary replication from your new v11 instance to one in the other location too, and make sure they are
connected before starting the logical replication from you v10 instance to your v11.

6. As mentioned in the post, sequences are not replicated. Be aware that the command, as given, will only dump the contents of sequences that match *_id_seq **in the public schema**. Using "-t '*.*_seq'" will match all schemas. Alternatively, this command should copy across the values of all sequences, no matter their name:

For each database execute:


psql -h old_host -p old_port -d old_db -t -c "select 'SELECT pg_catalog.setval(''' || quote_ident(schemaname) || '.' || quote_ident(sequencename) || ''', ' || coalesce(last_value, 1) || ', ' || case when last_value is null then 'false);' else 'true);' end from pg_sequences" | psql -h new_host -p new_port -d new_db -a

If you want to add some slack to the sequence values then change this to "coalesce(last_value + N, N)" where N is, say, 1000.

And that’s it. The only issues we had afterwards was a custom logical replication plugin that failed to initialize when I began
adding replication slots to the new db (complaints of a missing symbol “AllocSetContextCreate”), which wasn’t caught during
testing.

Hope this helps someone!

Reply

Fast Upgrade of Legacy PostgreSQL with Minimum Downtime Using pg_upgrade

12 Apr 2019, by Avinash Vallarapu, Jobin Augustine, Fernando Laudares Camargos and Nickolay Ihalainen. Posted in PostgreSQL Upgrade

When you need to upgrade your PostgreSQL databases, there are a number of options
available to you. In this post we’ll take a look at how you can upgrade PostgreSQL versions using pg_upgrade, a built-in
tool that allows in-place upgrade of your software. Using pg_upgrade allows you, potentially, to minimize your
downtime, an essential consideration for many organizations. It also allows you to perform a postgres upgrade with very
minimal effort.

In our previous posts, we discussed various methods and tools that can help us perform a PostgreSQL upgrade –
(1) pg_dumpall, (2) pg_dump and pg_restore with pg_dumpall, (3) logical replication and pglogical, and (4) slony.
Methods 1 and 2 can involve additional downtime compared to the approaches taken in 3 and 4. Whilst performing an
upgrade using logical replication or slony may be time consuming and require a lot of monitoring, it can be worth it if
you can minimize downtime. If you have large databases that are busy with a lot of transactions, you may be better
served using logical replication or slony.

This post is the fifth of our Upgrading or Migrating Your Legacy PostgreSQL to Newer PostgreSQL Versions series.
These posts lead up to a live webinar, where we’ll be exploring different methods available to upgrade your PostgreSQL
databases. If it’s beyond the live webinar date when you read this, you’ll find the recording at that same link.

pg_upgrade
pg_upgrade (formerly pg_migrator, until PostgreSQL 8.4) is a built-in tool that helps in upgrading a legacy PostgreSQL server to a newer version without the need for a dump and restore. The oldest version you can upgrade from using pg_upgrade is 8.4.x. It is capable of performing faster upgrades by taking into consideration that system tables are the ones that undergo the most change between two major versions. The internal data storage format is less often affected.

In fact, in one of our tests we were able to perform an upgrade of a 2 TB database server from PostgreSQL 9.6.5 to 11.1
in less than 10 seconds. Now that is fast!

Overview of the process


To understand how it works, consider a PostgreSQL server running on 9.3.3 that needs to upgrade to PostgreSQL 11.
You should install the latest binaries for your new PostgreSQL version on the server – let’s say PostgreSQL 11.2 –
before you begin the upgrade process.

Preparation and consistency checks


Once you have installed the new PostgreSQL version, initialize a new data directory using the new binaries and start it on another port, i.e. a different port from the one used by PostgreSQL 9.3.3 in our example. Use pg_upgrade to perform consistency checks between the two servers, PG 9.3.3 and PG 11.2, running on two different ports. If you get any errors, such as a missing extension, you need to fix these before proceeding with the upgrade. Once the consistency checks have passed, you can proceed.

Here is how the log looks if you get an error while performing the consistency checks:

Shell

$ /usr/pgsql-11/bin/pg_upgrade -b /usr/pgsql-9.3/bin -B /usr/pgsql-11/bin -d /var/lib/pgsql/9.3/data -D /var/lib/pgsql/11/data_new -c
Performing Consistency Checks on Old Live Server
------------------------------------------------
Checking cluster versions ok
Checking database user is the install user ok
Checking database connection settings ok
Checking for prepared transactions ok
Checking for reg* data types in user tables ok
Checking for contrib/isn with bigint-passing mismatch ok
Checking for invalid "unknown" user columns ok
Checking for hash indexes ok
Checking for roles starting with "pg_" ok
Checking for incompatible "line" data type ok
Checking for presence of required libraries fatal

Your installation references loadable libraries that are missing from the
new installation. You can add these libraries to the new installation,
or remove the functions using them from the old installation. A list of
problem libraries is in the file:
    loadable_libraries.txt

Failure, exiting

$ cat loadable_libraries.txt
could not load library "$libdir/pg_repack": ERROR: could not access file "$libdir/pg_repack": No such file or directory

To proceed beyond the error, in this example you'd need to install the missing extension pg_repack for the new PostgreSQL version, and rerun the check to make sure that you receive no errors and all the checks pass.

Carrying out the upgrade


Once passed, you can proceed in one of two ways. One option is to let pg_upgrade copy the datafiles of the old data
directory to the new data directory initialized by the new PostgreSQL version. The second option is to let pg_upgrade
use hard links instead of copying data files. Copying a database of several terabytes may be time consuming. Using the
hard links method makes the process really quick as it does not involve copying files.

To use hard links with pg_upgrade, you pass an additional argument -k as you can see in the following command.

Shell

$ /usr/pgsql-11/bin/pg_upgrade -b /usr/pgsql-9.3/bin -B /usr/pgsql-11/bin -d /var/lib/pgsql/9.3/data -D /var/lib/pgsql/11/data_new -k

In the Unix file system, a file or a directory is a link to an inode (index node) that stores metadata (disk block location,
attributes, etc) of the data stored in them. Each inode is identified by an integer or an inode number. When you use
pg_upgrade with hard links, it internally creates another file/directory in the new data directory that links to the same
inode as it was in the older data directory for that file/directory. So, it skips the physical copy of the objects, but creates
each object and links them to the same inode.
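A toy demonstration of the mechanism, independent of PostgreSQL (the file name is a made-up stand-in for a relation data file):

```shell
demo=$(mktemp -d)
echo "relation data" > "$demo/16384"            # pretend old-cluster data file

# What pg_upgrade -k does per file: create a second directory entry
# pointing at the same inode instead of copying the bytes
ln "$demo/16384" "$demo/16384.new"

stat -c '%i' "$demo/16384" "$demo/16384.new"    # prints the same inode number twice
```

Because both names share one inode, no data blocks are duplicated, which is why link mode finishes in roughly constant time per file regardless of file size.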

This reduces the disk IO and avoids the need for additional space in the server. An important point to note is that this
option works only when you are upgrading your PostgreSQL on the same file system. This means, for example, if you
want to upgrade to a new or a faster disk during the database upgrade, the hard link option does not work. In that case,
you would need to use the file copy method.

So far, we have seen a high level overview of how pg_upgrade with hard links helps you to perform an upgrade with the lowest possible downtime. Come see more in action during our webinar. And don't forget: at Percona Live in Austin, May 28-30 2019, we'll have two days of PostgreSQL content in a dedicated postgres track.


Elephant image based on photo from Pexels
