PostgreSQL High Availability on OneOps (2)


(Disclaimer: the blogs posted here only represent the author’s perspective; the OneOps project does not guarantee support or warranty for any code, tutorial, or documentation discussed here)

In my last post, I introduced Governor-based Postgres on OneOps, an automated approach to deploying Postgres with High Availability (HA). Since then, I have started a new round of development on the Governor Postgres, Etcd, and HAProxy cookbooks to take advantage of OneOps multi-layer clouds, so that the Postgres pack on OneOps is one big step closer to production readiness!

In the last Postgres HA blog, the scenario we demonstrated kept all Postgres instances in the same cloud or data center, which raises an obvious limitation: what if the entire cloud or data center goes down?

In this blog, I will introduce a seamless HA failover solution across multi-layer clouds or data centers to guarantee the availability of Postgres, even after a sudden failure of an entire data center.

OneOps offers a concept of multi-layer clouds: technically speaking, a “primary” cloud and a “secondary” cloud. If an application pack can be deployed over both primary and secondary clouds, the application instances in the secondary clouds (secondary instances) typically serve as backups of the instances in the primary clouds (primary instances). There is one more difference from the network’s point of view:

Primary Cloud: the Global Load Balancer (global VIP) forwards all traffic to the primary clouds.

Secondary Cloud: all secondary clouds are disabled and do not receive traffic from the Global Load Balancer.

Moreover, if the application is stateless, e.g. a REST application, the secondary instances are not required to replicate state or data from the primary instances, which is the simplest case for an OneOps pack developer.

Otherwise, the secondary instances are supposed to replicate state or data from the primary instances, so that when the system admins flip the primary and secondary clouds (because the primary clouds are down), the secondary instances can immediately start receiving traffic with consistent state or data. Postgres belongs to this second case, and the following is the architectural picture:


In the picture above, when a client wants to connect to Postgres, its connection URL should be the FQDN of the Global Load Balancer (GLB). The request is forwarded to HAProxy, which looks up the Postgres master in the primary cloud. The client can then talk to the master directly, without having to go through the GLB and HAProxy again (until a master failover happens).
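The HAProxy routing step can be sketched as a minimal haproxy.cfg fragment. This is an illustrative sketch only, not the exact template shipped in the pack: the listen port (5000), the backend addresses, and the health-check port (8008) are assumptions.

```
# Illustrative sketch only -- ports and addresses are assumptions.
listen postgres
    bind *:5000                      # clients connect here, not 5432
    option httpchk                   # HTTP health check against Governor's API
    http-check expect status 200     # only the current master answers 200
    default-server inter 3s fall 3 rise 2
    server pg-node-1 10.0.0.1:5432 maxconn 100 check port 8008
    server pg-node-2 10.0.0.2:5432 maxconn 100 check port 8008
    server pg-node-3 10.0.0.3:5432 maxconn 100 check port 8008
```

The key idea is that HAProxy does not load-balance here: the health check is answered positively only by the current master, so all traffic lands on one backend.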

In terms of data replication, Governor-based Postgres uses streaming replication, and all Postgres slaves directly and independently replicate data from the master. However, when the master fails, only the slaves in the primary clouds can be elected as the new master; the slaves in the secondary clouds have no chance in the master election. The reason is that only the IP addresses from the primary cloud are covered by the OneOps GLB, and clients should always use the GLB to connect to the Postgres master. If a slave Postgres instance from a secondary cloud were elected as the new master, users would not be able to connect to it.
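The eligibility rule can be illustrated with a small sketch. This is not Governor’s actual code; the member structure and cloud labels are invented for illustration:

```python
# Sketch of the election rule: only healthy slaves in the primary
# cloud are eligible to become the new master. Member data is invented.
def eligible_candidates(members):
    """Return members that may stand for master election."""
    return [m for m in members
            if m["cloud"] == "primary" and m["healthy"] and m["role"] == "slave"]

members = [
    {"name": "pg-1", "cloud": "primary",   "role": "master", "healthy": False},
    {"name": "pg-2", "cloud": "primary",   "role": "slave",  "healthy": True},
    {"name": "pg-3", "cloud": "secondary", "role": "slave",  "healthy": True},
]

# pg-3 is healthy but sits in the secondary cloud, so it is filtered out.
print([m["name"] for m in eligible_candidates(members)])
```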

After all Postgres instances in the primary clouds fail, either at once (e.g. a power outage) or one by one, the primary and secondary clouds need to be flipped so that the Postgres service stays available.

In most cases, the primary and secondary clouds are mapped to different geographical data centers, so that the outage of one data center will not bring down both primary and secondary clouds. The flip between primary and secondary clouds is triggered on the OneOps UI, typically by a system admin or on-call engineer.

Now let’s quickly go through the deployment process and then see how the seamless failover solution works.

Deploy Governor-based PostgreSQL on OneOps

Similar to the steps in the last blog, we create a Governor-based Postgres platform and add our SSH key to the user component. The difference is that when creating the environment, we need to designate at least one cloud as primary and at least one cloud as secondary, which means at least two clouds should be created on OneOps beforehand. In this demo, I will use one cloud as primary and one as secondary. (Again, “Availability Mode” should be set to “Redundant”.)


After creating the environment (before “Commit & Deploy”), we can click the Postgres platform name to review and change the “scaling” factor. In this demo, we use 3 computes per cloud, 6 computes in total across the two clouds (primary and secondary). In practice, I personally recommend at least 3 computes for the primary cloud (as well as the secondary cloud), because Etcd runs best with a minimum of 3 nodes, while more than 5 nodes may be a waste, leaving too many Postgres slaves.
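The 3-node recommendation follows from Etcd’s majority-quorum rule; the small sketch below just works out the arithmetic (it is not part of the pack):

```python
# Etcd needs a majority (quorum) of nodes to make progress; a cluster
# of n nodes therefore tolerates (n - 1) // 2 node failures.
def quorum(n):
    return n // 2 + 1

def tolerated_failures(n):
    return (n - 1) // 2

for n in (1, 2, 3, 5):
    print(f"{n} nodes: quorum={quorum(n)}, tolerates {tolerated_failures(n)} failure(s)")
```

A 3-node cluster survives the loss of one node, while a 2-node cluster survives none, which is why 3 is the practical minimum.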


The deployment plan may resemble the following (it is a long plan with 15 steps in total, covering both primary and secondary clouds; deployment on the primary cloud goes first):

Governor-based Postgres deployment plan (15 steps in total; only up to step 12 is shown due to space limitations. From step 9, deployment moves onto the secondary clouds)


After the deployment is done and the Post Deployment section mentioned on the PostgreSQL-Governor pack main page is finished, we can use the FQDN from a Postgres client machine to connect to the Postgres master.

/usr/pgsql-9.4/bin/psql --host your_fqdn_here --port 5000 -U postgres postgres

psql (9.4.8)
Type "help" for help.

Please note that the port number to connect to is 5000 rather than 5432. If you do not like 5000, it can be changed in the HAProxy component in the Design phase to some other port (but not 5432).


Test Postgres Master Failover

First let’s create a table called “company” and insert a record into it. (The statement was truncated in the original post; the columns are reconstructed from the query output shown below.)

CREATE TABLE COMPANY(
   ID INT PRIMARY KEY     NOT NULL,
   NAME           TEXT    NOT NULL,
   AGE            INT     NOT NULL,
   ADDRESS        CHAR(50),
   SALARY         REAL,
   JOIN_DATE      DATE
);

INSERT INTO COMPANY (ID, NAME, AGE, ADDRESS, SALARY, JOIN_DATE)
VALUES (1, 'Paul', 32, 'California', 20000.00, '2001-07-13');

Now let’s use the method mentioned in the last blog to identify the current Postgres master, SSH into the master VM, and run “service governor stop” to kill the current master. Later, I will talk about how to put the failed ex-master back online.

After 30 to 60 seconds (depending on the Etcd TTL value), a new Postgres master should be elected from the remaining computes in the primary cloud. Now let’s re-connect to Postgres from the client and run the following select query:

postgres=# select * from company;
 id | name | age |         address        | salary | join_date
  1 | Paul |  32 | California             |  20000 | 2001-07-13
(1 row)
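The failover window quoted above is driven by the Etcd lease TTL plus the Governor polling interval; the back-of-the-envelope sketch below uses illustrative values (a 30 s TTL and a 10 s loop interval are assumptions, not the pack defaults):

```python
# Rough failover window: the dead master's Etcd lease must expire
# (TTL), then a surviving slave notices on its next loop tick and
# takes over. Values are illustrative, not the pack defaults.
ETCD_TTL_S = 30        # assumed lease TTL on the leader key
LOOP_WAIT_S = 10       # assumed Governor polling interval

best_case = ETCD_TTL_S
worst_case = ETCD_TTL_S + LOOP_WAIT_S

print(f"expected failover window: {best_case}-{worst_case} seconds")
```

Lowering the TTL makes failover faster, at the cost of more spurious elections when the network is merely slow.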

Next insert another record:

INSERT INTO COMPANY (ID, NAME, AGE, ADDRESS, JOIN_DATE)
VALUES (2, 'Allen', 25, 'Texas', '2007-12-13');

Query the table again:

postgres=# select * from company;
 id | name | age |         address        | salary | join_date
  1 | Paul  |  32 | California             |  20000 | 2001-07-13
  2 | Allen |  25 | Texas                  |        | 2007-12-13
(2 rows)

From the above we can see that the new master is available to serve both read and write requests.

Now let’s shut down the current master with “service governor stop” to simulate another VM going down.

Again, after a short period of time, the last slave in the primary cloud should be promoted to the new Postgres master, and we can repeat similar read and write queries to verify that the new master is working normally.

After shutting down the last Postgres instance (the master) in the primary cloud, no new master will be elected, because the Postgres instances in the secondary cloud are not allowed to participate in the election. At this point, we need to flip the primary and secondary clouds, in the following sequence:

(1) Change the primary cloud to a secondary cloud; we will temporarily see two secondary clouds.

Go to the “Transition” phase, click the environment and then the Postgres platform name (on the right), find the “Cloud Status” section (at the bottom), identify the primary cloud, and choose “Make Secondary”.


Then “Commit & Deploy” (only 4 steps).


(2) Change the secondary cloud (not the one that was just flipped) to primary, so we again have one primary and one secondary cloud.

Identify the (right) secondary cloud and choose “Make Primary”.


Then “Commit & Deploy” (now 7 steps).


After the deployment completes, we try to re-connect to Postgres from the client via the FQDN:

/usr/pgsql-9.4/bin/psql --host your_fqdn_here --port 5000 -U postgres postgres

psql (9.4.8)
Type "help" for help.

And run a query:

postgres=# select * from company;
 id | name | age |         address        | salary | join_date
  1 | Paul  |  32 | California             |  20000 | 2001-07-13
  2 | Allen |  25 | Texas                  |        | 2007-12-13
(2 rows)

Run an insert command:

INSERT INTO COMPANY (ID, NAME, AGE, ADDRESS, SALARY, JOIN_DATE)
VALUES (3, 'Teddy', 23, 'Norway', 20000.00, DEFAULT);

Query the table one more time:

postgres=# select * from company;
 id | name | age |         address        | salary | join_date
  1 | Paul  |  32 | California             |  20000 | 2001-07-13
  2 | Allen |  25 | Texas                  |        | 2007-12-13
  3 | Teddy |  23 | Norway                 |  20000 |
(3 rows)

As seen above, the 2 old records were not lost, and a new record could be added to the existing table.

At this point, we have used simple queries to verify that the Postgres OneOps pack supports seamless master failover within a cloud and even across clouds. The next question is: how do we bring the failed Postgres instances back online?

This question is slightly outside the scope of this blog. There are materials to refer to for more details, and a DBA may already know what to do from previous experience setting up Postgres HA. The one-sentence reminder is: do not simply reboot the failed master without properly handling the recovery tasks.

In this blog we simply delete the Postgres data directory and restart the Postgres Governor process with “service governor start”, letting the ex-master Postgres re-sync all data from the current master.

# WARNING: this wipes the local data directory; run it only on the
# failed ex-master, never on the current master.
rm -rf /db/*
systemctl daemon-reload
service governor start

The ex-master should now be a slave (“tail -f /var/log/messages”):

Jul 29 08:49:06 pg-238343-1-25880159 bash: 2016-07-29 08:49:06,558 INFO: does not have lock
Jul 29 08:49:06 pg-238343-1-25880159 bash: 2016-07-29 08:49:06,573 INFO: Governor Running: no action.  i am a secondary and i am following a leader

Up to this point, we have seen a full cycle of Postgres master failover and recovery.


The Governor-based Postgres OneOps pack achieved several firsts in the OneOps application ecosystem:

  1. the first pack that supports deployment over both primary and secondary clouds;
  2. the first pack that seamlessly provides an HA failover solution between primary and secondary instances, with an integrated user experience and data/state consistency;
  3. the first pack that stitches together many other existing packs (Etcd, HAProxy) into a complex but transparent HA system, without “re-inventing the wheel”.

There will be ongoing improvements; for example, we may provide a knob for users to choose between synchronous and asynchronous replication (currently it is asynchronous only).
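For reference, in stock Postgres, switching a standby to synchronous replication boils down to a postgresql.conf setting on the master, along the lines of the sketch below. The standby name is a made-up example, and the pack does not expose this knob yet:

```
# Illustrative postgresql.conf fragment on the master; the standby
# application_name 'pg_slave_1' is an invented example.
synchronous_standby_names = 'pg_slave_1'   # wait for this standby to confirm
synchronous_commit = on                    # commits wait for the sync standby's flush
```

The trade-off is the usual one: synchronous replication removes the window for data loss on failover, but every commit then waits on a standby acknowledgement.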

