(Disclaimer: the blogs posted here represent only the authors’ own views; the OneOps project does not guarantee support or warranty for any code, tutorial, or documentation discussed here)
For Redis deployment and automation, there is already a well-known public Chef Redis cookbook. In this post I would like to show how transparently and easily it can be transplanted to deploy Redis on OneOps. More generally, I hope this post opens more avenues for bringing existing public DevOps best practices into the OneOps ecosystem with minimal effort.
As mentioned, the OneOps Redis cookbook was mostly mirrored from the well-known public Redis cookbook, so the two are 99.99% the same! The only difference is that the OneOps Redis cookbook is more self-contained: it does not reference other cookbooks.
For example, recipes/install.rb does not cross-reference the build-essential cookbook (as the public Redis cookbook does). Instead, recipes/_install_prereqs.rb installs “make automake gcc” from the Linux package repositories, which achieves a result similar to running the build-essential cookbook.
In addition, Redis deployment through OneOps currently uses cluster mode: a Redis cluster is created by running the redis-trib command on just one of the nodes. Please see the following piece of code for the slightly “tricky” cluster creation process:
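The cluster-creation step can be sketched roughly as follows. This is a hypothetical illustration, not the actual cookbook code: the IP addresses are the example nodes from later in this post, and in the real recipe the command would be wrapped in a Chef `execute` resource.

```ruby
# Hypothetical sketch of the one-node cluster-creation step.
# All six node addresses (3 masters + 3 slaves) are passed in one call;
# redis-trib decides the master/slave pairing itself.
nodes = %w[
  172.16.140.81:6379 172.16.143.128:6379 172.16.140.79:6379
  172.16.140.249:6379 172.16.140.252:6379 172.16.140.89:6379
]

# --replicas 1 gives each of the 3 masters exactly one slave.
# redis-trib prompts for confirmation, hence the piped "yes".
cmd = "echo yes | /usr/local/bin/redis-trib.rb create --replicas 1 #{nodes.join(' ')}"
puts cmd  # in the cookbook, run this via a Chef `execute` resource on ONE node only
```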
I guess the fundamental reason OneOps keeps cookbooks self-contained is that it wants to keep its own cookbook codebase thin and lightweight. But if we do want to reference a cookbook that is not available on OneOps, a temporary workaround is to copy that cookbook (and its dependencies) into the OneOps cookbook directory.
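The copy workaround might look like the following sketch. Both paths and the cookbook name are assumptions for illustration (the real OneOps circuit layout may differ), and a stand-in cookbook directory is created so the demo is self-contained:

```ruby
# Hypothetical sketch of the "copy the cookbook over" workaround.
# Paths and the cookbook name are examples, not the actual OneOps layout.
require "fileutils"

src = "/tmp/chef-cookbooks"                       # where community cookbooks live (assumed)
dst = "/tmp/oneops-circuit/components/cookbooks"  # assumed OneOps cookbook directory

# Stand-in cookbook so this demo runs; in practice you would git-clone it.
FileUtils.mkdir_p(File.join(src, "mycookbook", "recipes"))
FileUtils.mkdir_p(dst)

# Copy the cookbook; repeat for each cookbook it depends on.
FileUtils.cp_r(File.join(src, "mycookbook"), dst)
puts Dir.children(dst).sort.inspect
```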
Redis Deployment on OneOps
In the OneOps “Design” phase, choose the “Redis” pack. After creating the Redis design, you can click the “redisio” component to review some Redis attributes (Redis version 3.0 or above is recommended).
Add your local SSH key to the “user-app” component so that you can log directly into the Redis VMs after the deployment.
After saving the design, create a new environment with “Availability Mode” set to redundant and choose one cloud as the “primary cloud”.
By default, a Redis cluster with 6 VMs will be deployed: 3 VMs will serve as the masters, and the other 3 VMs will be slaves that replicate data from the 3 masters. The deployment plan will look like the following:
After deployment, the Redis cluster is up and running. We can validate this by checking the cluster members: log into any VM and use the redis-cli command to output all cluster members.
>> ssh firstname.lastname@example.org
-bash-4.2$ sudo -s
[root@redis-11075986-6-24531380 app]#
>> /usr/local/bin/redis-cli cluster nodes
xxxxc30d9 172.16.140.249:6379 slave xxxx13fb 0 1467312088802 4 connected
xxxxb03a 172.16.140.79:6379 master - 0 1467312090305 3 connected 10923-16383
xxxx588c 172.16.143.128:6379 master - 0 1467312088802 2 connected 5461-10922
xxxx13fb 172.16.140.81:6379 master - 0 1467312088301 1 connected 0-5460
xxxx546e 172.16.140.252:6379 slave xxxxb03a 0 1467312089804 6 connected
xxxx01c9 172.16.140.89:6379 myself,slave 3xxxx588c 0 0 5 connected
From the above output, we can see that the 3 master nodes evenly split the keyspace of 16384 hash slots (e.g. 0-5460), and each master serves the requests that fall into its slot range. Each slave replicates data from one master. Next, let’s verify how the masters and slaves provide data redundancy.
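The slot-to-master mapping can be reproduced by hand. Per the Redis Cluster specification, a key maps to slot CRC16(key) mod 16384, where CRC16 is the CRC-16/XMODEM variant. The following sketch computes the slot for the key "hello" used below:

```ruby
# How Redis Cluster maps a key to one of the 16384 hash slots:
# slot = CRC16(key) mod 16384, using CRC-16/XMODEM (poly 0x1021, init 0).
def crc16(data)
  crc = 0
  data.each_byte do |b|
    crc ^= b << 8
    8.times do
      crc = (crc & 0x8000).zero? ? (crc << 1) : ((crc << 1) ^ 0x1021)
      crc &= 0xFFFF
    end
  end
  crc
end

slot = crc16("hello") % 16384
puts slot  # lands in the 0-5460 range, i.e. the master serving that range
```

This matches the transcript below, where `set hello world` is redirected to 172.16.140.81, the master owning slots 0-5460.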
Verify the Redundancy of Redis Cluster
Put a key-value pair into the Redis cluster:
>> /usr/local/bin/redis-cli -c
127.0.0.1:6379> set hello world
-> Redirected to slot  located at 172.16.140.81:6379
OK
Get the value by the key:
172.16.140.81:6379> get hello
"world"
Now let’s shut down the master that stores the “hello world” key-value pair; in this example it is 172.16.140.81. Open a new terminal, SSH into 172.16.140.81, and run service redis@6379 stop to terminate the running Redis instance. Then go back to the first terminal and output all cluster members again:
>> /usr/local/bin/redis-cli cluster nodes
xxxxc30d9 172.16.140.249:6379 master xxxx13fb 0 1467312088802 4 connected
xxxxb03a 172.16.140.79:6379 master - 0 1467312090305 3 connected 10923-16383
xxxx588c 172.16.143.128:6379 master - 0 1467312088802 2 connected 5461-10922
xxxx13fb 172.16.140.81:6379 master,fail - 0 1467312088301 1 connected 0-5460
xxxx546e 172.16.140.252:6379 slave xxxxb03a 0 1467312089804 6 connected
xxxx01c9 172.16.140.89:6379 myself,slave 3xxxx588c 0 0 5 connected
From the above, the Redis instance on 172.16.140.81 has been marked as failed, while 172.16.140.249 has been promoted from slave to master to cover the Redis failure on 172.16.140.81.
Try to get the value by the key:
>> /usr/local/bin/redis-cli -c
127.0.0.1:6379> get hello
-> Redirected to slot  located at 172.16.140.249:6379
"world"
The value can still be read from the Redis cluster, and the machine serving that request is now 172.16.140.249.
The focus of this article is to demonstrate how easily a well-recognized public Chef cookbook can be transplanted into and integrated with OneOps, which may open opportunities for Chef users to migrate their existing Chef cookbooks and scripts onto OneOps.
This article does not discuss the operational benefits of running a Redis cluster on OneOps. This is mostly because OneOps adopted the public Redis cookbook, which does not have full-fledged operational support. However, OneOps specializes in operational excellence, e.g. auto-repair, auto-replace, and auto-scale, as introduced in the Cassandra OneOps pack. Building a highly resilient Redis deployment with strong operational support is future work.