One of our clients asked us how easy it would be to build an auto scaling Liferay cluster in AWS. The answer may seem simple, but it is far from it.
AWS doesn't allow multicast traffic between EC2 instances (each of the virtual servers running in EC2), even when those instances belong to the same subnet of a VPC. So Liferay's ClusterLink host autodiscovery won't work, and the only straightforward alternative is configuring a dedicated unicast TCP channel between nodes for ClusterLink, which requires all node IPs to be explicitly configured. And that makes auto scaling difficult.
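To picture the problem, a unicast setup means every node ships with a static peer list. Below is a minimal sketch assuming a Liferay 6.2 node using the stock JGroups tcp.xml; the file paths, IPs and ports are placeholders and the property names are quoted from memory, so check them against your version before relying on them.

# portal-ext.properties on every node: point ClusterLink at a unicast TCP JGroups configuration
cluster.link.enabled=true
cluster.link.channel.properties.control=/custom_jgroups/tcp.xml
cluster.link.channel.properties.transport.0=/custom_jgroups/tcp.xml

# setenv.sh on every node: TCPPING needs the complete, static list of member IPs,
# so every new auto scaled node means touching this list on all the machines
CATALINA_OPTS="$CATALINA_OPTS -Djgroups.bind_addr=10.0.1.10"
CATALINA_OPTS="$CATALINA_OPTS -Djgroups.tcpping.initial_hosts=10.0.1.10[7800],10.0.1.11[7800]"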
After googling for a while we found that newer versions of JGroups offer some options that could allow a simpler configuration for new cluster nodes.
As we haven't found any reference to this kind of configuration for Liferay... what better way of spending some spare time than a proof of concept?
The first step is setting up the test scenario:
- configure a VPC in AWS with public and private networks
- two load-balanced web servers in the public network
- two Liferay nodes in the private network
with only one restriction: spend no money (thanks to the Amazon Free Tier).
In this article we will focus on the first step: setting up the network.
Our test scenario is described perfectly in the Amazon help. Obviously they don't give many details on how to stay within the free tier; in fact Amazon offers a VPC setup wizard, but the NAT instance it creates is not a free one.
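For reference, the same layout can also be built by hand with the AWS CLI instead of the wizard, which keeps us in control of what gets created. This is only a sketch assuming an already configured CLI; the resource IDs are placeholders for the values returned by the previous calls.

# Create the VPC plus a public and a private subnet (placeholder IDs)
aws ec2 create-vpc --cidr-block 10.0.0.0/16
aws ec2 create-subnet --vpc-id vpc-xxxxxxxx --cidr-block 10.0.0.0/24   # public subnet
aws ec2 create-subnet --vpc-id vpc-xxxxxxxx --cidr-block 10.0.1.0/24   # private subnet
# Attach an Internet gateway so the public subnet can reach the outside world
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-xxxxxxxx --vpc-id vpc-xxxxxxxx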
The main problem for us is space. We'll need at least 5 servers (2 web servers, 2 app servers and a NAT instance), but all the images (AMIs) provided directly by Amazon are 8 GiB in size, so five of them would already take 40 GiB and blow past the 30 GiB free-tier limit for EBS volumes.
Luckily we found an old and unique image (ami-6f3b465f) of a minimal Amazon Linux which is only 2 GiB, runs on HVM and has an EBS gp2 root volume, so it fits perfectly into a free t2.micro instance. We haven't done any checks on the AMI, so please keep that in mind before using it if you are worried about the security of your servers.
We have initially created five instances:
- 2 x t2.micro instances with 3 GiB each in the public subnet (10.0.0.0/24) for the web servers.
- 2 x t2.micro instances with 6 GiB each in the private subnet (10.0.1.0/24) for the app servers.
- 1 x t2.micro instance with 3 GiB in the public subnet (10.0.0.0/24) for the NAT server.
Only 21 GiB reserved for now, so we even have some room left to create a third app server instance. No DBMS instance is reserved, as it can be created as an Amazon RDS instance with its own additional 20 GiB free-tier quota.
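As an illustration, launching one of these machines from that AMI with a trimmed-down root volume looks more or less like this; it is only a sketch, and the key name, subnet and security group IDs are placeholders.

# Launch a t2.micro from the 2 GiB AMI with a 3 GiB gp2 root volume (placeholder IDs)
aws ec2 run-instances --image-id ami-6f3b465f --instance-type t2.micro \
  --key-name key-file --subnet-id subnet-xxxxxxxx --security-group-ids sg-xxxxxxxx \
  --block-device-mappings '[{"DeviceName":"/dev/xvda","Ebs":{"VolumeSize":3,"VolumeType":"gp2"}}]'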
It is important to create the NAT instance with an automatically assigned public IP, to avoid paying for an unused Elastic IP while the machine is not running. After granting ourselves access through the security group configuration (sketched below), we only need to enable IP forwarding on the server.
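As a reference, the SSH rule can be added from the CLI roughly like this; the group ID and the source address are placeholders for your own values.

# Allow SSH to the NAT instance from our own IP only (placeholder ID and address)
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 22 --cidr 203.0.113.10/32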
# Access the EC2 NAT instance via its public IP using the .pem key pair (example from an OS X machine)
macstar:~ trenddevs$ ssh -A -i key-file.pem ec2-user@publicip

# Enable IPv4 forwarding right away, make it permanent and apply the changes
[ec2-user@ip-10-0-0-130 ~]$ sudo sysctl -w net.ipv4.ip_forward=1
[ec2-user@ip-10-0-0-130 ~]$ sudo vi /etc/sysctl.conf   # set net.ipv4.ip_forward=1 in the file
[ec2-user@ip-10-0-0-130 ~]$ sudo service network restart

# Enable IP masquerading in iptables and make the rule persistent
[ec2-user@ip-10-0-0-130 ~]$ sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
[ec2-user@ip-10-0-0-130 ~]$ sudo service iptables save

# Optionally enable the no-ip dynamic DNS client at boot and start it,
# so the changing public IP stays reachable under a fixed hostname
[ec2-user@ip-10-0-0-130 ~]$ sudo chkconfig noip on
[ec2-user@ip-10-0-0-130 ~]$ sudo service noip start
The last step is disabling the Source/Destination Check in the AWS console: by default it makes the instance drop traffic whose source or destination IP is not its own, which is precisely the traffic a NAT instance has to forward.
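The same change can also be scripted, which will be handy once instances start coming and going; the instance ID below is a placeholder.

# Disable the source/destination check on the NAT instance (placeholder instance ID)
aws ec2 modify-instance-attribute --instance-id i-xxxxxxxx --no-source-dest-check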
After all these changes, a new SSH jump to one of the internal network servers should let us check that they can ping www.google.com.
[ec2-user@ip-10-0-0-130 ~]$ ssh 10.0.1.159
Last login: Wed Apr 15 20:20:09 2015 from ip-10-0-0-130.us-west-2.compute.internal

       __|  __|_  )
       _|  (     /   Amazon Linux AMI
      ___|\___|___|

https://aws.amazon.com/amazon-linux-ami/2015.03-release-notes/
[ec2-user@ip-10-0-1-159 ~]$ ping www.google.com
PING www.google.com (173.194.33.147) 56(84) bytes of data.
64 bytes from sea09s17-in-f19.1e100.net (173.194.33.147): icmp_seq=1 ttl=52 time=7.66 ms
64 bytes from sea09s17-in-f19.1e100.net (173.194.33.147): icmp_seq=2 ttl=52 time=7.15 ms
Easy stuff for now. Next step: configuring the web servers.