Saturday, 25 April 2015
AWS - Setting up a VPC with public and private subnets (III)
This is a network diagram of our Liferay cluster: one ELB, two web servers configured as reverse proxies balancing traffic to the application servers, and an Amazon RDS instance as the database of choice. The Liferay document library will be stored in an S3 bucket.
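For illustration only, this is roughly what the reverse proxy side could look like on each web server. The posts don't name the web server software, so Apache httpd with mod_proxy_balancer and the app server IPs below are assumptions, not the actual configuration:

# /etc/httpd/conf.d/liferay-balancer.conf (sketch; app server IPs and ports are placeholders)
<Proxy balancer://liferaycluster>
    BalancerMember ajp://10.0.1.10:8009 route=node1
    BalancerMember ajp://10.0.1.11:8009 route=node2
    ProxySet stickysession=JSESSIONID
</Proxy>
ProxyPass / balancer://liferaycluster/
ProxyPassReverse / balancer://liferaycluster/

Sticky sessions keep each visitor on the same Liferay node, which matters as long as sessions are not replicated across the cluster.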
There are other factors to be considered in this network deployment; availability zones and their impact on our portal are one of them. If you are looking for help setting up a Liferay cluster with higher availability requirements, we will be happy to help (for a fistful of dollars, of course).
Wednesday, 15 April 2015
AWS - Setting up a VPC with public and private subnets (II)
We have made some more changes to our installation since our previous article.
First, the best way to avoid paying Elastic IP fees while the instances are stopped (and they will stay stopped most of the time, since the free tier only includes a total of 750 hours of computing time per month, shared among all instances) while still assigning a DNS name to the machine is to create a DDNS account at noip.com and set up its client. AWS provides this howto (with a couple of missing points):
# Install the noip client
[ec2-user@ip-10-0-0-130 ~]$ sudo yum install epel-release
[ec2-user@ip-10-0-0-130 ~]$ sudo yum-config-manager --enable epel
[ec2-user@ip-10-0-0-130 ~]$ sudo yum install -y noip

# Configure it
[ec2-user@ip-10-0-0-130 ~]$ sudo noip2 -C

# Set up noip as a startup service
[ec2-user@ip-10-0-0-130 ~]$ sudo chkconfig noip on
[ec2-user@ip-10-0-0-130 ~]$ sudo service noip start
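If you want to double-check that the client registered correctly, noip2 can report its own status. A quick sanity check (the hostname shown will be whatever you registered at noip.com):

# Show the running noip2 processes and the hostnames they are updating
[ec2-user@ip-10-0-0-130 ~]$ sudo noip2 -S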
As we are going to use an Elastic Load Balancer to balance traffic among our web servers, we have configured an additional Security Group and slightly modified the existing configuration (all internal traffic allowed for now):
NAT Instance
- INBOUND: ALLOW SSH (22) TRAFFIC FROM 0.0.0.0/0
- INBOUND: ALLOW ANY TRAFFIC FROM 10.0.0.0/16 (our VPC)
- OUTBOUND: ALLOW ANY TRAFFIC TO 0.0.0.0/0

Load Balancer
- INBOUND: ALLOW HTTP (80) TRAFFIC FROM 0.0.0.0/0
- OUTBOUND: ALLOW ANY TRAFFIC TO 0.0.0.0/0

Default SG
- INBOUND: ALLOW ANY TRAFFIC FROM 10.0.0.0/16
- OUTBOUND: ALLOW ANY TRAFFIC TO 10.0.0.0/16
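If you prefer the command line over the console, a rough sketch of the load balancer security group and the classic ELB itself with the AWS CLI could look like the following. The VPC, subnet and group IDs are placeholders and the ELB name is made up:

# Sketch: security group for the load balancer
$ aws ec2 create-security-group --group-name lb-sg --description "ELB HTTP access" --vpc-id vpc-xxxxxxxx
$ aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 80 --cidr 0.0.0.0/0

# Sketch: a classic ELB forwarding HTTP to the web servers
$ aws elb create-load-balancer --load-balancer-name liferay-elb \
    --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" \
    --subnets subnet-xxxxxxxx --security-groups sg-xxxxxxxx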
The next article will be delayed a while, until we sort out with Amazon Customer Service the absurd quota of only 2 simultaneous instances they have applied to our account.
Monday, 13 April 2015
AWS - Setting up a VPC with public and private subnets (in AWS Free Tier)
One of our clients asked us how easy it would be to build an auto-scaling Liferay cluster in AWS. The answer may seem simple, but it is far from it.
AWS doesn't allow multicast traffic between EC2 instances (each of their virtual servers running in EC2), even when those instances belong to the same subnet of a VPC. So Liferay's ClusterLink host autodiscovery won't work, and the only straightforward alternative is configuring dedicated unicast TCP communication between the nodes for ClusterLink, which requires all node IPs to be explicitly configured. And that makes auto scaling difficult.
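For context, a static unicast ClusterLink setup on Liferay 6.x roughly means pointing portal-ext.properties at a JGroups TCP stack and listing every node in TCPPING's initial_hosts. A minimal sketch, with placeholder IPs and assuming the stock tcp.xml shipped with JGroups:

# portal-ext.properties (sketch)
cluster.link.enabled=true
cluster.link.channel.properties.control=tcp.xml
cluster.link.channel.properties.transport.0=tcp.xml

# In tcp.xml every node has to be listed by hand, which is what breaks auto scaling:
#   <TCPPING initial_hosts="10.0.1.10[7800],10.0.1.11[7800]" port_range="1"/>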
After googling for a while we found that there are some options in newer versions of JGroups which could allow a simpler configuration for new cluster nodes.
As we haven't found any reference about this kind of configuration for Liferay... what better way to spend some spare time than a proof of concept?
The first step is setting up the test scenario:
- a VPC in AWS with public and private subnets
- two load-balanced web servers in the public subnet
- two Liferay nodes in the private subnet
with only one restriction: spend no money (thanks to the Amazon Free Tier).
In this article we will focus on the first step: setting up the network.
Our test scenario is perfectly described in the Amazon help. Obviously, they don't give many details on how to stay within the free tier. In fact, Amazon offers a VPC setup wizard, but the NAT instance it creates is not a free one.
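As a reference, creating the VPC and the two subnets by hand with the AWS CLI instead of the wizard could look like this. The CIDR blocks match the ones used below; the IDs AWS returns will obviously differ:

# Sketch: VPC with a public (10.0.0.0/24) and a private (10.0.1.0/24) subnet
$ aws ec2 create-vpc --cidr-block 10.0.0.0/16
$ aws ec2 create-subnet --vpc-id vpc-xxxxxxxx --cidr-block 10.0.0.0/24
$ aws ec2 create-subnet --vpc-id vpc-xxxxxxxx --cidr-block 10.0.1.0/24

# The public subnet also needs an Internet Gateway and a route pointing to it
$ aws ec2 create-internet-gateway
$ aws ec2 attach-internet-gateway --internet-gateway-id igw-xxxxxxxx --vpc-id vpc-xxxxxxxx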
The main problem for us is space. We'll need at least 5 servers (2 web servers, 2 app servers and a NAT instance), but all the images (AMIs) directly provided by Amazon are 8 GiB in size, so we would exceed the 30 GiB free-tier limit for EBS volumes.
Luckily we found an old and unique image (ami-6f3b465f) of a minimal Amazon Linux which is only 2 GiB, runs on HVM and whose root volume is EBS GP2, so it fits perfectly into a free t2.micro instance. I haven't done any checks on this AMI, so please keep that in mind before using it if you are worried about the security of your servers.
We have initially created five instances:
- 2 x t2.micro instances with 3 GiB of disk in 10.0.1.0/24 for the web servers.
- 2 x t2.micro instances with 6 GiB of disk in 10.0.1.0/24 for the app servers.
- 1 x t2.micro instance with 3 GiB of disk in 10.0.0.0/24 for the NAT server.
Only 21 GiB reserved for now, so we even have some space left for a third app server instance. No DBMS instance is reserved, as it can be created as an Amazon RDS instance, which comes with its own additional 20 GiB storage quota.
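For reference, launching one of these instances from that AMI with a resized root volume could be done with the AWS CLI roughly like this. The subnet ID and key name are placeholders, and the device name assumes the usual /dev/xvda root of Amazon Linux HVM images:

# Sketch: a 3 GiB t2.micro web server in the 10.0.1.0/24 subnet
$ aws ec2 run-instances --image-id ami-6f3b465f --instance-type t2.micro \
    --subnet-id subnet-xxxxxxxx --key-name key-file \
    --block-device-mappings '[{"DeviceName":"/dev/xvda","Ebs":{"VolumeSize":3,"VolumeType":"gp2"}}]'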
It is important to create the NAT instance with an assigned public IP, to avoid paying for an unused Elastic IP when the machine is not running. After granting ourselves access through the security group configuration, we only need to enable IP forwarding on the server.
# Access the EC2 NAT instance via its public IP using the .pem key pair (example from an OS X machine)
macstar:~ trenddevs$ ssh -A -i key-file.pem ec2-user@publicip

# Enable IPv4 forwarding right away, make it permanent and apply the changes
[ec2-user@ip-10-0-0-130 ~]$ sudo sysctl -w net.ipv4.ip_forward=1
[ec2-user@ip-10-0-0-130 ~]$ sudo vi /etc/sysctl.conf   # (set net.ipv4.ip_forward = 1 in the file)
[ec2-user@ip-10-0-0-130 ~]$ sudo service network restart

# Enable IP masquerading in iptables and make the rule persistent
[ec2-user@ip-10-0-0-130 ~]$ sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
[ec2-user@ip-10-0-0-130 ~]$ sudo service iptables save
The last step is disabling the Source/Destination Check in the AWS console, which would otherwise drop any traffic whose source or destination is not the instance itself.
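This can also be done from the command line; a one-liner sketch (the instance ID is a placeholder):

# Sketch: disable the Source/Destination Check on the NAT instance
$ aws ec2 modify-instance-attribute --instance-id i-xxxxxxxx --no-source-dest-check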
After all these changes, a new SSH jump to one of the internal network servers should let us check that they can ping www.google.com.
[ec2-user@ip-10-0-0-130 ~]$ ssh 10.0.1.159
Last login: Wed Apr 15 20:20:09 2015 from ip-10-0-0-130.us-west-2.compute.internal

       __|  __|_  )
       _|  (     /   Amazon Linux AMI
      ___|\___|___|

https://aws.amazon.com/amazon-linux-ami/2015.03-release-notes/
[ec2-user@ip-10-0-1-159 ~]$ ping www.google.com
PING www.google.com (173.194.33.147) 56(84) bytes of data.
64 bytes from sea09s17-in-f19.1e100.net (173.194.33.147): icmp_seq=1 ttl=52 time=7.66 ms
64 bytes from sea09s17-in-f19.1e100.net (173.194.33.147): icmp_seq=2 ttl=52 time=7.15 ms
Easy stuff for now. Next step: configuring the web servers.