Cloud Native Applications and VMware

After quite a bit of radio silence, I’m going to write about Cloud Native Applications and VMware’s approach to them.
After spending some time looking into container technologies with open source software, it’s nice to see VMware jumping on board and adding its enterprise vision, which is probably the piece missing from other solutions.
I will start by preparing a template for all the services I will install, and I will do it the VMware way by using PhotonOS, which I intend to use as a proof of concept for vSphere Integrated Containers (VIC), Photon Controller, Harbor and Admiral.
PhotonOS is a lightweight operating system designed specifically for running containerized applications; I have to say that after getting familiar with it I quite like its simplicity and the quick approach it takes to day-to-day activities.
First things first, you have to choose your deployment type; there are a few:

[Screenshot: PhotonOS deployment options]

I won’t describe the process as it’s pretty straightforward; I’ll just say that I manually installed PhotonOS from the ISO, choosing the Minimal install option.

After installing, we need the VM’s IP address, and we also need to allow root to SSH into the box:

ip addr     # show IP address info
vi /etc/ssh/sshd_config     # set "PermitRootLogin yes"
systemctl restart sshd     # restart the SSH daemon

Then SSH in as root and continue:

mkdir -p ~/.ssh && chmod 700 ~/.ssh     # sshd is picky about permissions on this directory
echo "your_key" >> ~/.ssh/authorized_keys     # paste your public key
chmod 600 ~/.ssh/authorized_keys
tdnf check-update
open-vm-tools.x86_64 10.0.5-12.ph1 photon-updates
nss.x86_64 3.25-1.ph1 photon-updates
shadow.x86_64 4.2.1-8.ph1 photon-updates
linux.x86_64 4.4.8-8.ph1 photon-updates
python-xml.x86_64 2.7.11-5.ph1 photon-updates
docker.x86_64 1.11.2-1.ph1 photon-updates
systemd.x86_64 228-25.ph1 photon-updates
python2-libs.x86_64 2.7.11-5.ph1 photon-updates
python2.x86_64 2.7.11-5.ph1 photon-updates
procps-ng.x86_64 3.3.11-3.ph1 photon-updates
filesystem.x86_64 1.0-8.ph1 photon-updates
openssl.x86_64 1.0.2h-3.ph1 photon-updates
systemd.x86_64 228-26.ph1 photon-updates
systemd.x86_64 228-30.ph1 photon-updates
python2-libs.x86_64 2.7.11-7.ph1 photon-updates
python-xml.x86_64 2.7.11-7.ph1 photon-updates
python2.x86_64 2.7.11-7.ph1 photon-updates
curl.x86_64 7.47.1-3.ph1 photon-updates
pcre.x86_64 8.39-1.ph1 photon-updates
openssl.x86_64 1.0.2h-5.ph1 photon-updates
openssh.x86_64 7.1p2-4.ph1 photon-updates
openssl.x86_64 1.0.2j-1.ph1 photon-updates
iptables.x86_64 1.6.0-5.ph1 photon-updates
systemd.x86_64 228-31.ph1 photon-updates
initramfs.x86_64 1.0-4.1146888.ph1 photon-updates
glibc.x86_64 2.22-9.ph1 photon-updates
open-vm-tools.x86_64 10.0.5-13.ph1 photon-updates
rpm.x86_64 4.11.2-11.ph1 photon-updates
linux.x86_64 4.4.26-1.ph1 photon-updates
initramfs.x86_64 1.0-5.11330561.ph1 photon-updates
python2.x86_64 2.7.11-8.ph1 photon-updates
curl.x86_64 7.47.1-4.ph1 photon-updates
bzip2.x86_64 1.0.6-6.ph1 photon-updates
tzdata.noarch 2016h-1.ph1 photon-updates
expat.x86_64 2.2.0-1.ph1 photon-updates
python2-libs.x86_64 2.7.11-8.ph1 photon-updates
python-xml.x86_64 2.7.11-8.ph1 photon-updates
docker.x86_64 1.12.1-1.ph1 photon-updates
cloud-init.x86_64 0.7.6-12.ph1 photon-updates
bridge-utils.x86_64 1.5-3.ph1 photon-updates
linux.x86_64 4.4.31-2.ph1 photon-updates
systemd.x86_64 228-32.ph1 photon-updates
curl.x86_64 7.51.0-1.ph1 photon-updates
initramfs.x86_64 1.0-5.11343362.ph1 photon-updates
cloud-init.x86_64 0.7.6-13.ph1 photon-updates
open-vm-tools.x86_64 10.1.0-1.ph1 photon-updates
initramfs.x86_64 1.0-5.11353601.ph1 photon-updates
cloud-init.x86_64 0.7.6-14.ph1 photon-updates
vim.x86_64 7.4-6.ph1 photon-updates
linux.x86_64 4.4.35-1.ph1 photon-updates
libtasn1.x86_64 4.7-3.ph1 photon-updates
tdnf upgrade -y
Upgrading:
vim x86_64 7.4-6.ph1 1.93 M
tzdata noarch 2016h-1.ph1 1.52 M
systemd x86_64 228-32.ph1 28.92 M
shadow x86_64 4.2.1-8.ph1 3.85 M
rpm x86_64 4.11.2-11.ph1 4.28 M
python2 x86_64 2.7.11-8.ph1 1.82 M
python2-libs x86_64 2.7.11-8.ph1 15.30 M
python-xml x86_64 2.7.11-8.ph1 318.67 k
procps-ng x86_64 3.3.11-3.ph1 1.04 M
pcre x86_64 8.39-1.ph1 960.35 k
openssl x86_64 1.0.2j-1.ph1 5.23 M
openssh x86_64 7.1p2-4.ph1 4.23 M
open-vm-tools x86_64 10.1.0-1.ph1 2.45 M
nss x86_64 3.25-1.ph1 3.87 M
libtasn1 x86_64 4.7-3.ph1 161.48 k
iptables x86_64 1.6.0-5.ph1 1.46 M
linux x86_64 4.4.35-1.ph1 44.76 M
initramfs x86_64 1.0-5.11353601.ph1 11.49 M
glibc x86_64 2.22-9.ph1 50.97 M
filesystem x86_64 1.0-8.ph1 7.14 k
expat x86_64 2.2.0-1.ph1 242.58 k
docker x86_64 1.12.1-1.ph1 82.59 M
curl x86_64 7.51.0-1.ph1 1.24 M
cloud-init x86_64 0.7.6-14.ph1 1.93 M
bzip2 x86_64 1.0.6-6.ph1 1.65 M
bridge-utils x86_64 1.5-3.ph1 36.61 k

Total installed size: 272.23 M

Downloading:
bridge-utils 19201 100%
bzip2 526008 100%
cloud-init 509729 100%
curl 898898 100%
docker 25657821 100%
expat 92851 100%
filesystem 16357 100%
glibc 19396323 100%
initramfs 11983289 100%
linux 18887362 100%
iptables 416848 100%
libtasn1 98060 100%
nss 1591172 100%
open-vm-tools 912998 100%
openssh 1853448 100%
openssl 3192392 100%
pcre 383441 100%
procps-ng 458368 100%
python-xml 86471 100%
python2-libs 5651168 100%
python2 741755 100%
rpm 1761294 100%
shadow 2002202 100%
systemd 11856941 100%
tzdata 633502 100%
vim 1046120 100%
Testing transaction
Running transaction
Creating ldconfig cache

Complete!

After that I rebooted, since the “linux” package, which is the kernel itself, was updated.
You can check the kernel version loaded with:

uname -a     # show the loaded kernel version

More customizations:

vi /boot/grub2/grub.cfg     # edit "set timeout=1"
iptables --list     # show the iptables config, which by default allows only SSH inbound
vi /etc/systemd/scripts/iptables     # edit the iptables config file

I like to enable ICMP inbound; the rule I added is the last one before the end of the file:

[Screenshot: /etc/systemd/scripts/iptables with the ICMP rule added]
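
For reference, the rule is something along these lines (a sketch; adapt it to the chain names in Photon’s default script):

iptables -A INPUT -p icmp -j ACCEPT     # allow inbound ICMP (ping)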

systemctl restart iptables
iptables --list     # check the running configuration includes ICMP inbound
systemctl enable docker     # start Docker at boot
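
To verify Docker actually works, a quick smoke test (this assumes the VM can reach Docker Hub):

systemctl start docker     # start the daemon now, without waiting for a reboot
docker run --rm hello-world     # pull and run the standard test image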

In the coming days I will follow up with VIC, Photon Controller, Harbor and Admiral, using this PhotonOS VM as a template.

Building a highly available load balancing solution with HAProxy

When you start scaling your environment you will most likely need a load balancer, but then your load balancer becomes a single point of failure, which is one of the things you always want to avoid.

How do we get around that? Simply by scaling the load balancing solution as well. Most of the time in a production environment you will see load balancers in pairs for redundancy. This is possible even with HAProxy, using a piece of software called keepalived.

Keepalived is not a tool specific to HAProxy, but it does the job for us, since it makes it possible to share an IP address between our two load balancers. It does this using VRRP: ownership of the IP address is decided by your keepalived configuration, so you end up with an active/passive architecture.

If you took the time to read the article by Luca Dell’Oca that I linked in the previous HAProxy post, you will already know how to build this.

First install keepalived and edit the config file:

yum install keepalived
vi /etc/keepalived/keepalived.conf

This is my config file, which you’ll notice is pretty much the same as Luca’s:

global_defs {
   notification_email {
     failover@myvirtualife.net
     sysadmin@myvirtualife.net
   }
   notification_email_from loadbalancer@myvirtualife.net
   smtp_server 192.168.100.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}

vrrp_script chk_haproxy {
   script "killall -0 haproxy"
   interval 1                     # check every second
   weight 2                       # add 2 points of prio if OK
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 101
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 12345678
    }

    virtual_ipaddress {
        172.16.110.5
    }

    track_script {
        chk_haproxy
    }
}

After configuring keepalived, let’s make a few more changes and then check whether the shared IP is active:

echo "net.ipv4.ip_nonlocal_bind = 1" >> /etc/sysctl.conf     # allow binding to the shared IP even when this node doesn't hold it
sysctl -p     # apply the sysctl change
service keepalived start
chkconfig keepalived on     # start keepalived at boot
ip addr sh eth0     # check whether the shared IP is active

[Screenshot: ip addr output showing eth0 with both IP addresses]

172.16.110.2 is the IP address of this load balancer.
172.16.110.5 is the shared IP address managed by keepalived.

Now you have to set up another HAProxy VM and configure it the same way; just remember that in its keepalived config file ‘priority’ must be set to ‘100’, as sketched below.
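
Only the priority is mentioned above; it is also common practice to set the standby node’s state to BACKUP, so on the second VM the vrrp_instance would differ only in these lines (a sketch under that assumption):

vrrp_instance VI_1 {
    state BACKUP                   # standby node
    priority 100                   # lower than the MASTER's 101
    ...                            # everything else stays the same
}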

To test that it works, just hard power off the VM that holds the shared IP and check whether communication still works.
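
A simple way to watch the failover, using the example addresses above:

ping 172.16.110.5     # from a client, leave this running during the test
ip addr sh eth0     # on the surviving node, the shared IP should now show up here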

You obviously also have to install and configure HAProxy on both VMs, and remember to keep the two configurations aligned if you make any changes.

Most of the time I disable iptables, but Luca does a better job than me and shows how to configure iptables to happily get along with both keepalived and HAProxy, so if you intend to leave iptables on, go check his post too.

Balancing multiple Horizon Workspace gateway-va with HAProxy

When working with Horizon Workspace, the first component you will scale out to multiple instances is probably the gateway-va, since it is the access point for all users and you want to make sure it’s always available for connections.

In this case you need a load balancer to direct users across all the gateway-va instances in your environment; I wrote about commercial and open source load balancers, and how to build one with HAProxy, in this post.

I’m going to show you how I configure it for Horizon Workspace, but remember that since I learned about HAProxy only relatively recently from Luca Dell’Oca, my configuration is just the way I do it and not necessarily the best, so use the comments if you want to contribute.

#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------

global
log 127.0.0.1 local2 info
chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
maxconn 4000
user haproxy
group haproxy
daemon
stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
mode http
log global
option httplog
option dontlognull
option http-server-close
option forwardfor except 127.0.0.0/8
option redispatch
option accept-invalid-http-request
retries 3
timeout http-request 60s
timeout queue 30m
timeout connect 1800s
timeout client 30m
timeout server 30m
timeout http-keep-alive 10s
timeout check 10s
maxconn 3000
listen stats :9000
stats realm Haproxy\ Statistics
stats uri /stats

#---------------------------------------------------------------------
# Redirect to secured
#---------------------------------------------------------------------
frontend unsecured
bind :80
redirect scheme https if !{ ssl_fc }

#---------------------------------------------------------------------
# frontend secured
#---------------------------------------------------------------------
frontend front
bind :443 ssl crt /etc/haproxy/reverseproxy.pem
mode http

acl workspace hdr_beg(host) -i workspace.myvirtualife.net
use_backend workspace if workspace

#---------------------------------------------------------------------
# balancing between the various backends
#---------------------------------------------------------------------
backend workspace
mode http
server workspace1 192.168.110.10:443 weight 1 check port 443 inter 2000 rise 2 fall 5 ssl
server workspace2 192.168.110.11:443 weight 1 check port 443 inter 2000 rise 2 fall 5 ssl

Try adding another gateway-va and experiment, to test HAProxy as a load balancer; you can use this article if you want to know how to do it.

There are a few more things worth noting:

  • timeouts are really long here, otherwise users would experience disconnects, since this is the kind of web app that stays open for a long time;
  • on port 9000 of the HAProxy host you will find statistics, for example at “lb.yourcompany.yourdomain:9000/stats”, with numbers about the state of connections and backends, problems, etc. (a quick check from the shell is sketched after this list);
  • “log 127.0.0.1 local2 info” is necessary if you want logging enabled, which is so important when troubleshooting problems; there is a lot on how to read the logs in the HAProxy documentation.
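
For example, to confirm the stats page answers (the hostname is the placeholder from the bullet above):

curl -s http://lb.yourcompany.yourdomain:9000/stats | head     # should print the beginning of the HTML stats page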

If you intend to use an SSL certificate as in my configuration, know that the PEM file has to contain the certificate chain followed by the private key, like this:

-----BEGIN CERTIFICATE-----
-----END CERTIFICATE-----
-----BEGIN RSA PRIVATE KEY-----
-----END RSA PRIVATE KEY-----
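
One way to build that file (the input file names here are hypothetical):

cat server.crt intermediate.crt server.key > /etc/haproxy/reverseproxy.pem     # certificate, chain, then key
chmod 600 /etc/haproxy/reverseproxy.pem     # the private key is in there, keep it locked down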

To make logging work and write to a separate file instead of putting everything in “/var/log/messages”, edit your “/etc/rsyslog.conf” file and make sure these lines are present:

# Provides UDP syslog reception
$ModLoad imudp
$UDPServerRun 514

# HAProxy
local2.* /var/log/haproxy.log
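
Then restart rsyslog so the new UDP listener and the haproxy rule take effect (on CentOS 6):

service rsyslog restart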

How to build a load balancer with HAProxy

If you’ve been reading my previous articles you must have noticed that Horizon Workspace often carries the hidden assumption that you need, or already have in place, a load balancer.

Load balancers are usually hardware appliances placed in front of your workloads to distribute the load across multiple backend machines delivering the same service. The reason you want to do that is to give your service performance and availability as it grows.

Horizon Workspace is no different, and since it’s pretty easy to run multiple gateway-va instances for redundancy and scalability, you are going to need a load balancer.

I don’t want to get into much detail about how many vendors are out there and what is good and bad about each, nor what I see in production environments; what I am going to say is that:

  • load balancers can be an expensive combination of hardware and software;
  • nowadays they do a whole bunch of things besides just load balancing connections, like SSL offloading, caching, content inspection, etc.;
  • since virtualization has become so mainstream, we now have software-only load balancing solutions delivered as virtual appliances.

Some time ago I just happened to bump into a nice blog post by Luca Dell’Oca about a piece of software called HAProxy.

HAProxy is open source software that does HTTP/TCP load balancing with a lot of nice features, including SSL offloading; HAProxy also seems to be used in production in very large environments with no problems at all. Check their website for references.

At the time I was looking for a way to load balance a VMware View environment, and after reading Luca’s post about how to do it with HAProxy I became a real fanboy. If a customer has no load balancing solution, or needs to load balance only a small subset of services, I always go with HAProxy now, because I found it to be very reliable and it delivers great performance while consuming very few resources. What more can you ask for?

The documentation is broad and precise, which is always good when it comes to learning your way through things.

Enough evangelizing HAProxy; I’ll just get down to business and show you how I build my load balancers.

First, let’s lay out some goals and assumptions:

  • I like to use CentOS for this, but it’s not mandatory;
  • I’m a big fan of RPMs, but I prefer to build HAProxy from source;
  • in this post I will provide a basic installation just to get started;
  • in future posts I will publish the specific configs I use for Horizon Workspace, and describe how to deploy more than one HAProxy virtual appliance for redundancy;
  • by no means is this the best way to do it; it’s just what I do;
  • by no means am I discouraging you from buying commercial load balancers; always remember that you are the only support for solutions you build!

What I do is download a CentOS ISO for a minimal install; it’s good for this task and it’s a small download. Pick x86 or x64, whatever. Just install it as you normally would, connect it to the internet, and install VMware Tools as well.

For this tutorial I used the latest CentOS, which at the time of writing is 6.4.

After getting a ‘root’ prompt this is what I do:

yum install wget openssl-devel pcre-devel make gcc -y     # install prerequisites
wget http://haproxy.1wt.eu/download/1.5/src/devel/haproxy-1.5-dev19.tar.gz     # download the package
tar xzvf haproxy-1.5-dev19.tar.gz     # extract the archive
cd haproxy-1.5-dev19     # enter the extracted directory
make TARGET=linux2628 CPU=i686 USE_OPENSSL=1 USE_ZLIB=1 USE_PCRE=1     # I compile it with compression and SSL support; use CPU=x86_64 for CentOS x64
make install     # install
cp /usr/local/sbin/haproxy* /usr/sbin/     # copy binaries to /usr/sbin
cp /root/haproxy-1.5-dev19/examples/haproxy.init /etc/init.d/haproxy     # copy the init script to /etc/init.d
chmod 755 /etc/init.d/haproxy     # set permissions on the init script
mkdir /etc/haproxy     # create the directory where the config file must reside
cp /root/haproxy-1.5-dev19/examples/examples.cfg /etc/haproxy/haproxy.cfg     # copy the example config file
mkdir /var/lib/haproxy     # create the directory for the stats file
touch /var/lib/haproxy/stats     # create the stats file
useradd haproxy     # I like to make haproxy run as a dedicated user
service haproxy check     # check the configuration file is valid
service haproxy start     # start haproxy to verify it is working
chkconfig haproxy on     # set haproxy to start with the VM

The main reason I like to build HAProxy myself is that when I was learning it, I had trouble making SSL offloading work even though I was sure I had configured it correctly. It turns out most RPMs out there are built without SSL support, so I started building it myself. This way I can always use the latest version, and even though the current latest is a development version, I can tell you it’s pretty stable.
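
Since we build with USE_OPENSSL=1 and USE_ZLIB=1, it’s worth confirming the resulting binary really has them compiled in:

haproxy -vv     # the build options listed should mention OpenSSL and zlib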

Don’t forget to disable all unneeded services/daemons; most of them are not needed to run a load balancer.
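
On CentOS 6 a quick review looks like this (the postfix line is just an example; decide for yourself what your VM doesn’t need):

chkconfig --list | grep ':on'     # list services that start at boot
chkconfig postfix off     # example: disable a service you don't need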

If you intend to leave the firewall on, go check Luca’s post which will give you a good insight about how to configure iptables to work with HAProxy.

Don’t bother disabling SELinux; it seems to get along with HAProxy pretty well.
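
If you want to double-check the SELinux state while HAProxy runs fine:

getenforce     # prints Enforcing, Permissive, or Disabled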

Have fun with your shiny new (and free) load balancer.

How to enable SSH root access in Horizon Workspace Virtual Machines

If you followed my previous posts you know how often, and how useful, it is to SSH into a Horizon Workspace virtual appliance, and most of the time the commands you issue need to be run as ‘root’.

By default root access is not allowed via SSH, and in order to get a ‘root’ prompt you have to SSH in as the user ‘sshuser’ (with the same password as ‘root’) and then run:

su -

This can be annoying, in particular when using SCP to copy files, because you are limited to the ‘sshuser’ home directory, which forces us to log back in as ‘root’ to move the files we just copied.

There is, in fact, a way to enable ‘root’ access straight over SSH to make things faster.

WARNING: I’M DESCRIBING THIS PROCEDURE FOR THE SAKE OF LEARNING, BUT BY NO MEANS DO I SUGGEST DOING THIS IN PRODUCTION, BECAUSE IT WILL MOST LIKELY VIOLATE YOUR SECURITY POLICY.

  1. Connect to each VA console via vSphere Client or WebClient
  2. Select “Login” and enter as “root”
  3. vi /etc/ssh/sshd_config
  4. Find “PermitRootLogin”
  5. Change “PermitRootLogin” from “no” to “yes” and save the file
  6. service sshd restart

The “/etc/ssh/sshd_config” file should look like this:

#HostKeys for protocol version 2
#HostKey /etc/ssh/ssh_host_rsa_key
#HostKey /etc/ssh/ssh_host_dsa_key

# Lifetime and size of ephemeral version 1 server key
#KeyRegenerationInterval 1h
#ServerKeyBits 1024

# Logging
# obsoletes QuietMode and FascistLogging
#SyslogFacility AUTH
#LogLevel INFO

# Authentication:

#LoginGraceTime 2m
PermitRootLogin yes	# default is no
#StrictModes yes
#MaxAuthTries 6
#MaxSessions 10

#RSAAuthentication yes
#PubkeyAuthentication yes
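
If you prefer to script steps 3 to 6, something like this should work (a sketch; verify how the directive appears in the stock file before relying on the pattern):

sed -i 's/^#\?PermitRootLogin .*/PermitRootLogin yes/' /etc/ssh/sshd_config     # flip the directive to yes
service sshd restart     # restart sshd to apply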

Now you can log in as ‘root’ directly from SSH.
