Building vSphere Integrated Containers from source

In my last blog post I introduced vSphere Integrated Containers (VIC), but to keep things quick and simple I used the binaries from Bintray instead of building from source.

Building from source is pretty simple: go to the GitHub page, clone the repo, and the “README.md” file will give you all the info you need.

I’ll be using the same PhotonOS VM I used in my previous post:

tdnf install git -y
git clone https://github.com/vmware/vic
cd vic
cat README.md

The best way to do this is the containerized approach: it spins up a container with all the prerequisite packages needed to build the VIC executables, so you don’t have to install anything yourself and your system isn’t modified at all. A pretty clean way to go.

systemctl enable docker
systemctl start docker
docker run -v $(pwd):/go/src/github.com/vmware/vic -w /go/src/github.com/vmware/vic golang:1.7 make all

A pretty long output will follow and I have no intention of pasting the whole thing here; just follow the instructions if you care to see it 🙂

Just keep in mind that if you don’t give your VM enough RAM the build process will fail because gcc is not capable of allocating enough memory; my VM had 2 GB of RAM and that was good enough.

The build process takes only a few minutes to complete.

After that you will find the executables in the “bin” folder, and from there you can use the commands from my previous post.

cd bin
./vic-machine-linux create --target administrator@vsphere.local:password@vcenterFQDN/Datacenter --tls-cname vch --image-store astore --public-network LAN --bridge-network Docker-Bridge --no-tlsverify --force
./vic-machine-linux delete --target administrator@vsphere.local:password@vcenterFQDN/Datacenter --force
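
If you just want to make sure the freshly built binary runs before pointing it at vCenter, the version subcommand (present in the builds I tried) is a harmless smoke test:

./vic-machine-linux version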

 

vSphere Integrated Containers

So you have been playing with Docker containers for a while, but once you have several containers running across many VMs it gets difficult to manage them, or even to remember which container runs on which VM.

VMware’s answer to this problem is called vSphere Integrated Containers (VIC).

With VIC your Docker hosts (the VMs running the containers) are no longer a black box: they show up as VMs in your vCenter Server, exposing every property a VM holds.

You can get VIC here: https://vmware.github.io/vic/ but you will need to build it from source.

Alternatively, you can download the binaries from Bintray: https://bintray.com/vmware/vic/

I will use the template I created in a previous post to deploy and manage VIC on my vCenter.

Deploying from a template doesn’t seem to work with PhotonOS, so the customizations need to be handled manually; I just cloned the template into a new VM named “VIC”.

Let’s customize our PhotonOS:

vi /etc/hostname # edit hostname
cd /etc/systemd/network/
cp 10-dhcp-en.network 10-static-en.network
vi 10-static-en.network # set static ip address as follows
[Match]
Name=eth0

[Network]
Address=192.168.110.11/24
Gateway=192.168.110.254
DNS=192.168.110.10
chmod 644 10-static-en.network
systemctl restart systemd-networkd
tdnf install tar wget -y
wget https://bintray.com/vmware/vic/download_file?file_path=vic_0.8.0.tar.gz
mv download_file\?file_path\=vic_0.8.0.tar.gz vic_0.8.0.tar.gz
tar xzvf vic_0.8.0.tar.gz
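
The archive extracts into a vic/ directory; if memory serves it contains the vic-machine binaries for Linux, Windows and macOS plus the appliance and bootstrap ISOs, but a quick listing will confirm what you got:

ls vic/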

Now SSH to each ESXi host that will run VIC and add a firewall rule so that VIC will not get blocked:

vi /etc/vmware/firewall/vch.xml
<!-- Firewall configuration information -->
<ConfigRoot>
<service id='0042'>
<id>VCH</id>
<rule id='0000'>
<direction>outbound</direction>
<protocol>tcp</protocol>
<porttype>dst</porttype>
<port>2377</port>
</rule>
<enabled>true</enabled>
<required>true</required>
</service>
</ConfigRoot>
esxcli network firewall refresh
esxcli network firewall ruleset list

You should be able to see a rule called “VCH” enabled.
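
If the ruleset list is long, filtering for the new rule is quicker:

esxcli network firewall ruleset list | grep VCH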

Now you have to create a Virtual Distributed PortGroup; I called mine “Docker-Bridge”.

Back on the VIC virtual machine, change directory to the extracted VIC executables:

cd vic/
./vic-machine-linux create --target administrator@vsphere.local:password@vcenterFQDN/Datacenter --tls-cname vch --image-store vsanDatastore --public-network LAN --bridge-network Docker-Bridge --no-tlsverify --force

Since I’m using self-signed certificates in my lab, I had to work around some certificate-checking problems:
--no-tlsverify: disables verification of client certificates, so no certificate-based client authentication
--force: skips the certificate check on the destination vCenter; without it you would need to supply the certificate thumbprint
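
If you’d rather not use --force, vic-machine can take the vCenter certificate thumbprint explicitly instead; a sketch, with the thumbprint value obviously a placeholder for your own:

./vic-machine-linux create --target administrator@vsphere.local:password@vcenterFQDN/Datacenter --tls-cname vch --image-store vsanDatastore --public-network LAN --bridge-network Docker-Bridge --no-tlsverify --thumbprint <vCenter-certificate-thumbprint>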

You should get an output similar to this:

INFO[2016-12-24T18:27:47Z] ### Installing VCH ####
WARN[2016-12-24T18:27:47Z] Using administrative user for VCH operation - use --ops-user to improve security (see -x for advanced help)
INFO[2016-12-24T18:27:47Z] Loaded server certificate virtual-container-host/server-cert.pem
WARN[2016-12-24T18:27:47Z] Configuring without TLS verify - certificate-based authentication disabled
INFO[2016-12-24T18:27:47Z] Validating supplied configuration
INFO[2016-12-24T18:27:47Z] vDS configuration OK on "Docker-Bridge"
INFO[2016-12-24T18:27:47Z] Firewall status: ENABLED on "/Datacenter/host/Cluster/esxi.vmware.lab"
INFO[2016-12-24T18:27:47Z] Firewall configuration OK on hosts:
INFO[2016-12-24T18:27:47Z] "/Datacenter/host/Cluster/esxi.vmware.lab"
INFO[2016-12-24T18:27:47Z] License check OK on hosts:
INFO[2016-12-24T18:27:47Z] "/Datacenter/host/Cluster/esxi.vmware.lab"
INFO[2016-12-24T18:27:47Z] DRS check OK on:
INFO[2016-12-24T18:27:47Z] "/Datacenter/host/Cluster/Resources"
INFO[2016-12-24T18:27:47Z]
INFO[2016-12-24T18:27:47Z] Creating virtual app "virtual-container-host"
INFO[2016-12-24T18:27:47Z] Creating appliance on target
INFO[2016-12-24T18:27:47Z] Network role "management" is sharing NIC with "client"
INFO[2016-12-24T18:27:47Z] Network role "public" is sharing NIC with "client"
INFO[2016-12-24T18:27:49Z] Uploading images for container
INFO[2016-12-24T18:27:49Z] "bootstrap.iso"
INFO[2016-12-24T18:27:49Z] "appliance.iso"
INFO[2016-12-24T18:27:55Z] Waiting for IP information
INFO[2016-12-24T18:28:06Z] Waiting for major appliance components to launch
INFO[2016-12-24T18:28:06Z] Checking VCH connectivity with vSphere target
INFO[2016-12-24T18:28:06Z] vSphere API Test: https://vcenter.vmware.lab vSphere API target responds as expected
INFO[2016-12-24T18:28:09Z] Initialization of appliance successful
INFO[2016-12-24T18:28:09Z]
INFO[2016-12-24T18:28:09Z] VCH Admin Portal:
INFO[2016-12-24T18:28:09Z] https://192.168.110.57:2378
INFO[2016-12-24T18:28:09Z]
INFO[2016-12-24T18:28:09Z] Published ports can be reached at:
INFO[2016-12-24T18:28:09Z] 192.168.110.57
INFO[2016-12-24T18:28:09Z]
INFO[2016-12-24T18:28:09Z] Docker environment variables:
INFO[2016-12-24T18:28:09Z] DOCKER_HOST=192.168.110.57:2376
INFO[2016-12-24T18:28:09Z]
INFO[2016-12-24T18:28:09Z] Environment saved in virtual-container-host/virtual-container-host.env
INFO[2016-12-24T18:28:09Z]
INFO[2016-12-24T18:28:09Z] Connect to docker:
INFO[2016-12-24T18:28:09Z] docker -H 192.168.110.57:2376 --tls info
INFO[2016-12-24T18:28:09Z] Installer completed successfully

Now you can query the Docker API endpoint, in my case:

docker -H 192.168.110.57:2376 --tls info

If you see this you are good to go:

Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: v0.8.0-7315-c8ac999
Storage Driver: vSphere Integrated Containers v0.8.0-7315-c8ac999 Backend Engine
VolumeStores:
vSphere Integrated Containers v0.8.0-7315-c8ac999 Backend Engine: RUNNING
VCH mhz limit: 10376 Mhz
VCH memory limit: 49.85 GiB
VMware Product: VMware vCenter Server
VMware OS: linux-x64
VMware OS version: 6.0.0
Plugins:
Volume:
Network: bridge
Swarm:
NodeID:
Is Manager: false
Node Address:
Security Options:
Operating System: linux-x64
OSType: linux-x64
Architecture: x86_64
CPUs: 10376
Total Memory: 49.85 GiB
Name: virtual-container-host
ID: vSphere Integrated Containers
Docker Root Dir:
Debug Mode (client): false
Debug Mode (server): false
Registry: registry-1.docker.io

 

You might get an error similar to “Error response from daemon: client is newer than server (client API version: 1.24, server API version: 1.23)”; this happens because the Docker client installed in PhotonOS can be newer than the Docker API endpoint.

You can fix this by pinning the client to the older API version:

echo "export DOCKER_API_VERSION=1.23" >> ~/.bash_profile
source ~/.bash_profile

If you try again now you should be good.
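
If you’d rather not persist the variable, the same pin can be applied to a single invocation using standard shell syntax:

DOCKER_API_VERSION=1.23 docker -H 192.168.110.57:2376 --tls info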

You can run standard Docker commands using the API endpoint we just created, so for example we can run Apache in a container like so:

docker -H 192.168.110.57:2376 --tls run -d --name "Apache" -p 80:80 httpd
docker -H 192.168.110.57:2376 --tls ps

[screenshot: output of the docker run and docker ps commands]

If we try to point a browser to “192.168.110.57”:

[screenshot: the Apache default page served by the container]
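
The same check can be done from any shell with curl; a quick sketch (httpd’s stock page is the classic “It works!”):

curl http://192.168.110.57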

Let’s take a look at vCenter to see what has been created:

[screenshot: vCenter inventory showing the virtual-container-host and the container VM]

You can see the Docker API endpoint represented by the VM called “virtual-container-host” but also the container itself!

As you can see we have information about what is running, the container ID, the internal container IP address, etc.

You can even go ahead and edit the virtual hardware as if it were a regular VM!

[screenshot: the container VM’s virtual hardware settings in vCenter]

Notice how it’s using the vdPortGroup that we created earlier.

To clean up:

docker -H 192.168.110.57:2376 --tls stop Apache
./vic-machine-linux delete --target administrator@vsphere.local:fidelio@vcenter.vmware.lab/Datacenter --force

And this is the coolest way ever to use containers with vSphere!

Note: since a vdPortGroup is mandatory, you need a vdSwitch, which means you must be running the Enterprise Plus edition of vSphere.

Update: VIC is now GA with vSphere 6.5 for all Enterprise Plus users.

Are you starting or advancing your career in IT?

When Neil asked me for a piece of advice on the subject, I was in the middle of conversations with customers about how our world will change in the near future.

A lot of these conversations revolve around the introduction of containerized applications, which is what I am going to write about as soon as I have enough time, hopefully soon.

Anyway, Neil did a tremendous job collecting opinions from experienced field engineers and IT experts, so if you want to read my answer and what many others had to say, go check his blog: http://www.flackbox.com/best-it-career-advice/

I think we should thank Neil for this huge source of information, which is useful for everybody, not only for newcomers to the IT field.

Cloud Native Applications and VMware

After quite a bit of radio silence, I’m going to write about Cloud Native Applications and VMware’s approach to them.
After spending some time looking into container technologies with open source software, it’s nice to see VMware jumping on board and adding its enterprise vision, which is probably the piece missing from other solutions.
I will start by preparing a template for all the services I am going to install, and I will do it the VMware way using PhotonOS, which I intend to use as a proof of concept for vSphere Integrated Containers (VIC), Photon Controller, Harbor and Admiral.
PhotonOS is a lightweight operating system built specifically for running containerized applications; I have to say that after getting familiar with it I quite like its simplicity and its quick approach to day-to-day tasks.
First things first: you have to choose your deployment type, and there are a few:

[screenshot: PhotonOS deployment options]

I won’t describe the process as it’s pretty straightforward; I’ll just say that I manually installed PhotonOS from the ISO, choosing the Minimal install option.

After the install we need the IP address, and we also need to allow root to SSH into the box:

ip add     # show ip address info
vi /etc/ssh/sshd_config     # set "PermitRootLogin yes"
systemctl restart sshd     # restart the ssh daemon
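
If you prefer not to open vi, the same change can be scripted with sed; a sketch that assumes the stock PermitRootLogin line is present (commented out or not):

sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
systemctl restart sshd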

Then ssh as root and continue:

mkdir .ssh
echo "your_key" >> .ssh/authorized_keys
tdnf check-update
open-vm-tools.x86_64 10.0.5-12.ph1 photon-updates
nss.x86_64 3.25-1.ph1 photon-updates
shadow.x86_64 4.2.1-8.ph1 photon-updates
linux.x86_64 4.4.8-8.ph1 photon-updates
python-xml.x86_64 2.7.11-5.ph1 photon-updates
docker.x86_64 1.11.2-1.ph1 photon-updates
systemd.x86_64 228-25.ph1 photon-updates
python2-libs.x86_64 2.7.11-5.ph1 photon-updates
python2.x86_64 2.7.11-5.ph1 photon-updates
procps-ng.x86_64 3.3.11-3.ph1 photon-updates
filesystem.x86_64 1.0-8.ph1 photon-updates
openssl.x86_64 1.0.2h-3.ph1 photon-updates
systemd.x86_64 228-26.ph1 photon-updates
systemd.x86_64 228-30.ph1 photon-updates
python2-libs.x86_64 2.7.11-7.ph1 photon-updates
python-xml.x86_64 2.7.11-7.ph1 photon-updates
python2.x86_64 2.7.11-7.ph1 photon-updates
curl.x86_64 7.47.1-3.ph1 photon-updates
pcre.x86_64 8.39-1.ph1 photon-updates
openssl.x86_64 1.0.2h-5.ph1 photon-updates
openssh.x86_64 7.1p2-4.ph1 photon-updates
openssl.x86_64 1.0.2j-1.ph1 photon-updates
iptables.x86_64 1.6.0-5.ph1 photon-updates
systemd.x86_64 228-31.ph1 photon-updates
initramfs.x86_64 1.0-4.1146888.ph1 photon-updates
glibc.x86_64 2.22-9.ph1 photon-updates
open-vm-tools.x86_64 10.0.5-13.ph1 photon-updates
rpm.x86_64 4.11.2-11.ph1 photon-updates
linux.x86_64 4.4.26-1.ph1 photon-updates
initramfs.x86_64 1.0-5.11330561.ph1 photon-updates
python2.x86_64 2.7.11-8.ph1 photon-updates
curl.x86_64 7.47.1-4.ph1 photon-updates
bzip2.x86_64 1.0.6-6.ph1 photon-updates
tzdata.noarch 2016h-1.ph1 photon-updates
expat.x86_64 2.2.0-1.ph1 photon-updates
python2-libs.x86_64 2.7.11-8.ph1 photon-updates
python-xml.x86_64 2.7.11-8.ph1 photon-updates
docker.x86_64 1.12.1-1.ph1 photon-updates
cloud-init.x86_64 0.7.6-12.ph1 photon-updates
bridge-utils.x86_64 1.5-3.ph1 photon-updates
linux.x86_64 4.4.31-2.ph1 photon-updates
systemd.x86_64 228-32.ph1 photon-updates
curl.x86_64 7.51.0-1.ph1 photon-updates
initramfs.x86_64 1.0-5.11343362.ph1 photon-updates
cloud-init.x86_64 0.7.6-13.ph1 photon-updates
open-vm-tools.x86_64 10.1.0-1.ph1 photon-updates
initramfs.x86_64 1.0-5.11353601.ph1 photon-updates
cloud-init.x86_64 0.7.6-14.ph1 photon-updates
vim.x86_64 7.4-6.ph1 photon-updates
linux.x86_64 4.4.35-1.ph1 photon-updates
libtasn1.x86_64 4.7-3.ph1 photon-updates
tdnf upgrade -y
Upgrading:
vim x86_64 7.4-6.ph1 1.93 M
tzdata noarch 2016h-1.ph1 1.52 M
systemd x86_64 228-32.ph1 28.92 M
shadow x86_64 4.2.1-8.ph1 3.85 M
rpm x86_64 4.11.2-11.ph1 4.28 M
python2 x86_64 2.7.11-8.ph1 1.82 M
python2-libs x86_64 2.7.11-8.ph1 15.30 M
python-xml x86_64 2.7.11-8.ph1 318.67 k
procps-ng x86_64 3.3.11-3.ph1 1.04 M
pcre x86_64 8.39-1.ph1 960.35 k
openssl x86_64 1.0.2j-1.ph1 5.23 M
openssh x86_64 7.1p2-4.ph1 4.23 M
open-vm-tools x86_64 10.1.0-1.ph1 2.45 M
nss x86_64 3.25-1.ph1 3.87 M
libtasn1 x86_64 4.7-3.ph1 161.48 k
iptables x86_64 1.6.0-5.ph1 1.46 M
linux x86_64 4.4.35-1.ph1 44.76 M
initramfs x86_64 1.0-5.11353601.ph1 11.49 M
glibc x86_64 2.22-9.ph1 50.97 M
filesystem x86_64 1.0-8.ph1 7.14 k
expat x86_64 2.2.0-1.ph1 242.58 k
docker x86_64 1.12.1-1.ph1 82.59 M
curl x86_64 7.51.0-1.ph1 1.24 M
cloud-init x86_64 0.7.6-14.ph1 1.93 M
bzip2 x86_64 1.0.6-6.ph1 1.65 M
bridge-utils x86_64 1.5-3.ph1 36.61 k

Total installed size: 272.23 M

Downloading:
bridge-utils 19201 100%
bzip2 526008 100%
cloud-init 509729 100%
curl 898898 100%
docker 25657821 100%
expat 92851 100%
filesystem 16357 100%
glibc 19396323 100%
initramfs 11983289 100%
linux 18887362 100%
iptables 416848 100%
libtasn1 98060 100%
nss 1591172 100%
open-vm-tools 912998 100%
openssh 1853448 100%
openssl 3192392 100%
pcre 383441 100%
procps-ng 458368 100%
python-xml 86471 100%
python2-libs 5651168 100%
python2 741755 100%
rpm 1761294 100%
shadow 2002202 100%
systemd 11856941 100%
tzdata 633502 100%
vim 1046120 100%
Testing transaction
Running transaction
Creating ldconfig cache

Complete!

After that I rebooted, since the “linux” package was updated and that is the kernel package.
You can check the kernel version loaded with:

uname -a
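
Based on the package list above the kernel should now be at 4.4.35; the short form of the command prints just the release string (your exact build suffix may differ):

uname -r     # e.g. 4.4.35-1.ph1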

More customizations:

vi /boot/grub2/grub.cfg     # edit "set timeout=1"
iptables --list     # show iptables config, which by default allows only SSH inbound
vi /etc/systemd/scripts/iptables     # edit iptables config file

I like to enable ICMP inbound; you can find the rule I added as the last one before the end of the file:

[screenshot: /etc/systemd/scripts/iptables with the added ICMP rule]
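
For reference, the rule I’m talking about is the standard ICMP accept; a sketch assuming the default PhotonOS layout of that file (plain iptables commands):

iptables -A INPUT -p icmp -j ACCEPT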

systemctl restart iptables
iptables --list     # check running configuration includes ICMP inbound
systemctl enable docker     # start docker at boot

In the coming days I will follow up with VIC, Photon Controller, Harbor and Admiral, using this PhotonOS VM as a template.

vExpert 2016 Award

[vExpert 2016 badge]

Thanks to VMware for confirming my vExpert!

Dear old ESXTOP aka How to schedule ESXTOP batch mode

Recently I had to record the overnight activity of a specific VM running on a specific host for troubleshooting purposes, because vCenter data just wasn’t enough for that.

Using a number of blog posts from Duncan Epping and others (there were many, I don’t even have the links anymore) I’ve put together my personal guide to this task, because every time it feels like I’m starting from scratch, so I decided to document it.

First, I created a script with the specific run time and collection settings I needed:

vi <path>/record-esxtop.sh
esxtop -b -a -d 2 -n 3600 > /esxtopoutput.csv

OR

esxtop -b -a -d 2 -n 3600 | gzip -9c > /esxtopoutput.csv.gz

(-d = sampling interval in seconds, -n = number of iterations; the total run time is d*n seconds, so 2 * 3600 = 7200 seconds, i.e. two hours, in this example)
(the second version writes a gzip-compressed copy of the output)

Let’s make this script executable:

chmod +x <path>/record-esxtop.sh

Then, since recent versions of ESXi have no crontab command, you’ll need to edit the cron file for the user that will run the script:

vi /var/spool/cron/crontabs/root

Then add a line similar to this (with this schedule the script runs at 04:30 every morning):

30 4 * * * <path>/record-esxtop.sh

Now kill crond and reload:

cat /var/run/crond.pid                        # note the current crond PID
ps | grep <crond_PID>                         # confirm crond is running
kill -HUP <crond_PID>                         # stop crond
ps | grep <crond_PID>                         # confirm it is gone
/usr/lib/vmware/busybox/bin/busybox crond     # start crond again
cat /var/run/crond.pid                        # note the new PID
ps | grep <new_crond_PID>                     # confirm it is running again
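
The same restart as a pair of one-liners, assuming the PID file is up to date:

kill -HUP $(cat /var/run/crond.pid)
/usr/lib/vmware/busybox/bin/busybox crond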

Now your script will get executed and you’ll end up with a file full of data, but how do you read it?
It’s dead simple: open PerfMon on Windows, clear all running counters, then right-click “Performance Monitor” and, in the “Source” tab, add your CSV file (you need to unpack it first); in the “Data” tab you will then be able to pick the metrics and VMs you want to add to your graph.

It would be nice to have a tool that does the same on Mac but I couldn’t find one and I had to use a Windows VM; if you know a Mac alternative for PerfMon please add a comment.

This procedure is supported by VMware as per KB 103346.

How to backup, restore and schedule vCenter Server Appliance vPostgres Database

Now that we are moving away from SQL Express in favor of vPostgres for the vCenter simple install on Windows, and since vPostgres is the default database engine for the (not so simple) vCSA install, I thought it would be nice to learn how to back up and restore this database.

Since these tasks are easier to perform on Windows, and since there are already many guides on the Internet for that, I will focus on the vCSA: I think more and more production environments (small and big) will be using the vCSA now that it’s just as functional as vCenter on Windows, if not more so (more on this in another post…).

You will find all the instructions for both the Windows and vCSA versions of vCenter in KB2091961; more importantly, you will also find there the Python scripts that work all the magic for you, so grab the “linux_backup_restore.zip” file and copy it to the vCSA:

scp linux_backup_restore.zip root@<vcenter>:/tmp

For the copy to work you must have previously changed the root user’s shell in “/etc/passwd” from “/bin/appliancesh” to “/bin/bash”.
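
For reference, after the edit the root entry in /etc/passwd should end with /bin/bash and look roughly like this:

root:x:0:0:root:/root:/bin/bash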

Then:

unzip linux_backup_restore.zip
chmod +x backup_lin.py
mkdir /tmp/linux_backup_restore/backups
python /tmp/linux_backup_restore/backup_lin.py -f /tmp/linux_backup_restore/backups/VCDB.bak

All you will see when the backup is completed is:

Backup completed successfully.

You should see the backup file now:

vcenter:/tmp/linux_backup_restore/backups # ls -lha
total 912K
drwx------ 2 root root 4.0K Jun 3 19:41 .
drwx------ 3 root root 4.0K Jun 3 19:28 ..
-rw------- 1 root root 898K Jun 3 19:29 VCDB.bak

At this point I removed a folder in my vCenter VM and Templates view, then I logged off the vSphere WebClient and started a restore:

service vmware-vpxd stop
service vmware-vdcs stop
python /tmp/linux_backup_restore/restore_lin.py -f /tmp/linux_backup_restore/backups/VCDB.bak
service vmware-vpxd start
service vmware-vdcs start

I logged back in the WebClient and my folder was back, so mission accomplished.

Now, how do I schedule this thing? With good old crontab, but first I will write a script that runs the backup and names the backup file after the day of the week, so I get a 7-day rotation:

#!/bin/bash
_dow="$(date +'%A')"
_bak="VCDB_${_dow}.bak"
python /tmp/linux_backup_restore/backup_lin.py -f /tmp/linux_backup_restore/backups/${_bak}

I saved it as “backup_vcdb” and made it executable with “chmod +x backup_vcdb”.

Now, to schedule it, run “crontab -e” and enter a single line like this:

0 23 * * * /tmp/linux_backup_restore/backup_vcdb

This basically means that the system will execute the script every day of every week of every year at 11pm.
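
A quick crontab -l afterwards confirms the entry was saved:

crontab -l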

After the crontab job runs you should see a new backup with a name of this sort:

vcenter:/tmp/linux_backup_restore/backups # ls -lha
total 1.8M
drwx------ 2 root root 4.0K Jun 3 19:46 .
drwx------ 3 root root 4.0K Jun 3 19:28 ..
-rw------- 1 root root 898K Jun 3 19:29 VCDB.bak
-rw------- 1 root root 900K Jun 3 19:46 VCDB_Wednesday.bak

You will also have the log files of these backups in “/var/mail/root”.

Enjoy your new backup routine 🙂
