Building vSphere Integrated Containers from source

In the last blog post I introduced vSphere Integrated Containers (VIC), but to keep things quick and simple I used binaries from Bintray instead of building from source.

If you want to build from source it’s pretty simple: go to the GitHub page and clone the repo; the “README.md” file will give you all the info you need.

I’ll be using the same PhotonOS VM I used in my previous post:

tdnf install git -y
git clone https://github.com/vmware/vic
cd vic
cat README.md

The best way to do this is to take advantage of the containerized approach: it spins up a container with all the prerequisite packages needed to build the VIC executables, so you don’t have to install anything and your system is left untouched. A pretty clean way to go.

systemctl enable docker
systemctl start docker
docker run -v $(pwd):/go/src/github.com/vmware/vic -w /go/src/github.com/vmware/vic golang:1.7 make all

A pretty long output will follow, and I have no intention of pasting the whole thing here; just follow the instructions if you want to see it for yourself. :)

Just keep in mind that if you don’t give your VM enough RAM the build will fail because gcc cannot allocate enough memory; my VM had 2 GB of RAM and that was enough.
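If you want to double-check the memory available to the VM before kicking off the build, free will tell you:

free -m     # total memory in MB; roughly 2048 in my case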

The build process takes only a few minutes to complete.

After that you can find the executables in the “bin” folder, and from there you can use the commands from my previous post.

cd bin
./vic-machine-linux create --target administrator@vsphere.local:password@vcenterFQDN/Datacenter --tls-cname vch --image-store astore --public-network LAN --bridge-network Docker-Bridge --no-tlsverify --force
./vic-machine-linux delete --target administrator@vsphere.local:password@vcenterFQDN/Datacenter --force

 

vSphere Integrated Containers

So you have been playing with containers for a while now using Docker, but once you have many containers running across many VMs it becomes difficult to manage them, or even to remember which container runs on which VM.

VMware’s answer to this problem is called vSphere Integrated Containers (VIC).

With VIC your Docker hosts (the VMs running the containers) are no longer a black box: they show up as VMs in your vCenter Server, exposing every property a VM holds.

You can get VIC here: https://vmware.github.io/vic/ but you will need to build it from source.

Otherwise you can download the binaries from Bintray: https://bintray.com/vmware/vic/

I will use the template I created in a previous post to deploy and manage VIC on my vCenter.

Deploying from template doesn’t seem to work with PhotonOS, so the customizations need to be handled manually; I simply cloned the VM into a new VM named “VIC”.

Let’s customize our PhotonOS:

vi /etc/hostname # edit hostname
cd /etc/systemd/network/
cp 10-dhcp-en.network 10-static-en.network
vi 10-static-en.network # set static ip address as follows
[Match]
Name=eth0

[Network]
Address=192.168.110.11/24
Gateway=192.168.110.254
DNS=192.168.110.10
chmod 644 10-static-en.network
systemctl restart systemd-networkd
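# Optional sanity check: confirm the static address and gateway were applied (eth0 as configured above)
ip addr show eth0
ip route show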
tdnf install tar wget -y
wget https://bintray.com/vmware/vic/download_file?file_path=vic_0.8.0.tar.gz
mv download_file\?file_path\=vic_0.8.0.tar.gz vic_0.8.0.tar.gz
tar xzvf vic_0.8.0.tar.gz

Now SSH to each ESXi host that will run VIC and add a firewall rule so that VIC will not get blocked:

vi /etc/vmware/firewall/vch.xml
<!-- Firewall configuration information -->
<ConfigRoot>
  <service id='0042'>
    <id>VCH</id>
    <rule id='0000'>
      <direction>outbound</direction>
      <protocol>tcp</protocol>
      <porttype>dst</porttype>
      <port>2377</port>
    </rule>
    <enabled>true</enabled>
    <required>true</required>
  </service>
</ConfigRoot>
esxcli network firewall refresh
esxcli network firewall ruleset list

You should be able to see a rule called “VCH” enabled.
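To check just that rule without scrolling through the whole ruleset (exact output formatting can vary between ESXi builds), you can filter the list:

esxcli network firewall ruleset list | grep -i vch     # the VCH ruleset should show as enabled (true)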

Now you have to create a Virtual Distributed PortGroup; I called mine “Docker-Bridge”.
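You can create it from the vSphere Web Client, or, if you prefer the command line, something along these lines with govc (the CLI from the vmware/govmomi project) should also do it; “DSwitch” is just a placeholder for the name of your distributed switch:

export GOVC_URL='https://vcenterFQDN/sdk'
export GOVC_USERNAME='administrator@vsphere.local'
export GOVC_PASSWORD='password'
export GOVC_INSECURE=1     # accept the self-signed vCenter certificate
govc dvs.portgroup.add -dvs DSwitch -nports 8 Docker-Bridge

Either way, note the port group name, since it will be passed to vic-machine as --bridge-network.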

Back on the VIC virtual machine, change directory to where the VIC executables were extracted:

cd vic/
./vic-machine-linux create --target administrator@vsphere.local:password@vcenterFQDN/Datacenter --tls-cname vch --image-store vsanDatastore --public-network LAN --bridge-network Docker-Bridge --no-tlsverify --force

Since I’m using self-signed certificates in my lab I had to work around some certificate-checking problems:
--no-tlsverify: disables client-side certificates for authentication
--force: disables the check on the destination vCenter certificate, otherwise you would need to pass its thumbprint
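If you would rather not use --force, vic-machine can take the vCenter certificate thumbprint explicitly via --thumbprint; one way to grab it, assuming openssl is available where you run the command, is:

echo | openssl s_client -connect vcenterFQDN:443 2>/dev/null | openssl x509 -noout -fingerprint -sha1     # prints the SHA1 thumbprint to pass to --thumbprint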

You should get an output similar to this:

INFO[2016-12-24T18:27:47Z] ### Installing VCH ####
WARN[2016-12-24T18:27:47Z] Using administrative user for VCH operation - use --ops-user to improve security (see -x for advanced help)
INFO[2016-12-24T18:27:47Z] Loaded server certificate virtual-container-host/server-cert.pem
WARN[2016-12-24T18:27:47Z] Configuring without TLS verify - certificate-based authentication disabled
INFO[2016-12-24T18:27:47Z] Validating supplied configuration
INFO[2016-12-24T18:27:47Z] vDS configuration OK on "Docker-Bridge"
INFO[2016-12-24T18:27:47Z] Firewall status: ENABLED on "/Datacenter/host/Cluster/esxi.vmware.lab"
INFO[2016-12-24T18:27:47Z] Firewall configuration OK on hosts:
INFO[2016-12-24T18:27:47Z] "/Datacenter/host/Cluster/esxi.vmware.lab"
INFO[2016-12-24T18:27:47Z] License check OK on hosts:
INFO[2016-12-24T18:27:47Z] "/Datacenter/host/Cluster/esxi.vmware.lab"
INFO[2016-12-24T18:27:47Z] DRS check OK on:
INFO[2016-12-24T18:27:47Z] "/Datacenter/host/Cluster/Resources"
INFO[2016-12-24T18:27:47Z]
INFO[2016-12-24T18:27:47Z] Creating virtual app "virtual-container-host"
INFO[2016-12-24T18:27:47Z] Creating appliance on target
INFO[2016-12-24T18:27:47Z] Network role "management" is sharing NIC with "client"
INFO[2016-12-24T18:27:47Z] Network role "public" is sharing NIC with "client"
INFO[2016-12-24T18:27:49Z] Uploading images for container
INFO[2016-12-24T18:27:49Z] "bootstrap.iso"
INFO[2016-12-24T18:27:49Z] "appliance.iso"
INFO[2016-12-24T18:27:55Z] Waiting for IP information
INFO[2016-12-24T18:28:06Z] Waiting for major appliance components to launch
INFO[2016-12-24T18:28:06Z] Checking VCH connectivity with vSphere target
INFO[2016-12-24T18:28:06Z] vSphere API Test: https://vcenter.vmware.lab vSphere API target responds as expected
INFO[2016-12-24T18:28:09Z] Initialization of appliance successful
INFO[2016-12-24T18:28:09Z]
INFO[2016-12-24T18:28:09Z] VCH Admin Portal:
INFO[2016-12-24T18:28:09Z] https://192.168.110.57:2378
INFO[2016-12-24T18:28:09Z]
INFO[2016-12-24T18:28:09Z] Published ports can be reached at:
INFO[2016-12-24T18:28:09Z] 192.168.110.57
INFO[2016-12-24T18:28:09Z]
INFO[2016-12-24T18:28:09Z] Docker environment variables:
INFO[2016-12-24T18:28:09Z] DOCKER_HOST=192.168.110.57:2376
INFO[2016-12-24T18:28:09Z]
INFO[2016-12-24T18:28:09Z] Environment saved in virtual-container-host/virtual-container-host.env
INFO[2016-12-24T18:28:09Z]
INFO[2016-12-24T18:28:09Z] Connect to docker:
INFO[2016-12-24T18:28:09Z] docker -H 192.168.110.57:2376 --tls info
INFO[2016-12-24T18:28:09Z] Installer completed successfully

Now you can query the Docker API endpoint; in my case:

docker -H 192.168.110.57:2376 --tls info

If you see this you are good to go:

Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: v0.8.0-7315-c8ac999
Storage Driver: vSphere Integrated Containers v0.8.0-7315-c8ac999 Backend Engine
VolumeStores:
vSphere Integrated Containers v0.8.0-7315-c8ac999 Backend Engine: RUNNING
VCH mhz limit: 10376 Mhz
VCH memory limit: 49.85 GiB
VMware Product: VMware vCenter Server
VMware OS: linux-x64
VMware OS version: 6.0.0
Plugins:
Volume:
Network: bridge
Swarm:
NodeID:
Is Manager: false
Node Address:
Security Options:
Operating System: linux-x64
OSType: linux-x64
Architecture: x86_64
CPUs: 10376
Total Memory: 49.85 GiB
Name: virtual-container-host
ID: vSphere Integrated Containers
Docker Root Dir:
Debug Mode (client): false
Debug Mode (server): false
Registry: registry-1.docker.io

 

You might get an error similar to “Error response from daemon: client is newer than server (client API version: 1.24, server API version: 1.23)”; this happens because the Docker client installed in PhotonOS can be newer than the Docker API endpoint.

You can fix this by pinning the client to an older API version:

echo "export DOCKER_API_VERSION=1.23" >> ~/.bash_profile
source ~/.bash_profile

If you try again now you should be good.
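To confirm the pinned version is being picked up, “docker version” against the endpoint prints both the client and the server API versions:

docker -H 192.168.110.57:2376 --tls version     # the client API version should now match the server (1.23 in my case)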

You can run standard Docker commands against the API endpoint we just created; for example, we can run Apache in a container like so:

docker -H 192.168.110.57:2376 --tls run -d --name "Apache" -p 80:80 httpd
docker -H 192.168.110.57:2376 --tls ps

[Screenshot: docker ps showing the running “Apache” container]

If we try to point a browser to “192.168.110.57”:

[Screenshot: the default Apache “It works!” page in the browser]
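If you prefer to verify from the shell rather than a browser, a quick curl against the published port should return the default “It works!” page that httpd ships with:

curl -s http://192.168.110.57     # should print the default httpd index page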

Let’s take a look at vCenter to see what has been created:

[Screenshot: vCenter inventory showing the “virtual-container-host” vApp and the container VM]

You can see the Docker API endpoint, represented by the VM called “virtual-container-host”, but also the container itself!

As you can see, we have information about what is running: the container ID, the internal container IP address, and so on.

You can even go ahead and edit the virtual hardware as if it were a regular VM!

[Screenshot: editing the container VM’s virtual hardware in vCenter]

Notice how it’s using the vdPortGroup that we created earlier.

To clean up:

docker -H 192.168.110.57:2376 --tls stop Apache
./vic-machine-linux delete --target administrator@vsphere.local:fidelio@vcenter.vmware.lab/Datacenter --force

And this is the coolest way ever to use containers with vSphere!

Note: just remember that since a vdPortGroup is mandatory, you need a vdSwitch, and that means running the Enterprise Plus edition of vSphere.

Update: VIC is now GA with vSphere 6.5 for all Enterprise Plus users.

Cloud Native Applications and VMware

After quite a bit of radio silence I’m going to write about Cloud Native Applications and VMware’s approach to them.
After spending some time looking into container technologies with open source software, it’s nice to see VMware getting on board and adding its enterprise vision, which is probably the piece missing from other solutions.
I will start by preparing a template for all the services I will install, and I will do it the VMware way by using PhotonOS, which I intend to use as a proof of concept for vSphere Integrated Containers (VIC), Photon Controller, Harbor and Admiral.
PhotonOS is a lightweight operating system built specifically for running containerized applications; I have to say that after getting familiar with it I quite like its simplicity and how quickly it lets you get through day-to-day tasks.
First things first, you have to choose your deployment type; there are a few:

[Screenshot: PhotonOS deployment options]

I won’t describe the process as it’s pretty straightforward; I’ll just say that I manually installed PhotonOS from the ISO, choosing the Minimal install option.

After installing, we need the IP address and we also need to allow root to SSH into the box:

ip addr     # show ip address info
vi /etc/ssh/sshd_config     # set PermitRootLogin yes
systemctl restart sshd     # restart ssh daemon

Then ssh as root and continue:

mkdir .ssh
echo "your_key" >> .ssh/authorized_keys
tdnf check-update
open-vm-tools.x86_64 10.0.5-12.ph1 photon-updates
nss.x86_64 3.25-1.ph1 photon-updates
shadow.x86_64 4.2.1-8.ph1 photon-updates
linux.x86_64 4.4.8-8.ph1 photon-updates
python-xml.x86_64 2.7.11-5.ph1 photon-updates
docker.x86_64 1.11.2-1.ph1 photon-updates
systemd.x86_64 228-25.ph1 photon-updates
python2-libs.x86_64 2.7.11-5.ph1 photon-updates
python2.x86_64 2.7.11-5.ph1 photon-updates
procps-ng.x86_64 3.3.11-3.ph1 photon-updates
filesystem.x86_64 1.0-8.ph1 photon-updates
openssl.x86_64 1.0.2h-3.ph1 photon-updates
systemd.x86_64 228-26.ph1 photon-updates
systemd.x86_64 228-30.ph1 photon-updates
python2-libs.x86_64 2.7.11-7.ph1 photon-updates
python-xml.x86_64 2.7.11-7.ph1 photon-updates
python2.x86_64 2.7.11-7.ph1 photon-updates
curl.x86_64 7.47.1-3.ph1 photon-updates
pcre.x86_64 8.39-1.ph1 photon-updates
openssl.x86_64 1.0.2h-5.ph1 photon-updates
openssh.x86_64 7.1p2-4.ph1 photon-updates
openssl.x86_64 1.0.2j-1.ph1 photon-updates
iptables.x86_64 1.6.0-5.ph1 photon-updates
systemd.x86_64 228-31.ph1 photon-updates
initramfs.x86_64 1.0-4.1146888.ph1 photon-updates
glibc.x86_64 2.22-9.ph1 photon-updates
open-vm-tools.x86_64 10.0.5-13.ph1 photon-updates
rpm.x86_64 4.11.2-11.ph1 photon-updates
linux.x86_64 4.4.26-1.ph1 photon-updates
initramfs.x86_64 1.0-5.11330561.ph1 photon-updates
python2.x86_64 2.7.11-8.ph1 photon-updates
curl.x86_64 7.47.1-4.ph1 photon-updates
bzip2.x86_64 1.0.6-6.ph1 photon-updates
tzdata.noarch 2016h-1.ph1 photon-updates
expat.x86_64 2.2.0-1.ph1 photon-updates
python2-libs.x86_64 2.7.11-8.ph1 photon-updates
python-xml.x86_64 2.7.11-8.ph1 photon-updates
docker.x86_64 1.12.1-1.ph1 photon-updates
cloud-init.x86_64 0.7.6-12.ph1 photon-updates
bridge-utils.x86_64 1.5-3.ph1 photon-updates
linux.x86_64 4.4.31-2.ph1 photon-updates
systemd.x86_64 228-32.ph1 photon-updates
curl.x86_64 7.51.0-1.ph1 photon-updates
initramfs.x86_64 1.0-5.11343362.ph1 photon-updates
cloud-init.x86_64 0.7.6-13.ph1 photon-updates
open-vm-tools.x86_64 10.1.0-1.ph1 photon-updates
initramfs.x86_64 1.0-5.11353601.ph1 photon-updates
cloud-init.x86_64 0.7.6-14.ph1 photon-updates
vim.x86_64 7.4-6.ph1 photon-updates
linux.x86_64 4.4.35-1.ph1 photon-updates
libtasn1.x86_64 4.7-3.ph1 photon-updates
tdnf upgrade -y
Upgrading:
vim x86_64 7.4-6.ph1 1.93 M
tzdata noarch 2016h-1.ph1 1.52 M
systemd x86_64 228-32.ph1 28.92 M
shadow x86_64 4.2.1-8.ph1 3.85 M
rpm x86_64 4.11.2-11.ph1 4.28 M
python2 x86_64 2.7.11-8.ph1 1.82 M
python2-libs x86_64 2.7.11-8.ph1 15.30 M
python-xml x86_64 2.7.11-8.ph1 318.67 k
procps-ng x86_64 3.3.11-3.ph1 1.04 M
pcre x86_64 8.39-1.ph1 960.35 k
openssl x86_64 1.0.2j-1.ph1 5.23 M
openssh x86_64 7.1p2-4.ph1 4.23 M
open-vm-tools x86_64 10.1.0-1.ph1 2.45 M
nss x86_64 3.25-1.ph1 3.87 M
libtasn1 x86_64 4.7-3.ph1 161.48 k
iptables x86_64 1.6.0-5.ph1 1.46 M
linux x86_64 4.4.35-1.ph1 44.76 M
initramfs x86_64 1.0-5.11353601.ph1 11.49 M
glibc x86_64 2.22-9.ph1 50.97 M
filesystem x86_64 1.0-8.ph1 7.14 k
expat x86_64 2.2.0-1.ph1 242.58 k
docker x86_64 1.12.1-1.ph1 82.59 M
curl x86_64 7.51.0-1.ph1 1.24 M
cloud-init x86_64 0.7.6-14.ph1 1.93 M
bzip2 x86_64 1.0.6-6.ph1 1.65 M
bridge-utils x86_64 1.5-3.ph1 36.61 k

Total installed size: 272.23 M

Downloading:
bridge-utils 19201 100%
bzip2 526008 100%
cloud-init 509729 100%
curl 898898 100%
docker 25657821 100%
expat 92851 100%
filesystem 16357 100%
glibc 19396323 100%
initramfs 11983289 100%
linux 18887362 100%
iptables 416848 100%
libtasn1 98060 100%
nss 1591172 100%
open-vm-tools 912998 100%
openssh 1853448 100%
openssl 3192392 100%
pcre 383441 100%
procps-ng 458368 100%
python-xml 86471 100%
python2-libs 5651168 100%
python2 741755 100%
rpm 1761294 100%
shadow 2002202 100%
systemd 11856941 100%
tzdata 633502 100%
vim 1046120 100%
Testing transaction
Running transaction
Creating ldconfig cache

Complete!

After that I rebooted, since the “linux” package was updated and that package is the kernel itself.
You can check the loaded kernel version with:

uname -a

More customizations:

vi /boot/grub2/grub.cfg     # edit "set timeout=1"
iptables --list     # show iptables config, which by default allows only SSH inbound
vi /etc/systemd/scripts/iptables     # edit iptables config file

I like to enable ICMP inbound; you can find the rule I added as the last one before the end of the file:

[Screenshot: /etc/systemd/scripts/iptables with the ICMP rule added at the end]
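For reference, that file, at least on the PhotonOS version I’m using, is a script of iptables commands, so the line I appended is along these lines (adjust it to your own file layout):

iptables -A INPUT -p icmp -j ACCEPT     # allow inbound ICMP (ping)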

systemctl restart iptables
iptables --list     # check running configuration includes ICMP inbound
systemctl enable docker     # start docker automatically at boot

In the coming days I will follow up with VIC, Photon Controller, Harbor and Admiral, using this PhotonOS VM as a template.
