How To Deploy vCSA 6.0 with a Mac

The new vCenter Server Appliance comes with a new deployment model, both architecturally and in how it is installed.

I wrote extensively about the architectural changes in this post, so here I will focus on how to deploy it from a Mac using the command-line tools, since the graphical setup requires Windows.

In order to do this you need the vCSA ISO file mounted on your Mac.

In “/Volumes/VMware VCSA/vcsa-cli-installer/mac” you will find a script called “vcsa-deploy” that requires a JSON file with all the parameters needed to deploy and configure the vCSA on your host.

You can find JSON templates in “/Volumes/VMware VCSA/vcsa-cli-installer/templates”; here is how I compiled mine in order to obtain a single VM with all the vCenter and PSC services:

{
    "__comments":
    [
        "Sample template to deploy a vCenter Server with an embedded Platform Services Controller."
    ],

    "deployment":
    {
        "esx.hostname":"192.168.1.107",
        "esx.datastore":"vsanDatastore",
        "esx.username":"root",
        "esx.password":"12345678",
        "deployment.option":"tiny",
        "deployment.network":"LAN",
        "appliance.name":"vCenter",
        "appliance.thin.disk.mode":true
    },

    "vcsa":
    {

        "system":
        {
            "root.password":"12345678",
            "ssh.enable":true
        },

        "sso":
        {
            "password":"12345678",
            "domain-name":"vsphere.local",
            "site-name":"Default-First-Site"
        },

        "networking":
        {
            "ip.family":"ipv4",
            "mode":"static",
            "ip":"192.168.110.2",
            "prefix":"24",
            "gateway":"192.168.110.254",
            "dns.servers":"8.8.8.8",
            "system.name":"192.168.110.2"
        }
    }
}

You can see how I used the newly created “vsanDatastore” as my destination datastore.

The script will check your SSO password for complexity compliance before starting the deployment process.
Passwords are stored in clear text, so make sure not to leave this file around: destroy it after use, or change all the passwords right after deployment.
You might have noticed that I used the IP address as the system name: I have no DNS (yet), and if you enter an FQDN as the system name it must be resolvable with both forward and reverse DNS lookups, so I had no choice. This will actually be a limitation later on, because I will not be able to join the vCSA to a Windows domain; if I want to use Windows credentials to log in to my vCenter I will need to set up LDAP authentication.
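
Since a typo in this file would simply make the installer bail out, it is worth validating the JSON syntax before launching the deployment; here is a minimal sketch using the Python interpreter bundled with OS X (assuming the file is saved as vcenter60.json, the same name used in the deploy command below):

python -m json.tool vcenter60.json > /dev/null && echo "JSON syntax OK"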

Then you just fire this command to start the deployment:

/Volumes/VMware\ VCSA/vcsa-cli-installer/mac/vcsa-deploy vcenter60.json
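
The installer prints the path of a log file you can follow in another terminal; it also supports a verbose mode (mentioned in its own output below), which I recommend if anything goes wrong:

/Volumes/VMware\ VCSA/vcsa-cli-installer/mac/vcsa-deploy --verbose vcenter60.json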

During the deployment process you will see the following:

Start vCSA command line installer to deploy vCSA "vCenter60", an embedded node.

Please see /var/folders/dp/xq_5cxlx2h71cgy2t83ghkd00000gn/T/vcsa-cli-installer-9wU8aB.log for logging information.

Run installer with "-v" or "--verbose" to log detailed information.

The SSO password meets the installation requirements.
Opening vCSA image: /Volumes/VMware VCSA/vcsa/vmware-vcsa
Opening VI target: vi://root@192.168.1.107:443/
Deploying to VI: vi://root@192.168.1.107:443/

Progress: 99%
Transfer Completed
Powering on VM: vCenter60

Progress: 18%
Power On Completed

Installing services...
Progress: 5%. Setting up storage
Progress: 50%. Installing RPMs
Progress: 56%. Installed oracle-instantclient11.2-odbc-11.2.0.2.0.x86_64.rpm
Progress: 62%. Installed vmware-identity-sts-6.0.0.5108-2499721.noarch.rpm
Progress: 70%. Installed VMware-Postgres-9.3.5.2-2444648.x86_64.rpm
Progress: 77%. Installed VMware-invsvc-6.0.0-2562558.x86_64.rpm
Progress: 79%. Installed VMware-vpxd-6.0.0-2559267.x86_64.rpm
Progress: 83%. Installed VMware-cloudvm-vimtop-6.0.0-2559267.x86_64.rpm
Progress: 86%. Installed VMware-sps-6.0.0-2559267.x86_64.rpm
Progress: 87%. Installed VMware-vdcs-6.0.0-2502245.x86_64.rpm
Progress: 89%. Installed vmware-vsm-6.0.0-2559267.x86_64.rpm
Progress: 95%. Configuring the machine
Service installations succeeded.

Configuring services for first time use...
Progress: 3%. Starting VMware Authentication Framework...
Progress: 11%. Starting VMware Identity Management Service...
Progress: 14%. Starting VMware Single Sign-On User Creation...
Progress: 18%. Starting VMware Component Manager...
Progress: 22%. Starting VMware License Service...
Progress: 25%. Starting VMware Service Control Agent...
Progress: 33%. Starting VMware System and Hardware Health Manager...
Progress: 44%. Starting VMware Common Logging Service...
Progress: 55%. Starting VMware Inventory Service...
Progress: 64%. Starting VMware vSphere Web Client...
Progress: 66%. Starting VMware vSphere Web Client...
Progress: 70%. Starting VMware ESX Agent Manager...
Progress: 74%. Starting VMware vSphere Auto Deploy Waiter...
Progress: 81%. Starting VMware Content Library Service...
Progress: 85%. Starting VMware vCenter Workflow Manager...
Progress: 88%. Starting VMware vService Manager...
Progress: 92%. Starting VMware Performance Charts...
Progress: 100%. Starting vsphere-client-postinstall...
First time configuration succeeded.

vCSA installer finished deploying "vCenter60", an embedded node:
System Name: 192.168.110.20
Login as: Administrator@vsphere.local

It's time to connect to the new Web Client: just open your browser to "https://" followed by the appliance address, and then select "Log In To the vSphere Web Client".

You should now be able to log in, but before starting the normal configuration process I suggest you take care of password expiration, which lives in two separate areas in this version of the vCSA: the SSO users and the root system user.
For the first, go to Administration -> Single Sign-On -> Configuration -> Password Policy and set the Maximum Lifetime to “0”, effectively disabling expiration.


For the root user you will need to drop to the vCSA command line: enable and access the Shell, then issue the following:

localhost:~ # chage -l root        # show current password expiration settings

localhost:~ # chage -M -1 root     # set expiration to Never
Aging information changed.
localhost:~ # chage -l root
Minimum: 0
Maximum: -1
Warning: 7
Inactive: -1
Last Change: Mar 17, 2015
Password Expires: Never
Password Inactive: Never
Account Expires: Never

Now you could start deploying all your VMs, but if you try you will find that vSAN complains about a policy violation!

Do you remember how we needed to change the default policy on the host before we could deploy the vCSA?
We did that at the host level, but when the vCSA started managing the host the policy was overwritten back to the original defaults, so now we have to change it again to match our needs; this time we can leverage the GUI for the task.


Now all is set and you should be good to go… not really!
We never set a network for the vSAN traffic, and even though I'm running a single-node configuration this still triggers a warning.


All you have to do is create a new VMkernel portgroup and flag it for vSAN traffic, and your system will again be a happy little vSphere host.
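
If you prefer to stay on the command line for this too, the same can be achieved host-side with esxcli; a minimal sketch, assuming a standard vSwitch0 and that the portgroup name "vSAN" and the vmk1 interface are both free (all names here are just illustrative):

esxcli network vswitch standard portgroup add -p vSAN -v vSwitch0    # create the portgroup
esxcli network ip interface add -i vmk1 -p vSAN                      # create the VMkernel interface
esxcli network ip interface ipv4 set -i vmk1 -t dhcp                 # assign an address (static works too)
esxcli vsan network ipv4 add -i vmk1                                 # tag the interface for vSAN traffic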

Running a Home Lab on a Single vSAN Node

This is how I managed to run my lab on a single vSAN node and manage it completely Windows-free, which is always a goal for a Mac user like me. With vSphere 6 this is a lot easier than it used to be, thanks to the improvements in the Web Client (and the fact that the fat client no longer connects to vCenter), and also thanks to the new vCSA, which ships with deployment tools for the Mac.
On the storage side of things, I have always run my lab with some kind of virtual storage appliance (Nexenta, Atlantis, Datacore), but those require a lot of memory and processing power, which reduces the number of VMs I can run in my lab simultaneously.
It's true that this gives me storage acceleration (which is so important in a home lab), but I sacrifice consolidation ratio and add complexity that I have to account for during upgrades and maintenance, so I decided to change my approach and include my physical lab in the process of learning vSAN.
If all goes as I hope, I will get storage performance without sacrificing too many resources for it, and that would be awesome.
Here is my current hardware setup in terms of disks:

1 Samsung SSD 840 PRO Series
1 Samsung SSD 830
3 Seagate Barracuda ST31000524AS 1TB 7200 RPM 32MB Cache SATA 6.0Gb/s 3.5″

I also have another spare ST31000524AS that I might add later but that would require me to add a disk controller.
Speaking of which, my current controller (C602 AHCI – Patsburg) is not on the vSAN HCL, and its queue depth is listed at a pretty depressing 31 (per port), but I am still just running a lab and I don't really need production-grade performance numbers. Nevertheless, I have been looking around on eBay, and it seems that for about €100 I could get a supported disk controller, but I decided to wait a few weeks for VMware to update the HCL: I don't want to buy something that won't be on the vSphere 6/vSAN 6 HCL, and I might still get the performance I need from my current setup, or at least that's what I hope.

UPDATE: the controller I was keeping an eye on doesn't seem to be listed in the vSAN 6 HCL even now that the HCL is reported to be updated, so be careful with your lab purchases!

For the time being I will test this environment on my current disk controller and learn how to troubleshoot performance bottlenecks in vSAN, which is going to be a great exercise anyway.

The first thing to do in my case was to decommission the current disks. Once I had deleted the VSA that was using them as RDMs, I needed to make sure the disks had no partitions left on them, since leftovers create problems when claiming disks during the vSAN setup. So I accessed my ESXi host via SSH and started playing with the command line:

esxcli storage core device list      # list block storage devices

Which gave me a list of devices that I could use with vSAN (showing one disk only):

t10.ATA_____Samsung_SSD_840_PRO_Series______________S12RNEACB06971B_____
   Display Name: Local ATA Disk (t10.ATA_____Samsung_SSD_840_PRO_Series______________S12RNEACB06971B_____)
   Has Settable Display Name: true
   Size: 244198
   Device Type: Direct-Access 
   Multipath Plugin: NMP
   Devfs Path: /vmfs/devices/disks/t10.ATA_____Samsung_SSD_840_PRO_Series______________S12RNEACB06971B_____
   Vendor: ATA     
   Model: Samsung SSD 840 
   Revision: DXM0
   SCSI Level: 5
   Is Pseudo: false
   Status: on
   Is RDM Capable: false
   Is Local: true
   Is Removable: false
   Is SSD: true
   Is VVOL PE: false
   Is Offline: false
   Is Perennially Reserved: false
   Queue Full Sample Size: 0
   Queue Full Threshold: 0
   Thin Provisioning Status: yes
   Attached Filters: 
   VAAI Status: unknown
   Other UIDs: vml.0100000000533132524e45414342303639373142202020202053616d73756e
   Is Shared Clusterwide: false
   Is Local SAS Device: false
   Is SAS: false
   Is USB: false
   Is Boot USB Device: false
   Is Boot Device: false
   Device Max Queue Depth: 31
   No of outstanding IOs with competing worlds: 32
   Drive Type: unknown
   RAID Level: unknown
   Number of Physical Drives: unknown
   Protection Enabled: false
   PI Activated: false
   PI Type: 0
   PI Protection Mask: NO PROTECTION
   Supported Guard Types: NO GUARD SUPPORT
   DIX Enabled: false
   DIX Guard Type: NO GUARD SUPPORT
   Emulated DIX/DIF Enabled: false

This is useful to identify the SSD devices, the device names and their physical paths. Here's a recap of the useful information in my environment:

/vmfs/devices/disks/t10.ATA_____Samsung_SSD_840_PRO_Series______________S12RNEACB06971B_____
/vmfs/devices/disks/t10.ATA_____SAMSUNG_SSD_830_Series__________________S0VYNYABC03672______
/vmfs/devices/disks/t10.ATA_____ST31000524AS________________________________________5VPDP87L
/vmfs/devices/disks/t10.ATA_____ST31000524AS________________________________________5VPDP8N3
/vmfs/devices/disks/t10.ATA_____ST31000524AS________________________________________9VPC5AQ9

The Samsung 840 PRO will give me much better performance in a vSAN disk group, so I will put the 830 aside for now.
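
By the way, a quick way to double-check which disks vSAN considers usable is the vdq utility that ships with ESXi; a minimal sketch (for each disk it reports whether it is eligible for vSAN or the reason why it is not):

vdq -q      # query all disks for vSAN eligibility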

Then, for each and every disk, I checked for partitions and removed any that were present; here are the commands I ran against one disk as an example:

~ # partedUtil getptbl /vmfs/devices/disks/t10.ATA_____ST31000524AS________________________________________5VPDP87L
gpt
121601 255 63 1953525168
1 34 262177 E3C9E3160B5C4DB8817DF92DF00215AE microsoftRsvd 0
2 264192 1953519615 5085BD5BA7744D76A916638748803704 unknown 0

~ # partedUtil delete /vmfs/devices/disks/t10.ATA_____ST31000524AS________________________________________5VPDP87L 2

~ # partedUtil delete /vmfs/devices/disks/t10.ATA_____ST31000524AS________________________________________5VPDP87L 1

~ # partedUtil getptbl /vmfs/devices/disks/t10.ATA_____ST31000524AS________________________________________5VPDP87L
gpt
121601 255 63 1953525168

partedUtil is used to manage partitions: “getptbl” shows the partitions (2 in this case) and “delete” removes them; note how at the end of these commands I had to specify the number of the partition to operate on.
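
With five disks to inspect, a small loop saves some typing; a minimal sketch for the BusyBox shell on ESXi (it only prints the partition tables for review, I preferred to run the actual deletes by hand, disk by disk):

for disk in /vmfs/devices/disks/t10.ATA*; do
    echo "=== $disk ==="
    partedUtil getptbl "$disk"
done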

At that point, with all the disks ready, I needed to change the default vSAN policy, because otherwise I wouldn't be able to satisfy the 3-node requirement; that means enabling the “forceProvisioning” setting.
Considering that at some point vSAN will need to destage writes from SSD to HDD, I also decided to set “stripeWidth” to “3” so I can take advantage of all three of my HDDs when I/O involves the magnetic disks.
Please note that this is probably a good idea in a lab, while in a production environment you would need good reasons for it, since VMware encourages customers to leave the default value of “1”; the problems come into play when you are sizing your environment (watch the component count, even though vSAN 6 raised the per-host limit from 3000 to 9000). In general, you should read the “VMware Virtual SAN 6.0 Design and Sizing Guide” (http://goo.gl/BePpyI) before making any architectural decision.

To change the vSAN default policy and create the cluster I made very minor changes to the steps William Lam described here for vSAN 1.0:

esxcli vsan policy getdefault      # display the current settings

esxcli vsan policy setdefault -c cluster -p "((\"hostFailuresToTolerate\" i0) (\"forceProvisioning\" i1) (\"stripeWidth\" i3))"
esxcli vsan policy setdefault -c vdisk -p "((\"hostFailuresToTolerate\" i0) (\"forceProvisioning\" i1) (\"stripeWidth\" i3))"
esxcli vsan policy setdefault -c vmnamespace -p "((\"hostFailuresToTolerate\" i0) (\"forceProvisioning\" i1) (\"stripeWidth\" i3))"
esxcli vsan policy setdefault -c vmswap -p "((\"hostFailuresToTolerate\" i0) (\"forceProvisioning\" i1) (\"stripeWidth\" i3))"
esxcli vsan policy setdefault -c vmem -p "((\"hostFailuresToTolerate\" i0) (\"forceProvisioning\" i1) (\"stripeWidth\" i3))"

esxcli vsan policy getdefault      # check that the changes made are active

Then I created the vSAN cluster, comprised of a single node:

esxcli vsan cluster new
esxcli vsan cluster get
Cluster Information
Enabled: true
Current Local Time: 2015-03-21T10:23:14Z
Local Node UUID: 51a90242-c628-b3bc-4f8d-6805ca180c29
Local Node State: MASTER
Local Node Health State: HEALTHY
Sub-Cluster Master UUID: 51a90242-c628-b3bc-4f8d-6805ca180c29
Sub-Cluster Backup UUID:
Sub-Cluster UUID: 52b2e982-fd0f-bc1a-46a0-2159f081c93d
Sub-Cluster Membership Entry Revision: 0
Sub-Cluster Member UUIDs: 51a90242-c628-b3bc-4f8d-6805ca180c29
Sub-Cluster Membership UUID: 34430d55-4b18-888a-00a7-74d02b27faf8

I was now ready to add the disks to a disk group; remember that every disk group contains exactly one SSD and one or more HDDs:

[root@esxi:~] esxcli vsan storage add -s t10.ATA_____Samsung_SSD_840_PRO_Series______________S12RNEACB06971B_____ -d t10.ATA_____ST31000524AS________________________________________5VPDP87L

[root@esxi:~] esxcli vsan storage add -s t10.ATA_____Samsung_SSD_840_PRO_Series______________S12RNEACB06971B_____ -d t10.ATA_____ST31000524AS________________________________________5VPDP8N3

[root@esxi:~] esxcli vsan storage add -s t10.ATA_____Samsung_SSD_840_PRO_Series______________S12RNEACB06971B_____ -d t10.ATA_____ST31000524AS________________________________________9VPC5AQ9

I had no errors, so I checked the vSAN storage to see what it was composed of:

esxcli vsan storage list
t10.ATA_____ST31000524AS________________________________________5VPDP87L
Device: t10.ATA_____ST31000524AS________________________________________5VPDP87L
Display Name: t10.ATA_____ST31000524AS________________________________________5VPDP87L
Is SSD: false
VSAN UUID: 527ae2ad-7572-3bf7-4d57-546789dd7703
VSAN Disk Group UUID: 52e56e97-d27b-6d9b-d1fe-c73da8082ccc
VSAN Disk Group Name: t10.ATA_____Samsung_SSD_840_PRO_Series______________S12RNEACB06971B_____
Used by this host: true
In CMMDS: true
Checksum: 2442595905156199819
Checksum OK: true
Emulated DIX/DIF Enabled: false

t10.ATA_____ST31000524AS________________________________________9VPC5AQ9
Device: t10.ATA_____ST31000524AS________________________________________9VPC5AQ9
Display Name: t10.ATA_____ST31000524AS________________________________________9VPC5AQ9
Is SSD: false
VSAN UUID: 52e06341-1491-13ea-4816-c6e6338316dc
VSAN Disk Group UUID: 52e56e97-d27b-6d9b-d1fe-c73da8082ccc
VSAN Disk Group Name: t10.ATA_____Samsung_SSD_840_PRO_Series______________S12RNEACB06971B_____
Used by this host: true
In CMMDS: true
Checksum: 1139180948185469177
Checksum OK: true
Emulated DIX/DIF Enabled: false

t10.ATA_____Samsung_SSD_840_PRO_Series______________S12RNEACB06971B_____
Device: t10.ATA_____Samsung_SSD_840_PRO_Series______________S12RNEACB06971B_____
Display Name: t10.ATA_____Samsung_SSD_840_PRO_Series______________S12RNEACB06971B_____
Is SSD: true
VSAN UUID: 52e56e97-d27b-6d9b-d1fe-c73da8082ccc
VSAN Disk Group UUID: 52e56e97-d27b-6d9b-d1fe-c73da8082ccc
VSAN Disk Group Name: t10.ATA_____Samsung_SSD_840_PRO_Series______________S12RNEACB06971B_____
Used by this host: true
In CMMDS: true
Checksum: 10619796523455951412
Checksum OK: true
Emulated DIX/DIF Enabled: false

t10.ATA_____ST31000524AS________________________________________5VPDP8N3
Device: t10.ATA_____ST31000524AS________________________________________5VPDP8N3
Display Name: t10.ATA_____ST31000524AS________________________________________5VPDP8N3
Is SSD: false
VSAN UUID: 52f501d7-ac52-ffa4-a45b-5c33d62039a1
VSAN Disk Group UUID: 52e56e97-d27b-6d9b-d1fe-c73da8082ccc
VSAN Disk Group Name: t10.ATA_____Samsung_SSD_840_PRO_Series______________S12RNEACB06971B_____
Used by this host: true
In CMMDS: true
Checksum: 7613613771702318357
Checksum OK: true
Emulated DIX/DIF Enabled: false
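
As a quick sanity check, vdq can also print the disk mappings in a human-readable form; a minimal sketch (my understanding is that -i dumps the mappings and -H makes them readable, so treat the flags as indicative):

vdq -iH      # show the disk group: the SSD and its backing HDDs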

At this point I could see my “vsanDatastore” in the vSphere Client (I had no vCenter yet).

The next step will be to deploy vCenter on this datastore; I will be using the vCSA, and I will show you how to do it with a Mac.

Top vBlog 2015 Voting Started!

As every year, the Top vBlog 2015 voting has started, so if you like the content of my blog (even if I can't write as often as I would like) please spend a minute to give me your preference!

Voting is open until the 19th of March.

vExpert 2015 Award


Thanks to VMware for confirming me as a vExpert!

Certificate Lifecycle Management published on VMTN Blog!

My article about Certificate Lifecycle Management in vSphere 6 has been published on the VMware VMTN Blog, together with articles from vExperts from all over the world, so thank you VMware for the opportunity :-)

You can check all links here:

How to revert new Transparent Page Sharing behaviour

There have been a lot of blog posts warning us that starting with vSphere 5.5 Update 2d (build 2403361) TPS is basically turned off.

Leaving aside the comments about why, and how to partially re-enable inter-VM TPS, I just think that in most cases it makes sense to restore the original behaviour. I have been in the process of upgrading an infrastructure where the RAM constraint was a much bigger concern than the specific security issue for which TPS was turned off, so I took a look at the shared memory metrics in vCenter, only to find out that the post-upgrade situation wouldn't be exactly heavenly if all the shared memory pages were translated into unique pages.

I found KB2097593, which explains the various states of the setting you can apply; one of them basically disables the new behaviour, restoring the old-fashioned TPS.


In other words, setting “Mem.ShareForceSalting” to 0 will bring things back in time.

The complete procedure is as follows:

1. Log in to the ESXi host or vCenter with the vSphere Client.
2. Select the relevant ESXi host.
3. On the Configuration tab, click Advanced Settings (link) under the Software section.
4. In the Advanced Settings window, click Mem.
5. Search for Mem.ShareForceSalting and set the value to 0.
6. Click OK.
7. Reboot the host.
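
If you prefer the command line over the vSphere Client, the same advanced setting can be changed with esxcli; a minimal sketch (run it on each host, and remember that the reboot still applies):

esxcli system settings advanced set -o /Mem/ShareForceSalting -i 0     # restore the pre-2403361 TPS behaviour
esxcli system settings advanced list -o /Mem/ShareForceSalting         # verify the new value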

vSphere 6 Certificate Lifecycle Management

Recently I have been fighting with a vSphere environment and CA certificates, and that made me think a lot about certificate management and lifecycle in a VMware vSphere environment, and how much it needs improvement. With the SSL Certificate Automation Tool VMware made a step in the right direction: even if the tool itself is sometimes a little buggy, it is still very handy in automating a long and error-prone process. In vSphere 6 VMware is taking another step in the right direction to help us create, apply and manage SSL certificates in a vSphere environment, but before talking about this we need to talk a bit about what's new in the SSO and vCenter architecture in vSphere 6. Since the introduction of SSO, VMware has changed its architecture in every major release, from 5.1 to 5.5 and now to 6.0, so let's recap a little bit of history.


The new vSphere 6 management architecture introduces two main roles that you can deploy: the Management Node and the Platform Services Controller (PSC). The reason behind this separation is to have one logical entity that takes care of the main management features while another holds the core and security features of the solution. What is nice about this separation is that you don't need a 1:1 ratio between Management Nodes and PSCs, so you can install the PSCs on separate boxes and replicate between them, and then have as many Management Nodes as you need (as long as they are within the same SSO domain).


For an HA scenario, if you install the PSCs on separate boxes you will still need a load balancer; the supported solutions so far are F5 BIG-IP and NetScaler.

You can obviously still install everything in one box.


You might have noticed that the HA model for SSO was active/passive in 5.1, then active/active in 5.5, and now is active/passive again; this is due to the re-engineering of the Secure Token Service (STS), which is moving to a new and more robust method (known as WebSSO), the same one already used by vCAC (or vRealize Automation, if you will), and which from now on replaces the old 5.5 method (WS-Trust).

Let's take a look at the services within the Management Node and the PSC.


In the Management Node we find services and features that every vSphere admin is very comfortable and familiar with, such as vCenter Server, vSphere Web Client, Syslog Collector, etc., but two of them deserve a few words:

  • Virtual Datacenter Service: this service is new, and it has been introduced to help mitigate the limitations connected with the Datacenter object in vCenter as a management boundary.
  • (Optional) vPostgres: this component obviously refers to the vCenter appliance (hence optional), but I believe more and more new deployments or upgrades deserve to be considered a good fit for the vCSA, since VMware announced complete feature parity between vCenter installed on Windows and the vCSA. Leaving aside the fuss of dedicating Windows licenses to vCenter, which might not be a huge problem, I just find the process of patching and upgrading a vCSA simply amazing, and it's not a secret that products like EVO:RAIL make extensive use of the vCSA. VMware wants to move all of its deployment models towards virtual appliances; this is not a secret and we need to get used to it, the sooner the better, but I'm digressing…


In the Platform Services Controller (PSC) we find our old friend SSO (we've had a rough past, but now we're on better terms) and quite a few new services:

  • VMware Single Sign-On
    • Secure Token Service (STS)
    • Identity Management Service (IdM)
    • Directory Service (VMDir)
  • VMware Certificate Authority (VMCA)
  • VMware Endpoint Certificate Store (VECS)
  • VMware Licensing Service
  • Authentication Framework Daemon (AFD)
  • Component Manager Service (CM)
  • HTTP Reverse Proxy

Describing all these services is out of the scope of this post, but as you can probably guess, two of them will be our focus: the VMware Certificate Authority (VMCA) and the VMware Endpoint Certificate Store (VECS). But what are the roles of VMCA and VECS? The VMCA is no more and no less than a CA, so you can:

  • Generate Certificates
  • Generate CRLs
  • Use the UI
  • Use the Command Line Interface to replace certificates (see the sketch below)
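
To give an idea of what driving the VMCA from the CLI looks like, here is a minimal sketch using certool as shipped on a 6.0 PSC (paths and options reflect my reading of the 6.0 bits, so treat them as indicative):

/usr/lib/vmware-vmca/bin/certool --getrootca                                     # print the current VMCA root certificate
/usr/lib/vmware-vmca/bin/certool --genkey --privkey=test.key --pubkey=test.pub   # generate a key pair for a new certificate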

The VECS is where all certificates within the PSC are stored, with the only exception of the ESXi certificates, which are stored locally on the vSphere hosts; here you can:

  • Store certificates and keys
  • Sync trusted certificates
  • Sync CRLs
  • Use the UI
  • Use the CLI to perform various actions (see the sketch below)
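
Again just as an indicative sketch, this is what poking at VECS from the appliance shell looks like (store names vary with the deployment; MACHINE_SSL_CERT is the store holding the machine certificate):

/usr/lib/vmware-vmafd/bin/vecs-cli store list                                  # list the certificate stores
/usr/lib/vmware-vmafd/bin/vecs-cli entry list --store MACHINE_SSL_CERT --text  # inspect the entries of one store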

Since VMCA and VECS are part of the PSC, they take advantage of the multi-master replication model offered by the Directory Service (VMDir) in order to achieve HA. In the past every service had its own user and required its own certificate, but this is no longer the case now that we have Solution Users (SU); since the number of services has increased significantly, it would be impractical to manage the lifecycle of that many certificates, so now we have 4 main SUs that hold the certificates used by a number of services.


What about the use cases/scenarios in which VMCA can be implemented? In what ways can you use this new tool?


Scenarios 1 and 2 are similar: the VMCA is the CA that issues certificates for all Solution Users (SU); the only difference is that in scenario 1 the VMCA is the root CA, and you will need to distribute its root certificate so that all corporate browsers trust it, while in scenario 2 the VMCA becomes part of an existing PKI as a subordinate CA, and you inherit the existing certificate trust.


In scenario 3 the VMCA is installed but not used: CSRs are created and submitted to an external CA, and VECS is used to store the certificates in PEM format.


My favorite is scenario 2, because most enterprises I see already have a PKI (usually a Microsoft CA) and all clients already trust its CA certificates, so adding the VMCA as a subordinate is a non-disruptive process with a very low maintenance impact on the PKI itself; it protects the investments already made in the current PKI and preserves the knowledge needed to run it.

Replacing certificates is still a CLI task (it looks like PowerShell will be involved), but VMCA and VECS are a very promising step in the right direction for simplifying certificate lifecycle management in a vSphere environment.
