Monthly Archives: June 2014

Automating Sysprep File Transfer to vCenter

I was at my local Salt Lake City VMware User Group (VMUG) last week doing a Q&A when one of the users mentioned it is painful to only be able to upload one Sysprep file at a time to vCenter. I wanted to take the time to address this for him as well as any of you who may have the same issue.

Problem: uploading Sysprep files to vCenter only allows for one file upload at a time.

Currently, to upload a Sysprep file, one must log in to the VCSA web interface, click “Summary”, then “Upload” (bottom right).


You then select the desired target directory and choose the Sysprep files one at a time.


For the Windows vCenter Server the files must be placed within their respective folders within C:\ProgramData\VMware\VMware VirtualCenter\Sysprep\.


Adding several Sysprep files to vCenter this way is not the easiest process, so I decided to create a script that will do it for me.

The Script

Once you download the script (see the end of the blog post for the download link), open it up and read through it to become familiar with what it does. As always, never run a script in your environment unless you know exactly what it is doing and you are comfortable with it.

After the script comments you’ll see a block of variables that need to be set according to your environment. $DefaultVIServer, along with $vCUser and $vCPass, are the credentials the script uses to log in to the vCenter where your target vCenter resides (*Note: these can be one and the same; however, in larger environments some users have multiple vCenters). The $target variables are for the vCenter we will be loading the Sysprep files to.

For the script to work correctly you’ll need to create a folder containing subfolders named as you see them in the picture above. All of the Sysprep files should be placed in their respective folders. The $Location variable is the location where these subfolders reside on your local machine; for me it was C:\Temp.

If the Sysprep files will be uploaded to a Windows vCenter Server and vCenter is not installed on the C: partition, the $vC_partition variable will need to be updated to the correct drive.


Once the files have been placed in their folders and the User Configuration has been set, the script is ready to run.

The script will check whether vCenter is the VCSA or a Windows VM and will place the files accordingly. By default, the script will not overwrite any files in the Sysprep repository. If you want it to overwrite files with the same filename, you must add ‘-Force’ to the end of the Copy-VMGuestFile command at the bottom of the script.
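For illustration, the placement logic can be sketched like this in Python. This is only a rough model of what the PowerCLI script does, not the script itself; the guest-OS subfolder names and the VCSA path are assumptions based on typical vCenter layouts, not taken from the script:

```python
# Rough Python model of the script's placement logic; the subfolder names
# and the VCSA path below are illustrative assumptions.

WINDOWS_ROOT = r"{drive}:\ProgramData\VMware\VMware VirtualCenter\Sysprep"
VCSA_ROOT = "/etc/vmware-vpx/sysprep"  # assumed VCSA Sysprep location

# Guest-OS subfolders, mirroring the local staging layout under $Location.
SUBFOLDERS = ["1.1", "2k", "xp", "xp-64", "svr2003", "svr2003-64"]

def build_copy_plan(local_root, is_vcsa, drive="C"):
    """Return (source, destination) pairs for each Sysprep subfolder."""
    plan = []
    for sub in SUBFOLDERS:
        src = local_root.rstrip("\\") + "\\" + sub
        if is_vcsa:
            dst = VCSA_ROOT + "/" + sub
        else:
            dst = WINDOWS_ROOT.format(drive=drive) + "\\" + sub
        plan.append((src, dst))
    return plan
```

The real script performs the actual copies with Copy-VMGuestFile; this sketch only computes where each local subfolder would land on the target.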

You can grab the script HERE

Author & Article Source: Brian Graf | VMware Blogs

Log Insight 2.0 Binaries Available Now!

By Jon Herlocker

Today, we’re proud to be releasing the binaries for VMware Log Insight 2.0. (You can download the eval from MyVMware, or grab the upgrade if you are already a customer).  Bill Roth talked about the release earlier at a high level, but I’d like to discuss the details. It’s hard to compete with the excitement felt when releasing a 1.0 product, but Log Insight 2.0 comes really close for me! The Log Insight team was able to achieve so much goodness in such a short period of time. Let’s take a look at some of the big improvements in 2.0, keeping in mind that these and more were all completed in less than five months:

  1. Distributed architecture for scale-out deployments.
  2. Machine learning for event-type detection, also known as Intelligent Grouping.
  3. Log Insight Collection Framework: a RESTful ingestion API, and a Windows agent for WMI and Windows application log files.
  4. Huge improvements to charting: new kinds of charts, new controls over charts, and chart legends.
  5. Major usability additions to dashboards: the ability to constrain dashboards, and automatic detection of potential linkages between dashboards.
  6. Huge improvements in query performance, focused on predictable query response times.
  7. A new look and feel and huge usability improvements to interactive analytics, including inline charts.
  8. An improved user experience for automating configuration of ESXi logging.

There’s too much material to cover in a single blog post, so I’m going to break up the topics across multiple blog posts. Today I’ll address scale-out deployments and machine learning for event-type detection.

Distributed Architecture for Scale-out Deployments. Log Insight 2.0 now supports up to six full-size nodes that form a single virtual Log Insight 2.0 instance, allowing six times the ingestion level, query level, and database size. Moving from a single-node system to a distributed system was a complicated feat. Not only do you have to handle distribution of incoming data and queries, but you also have to handle many different failure scenarios, distributed upgrades, distributed configuration, distributed monitoring, and more. We have approached cluster management with the same ruthless attention to usability and complexity reduction that you’ve seen in past Log Insight features, so almost none of this complexity will be visible to you! Deploying a Log Insight 2.0 cluster is almost as easy as deploying a single node of Log Insight. There is one additional question in the startup wizard: are you starting a new cluster, or joining an existing one? If you are joining an existing one, you just need to provide the name or IP address of a node in the existing cluster. All in, a Log Insight 2.0 cluster should be significantly lower cost both to install and to maintain than competing solutions.

Log Insight 2.0 supports increased scale with minimal overhead by partitioning the data across all nodes in the Log Insight 2.0 cluster. Using a traditional load balancer, incoming data, either syslog or via the new RESTful collection API, is routed to any node in the cluster. We call these worker nodes. Each worker node independently indexes data arriving to its node, answers queries for the data residing on its node, and manages its own data retention. No communication is required between worker nodes, leading to an efficiently scalable architecture. One of the nodes in the cluster serves as a query coordinator. The query coordinator runs the user interface web server, breaks a user query into sub-queries to each of the worker nodes, merges the results from those sub-queries, and incrementally feeds results back to the user interface.
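The scatter-gather split described above can be sketched as follows. This is a minimal Python illustration of the worker/coordinator pattern, with the partition layout and query shape as assumptions; it is not Log Insight code:

```python
# Minimal scatter-gather sketch: each worker answers a sub-query over its
# own partition only; the coordinator fans out, merges, and orders results.

def worker_query(partition, predicate):
    """A worker independently answers queries for the data on its node."""
    return [event for event in partition if predicate(event)]

def coordinator_query(partitions, predicate):
    """Fan the query out to every worker, then merge results by timestamp."""
    merged = []
    for partition in partitions:
        merged.extend(worker_query(partition, predicate))
    return sorted(merged, key=lambda event: event["ts"])

# Two hypothetical worker partitions of ingested events.
workers = [
    [{"ts": 3, "msg": "error A"}, {"ts": 1, "msg": "ok"}],
    [{"ts": 2, "msg": "error B"}],
]
results = coordinator_query(workers, lambda e: "error" in e["msg"])
```

No worker-to-worker communication is needed: each node filters only its own data, and the coordinator's merge is the single point where partial results meet.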


The end effect of these technological improvements is a Log Insight 2.0 cluster that is rated to handle 6×7500 = 45,000 events per second (eps) without message loss while concurrently enduring a reasonably heavy query load. If your query load is pretty light, you should be able to easily get more than 60,000 eps  (let us know how high you get!).

Machine Learning for Event Type Detection. One of the core value propositions of Log Insight is the transformation of unstructured data into a form where it can be queried and analyzed like a SQL database. In v1 of Log Insight, structure was defined either in content packs or manually through the user interface (with one-click field extraction). With v2, Log Insight automatically discovers the structure within the unstructured data, and extends its user interface to enable powerful summarization and discovery capabilities over that structure. This schema discovery works on any text/unstructured event data – even data that has never been seen before. So Log Insight is now more effective than ever in analyzing your proprietary application log data.

To understand how the event type detection works, it helps to review how these log messages are created. In the programming code of the software being monitored, there is a line that looks like this:

printf("API call took %d milliseconds", ms);

The first argument is called the format string and the second argument is the variable whose value replaces the “%d”. %d also defines a data-type – in this case an integer. The goal of event type detection is to discover the format strings that were used to create each message we observe in the event stream, without access to the original source code or other prior knowledge.

Log Insight uses three steps to detect event-types. In the first step, incoming messages are clustered together using machine learning so that messages that have many similar terms in them are grouped together. Each message is assigned an event_type corresponding to the cluster in which it is placed. This clustering happens in-line with ingestion, but is so fast that it does not slow down the ingestion pipeline (compared to v1).
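A toy version of this first step can be sketched in Python. Real Log Insight uses machine learning over term similarity; the masking rule below (treating purely numeric tokens as variable) is a deliberately crude stand-in of my own, not the product's algorithm:

```python
import re

# Toy clustering sketch: tokens that look variable (numbers here) are
# masked, so messages produced by the same format string share a signature
# and therefore land in the same cluster / event_type.

def signature(message):
    tokens = message.split()
    return tuple("*" if re.fullmatch(r"\d+", t) else t for t in tokens)

def assign_event_types(messages):
    event_type_of = {}
    assignments = []
    for msg in messages:
        sig = signature(msg)
        if sig not in event_type_of:
            event_type_of[sig] = len(event_type_of) + 1
        assignments.append(event_type_of[sig])
    return assignments

logs = [
    "API call took 12 milliseconds",
    "API call took 7 milliseconds",
    "connection from host42 closed",
]
types = assign_event_types(logs)
```

The two "API call took" messages collapse into one event_type even though their millisecond values differ, which is the essence of the clustering step.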

Figure 2: Each message is assigned an event_type

In the second step, Log Insight examines each cluster, and applies a different kind of machine learning to learn a regular expression for each distinct value of event_type. This is where Log Insight comes up with a format string – clearly identifying the parts of the message that are the same for every message of a distinct event_type and what parts are variable. The parts that vary become fields that can be queried like a database column – we call them smart fields. Finding a good regular expression is challenging – there are many regular expressions that match, but we want to select one that is as specific as possible while still matching all events in the event_type.  In the Event Types view, you can see the results of this – the text in black is the same for every message of that event type, and the smart fields show up in blue. In Figure 3, we can immediately see something interesting – because the hostname and username are black, we know that every single message of this event type comes from the same source and user.
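The constant-versus-variable split within a cluster can be illustrated with a simple positional alignment. This sketch assumes every message in the cluster has the same token count, which the real system does not require; it is an illustration of the idea, not Log Insight's regex learner:

```python
import re

# Sketch of template learning within one cluster: tokens identical across
# every message become escaped constants; tokens that vary become capture
# groups ("smart fields"). Assumes equal token counts per message.

def learn_template(cluster):
    token_rows = [msg.split() for msg in cluster]
    parts = []
    for column in zip(*token_rows):
        if len(set(column)) == 1:
            parts.append(re.escape(column[0]))  # constant text
        else:
            parts.append(r"(\S+)")              # variable -> smart field
    return " ".join(parts)

pattern = learn_template([
    "API call took 12 milliseconds",
    "API call took 7 milliseconds",
])
```

The learned pattern then matches any future message of that event_type and captures the variable part as a queryable field.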

Event Types view - black is a constant, blue is variable across messages of that event_type

In the third step, Log Insight analyzes each smart field, and assigns a data type to that section, with possible types including timestamps, hostnames, IP addresses, integers, strings, etc. In the figure below, you can see that Log Insight has identified that the first smart field is a timestamp.
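Data-type assignment can be sketched as a first-match over a small set of patterns. The patterns below are illustrative placeholders of my own, far cruder than what the product actually uses:

```python
import re

# Sketch of the third step: assign a data type to each smart-field value
# by trying type patterns in order of specificity; fall back to string.

TYPE_PATTERNS = [
    ("timestamp", r"\d{4}-\d{2}-\d{2}[T ]\d{2}:\d{2}:\d{2}"),
    ("ipv4", r"(\d{1,3}\.){3}\d{1,3}"),
    ("integer", r"[+-]?\d+"),
]

def infer_type(value):
    for name, pattern in TYPE_PATTERNS:
        if re.fullmatch(pattern, value):
            return name
    return "string"
```

Ordering matters: a timestamp also contains digits, so the more specific patterns must be tried before the integer fallback.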

Log Insight has inferred the data type of the first "smart field".

Once the structure has been detected, the formerly unstructured data can now be queried like a database where each smart field is like a column in your database table. You can aggregate across values of a smart field, group-by values of a smart field, limit your results to specific values of a smart field and more. You can also supply your own names for smart fields, so that they are easier to reference in the future. The Event Types view is, at its core, the results of your query “group by event_type”. It’s a powerful view that can summarize a massively large number of messages into a more easily processed number of distinct event types.
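The "group by event_type" idea behind the Event Types view can be sketched in a few lines; the row format here is a hypothetical simplification:

```python
from collections import Counter

# Sketch of the Event Types view as a group-by: collapse a stream of
# (event_type, message) rows into per-type counts.

def group_by_event_type(rows):
    return Counter(event_type for event_type, _msg in rows)

rows = [
    (1, "API call took 12 milliseconds"),
    (1, "API call took 7 milliseconds"),
    (2, "connection from host42 closed"),
]
summary = group_by_event_type(rows)
```

Millions of raw messages reduce to a handful of (event_type, count) pairs, which is what makes the view a useful summary of a large event stream.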

The machine learning event type detection capability brings the power of automated computer analytics to assist you with IT operations. Its automated schema detection will significantly accelerate time to value with new types of log data, and its summarization capabilities will significantly reduce your information overload, allowing you to quickly focus in on the novel or interesting information in your logs. In a future blog post, I’ll review how our new inline charting allows you to quickly explore different sub-dimensions of the data from the Event Types screen.

Author & Article Source: Bill Roth / Jon Herlocker | VMware Blogs

Resolving OpenSSL Heartbleed for VMware ESXi 5.5 Hosts

Issue: You need to determine whether or not your VMware ESXi 5.5 Hosts are vulnerable to the Heartbleed vulnerability found in the OpenSSL 1.0.1 library.

Step # 1

Download the CrowdStrike Heartbleed Scanner from the URL below.

Step # 2

After you install the CrowdStrike Heartbleed Scanner, you will need to add your VMware ESXi Host’s FQDN or IP Address to the Target Entry list if you want to check only a single Host.

I recommend doing this for your first one, so that you get familiar with the tool.

Then you can move on to doing a whole list of Hosts or an IP Address Range.

In the Target Entry section, you can either enter a single Host by FQDN or IP Address, or use an IP Address range for multiple Hosts.

Step # 3

You will leave all of the default settings in the Control section, unless you feel that you need to change any of these settings.

Once you have entered your Hosts, you are now ready to start scanning.

You can start the Scan using the Blue play button.

Next, you can monitor the Scanner as it runs; it displays a circle of dots that stops when the scan is done.

Once the scan has completed, you will see the results.

If a vulnerable Host is found, it will report as “VULNERABLE” in the Status section.


Step # 4

Here is how you go about patching your VMware ESXi Hosts.

I’m going to be using VMware Update Manager (VUM), to perform my remediation steps.

1) Create a New Baseline, that will include VMware Patch:

VMware ESXi 5.5, Bulletin “ESXi550-201404401”

a) Add Name

b) Add Description

c) Baseline Type, I selected Host Baselines, next Host Patch, and then click Next button.

d) Patch Options, I selected Fixed, since this New Baseline is for the Heartbleed vulnerability Only.

e) Patches, enter ESXi550-201404401, you will need to click on the down arrow and select Patch ID.

Next, click on the down arrow to add this patch to your New Baseline, then click the next button.

f) You can now click on the Finish button.

g) Now, you can attach your New Baseline to your Host.

a) Select your Baseline, under Individual Baselines by Type and then click the Attach button.

h) Next click on the scan button. Now you should be able to work with VUM as you would normally.


2) Now I will add this VMware Patch to the New Heartbleed Baseline.




Here is what a Host will look like if it is not vulnerable to the OpenSSL Heartbleed vulnerability.

Under the Status section, it will report as “Failed to connect”.


Step # 5

You can find more information on the OpenSSL Heartbleed vulnerability in the articles below.

Resolving OpenSSL Heartbleed for ESXi 5.5 – CVE-2014-0160 (2076665)

VMware ESXi 5.5, Patch ESXi550-201404401-SG: Updates esx-base (2076121)

Response to OpenSSL security issue CVE-2014-0160/CVE-2014-0346 a.k.a: “Heartbleed” (2076225)

NIST – Vulnerability Summary for CVE-2014-0160

vSphere Hardening Guide 5.5 Update 1 Released!

There are 4 new additions to the guide. Please review.

  1. enable-VGA-Only-Mode: Used for server VMs that don’t need a graphical console, e.g. Linux web servers, Windows Core, etc.
  2. disable-non-essential-3D-features: Remove 3D graphics capabilities from VMs that don’t need them.
  3. use-unique-roles: A new companion control to use-service-accounts. If you have multiple service accounts, then each one should have a unique role with just enough privileges to accomplish its task. This is in line with least-privilege operations.
  4. change-sso-admin-password: A great catch. When installing Windows vCenter, you’re prompted to change the password of administrator@vsphere.local. When installing the VCSA in a default manner, you are not. This control reminds you to go back and do that.

The rest are formatting, spelling, clarification, etc.. One interesting change is the “enable-nfc-ssl” control. That has been renamed to “verify-nfc-ssl” now that SSL is enabled by default in 5.5 for NFC traffic. All of the changes are called out in the Change Log.

I’d like to thank the many customers and internal folks who have contributed and pointed out the errors that needed correcting. It’s great to have so many folks that are willing to pitch in!

Head on over to the vSphere Hardening Guide page to grab your copy now!

Thanks and please feel free to contact me on Twitter at @vspheresecurity or email to mfoley at if you have any input you’d like to share.



Author & Article Source: Mike Foley | VMware Blogs