Note: Changing VMNIC names by editing the /etc/vmware/esx.conf file is not supported by VMware. We will not make any manual changes to the /etc/vmware/esx.conf file.

Instead, this method lets VMware ESXi/ESX renumber the VMNICs automatically.

These instructions are for VMware ESXi hosts with vSwitches or Port Groups already configured.

You can also use these instructions in situations such as the following:

1) You lost host network connectivity after any of these events:
a) A system BIOS update
b) A firmware update
c) A patch upgrade/update
d) Adding a new network card

Step # 1
Place the ESXi/ESX host into Maintenance Mode.
Step # 2
Remove all VMNICs from the virtual switches except vmnic0, which is used for management. (Note: Make a note and/or diagram of which VMNICs are assigned to which vSwitches.)
Step # 3
Shut down the ESXi/ESX host.
Step # 4
Remove/disable all NICs except vmnic0.
a) On-board NICs can be disabled in the System BIOS.

b) NICs installed in PCI slots should be physically removed.

(Note: Make a note and/or diagram of which slot each Network Card was installed in.)

Step # 5
Boot the server. At boot time, all removed/disabled VMNICs are removed from the /etc/vmware/esx.conf file.
Step # 6
Shut down the ESXi/ESX host.
Step # 7
Add/enable all required NICs.
a) Enable the on-board NICs in the BIOS.
b) Re-insert the PCI bus NICs into the slots they came from.
(Note: I do this one at a time, shutting down and restarting the ESXi host after adding back each NIC. This way I can check that the ordering is correct after each addition.)

(Note: You can verify this by going into the Direct Console User Interface (DCUI).)

Step # 1 Log in to the DCUI.
Step # 2 Under System Customization, select Configure Management Network.

Step # 3 Under Configure Management Network, select Network Adapters.

Step # 8
Boot the ESXi/ESX host. All VMNICs are assigned per segment, bus, slot, and function ID, and are ordered correctly.
Step # 9
Assign VMNICs back to the virtual switches they were assigned to before.
Step # 10
Exit maintenance mode on the ESXi/ESX host.
Step # 11
Now all of your Network Cards should be back in the system and correctly ordered.
More information on how VMware ESXi/ESX handles Network Card ordering:
The PCI ID to VMNIC numbering relationship is determined at boot time and is automatically entered into the esx.conf file for persistence.

The ESXi/ESX host first scans the segment number, then the bus number, the slot number, and finally the function number. This order ensures that ports on the same multi-port NIC are numbered sequentially.

Initially, when ESXi/ESX is installed, the VMNIC order will be sequential. This may change over time as NICs are removed and other NICs are added. This can result in VMNIC ordering that is undesirable and not in sync with the naming convention on other ESXi/ESX hosts.
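That scan order can be illustrated with a short Python sketch. This is not an ESXi API, and the PCI IDs below are made up for illustration; it only shows why ports that share a segment/bus/slot (a multi-port NIC) come out with sequential vmnic numbers:

```python
# Hypothetical illustration of how ESXi orders VMNICs at boot:
# ports are sorted by (segment, bus, slot, function) and then
# numbered vmnic0, vmnic1, ... in that order.

def assign_vmnic_names(ports):
    """ports: list of dicts with segment/bus/slot/function keys."""
    ordered = sorted(ports, key=lambda p: (p["segment"], p["bus"],
                                           p["slot"], p["function"]))
    return {f"vmnic{i}": p for i, p in enumerate(ordered)}

if __name__ == "__main__":
    # Two ports of an on-board dual-port NIC plus one PCI card (made-up IDs)
    ports = [
        {"segment": 0, "bus": 5, "slot": 0, "function": 0},  # PCI card
        {"segment": 0, "bus": 2, "slot": 0, "function": 1},  # on-board port 2
        {"segment": 0, "bus": 2, "slot": 0, "function": 0},  # on-board port 1
    ]
    for name, p in assign_vmnic_names(ports).items():
        print(name, p)
```

Note how the two on-board ports (same segment, bus, and slot, functions 0 and 1) become vmnic0 and vmnic1, and the PCI card on a higher bus number becomes vmnic2, regardless of the order the hardware was detected in.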

Note: Changing VMNIC names by editing the /etc/vmware/esx.conf file is not supported by VMware. We will not make any manual changes to the /etc/vmware/esx.conf file. Instead, this method lets VMware ESXi/ESX renumber the VMNICs automatically.

These instructions are for VMware ESXi hosts without vSwitches or Port Groups configured.
This is a freshly installed ESXi host that does not have the Network Adapters in the correct order.

If you are not using a Dell iDRAC, HP iLO, or other Remote Management Interface, you need to follow my other process, “Existing Installation Process,” because we disable all Network Cards and you would otherwise lose access to your ESXi host.

I have another Blog post on doing this process if you already have your vSwitches or Port Groups setup and configured.

Step # 1
Place the ESXi/ESX host into Maintenance Mode. (Note: This applies if the ESXi/ESX host has already been added to vCenter; if not, you can safely skip this step.)
Step # 2
Shut down the ESXi/ESX host.
Step # 3
Physically remove the PCI Network Cards from the system. (Note: Make a note and/or diagram of which slot each Network Card was installed in.)
Step # 4
1) Boot into System BIOS to disable the On-Board Network Cards.
(Note: If using blade technology, we disabled even our mezzanine card slots.)
2) Shut down the ESXi/ESX host.
Step # 5
Boot the ESXi/ESX host. (Note: At boot time, VMware ESXi/ESX will remove all removed and/or disabled Network Cards from the /etc/vmware/esx.conf file.)
Step # 6
Shut down the ESXi/ESX host.
Step # 7
Add/enable the ESXi/ESX host Network Cards in the required order.
1) Enable the on-board Network Cards.
2) Restart the ESXi/ESX host.
(Note: You can verify the ordering by going into the Direct Console User Interface (DCUI).)
Step # 1 Log in to the DCUI.
Step # 2 Under System Customization, select Configure Management Network.
Step # 3 Under Configure Management Network, select Network Adapters.
Step # 8
Add your physical Network Cards back into the same slots from which they were removed. (Note: You can add them back into the system one at a time; this way you can re-check each one using the DCUI process above.) Continue this process until all of your Network Cards are back in the system.
Step # 9
Now all of your Network Cards should be back in the system and correctly ordered.
More information on how VMware ESXi/ESX handles Network Card ordering:
The PCI ID to VMNIC numbering relationship is determined at boot time and is automatically entered into the esx.conf file for persistence.

The ESXi/ESX host first scans the segment number, then the bus number, the slot number, and finally the function number. This order ensures that ports on the same multi-port NIC are numbered sequentially.

Initially, when ESXi/ESX is installed, the VMNIC order will be sequential. This may change over time as NICs are removed and other NICs are added. This can result in VMNIC ordering that is undesirable and not in sync with the naming convention on other ESXi/ESX hosts.


Full Error Message:

The vSphere Client could not connect to “your vCenter Server”.
You do not have permission to login to the server: your vCenter Server FQDN.

You will need to grant permissions to your domain account/group before you can connect using your domain login credentials.
Step # 1
Log in as administrator@vsphere.local with the password that you created during installation.
Step # 2
Now that you are logged in to your vCenter Server:

1) Click on your vCenter in the left pane, in my lab it is vc02.lab.local

2) Next, click on the Permissions tab.

3) Within the Permissions tab, you should see a VSPHERE.LOCAL\Administrator Account.

4) Right-click anywhere in the white space, and select Add Permission…

Step # 3
In the Assigned Role section, click the drop-down box and select Administrator.

Leave Propagate to Child Objects checked.

Step # 4
In the Assign Permissions section, click the Add button.

In the Select Users and Groups dialog, click the drop-down box and select your current domain.

In my case, I selected Administrator for my LAB domain. If I were doing this in my production environment, I would select a VMware Administrators group rather than an individual domain account.

Next, click the Add button; you should now see the account in the Users: text box.

Then click Check Names. The server will validate the users and groups you entered; if you do not encounter any problems, click the OK button.

Now click the OK button again to exit.

Step # 5
Click File and then Exit to log out of the VMware vSphere Client.

Now, try to log back in to vCenter, but this time use your Domain Login credentials.

If everything went perfectly, you are now logged in to vCenter with your domain credentials.

At this point I would start assigning my vCenter Permissions.

Here is a list of useful VMware-related links that a VMware Systems Engineer shared with me, and I wanted to share them with you.

For tech support:

Manage your support & licenses:

VMware on YouTube:

WebCasts – including what’s new in 5.5:

VMware Communities:

The VMware Workstation team is providing public access to the latest VMware Workstation Technology Preview July 2014. This Tech Preview adds new features not included in the prior May 2014 Technology Preview in addition to general stability, application compatibility and usability improvements. The VMware Workstation Technology Preview includes changes to the core virtualization engine and new capabilities we are exploring.

VMware Workstation Tech Preview July 2014

We would appreciate feedback in the following areas:

  • New OS support including the latest versions of Windows, Ubuntu, Fedora, RHEL and OpenSUSE
  • VMware Hardware Version 11 including improved CPU support and upgraded USB 3.0 controller
  • Graphics memory configuration per virtual machine
  • Windows 8 Unity mode improvements
  • Create and boot virtual machines with EFI
  • Experimental performance tuning for VM suspend and resume

The VMware Workstation Technology Preview July 2014 is available to download HERE via our VMware Workstation Community. More details including installation instructions can be found on the What’s New page.

As with the prior Technology Preview May 2014, please post all of your feedback in the VMware Workstation Technology Preview 2014 community forum. Our Developers, Quality Assurance Engineers, Support Teams, Technical Writers, Product Marketing and Product Managers are all actively involved in the forums to ensure that your suggestions and comments get our attention.

Thank you!

The VMware Workstation Team

Author & Article Source: William Myrhang | VMware Blogs


There are a few errors I’ve run into over the years that just stump me. Like you, I start doing some web searches and piecing things together. I cross-reference what I find with people I think may have more details for me. Well, I have recently had the “Invalid credentials” error in VMware vCenter Orchestrator (vCO) when viewing my vCenter Server instance in the vCO inventory. I hate to admit that it had me stumped for a while.

When adding my vCenter server in the vCO plugins section, the connection and credentials tested out just fine, so why was the vCO client giving me this error?


Well, after doing some digging (and also cross-referencing the install-config guide), I found this description of when to use the “session per user” endpoint option: “Select this option only if your vCenter Server is in an Active Directory domain or if vCenter Server Single Sign-On is enabled.” This setting was checked by default in my environment.

For background, I deployed the vCO appliance along with the vCenter Server appliance. I am also using the local root credentials for vCenter versus SSO.

As seen below, navigate to the vCO configuration page (https://VCO-IP:8283), find the vSphere plugin, and click “Edit” on the appropriate vCenter Server instance.


Above “User name,” change the “session per user” selection to “share a unique session.”


Once you refresh the vCO client, the error should be gone, and you can now run workflows against this vCenter Server instance.


I hope this saves you some trouble! I spent more time than I care to admit rebuilding environments and running multiple tests. It is something I should never have missed in the first place. :-)

Come back soon!

Get notification of new blog postings and more by following Harry on Twitter: @HarrySiii

Author & Article Source: Harry Smith | VMware Blogs


I was at my local Salt Lake City VMware User Group (VMUG) last week doing a Q&A when one of the users mentioned it is painful to only be able to upload one Sysprep file at a time to vCenter. I wanted to take the time to address this for him as well as any of you who may have the same issue.

Problem: uploading Sysprep files to vCenter only allows for one file upload at a time.

Currently, to upload a Sysprep file, one must log in to the VCSA web interface, click “Summary,” and then “Upload” (bottom right).


You then select the desired target directory and choose the Sysprep files one at a time.


For the Windows vCenter Server the files must be placed within their respective folders within C:\ProgramData\VMware\VMware VirtualCenter\Sysprep\.


This is not the easiest process when you need to add several Sysprep files to vCenter. So I decided to create a script that will do it for me.

The Script

Once you download the script (see the end of the blog post for the download link), open it up and read through the script to become familiar with what it does. As always, never run a script in your environment unless you know exactly what it is doing and you are comfortable with it.

After the script comments you’ll see there is a block of variables that need to be set according to your environment. The $DefaultVIServer along with $vCUser and $vCPass are the credentials for the script to log in to the vCenter where your target vCenter resides (*Note: These can be one and the same; however, in larger environments some users have multiple vCenters). The $target variables are for the vCenter we will be loading the Sysprep files to.

For the script to work correctly you’ll need to create a folder containing subfolders named as you see them in the picture above. All of the Sysprep files should be placed in their respective folders. The $Location variable is the location where these subfolders reside on your local machine; for me it was C:\Temp.

If the Sysprep files will be uploaded to a Windows vCenter Server and vCenter is not installed on the C: partition, the $vC_partition variable will need to be updated to the correct drive.


Once the files have been placed in their folders and the User Configuration has been set, the script is ready to run.

The script will check whether vCenter is the VCSA or a Windows VM and will place the files accordingly. This script will not overwrite any files in the Sysprep repository by default. If you plan on having this script overwrite files with the same filename, you must add ‘-Force’ to the end of the Copy-VMGuestFile command at the bottom of the script.
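The destination-path decision described above can be sketched in Python. This is not the author's script (which is PowerShell); the VCSA path is the documented /etc/vmware-vpx/sysprep/ location, but treat both paths here as assumptions to verify in your own environment:

```python
# Sketch of the destination-path logic described above. The real script is
# PowerShell; paths below are assumptions based on the description.

def sysprep_target_dir(is_vcsa, folder, windows_drive="C"):
    """Return the Sysprep destination directory for a given guest-OS folder."""
    if is_vcsa:
        # The VCSA keeps Sysprep files under /etc/vmware-vpx/sysprep/
        return f"/etc/vmware-vpx/sysprep/{folder}"
    # Windows vCenter keeps them under ProgramData on the install drive,
    # which is why the $vC_partition variable exists in the script.
    return (f"{windows_drive}:\\ProgramData\\VMware\\VMware VirtualCenter"
            f"\\Sysprep\\{folder}")

if __name__ == "__main__":
    print(sysprep_target_dir(False, "xp"))
    print(sysprep_target_dir(True, "xp"))
    print(sysprep_target_dir(False, "svr2003", windows_drive="D"))
```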

You can grab the script HERE

Author & Article Source: Brian Graf | VMware Blogs


By Jon Herlocker

Today, we’re proud to be releasing the binaries for VMware Log Insight 2.0. (You can download the eval from MyVMware, or grab the upgrade if you are already a customer).  Bill Roth talked about the release earlier at a high level, but I’d like to discuss the details. It’s hard to compete with the excitement felt when releasing a 1.0 product, but Log Insight 2.0 comes really close for me! The Log Insight team was able to achieve so much goodness in such a short period of time. Let’s take a look at some of the big improvements in 2.0, keeping in mind that these and more were all completed in less than five months:

  1. Distributed architecture for scale-out deployments
  2. Machine learning for event-type detection, also known as Intelligent Grouping
  3. Log Insight Collection Framework: a RESTful ingestion API, and a Windows agent for WMI and Windows application log files
  4. Huge improvements to charting: new kinds of charts, new controls over charts, chart legends
  5. Major usability additions to dashboards: the ability to constrain dashboards, automatic detection of potential linkages between dashboards
  6. Huge improvements in query performance, focused on predictable query response times
  7. A new look and feel and huge usability improvements to interactive analytics, including inline charts
  8. Improved user experience for automating the configuration of ESXi logging

There’s too much material to cover in a single blog post, so I’m going to break up the topics across multiple blog posts. Today I’ll address scale-out deployments and machine learning for event-type detection.

Distributed Architecture for Scale-out Deployments. Log Insight 2.0 now supports up to six full-size nodes that form a single virtual Log Insight 2.0 instance, allowing six times the ingestion level, query level, and database size. Moving from a single-node system to a distributed system was a complicated feat. Not only do you have to handle distribution of incoming data and queries, but you also have to handle lots of different failure scenarios, distributed upgrades, distributed configuration, distributed monitoring, and more. We have approached cluster management with the same ruthless attention to usability and complexity reduction that you’ve seen in past Log Insight features, so almost none of this complexity will be visible to you! Deploying a Log Insight 2.0 cluster is almost as easy as deploying a single node of Log Insight. There is one additional question in the startup wizard: are you starting a new cluster, or joining an existing one? If you are joining an existing one, you just need to provide the name or IP address of a node in the existing cluster. All in, a Log Insight 2.0 cluster should be significantly lower cost both to install and to maintain than competing solutions.

Log Insight 2.0 supports increased scale with minimal overhead by partitioning the data across all nodes in the Log Insight 2.0 cluster. Using a traditional load balancer, incoming data, either syslog or via the new RESTful collection API, is routed to any node in the cluster. We call these worker nodes. Each worker node independently indexes data arriving to its node, answers queries for the data residing on its node, and manages its own data retention. No communication is required between worker nodes, leading to an efficiently scalable architecture. One of the nodes in the cluster serves as a query coordinator. The query coordinator runs the user interface web server, breaks a user query into sub-queries to each of the worker nodes, merges the results from those sub-queries, and incrementally feeds results back to the user interface.
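The coordinator's scatter-gather pattern can be sketched as a toy Python example. This is a greatly simplified illustration of the architecture just described, not Log Insight code: in-process "workers" stand in for cluster nodes, and each holds its own partition of (timestamp, message) events:

```python
# Toy scatter-gather sketch of the query-coordinator pattern described above.
# Real Log Insight nodes are separate VMs; here each "worker" is just a list
# of (timestamp, message) events partitioned across nodes.

def worker_query(events, term):
    """Each worker answers a sub-query over only its own events."""
    return [e for e in events if term in e[1]]

def coordinator_query(workers, term):
    """Fan the query out to every worker, then merge results by timestamp."""
    merged = []
    for w in workers:
        merged.extend(worker_query(w, term))
    return sorted(merged)  # merge step: order by timestamp

workers = [
    [(3, "error: disk full"), (1, "login ok")],
    [(2, "error: timeout"), (4, "logout")],
]
print(coordinator_query(workers, "error"))
# → [(2, 'error: timeout'), (3, 'error: disk full')]
```

Because each worker only ever touches its own partition, adding nodes adds ingestion and query capacity without inter-worker communication, which is the property the paragraph above describes.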


The end effect of these technological improvements is a Log Insight 2.0 cluster that is rated to handle 6×7500 = 45,000 events per second (eps) without message loss while concurrently enduring a reasonably heavy query load. If your query load is pretty light, you should be able to easily get more than 60,000 eps (let us know how high you get!).

Machine Learning for Event Type Detection. One of the core value propositions of Log Insight is the transformation of unstructured data into a form where it can be queried and analyzed like a SQL database. In v1 of Log Insight, structure was defined either in content packs or manually through the user interface (with one-click field extraction). With v2, Log Insight automatically discovers the structure within the unstructured data, and extends its user interface to enable powerful summarization and discovery capabilities over that structure. This schema discovery works on any text/unstructured event data – even data that has never been seen before. So Log Insight is now more effective than ever in analyzing your proprietary application log data.

To understand how the event type detection works, it helps to review how these log messages are created. In the programming code of the software being monitored, there is a line that looks like this:

printf("API call took %d milliseconds", ms);

The first argument is called the format string and the second argument is the variable whose value replaces the “%d”. %d also defines a data-type – in this case an integer. The goal of event type detection is to discover the format strings that were used to create each message we observe in the event stream, without access to the original source code or other prior knowledge.
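A much-simplified version of that idea can be sketched in Python: across messages of one event type, tokens that never change stay literal, and tokens that vary become fields. This toy positional comparison is an illustration only; Log Insight's actual clustering and regex learning are far more sophisticated:

```python
# Simplified sketch of recovering a printf-style "format string" from
# observed messages of one event type: tokens that vary across messages
# become fields; constant tokens stay literal.

def infer_template(messages):
    token_rows = [m.split() for m in messages]
    template = []
    for column in zip(*token_rows):  # compare tokens position by position
        template.append(column[0] if len(set(column)) == 1 else "<field>")
    return " ".join(template)

msgs = [
    "API call took 12 milliseconds",
    "API call took 7 milliseconds",
    "API call took 340 milliseconds",
]
print(infer_template(msgs))  # → API call took <field> milliseconds
```

The recovered template plays the role of the format string, and the `<field>` position is what Log Insight would expose as a queryable smart field.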

Log Insight uses three steps to detect event-types. In the first step, incoming messages are clustered together using machine learning so that messages that have many similar terms in them are grouped together. Each message is assigned an event_type corresponding to the cluster in which it is placed. This clustering happens in-line with ingestion, but is so fast that it does not slow down the ingestion pipeline (compared to v1).

Figure 2: Each message is assigned an event_type

In the second step, Log Insight examines each cluster, and applies a different kind of machine learning to learn a regular expression for each distinct value of event_type. This is where Log Insight comes up with a format string – clearly identifying the parts of the message that are the same for every message of a distinct event_type and what parts are variable. The parts that vary become fields that can be queried like a database column – we call them smart fields. Finding a good regular expression is challenging – there are many regular expressions that match, but we want to select one that is as specific as possible while still matching all events in the event_type.  In the Event Types view, you can see the results of this – the text in black is the same for every message of that event type, and the smart fields show up in blue. In Figure 3, we can immediately see something interesting – because the hostname and username are black, we know that every single message of this event type comes from the same source and user.

Figure 3: Event Types view – black is constant, blue is variable across messages of that event_type

In the third step, Log Insight analyzes each smart field, and assigns a data type to that section, with possible types including timestamps, hostnames, IP addresses, integers, strings, etc. In the figure below, you can see that Log Insight has identified that the first smart field is a timestamp.
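This third step amounts to classifying each field's values. The sketch below shows the flavor of such a classifier; the type names and regular expressions are illustrative assumptions, not Log Insight's real taxonomy or patterns:

```python
import re

# Sketch of the data-type assignment described above: guess a type for a
# smart-field value. Patterns and type names here are illustrative
# assumptions, not Log Insight's actual implementation.

def guess_type(value):
    if re.fullmatch(r"\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2}", value):
        return "timestamp"
    if re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", value):
        return "ip_address"
    if re.fullmatch(r"-?\d+", value):
        return "integer"
    return "string"  # fallback when nothing more specific matches

for v in ["2014-06-10 12:00:05", "10.0.0.42", "340", "esx01"]:
    print(v, "->", guess_type(v))
```

Once every smart field carries a type like this, typed operations (numeric aggregation on integers, range queries on timestamps, and so on) become possible over what was originally unstructured text.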

Log Insight has inferred the data type of the first "smart field".

Once the structure has been detected, the formerly unstructured data can now be queried like a database where each smart field is like a column in your database table. You can aggregate across values of a smart field, group-by values of a smart field, limit your results to specific values of a smart field and more. You can also supply your own names for smart fields, so that they are easier to reference in the future. The Event Types view is, at its core, the results of your query “group by event_type”. It’s a powerful view that can summarize a massively large number of messages into a more easily processed number of distinct event types.

The machine learning event type detection capability brings the power of automated computer analytics to assist you with IT operations. Its automated schema detection will significantly accelerate time to value with new types of log data, and its summarization capabilities will significantly reduce your information overload, allowing you to quickly focus in on the novel or interesting information in your logs. In a future blog post, I’ll review how our new inline charting allows you to quickly explore different sub-dimensions of the data from the Event Types screen.

Author & Article Source: Bill Roth | Jon Herlocker | VMware Blogs


Issue: You need to determine whether your VMware ESXi 5.5 hosts are vulnerable to the OpenSSL Heartbleed vulnerability found in the OpenSSL 1.0.1 library.

Step # 1

Download the CrowdStrike Heartbleed Scanner from the URL below.

Step # 2

After you install the CrowdStrike Heartbleed Scanner, you will need to add your VMware ESXi host’s FQDN or IP address to the Target Entry list if you want to check only a single host.

I recommend doing this for your first one, so that you get familiar with the tool.

Then you can move on to scanning a whole list of hosts or an IP address range.

In the Target Entry section, you can either enter a single host by FQDN or IP address, or use an IP address range for multiple hosts.

Step # 3

You can leave all of the default settings in the Control section, unless you feel that you need to change any of them.

Once you have entered your Hosts, you are now ready to start scanning.

You can start the scan using the blue Play button.

Next, you can monitor the scanner as it runs; it shows a circle of dots that stops when it is done.

Once the scan has completed, you will see the results.

If a vulnerable Host is found, it will report as “VULNERABLE” in the Status section.
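As a rough cross-check alongside the scanner, you can also compare a host's reported build number against the build that ships the fix. The sketch below is a hypothetical helper, not part of any VMware tool; the build numbers used (1746018 for ESXi550-201404401, 1623387 for plain 5.5 Update 1) are assumptions that you should verify against the VMware KB articles listed later in this post, and fetching the live build number (e.g. via pyVmomi) is left out:

```python
# Hedged sketch: decide whether an ESXi 5.5 host's build number includes
# the Heartbleed fix. PATCHED_BUILD is an assumption taken from the
# ESXi550-201404401 release notes -- verify it against the VMware KB.

PATCHED_BUILD = 1746018  # assumed build for ESXi550-201404401

def is_heartbleed_patched(build, patched_build=PATCHED_BUILD):
    """Return True if the reported build is at or past the patched build."""
    return int(build) >= patched_build

# Example: a host still on plain 5.5 U1 (assumed build 1623387)
# would report as not patched.
print(is_heartbleed_patched(1623387))  # False
print(is_heartbleed_patched(1746018))  # True
```

A build check like this only tells you a patch is installed; the scanner above actually probes the TLS heartbeat behavior, so use both.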


Step # 4

Here is how you go about patching your VMware ESXi Hosts.

I’m going to be using VMware Update Manager (VUM), to perform my remediation steps.

1) Create a new baseline that will include the VMware patch:

VMware ESXi 5.5, Bulletin “ESXi550-201404401”

a) Add Name

b) Add Description

c) Baseline Type: I selected Host Baselines, then Host Patch, and then clicked the Next button.

d) Patch Options: I selected Fixed, since this new baseline is for the Heartbleed vulnerability only.

e) Patches: enter ESXi550-201404401; you will need to click the down arrow and select Patch ID.

Next, click the down arrow to add this patch to your new baseline, then click the Next button.

f) You can now click the Finish button.

g) Now you can attach your new baseline to your host: select your baseline under Individual Baselines by Type and then click the Attach button.

h) Next, click the Scan button. Now you should be able to work with VUM as you would normally.


2) Now I will add this VMware patch to the new Heartbleed baseline.




Here is what a host will look like if it is not vulnerable to the OpenSSL Heartbleed vulnerability.

Under the Status section, it will report as “Failed to connect”.


Step # 5

You can find more information on the OpenSSL Heartbleed vulnerability in the following articles:

Resolving OpenSSL Heartbleed for ESXi 5.5 – CVE-2014-0160 (2076665)

VMware ESXi 5.5, Patch ESXi550-201404401-SG: Updates esx-base (2076121)

Response to OpenSSL security issue CVE-2014-0160/CVE-2014-0346 a.k.a: “Heartbleed” (2076225)

NIST – Vulnerability Summary for CVE-2014-0160

There are four new additions to the guide. Please review them.

  1. enable-VGA-Only-Mode: Used for server VMs that don’t need a graphical console, e.g. Linux web servers, Windows Core, etc.
  2. disable-non-essential-3D-features: Removes 3D graphics capabilities from VMs that don’t need them.
  3. use-unique-roles: A new companion control to use-service-accounts. If you have multiple service accounts, then each one should have a unique role with just enough privileges to accomplish its task. This is in line with least-privilege operations.
  4. change-sso-admin-password: A great catch. When installing Windows vCenter, you’re prompted to change the password of administrator@vsphere.local. When installing the VCSA in a default manner, you are not. This control reminds you to go back and do that.

The rest are formatting, spelling, clarification, etc. One interesting change is the “enable-nfc-ssl” control. That has been renamed to “verify-nfc-ssl” now that SSL is enabled by default in 5.5 for NFC traffic. All of the changes are called out in the Change Log.

I’d like to thank the many customers and internal folks who have contributed and pointed out the errors that needed correcting. It’s great to have so many folks that are willing to pitch in!

Head on over to the vSphere Hardening Guide page to grab your copy now!

Thanks and please feel free to contact me on Twitter at @vspheresecurity or email to mfoley at if you have any input you’d like to share.



Author & Article Source: Mike Foley | VMware Blogs
