VMware Virtual Infrastructure


 What is Virtualization? Virtualization is the single most effective way to reduce IT expenses while boosting efficiency and agility, not just for large enterprises but for small and midsize businesses too. VMware virtualization lets you run multiple operating systems and applications on a single computer.

Here is a great overview video on Virtualization created by VMware

Virtualization enables organizations to run more workloads on a single server by consolidating the environment so that your applications run on virtual machines. Converting to a virtualized datacenter reduces the required datacenter square footage, rack space, power, cooling, cabling, storage, and network components by reducing the sheer number of physical machines.











The number of physical machines can be reduced by converting physical machines to virtual machines and consolidating the converted machines onto a single host.

There is no need to wait for hardware to arrive for a new server deployment. In contrast to the long process of deploying physical servers, virtual machines can be deployed in a matter of minutes.


Physical and Virtual Architecture 










The illustration above shows the difference between a virtualized and a nonvirtualized host. In a traditional architecture, the operating system interfaces directly with the installed hardware.

Virtualization is a technology that decouples physical hardware from a computer's operating system. Virtualization allows you to consolidate and run multiple workloads as virtual machines on a single computer. A virtual machine is a computer created by software that, like a physical computer, runs an operating system and applications. Each virtual machine contains its own virtual hardware, including a virtual CPU, memory, hard disk, and network interface card, which looks like physical hardware to the operating system and applications.

Why Use Virtual Machines?

A physical machine is difficult to move or copy and is bound to a specific set of hardware components, and the life cycle of hardware is very short.

A virtual machine is:
1. Easy to move and copy:
*Encapsulated into files.
*Independent of physical hardware.
2. Easy to manage:
*Isolated from other virtual machines.
*Insulated from hardware changes.

A virtual machine also provides the ability to support legacy applications and allows servers to be consolidated.













In a nonvirtual environment, the operating system assumes it owns all physical memory in the system. When an application starts, it uses the interfaces provided by the operating system to allocate or release virtual memory pages during execution. Virtual memory is a well-known technique used in most general-purpose operating systems, and almost all modern processors have hardware to support it.

Virtual memory creates a uniform virtual address space for applications and allows the operating system and hardware to handle the address translation between the virtual address space and the physical address space.

This technique adapts the execution environment to support a large address space, process protection, file mapping, and swapping in modern computer systems.

In VMware, the virtualization layer creates a contiguous addressable memory space for the virtual machine when it is started. The memory space allocated is configured when the virtual machine is created and has the same properties as a virtual address space.

This configuration allows the hypervisor to run multiple virtual machines simultaneously while protecting the memory of each virtual machine from being accessed by others.
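The translation idea described above can be illustrated with a minimal sketch of ordinary page-table lookup; the page numbers, mapping, and 4 KB page size below are illustrative only, not VMware's implementation:

```python
PAGE_SIZE = 4096  # bytes per page, typical for x86

# Toy page table: virtual page number -> physical frame number
page_table = {0: 7, 1: 3, 2: 11}

def translate(virtual_address):
    """Translate a virtual address to a physical address via the page table."""
    vpn, offset = divmod(virtual_address, PAGE_SIZE)
    if vpn not in page_table:
        raise KeyError("page fault: virtual page %d is not mapped" % vpn)
    return page_table[vpn] * PAGE_SIZE + offset

# Virtual address 4100 = page 1, offset 4 -> frame 3, so 3 * 4096 + 4 = 12292
assert translate(4100) == 12292
```

The hypervisor maintains an analogous mapping one level below the guest's page tables, translating what the guest believes is physical memory to actual machine memory, which is how each virtual machine's memory stays isolated.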


Physical and Virtual Network 

The key virtual networking components in a virtual architecture are virtual Ethernet adapters and virtual switches. A virtual machine can be configured with one or more virtual Ethernet adapters.
Virtual switches allow virtual machines on the same ESXi host to communicate with each other using the same protocols that would be used over physical switches, without the need for additional hardware. Virtual switches also support VLANs compatible with standard VLAN implementations from vendors such as Cisco.

VMware technology allows you to link local virtual machines to each other and to the external network through a virtual switch. A virtual switch, like a physical Ethernet switch, forwards frames at the data link layer. An ESXi host might contain multiple virtual switches.




vNetwork Standard Switch

What is a Virtual Network?

A virtual network provides networking for hosts and virtual machines.

What is a Virtual Switch?

*Directs network traffic between virtual machines and links to external networks.
*Combines the bandwidth of multiple adapters and balances traffic among them. It can also handle physical network interface card (NIC) failover.
*Models a physical Ethernet switch:
-A virtual machine's network interface card (NIC) can connect to a port.
-Each uplink adapter uses one port.










A virtual network provides networking for VMware ESXi virtual machines. The fundamental component of a virtual network is the virtual switch. A virtual switch is a software construct, implemented in the VMkernel, that provides networking connectivity for the virtual machines that run on an ESXi host.

All network communication handled by a host passes through one or more virtual switches. A virtual switch provides connections for virtual machines to communicate with one another, whether they run on the same host or on different hosts. A virtual switch also provides connections for the management and migration networks, as well as a connection to access IP storage.

Virtual switches work at layer 2 of the OSI model. You cannot have two virtual switches mapped to the same physical network interface card (NIC), but you can have two or more physical NICs mapped to the same virtual switch.

Use virtual switches to combine the bandwidth of multiple network adapters and balance communications traffic among them. You can also configure the adapters to handle physical NIC failover.

When two or more virtual machines are connected to the same virtual switch, network traffic among them is routed locally. If an uplink adapter (a physical Ethernet adapter) is attached to the virtual switch, each virtual machine can access the external network that the adapter is connected to.

Types of Virtual Switch Connections:

A virtual switch allows the following connection types:
*Virtual machine port group
*VMkernel port:
-For IP storage, vMotion migration, and VMware vSphere Fault Tolerance
-For the ESXi management network

A virtual switch provides two types of connections to hosts and virtual machines:
*Connecting virtual machines to the physical network.
*Connecting VMkernel services to the physical network. VMkernel services include access to IP storage, such as NFS or iSCSI, vMotion migration, and access to the management network.

The ESXi management network port is used to connect to network or remote services, including the VMware vSphere Client. Each ESXi management network port and each VMkernel port must be configured with its own IP address, netmask, and gateway.


The virtual machine port groups and VMkernel ports connect to the outside world through the physical Ethernet adapters that are connected to the virtual switch uplink ports.








When designing your network environment, VMware vSphere allows you to place all your networks on a single virtual switch. Alternatively, you can opt for multiple virtual switches, each with a separate network.

The decision partly depends on the layout of your physical network. For example, you might not have enough network adapters to create a separate virtual switch for each network. Instead, you can team all the adapters in a single virtual switch and isolate the networks by using VLANs.











Example of a vSphere 5 Host Network Design.
























A virtual network supports two types of virtual switches:

*vNetwork standard switches: Configured at the ESXi host level.

-Virtual switch configuration for a single host.
-Discussed in this module.
-Maximum of 4,088 virtual switch ports per standard switch and 4,096 virtual switch ports per host.

*vNetwork distributed switches (VDS): Similar to a standard switch, but functions as a single virtual switch across all associated hosts. Distributed switches provide a consistent network configuration for virtual machines as they migrate across hosts.














The virtual switch connects to the external network through outbound Ethernet adapters. The virtual switch is capable of binding multiple vmnics together (similar to network interface card [NIC] teaming on a traditional server), offering greater availability and bandwidth to the virtual machines using the virtual switch.











Virtual switches are similar to modern physical switches in many ways. Like a physical switch, each virtual switch is isolated and has its own forwarding table, so every destination the switch looks up can match only ports on the same virtual switch where the frame originated. This feature improves security, making it difficult for hackers to break virtual switch isolation.

Virtual switches support VLAN segmentation at the port level, so each port can be configured as an access or trunk port, providing access to either a single VLAN or multiple VLANs.

Unlike physical switches, virtual switches do not require a spanning tree protocol because a single-tier networking topology is enforced. Multiple virtual switches cannot be interconnected, and network traffic cannot flow directly from one virtual switch to another virtual switch on the same host.

Virtual switches provide all the ports that you need in one switch. Virtual switches need not be cascaded because they do not share physical Ethernet adapters, and leaks between virtual switches do not occur.

















VLANs (Virtual Local Area Networks)

ESXi supports 802.1Q VLAN tagging.

Virtual switch tagging is one of the three tagging policies supported:
-Packets from a virtual machine are tagged as they exit the virtual switch.
-Packets are untagged as they return to the virtual machine.
-The effect on performance is minimal.

ESXi provides VLAN support by giving a port group a VLAN ID.

VLANs provide for the logical grouping of switch ports, allowing communication as if all virtual machines or ports in a VLAN were on the same physical LAN segment. A VLAN is a software-configured broadcast domain. VLANs:

-Create logically grouped networks (not based on physical topology).
-Improve performance by confining broadcast traffic to a subset of switch ports.
-Save costs by partitioning the network without the overhead of new routers.

VLANs can be configured at the port group level. The host provides VLAN support through virtual switch tagging, which is enabled by giving a port group a VLAN ID (by default, a VLAN ID is optional).

The VMkernel then takes care of all tagging and untagging as packets pass through the virtual switch.
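The tag-on-exit, untag-on-return behavior described above can be sketched as a simple transform; the frame representation below is a simplified toy model, not the VMkernel's actual frame handling:

```python
TPID = 0x8100  # EtherType value that marks an 802.1Q-tagged frame

def tag_on_exit(frame, vlan_id):
    """Insert a simplified 802.1Q tag as the frame leaves via the uplink."""
    return {"tpid": TPID, "vlan": vlan_id, "payload": frame}

def untag_on_return(tagged_frame):
    """Strip the tag before the frame is delivered back to the virtual machine."""
    assert tagged_frame["tpid"] == TPID, "not a VLAN-tagged frame"
    return tagged_frame["payload"]

frame = {"dst": "00:50:56:aa:bb:cc", "data": "hello"}
tagged = tag_on_exit(frame, vlan_id=105)  # port group configured with VLAN ID 105
assert tagged["vlan"] == 105
assert untag_on_return(tagged) == frame   # the guest never sees the tag
```

Because the guest only ever sees untagged frames, its networking stack needs no VLAN awareness, which is why the performance effect is minimal.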













Standard Virtual Switch Policies Overview:

There are three network policies:
-Security.
-Traffic shaping.
-NIC teaming.

Policies are defined:
*At the standard virtual switch level:
-Default policies for all the ports on the standard virtual switch.

*At the port or port group level:
-Effective policies: policies defined at this level override the default policies set at the standard virtual switch level.

Traffic shaping is useful in cases where you want to limit the traffic to or from a virtual machine. You would use traffic shaping either to protect a virtual machine or to protect other traffic in an oversubscribed network.

The three network policies are security, traffic shaping, and NIC teaming. These policies are defined for the entire standard virtual switch and can also be defined for a VMkernel port or a virtual machine port group. When a policy is defined for an individual port or port group, the policy at that level overrides the default policies defined for the standard switch.





 Security Policy

Administrators can configure layer 2 Ethernet security options at the standard virtual switch level and at the port group level.

The network security policy contains the following exceptions:

*Promiscuous Mode: When set to Reject, placing a guest adapter in promiscuous mode has no effect on which frames are received by the adapter (the default is Reject).

*MAC Address Changes: When set to Reject, if the guest attempts to change the MAC address assigned to the virtual NIC, it stops receiving frames (the default is Accept).

*Forged Transmits: When set to Reject, the virtual NIC drops frames that the guest sends where the source address field contains a MAC address other than the assigned virtual NIC MAC address (the default is Accept).

In general, these policies give you the option of disallowing certain behaviors that might compromise security. For example, a hacker might use a promiscuous mode device to capture network traffic for unscrupulous activities. Or someone might impersonate a node and gain unauthorized access by spoofing its MAC address.

Set Promiscuous Mode to Accept to use an application in a virtual machine that analyzes or sniffs packets, such as a network-based intrusion detection system.





Set MAC Address Changes and Forged Transmits to Reject to help protect against certain attacks launched by a rogue guest operating system.

Leave MAC Address Changes and Forged Transmits at their default values (Accept) if your applications change the mapped MAC address, as do some guest operating system-based firewalls.
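The Forged Transmits check described above amounts to a filter on each frame's source MAC address. The sketch below is a toy model of that decision, not the VMkernel's implementation; the MAC values are hypothetical:

```python
def allow_transmit(frame_src_mac, assigned_mac, forged_transmits="Accept"):
    """Return True if the virtual switch should forward the guest's frame.

    With Forged Transmits set to Reject, frames whose source MAC differs
    from the MAC assigned to the virtual NIC are dropped.
    """
    if forged_transmits == "Reject":
        return frame_src_mac == assigned_mac
    return True  # default Accept: forward regardless of source MAC

assert allow_transmit("00:50:56:00:00:01", "00:50:56:00:00:01", "Reject")
assert not allow_transmit("de:ad:be:ef:00:01", "00:50:56:00:00:01", "Reject")
assert allow_transmit("de:ad:be:ef:00:01", "00:50:56:00:00:01")  # Accept
```

The MAC Address Changes policy is the mirror image on the receive side: with Reject, frames stop being delivered once the guest's effective MAC no longer matches the assigned one.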

To set the security policies:

Click the host's Configuration tab, click the Networking link, and click Properties next to the virtual switch that you want to modify.



In the vSwitch (standard switch) Properties dialog box, select the port group and click Edit.




Click the Security tab.



Traffic Shaping Policy

Network traffic shaping is a mechanism for controlling a virtual machine's network bandwidth.

Average rate, peak rate, and burst size are configurable




A virtual machine's network bandwidth can be controlled by enabling the network traffic shaper. The network traffic shaper, when used on a standard virtual switch, shapes only outbound network traffic. To control inbound traffic, use a load-balancing system, or turn on rate-limiting features on your physical router.


 Configure Traffic Shaping:
*Traffic shaping is disabled by default.
*Parameters apply to each virtual NIC in the standard switch (vSwitch).
*On a standard switch, traffic shaping controls outbound traffic only.
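The average-rate and burst-size parameters behave much like a token bucket. The sketch below is a simplified illustration of that model only (the peak-rate bound is omitted for brevity, and the class is not VMware's shaper code):

```python
class TokenBucket:
    """Simplified shaper: the average rate refills the bucket, the burst size
    caps it, and a transmission is allowed only while tokens remain."""

    def __init__(self, average_kbps, burst_kb):
        self.rate = average_kbps       # refill rate, in kilobits per second
        self.capacity = burst_kb * 8   # burst size converted to kilobits
        self.tokens = self.capacity
        self.last = 0.0

    def allow(self, now, packet_kbits):
        # Refill proportionally to elapsed time, capped at burst capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_kbits <= self.tokens:
            self.tokens -= packet_kbits
            return True   # transmit
        return False      # over the shaped rate: drop or queue

bucket = TokenBucket(average_kbps=100, burst_kb=10)  # 10 KB burst = 80 kilobits
assert bucket.allow(0.0, 80)     # a full burst fits
assert not bucket.allow(0.0, 1)  # bucket empty, no time has elapsed
assert bucket.allow(1.0, 80)     # refilled after 1 s (capped at the burst size)
```

This is why a shaped virtual NIC can briefly exceed its average rate up to the burst size, then settles back to the configured average.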




 NIC Teaming Policy

NIC Teaming Settings:

*Load Balancing (Outbound Only)

*Network Failure Detection 

*Notify Switches 


*Failover Order




NIC teaming policies allow you to determine how network traffic is distributed between adapters and how to reroute traffic in the event of an adapter failure. NIC teaming policies include load-balancing and failover settings. Default NIC teaming policies are set for the entire standard switch. These default settings can be overridden at the port group level. The policies shown are inherited from the settings at the switch level. At the port group level, for example for a Production port group, you can select one of the policy exceptions and override the default selection.

To modify NIC teaming policies of a port group:

1. Click your ESXi host's Configuration tab.

2. Click the Networking link.

3. Click the Properties link next to the virtual switch on which the port group is located.

4. Select the port group in the list of ports and click Edit.

5. In the port group Properties window, click the NIC Teaming tab.


Load-Balancing Method: Originating Virtual Port ID




Routing based on the originating virtual port ID is called virtual port ID load balancing. With this method, a virtual machine's outbound traffic is mapped to a specific physical NIC. The NIC is determined by the ID of the virtual port to which the virtual machine is connected. This method is simple and fast and does not require the VMkernel to examine the frame for the necessary information.

When the load is distributed in the NIC team using the port-based method, no single-NIC virtual machine gets more bandwidth than can be provided by a single physical adapter.


Load-Balancing Method: Source MAC Hash

Routing based on source MAC hash: in this load-balancing method, each virtual machine's outbound traffic is mapped to a specific physical NIC based on the virtual NIC's MAC address. This method has low overhead and is compatible with all switches, but it might not spread traffic evenly across the physical NICs.

When the load is distributed in the NIC Team using the MAC-based method, no single-NIC virtual machine gets more bandwidth than can be provided by a single physical adapter.



Load-Balancing Method: IP Hash 

Routing based on IP hash: in this load-balancing method, a NIC for each outbound packet is chosen based on the source and destination IP addresses. This method has higher CPU overhead but a better distribution of traffic across physical NICs.

The IP-based method requires IEEE 802.3ad link aggregation support or EtherChannel on the switch. The Link Aggregation Control Protocol (LACP) is a method to control the bundling of several physical ports to form a single logical channel (LACP is part of the IEEE 802.3ad specification). EtherChannel and the IEEE 802.3ad standard are similar and accomplish the same goal. EtherChannel is a port-trunking technology used primarily on Cisco switches. It allows several physical Ethernet links to be grouped into one logical Ethernet link, providing fault tolerance and high-speed links between switches, routers, and servers.

When the load is distributed in the NIC team using the IP-based method, a single-NIC virtual machine might use the bandwidth of multiple physical adapters.

When one virtual machine communicates with different clients, it chooses different NICs. On the return traffic, packets can come in on multiple paths because more than two NICs might be teamed, so link aggregation must be supported on the physical switch. None of this activity deals with inbound traffic; only the outbound traffic is affected.
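The three load-balancing methods above differ only in which key selects the uplink. The sketch below is a toy model over a hypothetical vmnic team; the hash function and values are illustrative, not ESXi's actual hashing:

```python
import zlib

uplinks = ["vmnic0", "vmnic1", "vmnic2"]  # hypothetical NIC team

def by_port_id(virtual_port_id):
    """Originating virtual port ID: a fixed uplink per virtual port."""
    return uplinks[virtual_port_id % len(uplinks)]

def by_mac_hash(src_mac):
    """Source MAC hash: a fixed uplink per virtual NIC MAC address."""
    return uplinks[zlib.crc32(src_mac.encode()) % len(uplinks)]

def by_ip_hash(src_ip, dst_ip):
    """IP hash: the uplink varies with the source/destination pair, so one
    virtual machine talking to many clients can use several uplinks."""
    return uplinks[zlib.crc32((src_ip + dst_ip).encode()) % len(uplinks)]

# Port-ID and MAC-hash methods pin a VM's traffic to a single uplink:
assert by_port_id(5) == by_port_id(5)
assert by_mac_hash("00:50:56:00:00:01") == by_mac_hash("00:50:56:00:00:01")
# IP hash picks an uplink per conversation:
assert by_ip_hash("10.0.0.5", "10.0.1.9") in uplinks
```

This difference in keys is exactly why only the IP-hash method lets a single-NIC virtual machine exceed one physical adapter's bandwidth.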



Detecting and Handling Network Failures:

Network failure is detected by the VMkernel, which monitors:

*Link state only

*Link state, plus beaconing

(Beacon probing is a network failover detection mechanism that sends out and listens for beacon probes on all NICs in the team and uses this information, along with link status, to determine link failure. ESX/ESXi sends beacon packets every 10 seconds.)

Switches can be notified whenever:
*A failover event occurs.

*A new virtual NIC is connected to the virtual switch.

Failover is implemented by the VMkernel based on configurable parameters:

*Failback: How a physical adapter is returned to active duty after recovering from failure.

*Load-balancing option: Use explicit failover order. Always use the highest-order uplink from the list of active adapters that pass failover-detection criteria.



The VMkernel can use link status or beaconing or both to detect a network failure. Monitoring the link status provided by the network adapter detects failures like cable pulls and physical switch power failures.

This monitoring does not detect configuration errors like a physical switch port that is blocked by spanning tree or misconfigured to the wrong VLAN. It does not detect cable pulls or link failures on the upstream side of the physical switch. Beaconing introduces a load of a 62-byte packet approximately every 10 seconds per physical NIC.


When beaconing is activated, the VMkernel sends out and listens for probe packets on all NICs in the team. This technique can detect failures that link-status monitoring alone cannot. Consult your switch manufacturer to confirm the benefit of configuring beaconing in your environment.

A physical switch can be notified by the VMkernel whenever a virtual NIC is connected to a virtual switch. A physical switch can also be notified whenever a failover event causes a virtual NIC's traffic to be routed over a different physical NIC. The notification is sent out over the network to update the lookup tables on physical switches. In most cases, this notification process is desirable, because otherwise virtual machines would experience greater latency after failovers and vMotion operations.

But do not set this option when the virtual machines connected to the port group are running unicast-mode Microsoft Network Load Balancing (NLB). (NLB in multicast mode is unaffected.)

When using explicit failover order, always use the highest-order uplink from the list of active adapters that pass failover-detection criteria.

The Failback option determines how a physical adapter is returned to active duty after recovering from a failure. If Failback is set to Yes, a recovered adapter is returned to active duty immediately, displacing the standby adapter that took its place at the time of failure. If Failback is set to No, a failed adapter is left inactive even after recovery, until another currently active adapter fails and requires replacement.
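The explicit failover order described above can be sketched as picking the first healthy adapter from the configured lists. The vmnic names and the selection helper below are hypothetical, a simplified model of the behavior rather than the VMkernel's code:

```python
def pick_uplink(active, standby, link_up):
    """Return the highest-order healthy uplink: active adapters first,
    in configured order, then standby adapters."""
    for nic in active + standby:
        if link_up.get(nic, False):  # passes failover-detection criteria
            return nic
    return None  # no healthy uplink remains in the team

active, standby = ["vmnic0", "vmnic1"], ["vmnic2"]
all_up = {"vmnic0": True, "vmnic1": True, "vmnic2": True}
assert pick_uplink(active, standby, all_up) == "vmnic0"
# vmnic0 fails: traffic moves to the next active adapter in order.
assert pick_uplink(active, standby, {**all_up, "vmnic0": False}) == "vmnic1"
# With Failback set to Yes, vmnic0's recovery displaces vmnic1 again.
assert pick_uplink(active, standby, all_up) == "vmnic0"
```

With Failback set to No, a recovered adapter would instead stay out of this selection until another active adapter fails.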

What is a VMware vSphere Distributed Switch(VDS)?












VMware vSphere Distributed Switch (VDS) provides a centralized interface from which you can configure, monitor and administer virtual machine access switching for the entire data center.

Use these VDS features to streamline provisioning, administration and monitoring of virtual networking across multiple hosts:

*Central control of virtual switch port configuration, port group naming, filter settings, and more.

*Link Aggregation Control Protocol (LACP), which negotiates and automatically configures link aggregation between vSphere hosts and the access-layer physical switch.

*Network health-check capabilities to verify vSphere-to-physical-network configuration.

Enhanced network monitoring and troubleshooting capabilities. The VDS provides:
*Support for the RSPAN and ERSPAN protocols for remote network analysis.
*IPFIX (NetFlow version 10).
*SNMPv3 support.
*Rollback and recovery for patching and updating the network configuration.
*Templates to enable backup and restore of the virtual networking configuration.
*Network-based core dump (Netdump) to debug hosts without local storage.

The VDS extends the features and capabilities of virtual networks while simplifying provisioning and the ongoing configuration, monitoring and management process.


vSphere network switches can be divided into two logical sections: the data plane and the management plane. The data plane implements the packet switching, filtering, tagging and so on. The management plane is the control structure used by the operator to configure data plane functionality.

Each vSphere Standard Switch (VSS) contains both data and management planes, and the administrator configures and maintains each switch individually.

The VDS eases this management burden by treating the network as an aggregated resource. Individual host-level virtual switches are abstracted into one large VDS spanning multiple hosts at the data center-level. In this design, the data plane remains local to each VDS but the management plane is centralized.

Each VMware vCenter Server instance can support up to 128 VDSs; each VDS can manage up to 500 hosts.

*Distributed Virtual Port Groups (DV Port Groups) — Port groups that specify port configuration options for each member port.

*Distributed Virtual Uplinks (dvUplinks) — dvUplinks provide a level of abstraction for the physical NICs (vmnics) on each host.

*Private VLANs (PVLANs) — PVLAN support enables broader compatibility with existing networking environments using the technology

*Network vMotion — Simplifies monitoring and troubleshooting by tracking the networking state (such as counters and port statistics) of each virtual machine as it moves from host to host on a VDS

*Bi-directional Traffic Shaping — Applies traffic-shaping policies on DV port group definitions, defined by average bandwidth, peak bandwidth, and burst size.


 VMware vSphere vMotion

What is vSphere vMotion?

VMware® VMotion™ enables the live migration of running virtual machines from one physical server to another with zero downtime, continuous service availability, and complete transaction integrity. VMotion is a key enabling technology for creating the dynamic, automated, and self-optimizing datacenter.

• Improve availability by conducting maintenance without disrupting business operations

• Move virtual machines within server resource pools to continuously align the allocation of resources to business priorities


 How Does VMware VMotion Work?
Live migration of a virtual machine from one physical server to another with VMware VMotion is
enabled by three underlying technologies.

First, the entire state of a virtual machine is encapsulated by a set of files stored on shared
storage such as Fibre Channel or iSCSI Storage Area Network (SAN) or Network Attached Storage
(NAS). VMware vStorage VMFS allows multiple installations of VMware ESX® to access the same virtual machine files concurrently.

Second, the active memory and precise execution state of the virtual machine is rapidly transferred
over a high speed network, allowing the virtual machine to instantaneously switch from running on
the source ESX host to the destination ESX host.

VMotion keeps the transfer period imperceptible to users by keeping track of on-going memory
transactions in a bitmap. Once the entire memory and system state has been copied over to the
target ESX host, VMotion suspends the source virtual machine, copies the bitmap to the target ESX
host, and resumes the virtual machine on the target ESX host. This entire process takes less than
two seconds on a Gigabit Ethernet network.
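The bitmap-tracked memory copy described above can be sketched as an iterative pre-copy loop. The function below is a toy model of the idea only, not VMware's implementation; the page contents, round count, and stopping threshold are all hypothetical:

```python
def migrate(memory, get_dirty_pages, max_rounds=10):
    """Iteratively copy guest memory; pages dirtied during a round are
    re-copied in the next, until the remaining set is small enough to
    send during a brief suspend of the source virtual machine."""
    target = {}
    dirty = set(memory)            # round 1: copy everything
    for _ in range(max_rounds):
        for page in dirty:
            target[page] = memory[page]
        dirty = get_dirty_pages()  # bitmap of pages written meanwhile
        if len(dirty) <= 2:        # small enough: suspend and finish
            break
    for page in dirty:             # final copy while the source is suspended
        target[page] = memory[page]
    return target

memory = {i: "page-%d" % i for i in range(8)}
rounds = iter([{1, 3, 5}, {3}, set()])  # pages dirtied after each round
assert migrate(memory, lambda: next(rounds)) == memory
```

Because each round only resends pages dirtied during the previous round, the final suspend-and-copy step handles a tiny残 set, which is what keeps the switchover imperceptible to users.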

Third, the networks being used by the virtual machine are also virtualized by the underlying ESX
host, ensuring that even after the migration, the virtual machine network identity and network
connections are preserved. VMotion manages the virtual MAC address as part of the process. Once the destination machine is activated, VMotion pings the network router to ensure that it is aware of
the new physical location of the virtual MAC address. Since the migration of a virtual machine with
VMotion preserves the precise execution state, the network identity, and the active network
connections, the result is zero downtime and no
disruption to users.

vMotion Migration Overview:


 Storage Overview
VMware ESXi hosts should be configured so that they have shared access to datastores.
Datastores are logical containers that hide the specifics of each storage device and provide a uniform model for storing virtual machine files. Depending on the type of storage that you use, datastores can be formatted with VMware vSphere VMFS or with a file system native to an operating system or storage device that is shared using the Network File System (NFS) protocol.



Several storage technologies are supported by ESXi hosts in the VMware vSphere environment.

*Direct-attached storage: Internal or external storage disks or arrays attached to the host through a direct connection instead of a network connection.

*Fibre Channel: A high-speed transport protocol used for storage area networks (SANs). Fibre Channel encapsulates SCSI commands, which are transmitted between Fibre Channel nodes. In general, a Fibre Channel node is a server, a storage system, or a tape drive. A Fibre Channel switch interconnects multiple nodes, forming the "fabric" in a Fibre Channel network.

*FCoE: Fibre Channel traffic is encapsulated into Fibre Channel over Ethernet (FCoE) frames. These FCoE frames are converged with networking traffic. By enabling the same Ethernet link to carry both Fibre Channel and Ethernet traffic, FCoE increases the use of the physical infrastructure and reduces the total number of network ports and cables.

*iSCSI: A SCSI transport protocol, enabling access to storage devices over standard TCP/IP networks. iSCSI maps SCSI block-oriented storage over TCP/IP. Initiators, such as an iSCSI host bus adapter (HBA) in an ESXi host, send SCSI commands to targets.

*NAS: Storage shared over standard TCP/IP networks at the file system level. NAS storage is used to hold NFS datastores. The NFS protocol does not support SCSI commands.

iSCSI, NAS, and FCoE can run over 1 Gbps or 10 Gbps Ethernet. 10 GigE provides increased storage performance and sufficient bandwidth to permit multiple types of high-bandwidth protocol traffic to coexist on the same network.



A datastore is a generic term for a container that holds files. A datastore can be formatted with the Virtual Machine File System (VMFS) or, in the case of a NAS/NFS device, with a file system native to the storage provider. Both VMFS and NFS datastores can be shared across multiple ESXi hosts.

A virtual machine is stored as a set of files in its own directory in a datastore. Datastores can also be used to store ISO images, virtual machines, and templates.





VMFS is designed, constructed, and optimized for a virtualized environment. VMFS is a high-performance cluster file system designed for virtual machines. VMFS uses distributed journaling of its file system metadata changes to allow fast and resilient recovery in the event of a hardware failure.

VMFS increases resource utilization by providing multiple virtual machines with shared access to a consolidated pool of clustered storage. VMFS is also the foundation for distributed infrastructure services such as live migration of virtual machines and virtual machine files, dynamic balancing of workloads across available compute resources, automated restart of virtual machines, and fault tolerance.

VMFS provides an interface to storage resources so that several storage protocols (Fibre Channel, Fibre Channel over Ethernet, iSCSI, and NAS) can be used to access datastores on which virtual machines reside. Dynamic growth of VMFS datastores through the aggregation of storage resources and dynamic expansion of a VMFS datastore give you the ability to increase a shared storage resource pool with no downtime. (A point-in-time copy of a datastore can also be mounted.)
No other clustered file system provides the capabilities of VMFS. Its distributed locking methods forge the link between the virtual machine and the underlying storage resource in a manner that no other cluster file system can equal.














VMFS stores all the files that make up a virtual machine in a single directory. VMFS provides encapsulation of the entire virtual machine so that it can easily become part of a business continuity or disaster recovery solution.
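As a sketch of that single-directory layout, these are the typical core files kept together for a virtual machine (the VM name "webserver01" is hypothetical, and the listing is illustrative rather than exhaustive):

```shell
#!/bin/sh
# Sketch: the per-VM directory on a VMFS datastore, e.g.
# /vmfs/volumes/<datastore>/webserver01/, holds every file
# that makes up the virtual machine.
vm="webserver01"                # hypothetical virtual machine name

config="${vm}.vmx"              # virtual machine configuration file
disk_desc="${vm}.vmdk"          # virtual disk descriptor file
disk_data="${vm}-flat.vmdk"     # virtual disk data file
nvram="${vm}.nvram"             # virtual machine BIOS settings
log="vmware.log"                # virtual machine log file

echo "Files for $vm: $config $disk_desc $disk_data $nvram $log"
```

Because everything lives in one directory, copying or replicating that directory captures the whole machine state.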












VMFS File System Layouts

A VMFS datastore employs a file structure similar to a Linux or UNIX operating system. Each datastore is mounted to a folder and contains several subdirectories containing the files that describe a virtual machine. VMFS has been optimized to support large files and to perform many small concurrent writes.

What are the new features of VMFS-5?
A newly installed ESXi 5 host formats its datastores as VMFS-5, but if you upgrade ESX 4.0 or ESX 4.1 to ESXi 5, the datastores remain VMFS-3. You can upgrade VMFS-3 to VMFS-5 through the vSphere Client once the ESXi upgrade is complete.

Below is a VMFS-5 vs. VMFS-3 feature comparison.





  • Allows concurrent access to shared storage.
  • Can be dynamically expanded.
  • Uses a 1 MB block size, well suited to storing large virtual disk files.
  • Provides on-disk, block-level locking.


VMFS is a clustered file system that allows multiple physical servers to read and write to the same storage device simultaneously. The cluster file system enables unique virtualization-based services, including:

  • Migration of running virtual machines from one physical server to another without downtime.
  • Automatic restart of a failed virtual machine on a separate physical server.
  • Clustering of virtual machines across different physical servers.


VMFS allows IT organizations to greatly simplify virtual machine provisioning by efficiently storing the entire machine state in a central location. VMFS allows multiple ESXi hosts to concurrently access shared virtual machine storage.

The size of a VMFS datastore can be increased dynamically while virtual machines residing on it are powered on and running. A VMFS datastore efficiently stores both large and small files belonging to a virtual machine. A virtual disk file can be up to 2 TB in size. A VMFS datastore uses sub-block addressing to make efficient use of storage for small files.




VMFS provides block-level distributed locking to ensure that the same virtual machine is not powered on by multiple servers at the same time. If a physical server fails, the on-disk lock for each of its virtual machines can be released so the virtual machines can be restarted on other physical servers.

On the slide, each ESXi host has two virtual machines running on it. The lines connecting the virtual machines to the disk icons for their virtual machine disks (VMDKs) are logical representations of the association and allocation of the larger VMFS datastore. The VMFS datastore is made up of one or more LUNs. The virtual machine sees the assigned storage volume only as a SCSI target from within the guest operating system. The virtual machine's contents are simply files on the VMFS volume.

VMFS can be deployed on various SCSI-based storage devices: direct-attached storage, Fibre Channel storage, and iSCSI storage.

What is NFS?


  • Storage shared over the network at the file system level.
  • Supports NFS version 3 over TCP/IP.

NFS is a file-sharing protocol that ESXi hosts use to communicate with a NAS device. NAS is a specialized storage device that connects to a network and provides file access services to ESXi hosts.

NFS datastores are treated like VMFS datastores in that they can be used to hold virtual machine files, templates, and ISO images. In addition, an NFS volume allows migration, using VMware vMotion, of virtual machines whose files reside on an NFS datastore. ESXi hosts support NFS version 3 over TCP only.

ESXi hosts do not use the Network Lock Manager (NLM) protocol, the standard protocol used to support file locking for NFS-mounted files. VMware has its own locking protocol: NFS locks are implemented by creating lock files on the NFS server.

Lock files are named .lck-<file_id>, where <file_id> is the value of the file ID field for the locked file.

When a lock file is created, updates are periodically sent to the lock file to inform other ESXi hosts that the lock is still active. The lock file updates generate small (84-byte) WRITE requests to the NFS server.
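A tiny sketch of the lock-file naming described above; the file ID value here is made up for illustration:

```shell
#!/bin/sh
# Sketch: VMware's NFS locking creates a file named .lck-<file_id>
# on the NFS server instead of using the NLM protocol.
file_id="3522a8760000"      # hypothetical NFS file ID of the locked file

lock_file=".lck-${file_id}"

echo "$lock_file"
```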




A virtual disk stored on a VMFS datastore always appears to the virtual machine as a mounted SCSI device. The virtual disk hides the physical storage layer from the virtual machine's operating system.

For the operating system in the virtual machine, VMFS preserves the internal file system semantics. Thus the operating system running in the virtual machine sees a native file system, not VMFS. These semantics ensure correct application behavior and data integrity for applications running in virtual machines.

Storage Device Naming Conventions

Storage devices are identified in several ways:

  • SCSI ID - Unique SCSI identifier.
  • Canonical name - The Network Address Authority (NAA) ID is a unique LUN identifier, guaranteed to be persistent across reboots.

*In addition to NAA IDs, devices can also be identified with mpx or t10 identifiers.

  • Runtime name - Uses the convention vmhbaN:C:T:L. This name is not persistent across reboots.




On an ESXi host, SCSI storage devices use various identifiers. Each identifier serves a specific purpose. For example, the VMkernel requires an identifier, generated by the storage device, that is guaranteed to be unique to each LUN. If a unique identifier cannot be provided by the storage device, the VMkernel must generate a unique identifier to represent each LUN or disk.

The disk identifiers referenced in the slide are not user-friendly, so a third, more user-friendly naming convention is created at boot time to reference each disk. This name can be used with command-line utilities to interact with storage recognized by an ESXi host.

SCSI storage device identifiers include:

  • SCSI ID - The unique address of a SCSI device.
  • Canonical name - The Network Address Authority ID (NAA ID). NAA IDs are globally unique identifiers that are persistent across reboots of a SCSI device.

The T10 identifier is another unique identifier that can appear on a SCSI device. Like NAA identifiers, T10 identifiers are assigned by standards committees (such as INCITS) to specific vendors. T10 identifiers always begin with the string “t10”.



mpx is a VMware namespace that is used when no other valid name can be obtained from the LUN. An mpx name is not globally unique and is not persistent across reboots. Typically, only local devices use names starting with “mpx”.
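The identifier prefixes described above make devices easy to classify. A small sketch, using made-up identifier values in the right general shape:

```shell
#!/bin/sh
# Sketch: classify a SCSI device identifier by its prefix,
# following the conventions described in the text.
classify_id() {
  case "$1" in
    naa.*) echo "NAA ID (globally unique, persistent)" ;;
    t10.*) echo "T10 ID (vendor-assigned, persistent)" ;;
    eui.*) echo "EUI ID (globally unique, persistent)" ;;
    mpx.*) echo "VMware mpx name (local device, not persistent)" ;;
    *)     echo "unknown identifier format" ;;
  esac
}

# Hypothetical example identifiers:
classify_id "naa.600508b4000971fa0000b00004270000"
classify_id "mpx.vmhba32:C0:T0:L0"
```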



  • Runtime name - The name of the first path to the device. The runtime name is created by the host. It is not a reliable identifier for the device because it is not persistent. The runtime name might change if you add HBAs to the ESXi host.
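A runtime name can be split into its adapter, channel, target, and LUN parts. A sketch using a hypothetical runtime name:

```shell
#!/bin/sh
# Sketch: pull apart a runtime name of the form vmhbaN:C:T:L
# (adapter, channel, target, LUN). The example value is hypothetical.
runtime_name="vmhba1:C0:T2:L5"

adapter=${runtime_name%%:*}            # vmhba1 (storage adapter)
rest=${runtime_name#*:}
channel=${rest%%:*}; rest=${rest#*:}   # C0 (channel)
target=${rest%%:*}                     # T2 (target)
lun=${rest#*:}                         # L5 (LUN)

echo "adapter=$adapter channel=$channel target=$target lun=$lun"
```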

Storage device names appear in various panels in the VMware vSphere Client.

Viewing Storage Maps



Storage maps are an easy way to visually represent relationships between selected inventory objects and storage. For example, you can view which targets a virtual machine can see or how many paths a virtual machine has to a storage device. Maps can assist in troubleshooting by highlighting problem entities.




Physical Storage Considerations

Discuss vSphere storage needs with your storage administration support team:

  • LUN sizes
  • I/O bandwidth
  • Disk cache parameters
  • Zoning and masking
  • Identical LUN presentation to each ESXi host
  • Active-active or active-passive arrays
  • Export properties for NFS datastores


Before implementing a vSphere environment, the vSphere administrator needs to review storage needs with the storage support team.

iSCSI Components
An iSCSI SAN consists of an iSCSI storage system, which contains one or more LUNs and one or more storage processors (SPs). Communication between the host and the storage array occurs over a TCP/IP network.

The ESXi host is configured with an iSCSI initiator. An initiator can be hardware-based, in which case the initiator is an iSCSI host bus adapter (HBA), or software-based, known as the iSCSI software initiator.

An initiator transmits SCSI commands over the IP network. A target receives SCSI commands from the IP network. You can have multiple initiators and targets in your iSCSI network. iSCSI is SAN oriented because the initiator finds one or more targets, a target presents LUNs to the initiator, and the initiator sends SCSI commands to them. An initiator resides in the ESXi host. Targets reside in the storage arrays that are supported by the ESXi host.

iSCSI arrays can restrict access to targets from hosts using various mechanisms, including IP addresses, subnets, and authentication requirements.

iSCSI Addressing



The main addressable, discoverable entity is an iSCSI node. An iSCSI node can be an initiator or a target. An iSCSI node requires a name so that storage can be managed regardless of address.

The iSCSI name can use one of the following formats: the iSCSI qualified name (IQN) or the extended unique identifier (EUI).

The IQN can be up to 255 characters long. The naming convention is:

  • The prefix “iqn”.
  • A date code specifying the year and month in which the organization registered the domain or subdomain name used as the naming authority string.
  • The organizational naming authority string, which consists of a valid, reversed domain or subdomain name.
  • (Optional) A colon (:), followed by a string of the assigning organization's choosing, which must make each assigned iSCSI name unique.

The EUI naming convention is:

  • The prefix “eui” followed by a 16-character name. The name includes 24 bits for a company name that is assigned by the IEEE and 40 bits for a unique ID, such as a serial number.
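To make the IQN convention concrete, here is a small sketch that assembles an IQN from the parts listed above. The date code, domain, and host string are illustrative values, not taken from any real configuration:

```shell
#!/bin/sh
# Sketch: build an IQN from its component parts.
prefix="iqn"
date_code="1998-01"            # year-month the naming authority registered its domain
naming_authority="com.vmware"  # reversed domain name (hypothetical authority)
unique_string="esxi-host-01"   # optional organization-chosen suffix

iqn_name="${prefix}.${date_code}.${naming_authority}:${unique_string}"

echo "$iqn_name"
```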

iSCSI Initiators


To access iSCSI targets, your host uses iSCSI initiators. The initiator transports SCSI requests and responses, encapsulated in the iSCSI protocol, between the host and the iSCSI target. Your host supports two types of initiators: software iSCSI and hardware iSCSI.


A software iSCSI initiator is VMware code built into the VMkernel. The initiator allows your host to connect to iSCSI storage devices through standard network adapters. The software iSCSI initiator handles iSCSI processing while communicating with the network adapter. With the software iSCSI initiator, you can use iSCSI technology without purchasing specialized hardware.

A hardware iSCSI initiator is a specialized third-party adapter capable of accessing iSCSI storage over TCP/IP. Hardware iSCSI initiators are divided into two categories: dependent hardware iSCSI and independent hardware iSCSI.

A dependent hardware iSCSI initiator, or adapter, depends on VMware networking and on iSCSI configuration and management interfaces provided by VMware. This type of adapter presents a standard network adapter and an iSCSI offload function on the same port. To make this type of adapter functional, you must set up networking for the iSCSI traffic and bind the adapter to an appropriate VMkernel iSCSI port.

An independent hardware iSCSI adapter handles all iSCSI and network processing and management for your ESXi host.

Configuring Software iSCSI

To configure the iSCSI software initiator:

  1. Configure a VMkernel port for accessing IP storage.
  2. Enable the iSCSI software adapter.
  3. Configure the iSCSI qualified name (IQN) and alias (if required).
  4. Configure iSCSI software adapter properties, such as static/dynamic discovery addresses and iSCSI port binding.
  5. Configure iSCSI security: Challenge Handshake Authentication Protocol (CHAP).
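For administrators who prefer the command line, the same steps can be sketched with esxcli on an ESXi 5.x host. The adapter name (vmhba33), VMkernel port (vmk1), target address, and IQN below are placeholders; verify the actual names on your own host before running anything:

```shell
# Enable the software iSCSI adapter
esxcli iscsi software set --enabled=true

# List adapters to find the software adapter's name (often vmhba3x)
esxcli iscsi adapter list

# Set the IQN and alias (adapter name, IQN, and alias are examples)
esxcli iscsi adapter set --adapter=vmhba33 --name=iqn.1998-01.com.vmware:esxi-host-01
esxcli iscsi adapter set --adapter=vmhba33 --alias=esxi-host-01

# Bind a VMkernel port (vmk1 here) to the adapter for iSCSI traffic
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1

# Add a dynamic (SendTargets) discovery address
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.100.10:3260

# Rescan so newly presented devices are detected
esxcli storage core adapter rescan --adapter=vmhba33
```

CHAP settings can also be applied from the `esxcli iscsi` namespace, though the vSphere Client dialog is the more common route.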


How to Configure the iSCSI Software Initiator tutorial video



What is a Virtual Disk?
A virtual machine has at least one virtual disk. Adding the first disk implicitly adds a virtual SCSI adapter for it to connect to. The ESXi host offers a choice of adapters: BusLogic Parallel, LSI Logic SAS, and VMware Paravirtual.






The virtual machine creation wizard in the vSphere Client selects the type of virtual SCSI adapter based on the choice of guest operating system. The virtual disk is stored in the same folder as the virtual machine configuration file, although you can choose to place a virtual disk in an alternate location, for example, when separating boot and data disks.

















Select a VMFS datastore to hold the new, blank virtual disk and specify the disk's size. The name of the virtual disk file matches the name of the virtual machine. By default, the virtual disk type is Thick Provision Lazy Zeroed. There are two additional types: Thick Provision Eager Zeroed and Thin Provision.



















Typical virtual disk configuration:

Choose the datastore on which to store the virtual machine files and the guest operating system to be installed on the virtual machine.

Three virtual disk provisioning options:

*Thick Provision Lazy Zeroed - Space required for the virtual machine disk is allocated during creation. Data remaining on the physical device is not erased during creation, but is zeroed out on demand at a later time, on first write from the virtual machine.

*Thick Provision Eager Zeroed - Space required for the virtual disk is allocated during creation. Data remaining on the physical device is zeroed out when the disk is created. If you select this option, the virtual machine can take advantage of VMware vSphere Fault Tolerance.

*Thin Provision - A thin-provisioned disk uses only as much datastore space as the disk initially needs. If the thin disk needs more space later, it can expand to the maximum capacity allocated to it.

vSphere thin provisioning enables virtual machines to use storage space as needed, reducing the cost of storage for virtual environments considerably. Thin provisioning also provides alarms and reports to help track usage.

Physical File Systems and VMware vSphere VMFS

Conventional file systems allow only one server to have read-write access to the same file at a given time. By contrast, VMware VMFS enables a distributed storage architecture that allows multiple ESXi hosts concurrent read and write access to the same shared storage resources.





Virtual Machine Files
The configuration file, identified by a .vmx file extension, contains all of the configuration information and hardware settings of the virtual machine.
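Because the .vmx file is plain text in key = "value" form, its settings are easy to read programmatically. A minimal sketch; the VM name, keys shown, and values are invented for illustration:

```shell
#!/bin/sh
# Sketch: write a tiny hypothetical .vmx file, then read settings
# back out of it. A real .vmx contains many more keys.
vmx_file=$(mktemp)
cat > "$vmx_file" <<'EOF'
displayName = "webserver01"
guestOS = "rhel6-64"
memSize = "4096"
numvcpus = "2"
EOF

# Extract the quoted value for a given key
vmx_get() {
  sed -n "s/^$1 = \"\(.*\)\"/\1/p" "$vmx_file"
}

name=$(vmx_get displayName)
mem=$(vmx_get memSize)
echo "VM $name has ${mem} MB of memory"
rm -f "$vmx_file"
```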














Virtual Machine Hardware
A virtual machine uses virtual hardware. Each guest operating system sees ordinary hardware devices. The guest operating system does not know that these devices are virtual.

All virtual machines have uniform hardware (except for a few variations that the system administrator can apply). Uniform hardware makes virtual machines portable across VMware virtualization platforms.



You can configure virtual machine CPU and memory and add hard disks and virtual network interface cards (NICs). You can also add and configure other virtual hardware, such as CD/DVD drives and SCSI devices. Not all devices are available to add and configure. For example, you cannot add video devices, but you can configure available video devices and video cards.

Virtual CPU and Memory
*Supports up to 32 (vSphere 5.x) or 64 (vSphere 6.0) virtual CPUs (vCPUs), depending on the number of licensed CPUs on the host and the number of processors supported by the guest operating system.
*Supports up to 1 TB (vSphere 5.x) or 2 TB (vSphere 6.0) maximum memory size.
The configured amount of memory is what the guest operating system is told it has.













ESXi Architecture

ESXi is the virtualization layer that abstracts the processors, memory, storage, and networking resources of the physical host into multiple virtual machines. ESXi is a bare-metal hypervisor that creates the foundation for a dynamic and automated datacenter.



In the ESXi architecture, applications running in virtual machines access CPU, memory, disk, and network interfaces without direct access to the underlying hardware. The ESXi hypervisor is called the VMkernel. The VMkernel receives requests for resources from the virtual machine monitor (VMM) and presents the requests to the physical hardware. There is one VMM per virtual machine; its job is to present virtual hardware to the virtual machine and to receive its requests.

Interfaces for accessing ESXi:
*vSphere Client (directly to the ESXi host or through vCenter).
*The vSphere API/SDK.
*Common Information Model (CIM).

Configuring ESXi Through the DCUI

The direct console user interface (DCUI) is similar to the BIOS of a computer, with a keyboard-only user interface. The DCUI is a low-level configuration and management interface, accessible through the console of the server, used primarily for initial basic configuration. To start customizing system settings, press F2.


























The DCUI allows an administrator to configure the ESXi host:
*Root password
*Enable and disable lockdown mode
*IP configuration (IP address, subnet mask, default gateway) and DNS servers
*VLAN settings
*Restart the ESXi management network (without rebooting the host)
*Test the management network using ICMP ping requests
*Configure keyboard layout
*View system logs
*Enable troubleshooting services when required:
 -Local Tech Support Mode (TSM)
 -Remote Tech Support Mode (SSH), for remote access via an SSH client such as PuTTY

*It is good practice to keep troubleshooting services disabled until necessary, for instance, when working with VMware support to troubleshoot a problem.

ESXi as an NTP Client

Network Time Protocol (NTP) is a client-server protocol used to synchronize a computer's clock to a time reference:
*For accurate performance graphs.
*For accurate time stamps in log messages.
*So virtual machines have a source to synchronize with.


An ESXi host can be configured as an NTP client.

NTP is an Internet standard protocol used to synchronize computer clock times in a network. The benefits of synchronizing an ESXi host's time include:
*Performance data can be displayed and interpreted properly.
*Accurate time stamps appear in log messages (which make audit logs meaningful).
*Virtual machines can synchronize their time with the ESXi host. Time synchronization is beneficial to applications running in the virtual machines.

NTP is a client-server protocol. When you configure the ESXi host to be an NTP client, the host synchronizes its time with an NTP server, which can be a server on the Internet or your corporate NTP server. ESXi supports NTP versions 3 and 4.
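As a rough illustration, NTP servers for an ESXi host can also be set by editing /etc/ntp.conf from the host's shell and restarting the NTP daemon. The server names below are examples; the vSphere Client's Time Configuration settings are the usual way to do this:

```shell
# Example server lines for /etc/ntp.conf on the ESXi host
# (server names are illustrative):
#   server 0.pool.ntp.org
#   server ntp.corp.example.com

# Restart the NTP daemon so the new servers take effect, then check it:
/etc/init.d/ntpd restart
/etc/init.d/ntpd status
```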















Reference: NTP  http://www.ntp.org.

Network Settings - DNS and Routing:
The host's DNS and Routing link allows you to change:

*The host name and domain.
*The primary and secondary DNS servers.
*The default gateway for the VMkernel.

To configure these settings, click the host's Configuration tab, then click the DNS and Routing link.




ESXi Sources: DNS, NTP, Syslog, and AD



VMware Virtual Network Interface Card Adapter Choices:
Network adapters that might be available for your virtual machine:
*Flexible - vlance (PCNet32), for 32-bit guest operating systems.
*vmxnet - better performance than vlance.
*E1000 - a higher-performance adapter available only for some guest operating systems.
*vmxnet, vmxnet2, and vmxnet3 are VMware drivers available only via VMware Tools.
 -vmxnet2 (Enhanced vmxnet) - a vmxnet adapter with enhanced performance.
 -vmxnet3 - builds on the vmxnet2 adapter.






























Network Adapter Properties

For each physical adapter, speed and duplex can be changed.

You might need to set the speed and duplex for certain NIC and switch combinations.

To change the speed and duplex of a network adapter in a standard virtual switch:

1. Select your ESXi host from the inventory and click the Configuration tab.

2. Click the Networking link.

3. Click the Properties link of the standard virtual switch to be modified. In the Properties dialog box, select the Network Adapters tab.

4. Click the Edit button to change the speed and duplex.

If you are using a Gigabit Ethernet adapter, leave the Configured Speed, Duplex setting at Auto negotiate because autonegotiation is part of the Gigabit standard. Gigabit Ethernet adapters are now common, so you rarely have to modify this setting.
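The same check and change can also be sketched with esxcli from the ESXi shell. The NIC name vmnic1 is a placeholder; list your NICs first to find the right one:

```shell
# List physical NICs with their current link state, speed, and duplex
esxcli network nic list

# Force vmnic1 (example NIC) to 1000 Mb/s full duplex
esxcli network nic set --nic-name=vmnic1 --speed=1000 --duplex=full

# Return the NIC to autonegotiation (recommended for Gigabit adapters)
esxcli network nic set --nic-name=vmnic1 --auto
```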

Virtual Machine Console

The virtual machine remote console, available in the vSphere Client, provides mouse, keyboard, and screen features. To install an operating system, use the virtual machine's console. The virtual machine console allows access to the BIOS of the virtual machine. The console offers the ability to power the virtual machine on and off and to reset it.











The virtual machine remote console supports connecting smart card readers to multiple virtual machines, which can then be used for smart card authentication to virtual machines.

The virtual machine remote console is normally not used to connect to the virtual machine for daily tasks; use VMware View, Remote Desktop Connection, Virtual Network Computing (VNC), or similar tools instead. The console is reserved for tasks such as power cycling, configuring virtual hardware, and troubleshooting networking issues.

The virtual machine console allows you to send the Ctrl+Alt+Del key sequence to the virtual machine. Press Ctrl+Alt+Insert in the virtual machine console, or select VM > Guest in the console menu bar and select Send Ctrl+Alt+Del from the drop-down menu. To release the pointer from the virtual machine console so you can use it in other windows, press Ctrl+Alt.


 Virtualization Basics


