VMware vRealize Operations 6.5 Installation and Configuration Guide

VMware vRealize Operations (vROps) 6.5 Installation and Configuration Guide

vROps Initial Setup Video
 

vROps OVA Install Flowchart

In this section we will cover vROps installation and configuration.

I. Requirements
Minimum hardware specification:
4 vCPU
16 GB RAM
4 GB of free space *depending on your organization's data collection

Please see the VMware KB for vROps 6.5 sizing guidelines.

vROps Standard Specs

II. Download and Register vROps

Go to the VMware vRealize site and download the 6.5 trial version of vROps.

You will need a VMware account to do this and must agree to the licensing terms.
Download the OVA file for vROps to your local computer.


Step 1: Log into vCenter and select File —> Deploy OVF Template.

vrops deployment step 1

Step 2: Select the OVA file (vRealize-Operations-Appliance-6.5.0.5097674) and click Next.


Step 3: The vRealize version and the size on disk will be displayed. Click Next.

Step 4: Accept the license from VMware and click Next.

*Scroll down and read the EULA.

Step 5:

Enter a name for the vROps appliance or keep the default – vRealize Operations Manager Appliance.

Step 6:

You can select different size configurations, from Extra Small to Extra Large, depending on the number of VMs that need to be monitored and the amount of data collected. Remote Collector sizes (Standard/Large) are also available for cluster deployments.

A remote collector node is an additional cluster node that allows
vRealize Operations Manager to gather more objects into its inventory for monitoring. Unlike data nodes, remote collector nodes only include the collector role of
vRealize Operations Manager, without storing data or processing any analytics functions.

A remote collector node is usually deployed to navigate firewalls, reduce bandwidth across data centers, connect to remote data sources, or reduce the load on the vRealize Operations Manager analytics cluster.

You must have at least a master node before adding remote collector nodes.

*In this installation we will choose the Small option.

Step 7: Select the required storage and click Next to continue.

 

Step 8: Disk Format

Select the default disk format and click Next.

Step 10: Enter the networking details for the vROps appliance and click Next.

 

Step 11: Confirm the settings for the appliance and click Finish to complete the deployment.
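Optionally, the same OVA deployment (Steps 1–11) can be scripted with PowerCLI. The sketch below is a minimal example only; the vCenter name, host, datastore, and OVA path are placeholders for your environment, and the exact deployment-option and network/IP properties exposed by the OVA may differ, so check Get-OvfConfiguration output before deploying.

# Minimal PowerCLI sketch of the OVA deployment (placeholder names; adjust for your environment)
Connect-VIServer vcenter.lab.local

$ovaPath   = 'C:\Downloads\vRealize-Operations-Appliance-6.5.0.5097674.ova'
$ovfConfig = Get-OvfConfiguration -Ovf $ovaPath

# If the OVA exposes a deployment-size option, select it here (e.g. "small")
$ovfConfig.DeploymentOption.Value = 'small'

Import-VApp -Source $ovaPath -OvfConfiguration $ovfConfig `
  -Name 'vRealize Operations Manager Appliance' `
  -VMHost (Get-VMHost 'esxi01.lab.local') `
  -Datastore (Get-Datastore 'datastore1') `
  -DiskStorageFormat Thin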


START UP:

Step 12: Open a console into the newly deployed appliance. You will see the VMware screen appear as the appliance writes the new configuration.

Once the appliance has completed the installation you will see a screen similar to the one below. Next, launch the web console on the IP address provided earlier during deployment to complete the start-up configuration.

 

Step 13:
Open a web browser and, if prompted about the HTTPS site settings, click through the prompts to continue. You will then be taken to a Get Started screen. If this is the first appliance in the vROps cluster, click New Installation; otherwise you can choose to expand an existing vROps system.

Installation Options
*Express Installation.
*New Installation.
*Expand an Existing Installation

Step 14: Click Next on the Getting Started initial setup for a new cluster.

Step 15: Enter a password for the admin user and click Next

Step 16: Here you can choose a CA-signed certificate or a third-party certificate, or just use the default. As this is an evaluation environment, I’ve selected the default certificate. Click Next.

Step 17: Enter a cluster name and also select which NTP server you want to synchronize against.



Step 18: Click Finish to finalize the initial setup.

Step 19: You will be taken to a newly designed configuration screen that shows the current cluster status. Click Start vRealize Operations Manager to bring vRealize online. You can see from the State column that vROps is currently Powered Off and Offline.

Step 20:

Once vROps begins the start-up process, a notice appears asking you to confirm there are enough nodes in the cluster to handle the required workload. Click Yes to continue.

Step 21:

Once vROps is set up and started, you will notice the state change.

Step 22: Enter the vROps IP address or DNS name into a browser and you will receive a login prompt.



LOG IN:
Step 23: Once you log in you will be prompted for additional configuration settings. Select New Environment if this is a new environment, or you can import data from an existing vCOps environment. Click Next.



Step 24: Accept the license and click Next

Step 25: Enter a license key if you have one, or just continue with a Product Evaluation, and click Next.

Step 26: Click Finish to complete the login.

 

vROps 6.5 Installation
http://pubs.vmware.com/vrealizeoperationsmanager-65/topic/com.vmware.vcom.core.doc/GUID-A601D15B-80CD-43D2-B7A2-42973F732B8A.html

 

System Center 2016 Installation Guide

SCCM 2016 Configuration Installation Guide Phases
Microsoft announced the release of System Center Configuration Manager (SCCM) 1602, which is the latest update to its device management product. The “1602” part of the update’s name refers to its year and month release time (as in “2016 February”), but Microsoft announced its arrival on March 11, 2016.
Phase 1 | Design Recommendations and Installation Prerequisites
(coming soon)

Phase 2 | SQL Installation and Configuration
Phase 3 | SCCM 2016 Installation
Phase 4 | Application Catalog Web Service Point Installation
Phase 5 | Application Catalog Website Point Installation
Phase 6 | Asset Intelligence Synchronization Point Installation
Phase 7 | Certificate Registration Point Installation
Phase 8 | Distribution Point Installation
Phase 9 | Endpoint Protection Point Installation
Phase 10 | Enrollment Point Installation
Phase 11 | Enrollment Proxy Point Installation
Phase 12 | Fallback Status Point Installation
Phase 13 | Management Point Installation
Phase 14 | Reporting Services Point Installation
Phase 15 | Software Update Point Installation
Phase 16 | State Migration Point Installation
Phase 17 | System Health Validator Point Installation
Phase 18 | Service Connection Point Installation
Phase 19 | Boundaries Configuration
Phase 20 | Client Settings Configuration
Phase 21 | Discovery Methods Configuration
Phase 22 | Maintenance Task Configuration
Phase 23 | Backup and Restore

ESXi Host Patching Method Using VMware PowerCLI

ESXi Host Patching Method Using VMware PowerCLI


Great Video Reference

I. Intro/Scope
We will illustrate how to install an ESXi host patch release using the VMware PowerCLI Install-VMHostPatch cmdlet, together with the maintenance mode and restart cmdlets, to quickly patch ESXi with the existing tool set within PowerCLI.

 

II.  Requirements

1. Download the required VMware patches and upload them to the ESXi host datastore.

a. Download the ESXi patch from www.vmware.com/go/downloadpatches

You can search for a specific patch release for the ESXi host at:
https://my.vmware.com/group/vmware/patch#search

b. Once downloaded to the local machine, extract the patch file data from the downloaded zip (e.g. update-from-esxi6.0-6.0_update03.zip).
Upload the extracted content to a folder on the ESXi datastore.

c. Note the current ESXi host patch level.

2. Make sure you have PowerCLI installed on the local machine.
You can download it from www.vmware.com/go/powercli.


III. ESXi Patch Installation Steps

1. Open PowerCLI (run as administrator).

2. Run the following command to connect to the server via PowerCLI: “Connect-VIServer <IP address or hostname>”

For example: Connect-VIServer 192.168.2.223

a. You will be prompted for a user ID – type root or Domain\user – and the password.

b. You will see certificate information from the ESXi host.

c. Once you are logged into the ESXi host, you will see the user ID and that the connection port is 443.
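If you prefer not to be prompted interactively, a credential object can be passed instead. This is a small sketch; the IP address is the example host used throughout this guide.

# Connect with an explicit credential object instead of the interactive prompt
$cred = Get-Credential              # e.g. root (or Domain\user) and the password
Connect-VIServer -Server 192.168.2.223 -Credential $cred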

 

3. Place the standalone ESXi host into maintenance mode

by running the following command: Set-VMHost -VMHost <IP address> -State Maintenance




For example, the command to install the patch is: Install-VMHostPatch -VMHost <IP address> -HostPath /vmfs/volumes/<datastorename>/<folder>/metadata.zip
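Putting step 3 and the patch command together, a minimal sketch looks like the following; the datastore path is a placeholder, so point it at the folder where you uploaded the extracted patch.

# Get the host object, enter maintenance mode, then install the patch
$esx = Get-VMHost 192.168.2.223
Set-VMHost -VMHost $esx -State Maintenance
Install-VMHostPatch -VMHost $esx -HostPath '/vmfs/volumes/datastore1/patches/metadata.zip'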

 

4. Scroll up to where Install-VMHostPatch ran; you will see that it has completed.

5. Scroll down to the command line and type the following command to reboot the ESXi host, e.g. Restart-VMHost.

Reboot the host to complete the install.

*Optional: I normally run ping -t against the ESXi host.

Once TTL replies show up, the host has likely booted back up.
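As a minimal sketch, the reboot and the ping check can also be done from the same PowerCLI session; the IP address is a placeholder and $esx is the host object from the earlier steps.

# Reboot the host (it is already in maintenance mode)
Restart-VMHost -VMHost $esx -Confirm:$false

# Poll the host instead of running ping -t manually
while (-not (Test-Connection -ComputerName 192.168.2.223 -Count 1 -Quiet)) {
    Start-Sleep -Seconds 15
}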

 

6. Log into vSphere to validate that the ESXi patch has been applied. For example, you should see the ESXi Update 3 patch.

 

ESXi Update 3 patch level: 5050593
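The same check can be done from PowerCLI, and the host can then be taken back out of maintenance mode. This is a small sketch that assumes the $esx variable from the earlier steps.

# Confirm the new build number, then exit maintenance mode
Get-VMHost 192.168.2.223 | Select-Object Name, Version, Build
Set-VMHost -VMHost $esx -State Connected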

This concludes applying a patch release to an ESXi host via VMware PowerCLI.

Reference
vSphere PowerCLI Cmdlets Reference
Install-VMHostPatch
https://www.vmware.com/support/developer/PowerCLI/PowerCLI41U1/html/Install-VMHostPatch.html

Quickest Way to Patch an ESX/ESXi Using the Command-line
https://blogs.vmware.com/vsphere/2012/02/quickest-way-to-patch-an-esxesxi-using-the-command-line.html

Understanding ESXi Patches
https://blogs.vmware.com/vsphere/2012/02/understanding-esxi-patches-finding-patches.html 

 

VMware VM Guest OS – Windows Server 2016 Install Guide

What is Windows Server 2016?

Windows Server 2016 is a server operating system developed by Microsoft as part of the Windows NT family of operating systems, developed concurrently with Windows 10.

The first early preview version (Technical Preview) became available on October 1, 2014, together with the first technical preview of System Center. Unlike previous Windows Server versions, which were released simultaneously with the client operating system, Windows Server 2016 was released on September 26, 2016, at Microsoft’s Ignite conference and became generally available on October 12, 2016.

Download the datasheet

 

 

 

 


Scope and Purpose:

We will illustrate how to perform a fresh installation of Windows Server 2016 in a VMware vSphere virtualization environment.

CPU, Memory and Storage Prerequisites:

A minimum of a 1.4 GHz 64-bit EM64T or AMD64 processor. Quad core recommended for production systems.

Disk Space:

For a Core installation, a minimum of 32 GB of disk space is required. An additional 4 GB is required for a GUI installation.

Disk Space Capacity Planning:

Microsoft Support recommends 3x the RAM size, up to a limit of 32 GB of RAM, which means a maximum of 96 GB (32 GB × 3 = 96 GB).

Memory:
512 MB minimum, with ECC-supported memory modules.
800 MB for VM installations; post-installation, RAM can be reduced to 512 MB.

Network Requirements:
At minimum, a Gigabit Ethernet adapter with 1 Gbps throughput.

We will set up the base Windows Server 2016 installation with the following specs (a PowerCLI sketch for creating the VM follows the list):

  • 2 CPU
  • 6 GB of RAM
  • 80 GB of disk space
  • 1 GbE network adapter interface
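The VM shell with those specs can be created with PowerCLI. This is a minimal sketch; the host, network, datastore, and ISO path below are placeholders, and the guest ID shown is the one ESXi 6.x uses for Windows Server 2016.

# Create the VM shell with the specs above (placeholder names)
New-VM -Name 'WIN2016-01' -VMHost (Get-VMHost 'esxi01.lab.local') `
  -NumCpu 2 -MemoryGB 6 -DiskGB 80 -DiskStorageFormat Thin `
  -NetworkName 'VM Network' -GuestId windows9Server64Guest -CD

# Attach the Windows Server 2016 ISO to the CD drive before powering on
Get-CDDrive -VM 'WIN2016-01' |
  Set-CDDrive -IsoPath '[datastore1] ISO/Windows_Server_2016.iso' -StartConnected:$true -Confirm:$false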

Reference
Guest Operating System Installation Guide – Windows Server 2016

http://partnerweb.vmware.com/GOSIG/Windows_Server_2016.html

Windows Server 2016 Download
https://www.microsoft.com/en-us/evalcenter/evaluate-windows-server-2016 

Windows Server 2016 Platform
https://www.microsoft.com/en-us/cloud-platform/windows-server

Windows Server 2016 Essentials RAM Limit and Other Hardware Limits
https://www.servethehome.com/windows-server-2016-essentials-ram-limit-and-other-hardware-limits/

General VMware Tools installation instructions,
http://kb.vmware.com/kb/1014294.

Installing a Certificate Authority on a Windows Server 2016 DC Guide

Installing a Certificate Authority on a Windows Server 2016 DC Guide

  1. Open Server Manager – Manage – Add Roles and Features
  2. Select: Active Directory Certificate Services

 

3. Click Add Features and click Next to continue.

4. Click Next to continue.

5. Select Certification Authority and Certification Authority Web Enrollment.

 

 

6. Click Install to proceed with installation of the CA feature.

7. Click Close once installation is complete.
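If you prefer PowerShell over Server Manager, the role installation in steps 1–7 can be done in one line; this is a sketch of the equivalent command.

# Install the CA and Web Enrollment role services with the management tools
Install-WindowsFeature ADCS-Cert-Authority, ADCS-Web-Enrollment -IncludeManagementTools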

8. Click Configure Active Directory Certificate Services in the post-installation notification to review the installation status and begin configuration.


9. Click Next to specify credentials to configure role services.

10. Select Certification Authority and Certification Authority Web Enrollment.

11. Select Enterprise CA and click Next.

12. Select Root CA and click Next.

13. Select Create a new private key and click Next.

14. Select the SHA256 hash algorithm and keep the default 2048-bit key length.

Please note: use SHA256, as SHA1 is deprecated.

To upgrade your existing internal CA to SHA256, run:

certutil -setreg ca\csp\CNGHashAlgorithm SHA256

*Optionally, download the DigiCert Certificate Utility:
https://www.digicert.com/csr-creation-ssl-installation-windows-server-2016-digicert-utility.htm

15. Keep the Common Name, Distinguished Name Suffix, and Preview of DN defaults, and click Next.

16. By default the certificate is valid for 5 years; don’t make any changes and click Next.

17. Keep the certificate database location and log location at their defaults and click Next.

18. Review the configuration summary page and click Configure.

19. This completes the CA certificate server configuration. Click Close.
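The configuration performed in steps 9–19 can also be scripted. The sketch below assumes an Enterprise Root CA with the SHA256/2048-bit settings chosen above; the CA common name is a placeholder.

# Configure an Enterprise Root CA (SHA256, 2048-bit key, 5-year validity), then the web enrollment role
Install-AdcsCertificationAuthority -CAType EnterpriseRootCA `
  -CACommonName 'LAB-ROOT-CA' -HashAlgorithmName SHA256 -KeyLength 2048 `
  -ValidityPeriod Years -ValidityPeriodUnits 5
Install-AdcsWebEnrollment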

20. Set up 443 (Secure Sockets Layer) for the CertSrv web site.

Let us see how to request a simple certificate from the internal Certificate Authority.

Now if you open IIS Manager, you will see that a “CertSrv” virtual directory has been created.


Use the right-side Actions column and click Browse *:443 (https).

 

21. If you don’t see Browse *:443 (https), it means the HTTPS binding is not there.

To add the binding, right-click Default Web Site and click Edit Bindings.


22. Click Add, select https and port 443, and choose the CA certificate.

SSL Certificate:

Now you can see 443 bound to your website.
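The binding can also be added with PowerShell. This is a small sketch that assumes the CA’s web server certificate is already in the local machine store; the subject filter is a placeholder.

# Add the HTTPS binding and attach a certificate from the local machine store
Import-Module WebAdministration
New-WebBinding -Name 'Default Web Site' -Protocol https -Port 443

$cert = Get-ChildItem Cert:\LocalMachine\My |
  Where-Object { $_.Subject -like '*LAB-ROOT-CA*' } |   # placeholder subject filter
  Select-Object -First 1
New-Item 'IIS:\SslBindings\0.0.0.0!443' -Value $cert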


The CA server installation and configuration is now complete.

23. Validate that the CA certificate home page is working. Go to https://localhost/certsrv, or use the IP/FQDN of the server where the CA is installed.

For Example:

https://localhost/certsrv

https://192.168.2.6/certsrv

Windows Server 2016: Create a CSR and Install an SSL Certificate with the DigiCert Utility.

Windows Server 2016 Upgrade Guide – How To – All About

Windows Server 2012 to 2016 Upgrade Guide – How To

Upgrade Requirements

Upgrading previous retail versions of Windows Server to Windows Server 2012 R2

 The table below briefly summarizes which already licensed (that is, not evaluation) Windows operating systems can be upgraded to which editions of Windows Server 2012 R2.

Note the following general guidelines for supported paths:

  • In-place upgrades from 32-bit to 64-bit architectures are not supported. All editions of Windows Server 2012 R2 are 64-bit only.
  • In-place upgrades from one language to another are not supported.
  • In-place upgrades from one build type (fre to chk, for example) are not supported.
  • If the server is a domain controller, see http://technet.microsoft.com/library/hh994618.aspx for important information.
  • Upgrades from pre-release versions of Windows Server 2012 R2 are not supported. Perform a clean installation to Windows Server 2012 R2.
  • Upgrades that switch from a Server Core installation to the Server with a GUI mode of Windows Server 2012 R2 in one step (and vice versa) are not supported. However, after upgrade is complete, Windows Server 2012 R2 allows you to switch freely between Server Core and Server with a GUI modes. For more information about these installation options, how to convert between them, and how to use the new Minimal Server Interface and Features on Demand, see http://technet.microsoft.com/library/hh831786.

If you do not see your current version in the left column, upgrading to this release of Windows Server 2012 R2 is not supported.

If you see more than one edition in the right column, upgrade to either edition from the same starting version is supported.

Upgrade Windows Server 2012 to Windows Server 2016

1. Download the Windows Server 2016 ISO from the Microsoft site.

2. Mount the Windows Server 2016 ISO on the Windows 2012 R2 domain controller.

3. Log in to Windows Server 2012 and attach the installation media (DVD, flash drive, etc.) to the server. Open File Explorer and double-click the DVD drive to run the Windows Server 2016 setup.

This PC

4. Select Download & install updates to let the installation go smoothly, and check I want to help make the installation of Windows better. Click on the Next button.

Install Updates

5. Select an edition of Windows Server 2016 that meets your organization’s requirements. Keep in mind that in the future you may enlarge your network and need more roles and license support for your network computers. Click on the Next button.

Windows Server 2016 editions

6. Read the notes and license terms. If you do not agree, click the Decline button to go back. If you agree, click the Accept button.

Accept the term

7. If you choose the same edition as the one you are currently using, you can keep your apps and files. As mentioned before, if you don’t choose the matching edition you can’t keep your apps and files. Select Keep personal files and apps if you’re using the same edition, or select Nothing to erase everything. Then click on the Next button.

Choose what to keep

Installing Windows Server 2016

8. Windows Server setup automatically checks whether your server is compatible, so just click on the Install button to start the installation.

Install Windows Server

Be patient; it will take some time to install Windows Server 2016 on your existing system, and there will be a few restarts until the installation completes.

Installation process

9. Specify the keyboard setting. Keep the default for English. Click on Next to continue.

Region, preferred language, keyboard

10. Read the EULA license terms and click on the Accept button.

Accept License terms

11. The user will default to Administrator. You will be required to set a complex password (a password composed of lowercase letters, uppercase letters, numbers, and symbols) and re-enter your password. Then click on Finish to continue.

Password

12. Press Ctrl+Alt+Del and sign in with the password you entered in the previous step.

Press Ctrl+Alt+Del buttons

Welcome to Windows Server 2016. You have upgraded Windows Server 2012 to Windows Server 2016. There are many new improvements in the Windows Server 2016 release.

Reference.

BTHHD -Upgrade Windows Server 2012 R2 to Windows 2016 Upgrade Guide.

 

NLB Solution Windows 2016 Step by Step Guide. 

Windows 2016 Server Upgrade
https://technet.microsoft.com/en-us/windowsserver/dn527667.aspx

Server role upgrade and migration matrix for Windows Server 2016:
https://technet.microsoft.com/en-us/windows-server-docs/get-started/server-role-upgradeability-table

Windows Server 2016 and Microsoft Server Application Compatibility:
https://technet.microsoft.com/en-us/windows-server-docs/get-started/server-application-compatibility

Windows 2016 Download Link
https://www.microsoft.com/en-us/evalcenter/evaluate-windows-server-2016

vSphere Update Manager 6.0 Patch and Upgrade Management Guide

vSphere Update Manager 6.0 Patch and Upgrade Management Guide

VUM Getting Started
vSphere Update Manager (VUM) is a utility that oversees the installation of updates for existing installations of VMware ESX Server and guest operating systems. Update Manager tracks vulnerabilities within the virtual infrastructure and automatically applies user-defined patches to eliminate those vulnerabilities.

Why leverage vSphere Update Manager in a vSphere environment?

  • Facilitates upgrades and patching of ESX Server installations, guest operating systems, and applications.
  • Helps establish a consistently secure and patched environment.
  • Provides an out-of-the-box, single vSphere patching solution.

Assumptions / Prerequisite Requirements
• vSphere vCenter 5.5, 6.0, or 6.5 has been installed.
• vSphere Update Manager (VUM) 5.5, 6.0, or 6.5 has been installed.
• vSphere Update Manager (VUM) client installed.

Please reference the vCenter and VUM installation guide sections if these prerequisites have not been completed.

Download the VMware Update Manager Patch Management Guide below.
VMware_VcenterUpdateManager.6.0Patch_Guide

Download ESXi Patch
https://my.vmware.com/group/vmware/patch#search

Search for the ESXi patch level.

Great video tutorial: Host Upgrade Using Update Manager.

Using PowerCLI to Upgrade an ESXi Host to ESXi 6.0.0 Update 2

Reference

Updating an ESXi/ESX host using VMware vCenter Update Manager 4.x, 5.x, and 6.x

VUM Administration Guide

ESXi Patching via PowerCLI

ESX 6.0.0 Update 2 Download

vCenter 6.5 Installation Guide

What is vCenter?

vCenter Server provides centralized management and operation, resource provisioning, and performance evaluation of virtual machines residing in a distributed virtual data center.
VMware vCenter Server is designed primarily for vSphere.

 


1.1      Target Audience/Purpose/Scope:

This document covers the installation of the VMware vSphere 6.5 vCenter Server.

The purpose and scope of implementing vCenter 6.5 Server is to leverage:

  • Storage vMotion
  • VMware High Availability (VMHA)
  • VMware Distributed Resource Scheduler (DRS)
  • Deploy from Template
  • Clone VM to Template
  • Sysprep-based virtual machine provisioning

 

1.2 Prerequisite requirements

  1. Download VMware-VIMSetup-all-6.5.0-4602587 from the VMware download site (click on the link).
  2. Windows 2012 R2 OS
  3. Minimum CPU, RAM, and disk requirements:
    2 CPUs, 2 GB to 8 GB of RAM, and 50 GB of disk
  4. vCenter host DNS: the FQDN and AD membership are registered in both AD and DNS, i.e.

vCenter 6.5 Installation Guide on Windows Server 2012 R2


Click here to download the guide.
vmware_vCenter6.5_install_Guide

Video Tutorials

VMware vSphere vCenter 6.5 on Windows 2012 R2 – Installation
Coming Soon!!!!

VMware vSphere VCSA 6.5 Installation
Coming Soon!!!!

 

1.3 Reference:

What’s New in vSphere 6.5 – vCenter Server

VMware vSphere 6.5 Documentation

 Installing vCenter Server 6.5 on a Windows Server 2012 R2 system

Stopping, starting, or restarting VMware vCenter Server 6.x services 

VMware NSX Overview


 What is VMware NSX?

VMware NSX is a virtual networking and security software product family created from VMware’s vCloud Networking and Security (vCNS) and Nicira Network Virtualization Platform (NVP) intellectual property.

 


IT organizations have gained significant benefits as a direct result of server virtualization. Server consolidation reduced physical complexity, increased operational efficiency and the ability to dynamically re-purpose underlying resources to quickly and optimally meet the needs of increasingly dynamic business applications.

VMware’s Software Defined Data Center (SDDC) architecture is now extending virtualization technologies across the entire physical data center infrastructure. VMware NSX®, the network virtualization platform, is a key product in the SDDC architecture.


With NSX, virtualization delivers for networking what it has already delivered for compute and storage. In much the same way that server virtualization programmatically creates, snapshots, deletes and restores software-based virtual machines (VMs), NSX network virtualization programmatically creates, snapshots, deletes, and restores software-based virtual networks.

The result is a completely transformative approach to networking that not only enables data center managers to achieve orders of magnitude better agility and economics, but also allows for a vastly simplified operational model for the underlying physical network. With the ability to be deployed on any IP network, including both existing traditional networking models and next-generation fabric architectures from any vendor, NSX is a completely non-disruptive solution. In fact, with NSX, the physical network infrastructure you already have is all you need to deploy a software-defined data center


The figure above draws an analogy between compute and network virtualization. With server virtualization, a software abstraction layer (server hypervisor) reproduces the familiar attributes of an x86 physical server (for example, CPU, RAM, Disk, NIC) in software, allowing them to be programmatically assembled in any arbitrary combination to produce a unique VM in a matter of seconds.

With network virtualization, the functional equivalent of a network hypervisor reproduces the complete set of Layer 2 through Layer 7 networking services (for example, switching, routing, access control, firewalling, QoS, and load balancing) in software. As a result, these services can be programmatically assembled in any arbitrary combination, to produce unique, isolated virtual networks in a matter of seconds.

With network virtualization, benefits similar to server virtualization are derived. For example, just as VMs are independent of the underlying x86 platform and allow IT to treat physical hosts as a pool of compute capacity, virtual networks are independent of the underlying IP network hardware and allow IT to treat the physical network as a pool of transport capacity that can be consumed and repurposed on demand. Unlike legacy architectures, virtual networks can be provisioned, changed, stored, deleted, and restored programmatically without reconfiguring the underlying physical hardware or topology. By matching the capabilities and benefits derived from familiar server and storage virtualization solutions, this transformative approach to networking unleashes the full potential of the software-defined data center.

NSX can be configured through the vSphere Web Client, a command-line interface (CLI), and a REST API.
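For example, a quick REST call against NSX Manager can be made from PowerShell. The manager FQDN and credentials below are placeholders, and the endpoint shown (/api/2.0/vdn/controller, which lists NSX Controllers in NSX-v) should be checked against the NSX API guide for your version.

# Query the NSX Manager REST API with basic authentication (placeholder FQDN/credentials)
$pair  = 'admin:VMware1!'
$token = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes($pair))
Invoke-RestMethod -Uri 'https://nsxmgr.lab.local/api/2.0/vdn/controller' `
  -Method Get -Headers @{ Authorization = "Basic $token" }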

NSX Components


The NSX data plane consists of the NSX vSwitch, which is based on the vSphere Distributed Switch (VDS) with additional components to enable services. NSX kernel modules, userspace agents, configuration files, and install scripts are packaged in VIBs and run within the hypervisor kernel to provide services such as distributed routing and logical firewall and to enable VXLAN bridging capabilities.

The NSX vSwitch (vDS-based) abstracts the physical network and provides access-level switching in the hypervisor. It is central to network virtualization because it enables logical networks that are independent of physical constructs, such as VLANs. Some of the benefits of the vSwitch are:


  • Support for overlay networking with protocols (such as VXLAN) and centralized network configuration. Overlay networking enables the following capabilities:
    • Reduced use of VLAN IDs in the physical network.
    • Creation of a flexible logical Layer 2 (L2) overlay over existing IP networks on existing physical infrastructure without the need to re-architect any of the data center networks.
    • Provisioning of communication (east–west and north–south), while maintaining isolation between tenants.
    • Application workloads and virtual machines that are agnostic of the overlay network and operate as if they were connected to a physical L2 network.
    • Facilitates massive scale of hypervisors.
  • Multiple features—such as Port Mirroring, NetFlow/IPFIX, Configuration Backup and Restore, Network Health Check, QoS, and LACP—provide a comprehensive toolkit for traffic management, monitoring, and troubleshooting within a virtual network.

The logical routers can provide L2 bridging from the logical networking space (VXLAN) to the physical network (VLAN).

The gateway device is typically an NSX Edge virtual appliance. NSX Edge offers L2, L3, perimeter firewall, load balancing, and other services such as SSL VPN and DHCP.

Control Plane

The NSX control plane runs in the NSX Controller cluster. NSX Controller is an advanced distributed state management system that provides control plane functions for NSX logical switching and routing functions. It is the central control point for all logical switches within a network and maintains information about all hosts, logical switches (VXLANs), and distributed logical routers.

The controller cluster is responsible for managing the distributed switching and routing modules in the hypervisors. The controller does not have any dataplane traffic passing through it. Controller nodes are deployed in a cluster of three members to enable high-availability and scale. Any failure of the controller nodes does not impact any data-plane traffic.

NSX Controllers work by distributing network information to hosts. To achieve a high level of resiliency the NSX Controller is clustered for scale out and HA. NSX Controllers must be deployed in a three-node cluster. The three virtual appliances provide, maintain, and update the state of all network functioning within the NSX domain. NSX Manager is used to deploy NSX Controller nodes.

The three NSX Controller nodes form a control cluster. The controller cluster requires a quorum (also called a majority) in order to avoid a “split-brain scenario.” In a split-brain scenario, data inconsistencies originate from the maintenance of two separate data sets that overlap. The inconsistencies can be caused by failure conditions and data synchronization issues. Having three controller nodes ensures data redundancy in case of failure of one NSX Controller node.

A controller cluster has several roles, including:

  • API provider
  • Persistence server
  • Switch manager
  • Logical manager
  • Directory server

Each role has a master controller node. If a master controller node for a role fails, the cluster elects a new master for that role from the available NSX Controller nodes. The new master NSX Controller node for that role reallocates the lost portions of work among the remaining NSX Controller nodes.


NSX supports three logical switch control plane modes: multicast, unicast and hybrid. Using a controller cluster to manage VXLAN-based logical switches eliminates the need for multicast support from the physical network infrastructure. You don’t have to provision multicast group IP addresses, and you also don’t need to enable PIM routing or IGMP snooping features on physical switches or routers.

Thus, the unicast and hybrid modes decouple NSX from the physical network. VXLANs in unicast control-plane mode do not require the physical network to support multicast in order to handle the broadcast, unknown unicast, and multicast (BUM) traffic within a logical switch. The unicast mode replicates all the BUM traffic locally on the host and requires no physical network configuration. In the hybrid mode, some of the BUM traffic replication is offloaded to the first hop physical switch to achieve better performance. Hybrid mode requires IGMP snooping on the first-hop switch and access to an IGMP querier in each VTEP subnet.


The NSX management plane is built by the NSX Manager, the centralized network management component of NSX. It provides the single point of configuration and REST API entry-points.

The NSX Manager is installed as a virtual appliance on any ESX™ host in your vCenter Server environment. NSX Manager and vCenter have a one-to-one relationship. For every instance of NSX Manager, there is one vCenter Server. This is true even in a cross-vCenter NSX environment.

In a cross-vCenter NSX environment, there is both a primary NSX Manager and one or more secondary NSX Managers. The primary NSX Manager allows you to create and manage universal logical switches, universal logical (distributed) routers and universal firewall rules. Secondary NSX Managers are used to manage networking services that are local to that specific NSX Manager. There can be up to seven secondary NSX Managers associated with the primary NSX Manager in a cross-vCenter NSX environment.

The consumption of NSX can be driven directly through the NSX Manager user interface, which is available in the vSphere Web Client. Typically end users tie network virtualization to their cloud management platform for deploying applications. NSX provides rich integration into virtually any CMP through REST APIs. Out-of-the-box integration is also available through VMware vCloud Automation Center, vCloud Director, and OpenStack with the Neutron plug-in for NSX.


You can install NSX Edge as an edge services gateway (ESG) or as a distributed logical router (DLR). The number of edge appliances including ESGs and DLRs is limited to 250 on a host.

Uplink interfaces of ESGs connect to uplink port groups that have access to a shared corporate network or a service that provides access layer networking. Multiple external IP addresses can be configured for load balancer, site-to-site VPN, and NAT services.

A logical router can have eight uplink interfaces and up to a thousand internal interfaces. An uplink interface on a DLR generally peers with an ESG, with an intervening Layer 2 logical transit switch between the DLR and the ESG. An internal interface on a DLR peers with a virtual machine hosted on an ESX hypervisor with an intervening logical switch between the virtual machine and the DLR.The DLR has two main components:


The DLR control plane is provided by the DLR virtual appliance (also called a control VM). This VM supports dynamic routing protocols (BGP and OSPF), exchanges routing updates with the next Layer 3 hop device (usually the edge services gateway) and communicates with the NSX Manager and the NSX Controller cluster. High-availability for the DLR virtual appliance is supported through active-standby configuration: a pair of virtual machines functioning in active/standby modes are provided when you create the DLR with HA enabled.

At the data-plane level, there are DLR kernel modules (VIBs) that are installed on the ESXi hosts that are part of the NSX domain. The kernel modules are similar to the line cards in a modular chassis supporting Layer 3 routing. The kernel modules have a routing information base (RIB) (also known as a routing table) that is pushed from the controller cluster. The data plane functions of route lookup and ARP entry lookup are performed by the kernel modules. The kernel modules are equipped with logical interfaces (called LIFs) connecting to the different logical switches and to any VLAN-backed port-groups. Each LIF has assigned an IP address representing the default IP gateway for the logical L2 segment it connects to and a vMAC address. The IP address is unique for each LIF, whereas the same vMAC is assigned to all the defined LIFs.

Logical Routing Components

1. A DLR instance is created from the NSX Manager UI (or with API calls), and routing is enabled, leveraging either OSPF or BGP.

2. The NSX Controller leverages the control plane with the ESXi hosts to push the new DLR configuration, including LIFs and their associated IP and vMAC addresses.

3. Assuming a routing protocol is also enabled on the next-hop device (an NSX Edge [ESG] in this example), OSPF or BGP peering is established between the ESG and the DLR control VM. The ESG and the DLR can then exchange routing information:

The DLR control VM can be configured to redistribute into OSPF the IP prefixes for all the connected logical networks (172.16.10.0/24 and 172.16.20.0/24 in this example). As a consequence, it then pushes those route advertisements to the NSX Edge. Notice that the next hop for those prefixes is not the IP address assigned to the control VM (192.168.10.3) but the IP address identifying the data-plane component of the DLR (192.168.10.2). The former is called the DLR “protocol address,” whereas the latter is the “forwarding address.”

The NSX Edge pushes to the control VM the prefixes to reach IP networks in the external network. In most scenarios, a single default route is likely to be sent by the NSX Edge, because it represents the single point of exit toward the physical network infrastructure.

4. The DLR control VM pushes the IP routes learned from the NSX Edge to the controller cluster.

5. The controller cluster is responsible for distributing routes learned from the DLR control VM to the hypervisors. Each controller node in the cluster takes responsibility for distributing the information for a particular logical router instance. In a deployment where there are multiple logical router instances deployed, the load is distributed across the controller nodes. A separate logical router instance is usually associated with each deployed tenant.

6. The DLR routing kernel modules on the hosts handle the data-path traffic for communication to the external network by way of the NSX Edge.


The NSX components work together to provide the following functional services.


A cloud deployment or a virtual data center has a variety of applications across multiple tenants. These applications and tenants require isolation from each other for security, fault isolation, and non-overlapping IP addresses. NSX allows the creation of multiple logical switches, each of which is a single logical broadcast domain. An application or tenant virtual machine can be logically wired to a logical switch. This allows for flexibility and speed of deployment while still providing all the characteristics of a physical network’s broadcast domains (VLANs) without physical Layer 2 sprawl or spanning tree issues.

A logical switch is distributed and can span across all hosts in vCenter (or across all hosts in a cross-vCenter NSX environment). This allows for virtual machine mobility (vMotion) within the data center without limitations of the physical Layer 2 (VLAN) boundary. The physical infrastructure is not constrained by MAC/FIB table limits, because the logical switch contains the broadcast domain in software.

Routing provides the necessary forwarding information between Layer 2 broadcast domains, thereby allowing you to decrease the size of Layer 2 broadcast domains and improve network efficiency and scale. NSX extends this intelligence to where the workloads reside for East-West routing. This allows more direct VM-to-VM communication without the costly or timely need to extend hops. At the same time, NSX logical routers provide North-South connectivity, thereby enabling tenants to access public networks.

Logical Firewall provides security mechanisms for dynamic virtual data centers. The Distributed Firewall component of Logical Firewall allows you to segment virtual datacenter entities like virtual machines based on VM names and attributes, user identity, vCenter objects like datacenters, and hosts, as well as traditional networking attributes like IP addresses, VLANs, and so on. The Edge Firewall component helps you meet key perimeter security requirements, such as building DMZs based on IP/VLAN constructs, and tenant-to-tenant isolation in multi-tenant virtual data centers.

The Flow Monitoring feature displays network activity between virtual machines at the application protocol level. You can use this information to audit network traffic, define and refine firewall policies, and identify threats to your network.

SSL VPN-Plus allows remote users to access private corporate applications. IPsec VPN offers site-to-site connectivity between an NSX Edge instance and remote sites with NSX or with hardware routers/VPN gateways from 3rd-party vendors. L2 VPN allows you to extend your datacenter by allowing virtual machines to retain network connectivity while retaining the same IP address across geographical boundaries.

The NSX Edge load balancer distributes client connections directed at a single virtual IP address (VIP) across multiple destinations configured as members of a load balancing pool. It distributes incoming service requests evenly among multiple servers in such a way that the load distribution is transparent to users. Load balancing thus helps in achieving optimal resource utilization, maximizing throughput, minimizing response time, and avoiding overload.

Service Composer helps you provision and assign network and security services to applications in a virtual infrastructure. You map these services to a security group, and the services are applied to the virtual machines in the security group using a Security Policy.

Data Security provides visibility into sensitive data stored within your organization’s virtualized and cloud environments and reports any data security violations.

3rd-party solution providers can integrate their solutions with the NSX platform, thus enabling customers to have an integrated experience across VMware products and partner solutions. Data center operators can provision complex, multi-tier virtual networks in seconds, independent of the underlying network topology or components.


Check out the VMware NSX video series.

Reference
VMware NSX 6.2 Document Center
https://pubs.vmware.com/NSX-62/index.jsp#com.vmware.nsx.install.doc/GUID-10944155-28FF-46AA-AF56-7357E2F20AF4.html

vSphere 6.5 Release – What Is New?

vSphere 6.5 Release – What Is New?

 

 

 

 

 

Download VMware vSphere 6.5 Technical White Paper

vSphere 6.5- Technical Overview

1. ESXi Host

2. vCenter Server
*Migration
*Improved Appliance Management
*VMware Update Manager
*Native High Availability
*Built-in Backup / Restore
*Performance improvements in both the vSphere Web Client and the fully supported HTML5-based vSphere Client

a. Migration
*vCenter Server Appliance built-in installer Migration Tool.
*The Migration Tool has several improvements over the recently released vSphere 6.0 Update 2m release.
*Supported on Windows vCenter Server 5.5 and 6.0
(If you’re currently running a Windows vCenter Server 6.0, this is your chance to get to the vCenter Server Appliance using this Migration Tool.)

The Migration Tool allows for more granular selection of migrated data, as follows:
*Configuration
*Configuration, events, and tasks
*Configuration, events, tasks, and performance metrics
*VMware Update Manager (VUM) is now part of the vCenter Server Appliance.
*vCenter inventory and alarm data is migrated by default.

This will be huge for customers who have been waiting to migrate to the vCenter Server Appliance without managing a separate Windows server for VUM.

If you’ve already migrated to the vCenter Server Appliance 6.0 the upgrade process will migrate your VUM baselines and updates to the vCenter Server Appliance 6.5. 

*Improved Appliance Management
Another exclusive feature of the vCenter Server Appliance 6.5 is the improved appliance management capabilities. The vCenter Server Appliance Management Interface continues its evolution and exposes additional health and configurations. This simple user interface now shows Network and Database statistics, disk space, and health in addition to CPU and memory statistics which reduces the reliance on using a command line interface for simple monitoring and operational tasks.
*vCenter Server High Availability

*Active, Passive, and Witness nodes which are cloned from the existing vCenter Server.
*Failover within the vCenter HA cluster can occur when an entire node is lost (host failure, for example) or when certain key services fail.
*vCenter Server 6.5 has a new native high availability solution that is available exclusively for the vCenter Server Appliance.
Backup and Restore
*Built-in backup and restore for the vCenter Server Appliance.
*Running embedded with the appliance.
*This new out-of-the-box functionality enables customers to backup vCenter Server and Platform Services Controller appliances directly from the VAMI or API, and also backs up both VUM and Auto Deploy.

vSphere Web Client
*Based on the Adobe Flex platform and requires Adobe Flash.
*HTML5-based vSphere Client:
*Inventory tree is the default view
*Home screen reorganized
*Renamed “Manage” tab to “Configure”
*Removed “Related Objects” tab
*Performance improvements (VM Rollup at 5000 instead of 50 VMs)
*Live refresh for power states, tasks, and more!

vSphere Client
*Supported version of the HTML5-based vSphere Client
*Built-in vSphere Client ships with vCenter Server 6.5 (both Windows and Appliance)
*Clean, consistent UI built on VMware’s new Clarity UI standards
*Built on HTML5 so it is truly a cross-browser and cross-platform application
*No browser plugins to install/manage
*Integrated into vCenter Server for 6.5 and fully supported
*Fully supports Enhanced Linked Mode
*Users of the Fling have been extremely positive about its performance

3. Auto Deploy

Auto Deploy 6.5 GUI Configuration. We will now walk through the new Auto Deploy GUI and create a custom ESXi image with deploy rules to boot ESXi hosts.
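The same deploy rules can also be created from PowerCLI instead of the new GUI. The sketch below is a minimal example; the offline bundle path, image profile name, cluster name, and vendor pattern are placeholders for your environment.

# Add an ESXi offline bundle, then create and activate a deploy rule
Add-EsxSoftwareDepot 'C:\Depot\VMware-ESXi-6.5.0-4564106-depot.zip'
New-DeployRule -Name 'Lab-ESXi65' `
  -Item 'ESXi-6.5.0-4564106-standard', 'Lab-Cluster' `
  -Pattern 'vendor=Dell Inc.'
Add-DeployRule -DeployRule 'Lab-ESXi65'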

4. Reference

a. What’s New in vSphere 6.5 -vCenter Server
b. What’s New in vSphere 6.5 – ESXi Host

c. What is New in vSphere 6.5-Technical Overview Guide
d. What’s New in vSphere 6.5