VMware


VMWARE 6.0
 https://blog.osnexus.com/2012/03/20/understanding-fc-fabric-configuration-5-paragraphs/
What is a Hypervisor: software that emulates hardware such as the CPU, keyboard, mouse, memory, and everything else an OS requires, creating an environment in which the guest OS believes that hardware is real.
Types of Hypervisor: Type 1 is also called a bare-metal hypervisor; it runs directly on the physical machine, and you run VMs on top of it. ESXi is a Type 1 hypervisor.
Type 2 runs on top of a host OS; examples are VMware Workstation, VirtualBox, GNS3, and Fusion.
ESXi files:

The Platform Services Controller also has a fourth component, the Lookup Service.
Deploying the vCSA from a normal Windows box to a VM running on ESXi.
vCSA installation from .OVA: import the .ova into VMware, then modify the .vmx file and boot; vCenter will be ready.

VMware Tools provides extra software and is also required for graceful shutdowns.

VM Template: create a template of a VM to deploy it multiple times and save time.

OVF: a set of files describing a VM, including a manifest (.ovf); it is a compact format.
OVA: a format that packages the whole VM into a single file which can be moved anywhere, so it's the most portable format. It's a full machine.
Snapshot:
Networking Basic:
                                                                            
 Router:
Freesco router as .ova: import it as a VM and use it as a router when you need to route between VLANs or reach the Internet.
PAT: Port Address Translation
There should be no routing between ESXi and storage; they must be on a dedicated network or a dedicated VLAN.
A dedicated VMkernel port is also needed for traffic between storage and the ESXi host. Don't share the management and storage networks.
Each ESXi host must have two ports for multipathing to storage.

You can use Windows Server 2012, Openfiler, or iSCSI-Target as an iSCSI target (i.e., as iSCSI storage).

ISCSI datastore:
Multipathing: add an additional VMkernel port bound to a new uplink, so that two VMkernel ports are connected to two uplinks.
NFS datastore:
Migration: moving a powered-off virtual machine from one ESXi host to another.
vMotion: moving a running VM.
Important points:
           1. Make sure the same port group exists on the destination host.
           2. During vMotion, the VM must not be connected to a local CD/DVD.
           3. Make sure the source and destination ESXi hosts have the same type of CPU.
           4. Run vMotion on a separate network with its own switch and VMkernel port to preserve the management port's bandwidth.
If vMotion moves only the VM and not its storage, it only needs to replicate the machine's memory state, which is very quick.
DRS: Distributed Resource Scheduler. There are three DRS automation levels: Manual, Partially Automated, and Fully Automated.
The migration threshold acts like a priority: Conservative (low priority) through Aggressive (high priority).
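As a rough illustration of how the migration threshold gates moves (a toy sketch only; the function name and load model are invented, not VMware's actual DRS algorithm):

```python
# Toy sketch of DRS-style balancing (illustrative, not VMware's real algorithm).
# Hosts report a CPU load; a VM is migrated off the busiest host only when the
# imbalance exceeds the configured migration threshold.

def pick_migration(hosts, threshold):
    """hosts: dict of host name -> load (0.0-1.0). Returns (src, dst) or None."""
    src = max(hosts, key=hosts.get)   # most loaded host
    dst = min(hosts, key=hosts.get)   # least loaded host
    # Conservative = high threshold (few moves); Aggressive = low threshold.
    if hosts[src] - hosts[dst] > threshold:
        return src, dst
    return None

hosts = {"esx1": 0.90, "esx2": 0.30, "esx3": 0.50}
print(pick_migration(hosts, threshold=0.4))  # aggressive: ('esx1', 'esx2')
print(pick_migration(hosts, threshold=0.7))  # conservative: None
```

With an aggressive (low) threshold a small imbalance triggers a move; with a conservative (high) threshold the same cluster is left alone.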

Distributed Switch: DV uplinks (Distributed Virtual Uplinks)
 
After creating a DV switch, associate ESXi hosts and VMs to it.
Standard switch policies: Security (MAC address changes & forged transmits), Traffic shaping (bandwidth limits), Teaming and failover (LACP).
Promiscuous mode: when set to Reject, the switch delivers only unicast traffic to its intended destination; machines cannot see traffic that is not addressed to them.
Beacon probing should only be used when you have multiple uplink interfaces.
Notify switches: if vmnic6 goes down, vmnic7 takes over vmnic6's MAC addresses and serves all the VMs of both vmnic6 and vmnic7.
When vmnic6 comes back up, vmnic7 hands the traffic back to vmnic6.
All similar settings are also available at the VM port group level.
VDI: VMware Horizon View.

Migrate the VMkernel port to a virtual distributed switch: create a new DV switch, add a new port group, then reassign the port groups and physical NICs that were attached to the old standard switch. Distributed switch features:
NetFlow: shows top talkers, which protocols are in use, and what type of traffic is flowing.
Port mirroring: mirrors one port's network traffic to another for log analysis; also used with a protocol analyzer for intrusion detection and prevention.
CDP: Cisco Discovery Protocol,   LLDP: Link Layer Discovery Protocol
vApp: a logical container used to control the boot order of the VMs that make up an application (DB, web, etc.). You can create a new vApp or clone an existing one, and you can also clone the VMs inside it.
A vApp can be restored from its template, which can be imported or exported.
It is mainly used when an application has two or three tiers that require a specific boot order.
CPU & Memory Control:
If the host has 16 GB of physical RAM but five VMs together need 20 GB, the techniques below let you extract more out of the physical RAM to satisfy the VMs.
TPS: Transparent Page Sharing. Memory is divided into pages and each page is hashed; when ESXi finds more than one identical hash, it keeps a single shared copy and frees the duplicate pages for the VMs' use.
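The page-hashing idea behind TPS can be sketched roughly like this (illustrative only; real TPS also byte-compares pages before sharing, and the function name here is invented):

```python
import hashlib

# Toy sketch of transparent page sharing (illustrative only): hash each page,
# keep one physical copy per unique hash, and count the duplicates freed.

def share_pages(pages):
    """pages: list of bytes objects. Returns (shared_store, pages_freed)."""
    store = {}
    freed = 0
    for page in pages:
        h = hashlib.sha256(page).hexdigest()
        if h in store:
            freed += 1        # duplicate page: remap the VM to the shared copy
        else:
            store[h] = page   # first occurrence becomes the shared copy
    return store, freed

pages = [b"\x00" * 4096] * 3 + [b"data" * 1024]
store, freed = share_pages(pages)
print(len(store), freed)  # 2 unique pages kept, 2 duplicates freed
```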
Balloon driver: built into VMware Tools. VMware Tools identifies guest memory that is not actively used, and the balloon driver reclaims that memory and gives it back to the ESXi server.
Reservation is a guaranteed value, and limit is the upper bound. Both can also be applied to CPU, but there are no reclamation techniques for CPU like there are for memory; to get more CPU you have to add ESXi hosts.


Implementing reservations and limits: after setting reservations and limits, any remaining contention for resources between VMs is resolved using the shares assigned to each VM, and resources are allocated accordingly.
We can create resource pools, set reservations and limits on them, and assign VMs to the pools; these settings can also be applied to vApps.
Shares: these come into play only when there is contention for resources.
Resource pool creation: don't create nested pools; make pools expandable instead.
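How shares, reservations, and limits interact under contention can be sketched as follows (a simplified model with invented names; the real scheduler is more involved):

```python
# Toy sketch of share-based allocation under contention (illustrative only):
# each VM gets capacity in proportion to its shares, capped at its limit and
# guaranteed at least its reservation (assumes reservations fit in capacity).

def allocate(capacity, vms):
    """vms: dict of name -> {'shares': int, 'reservation': x, 'limit': y}."""
    total_shares = sum(v["shares"] for v in vms.values())
    alloc = {}
    for name, v in vms.items():
        fair = capacity * v["shares"] / total_shares   # proportional slice
        alloc[name] = min(max(fair, v["reservation"]), v["limit"])
    return alloc

vms = {
    "db":  {"shares": 2000, "reservation": 4, "limit": 16},
    "web": {"shares": 1000, "reservation": 2, "limit": 8},
    "app": {"shares": 1000, "reservation": 2, "limit": 8},
}
print(allocate(16, vms))  # db gets twice the slice of web or app
```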
HA: High Availability: if the ESXi host running a VM (or its connectivity) goes down, the VM is restarted on another ESXi host. Because the VM needs a brand-new boot, there is some downtime, so HA alone is not enough for 100% uptime. The vSAN network can be used for heartbeats.
Isolation: in an HA cluster, a host that loses management network connectivity is considered isolated. In this situation HA uses the datastore heartbeat to determine whether the ESXi host is still alive. If datastore connectivity is also lost, ESXi uses VMCP (VM Component Protection) for HA.
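The isolation-versus-failure decision described above can be sketched as (illustrative only; the function name and return strings are invented):

```python
# Toy sketch of HA's isolation vs. failure decision (illustrative only):
# a host that stops answering management-network heartbeats may still be
# alive, so HA checks the datastore heartbeat before restarting its VMs.

def ha_decision(mgmt_heartbeat, datastore_heartbeat):
    if mgmt_heartbeat:
        return "healthy"
    if datastore_heartbeat:
        return "isolated"          # host alive; only the mgmt network is lost
    return "failed: restart VMs"   # no sign of life: restart on another host

print(ha_decision(True, True))    # healthy
print(ha_decision(False, True))   # isolated
print(ha_decision(False, False))  # failed: restart VMs
```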
FT is for 100% uptime: it runs two copies of a VM, primary and secondary, on different ESXi hosts. Changes are kept in sync from primary to secondary over a separate VMkernel network called FT logging. Primary and secondary use their own separate datastores.
Create a dedicated VMkernel port for FT logging, and enable FT on each VM that needs it.
Storage Policy: tiers such as Gold, Silver, Bronze; for example, 32 extents (LUNs) of 64 GB each.

Implementing Storage  Policy:
Add a tag to the datastore, then create a policy and assign that tag to it, then assign the policy to a VM. Check compliance by migrating the VM from the old storage to compliant storage covered by the policy.

vSAN: requires a minimum of three ESXi hosts with three drives each: one SSD (for caching) and two HDDs. vSAN is enabled at the cluster level.
If HA is enabled, turn it off before enabling vSAN.
Add LDAP to SSO: in the vSphere Web Client, go to Administration → Configuration → Identity Sources, then click + to add another domain.

vSphere Authorization:  permissions and roles for users.


Alarms: 

Affinity rules (VM/Host rules and host groups): keep related things together. For example, if two VMs send data to each other, it's better to place them on the same ESXi host to save network bandwidth.
They are defined at the cluster level.
 

Customization Specification: used when you want to make the same changes to many (hundreds or thousands of) machines; create one customization and apply it to all. The changes are applied in the background by the Sysprep script built into this feature.

Content Library: a common place where you can share your files with ESXi hosts or VMs. A library can be published to other users, who can subscribe to it.
Create the content library, add your common files, and publish them for others to use.

Additional Features:
 
DPM: Distributed Power Management, for saving power and setting schedules.
Host Profile: a template for ESXi hosts used to deploy a common change to multiple hosts. For example, if you need to configure the NTP service on multiple ESXi hosts, that can be done easily with a host profile from the vSphere home page.

Auto Deploy: PXE-boot installation of ESXi servers.
Update Manager: also known as VUM. Used for patch management and VMware Tools updates. It used to be installed on a separate VM, but from 6.5 onward it can be installed on the vCenter Server.
VDP: vSphere Data Protection, for backups of virtual machines. Uses CBT (Changed Block Tracking).

VMware PSP and SATP in Plain English

VMware's PSA is awash in abbreviations and options
I am often questioned during my Storage for Virtual Environments seminar presentations about VMware's Pluggable Storage Architecture (PSA). This system is fairly straightforward in concept: VMware provides native multipathing support for a variety of storage arrays, and allows third parties to substitute their own plug-ins at various points in the stack. But the profusion of acronyms and third-party options makes it difficult for end-users to figure out what is going on. In an effort to help, I present here another entry in my "VMware storage features in plain English" series.
Note: I am more of a storage guy than a virtualization expert. I consider myself one of those end-users who have had trouble figuring out what's going on with PSA specifically, and with VMware storage features in general. I welcome comments and suggestions for corrections or improvements to this and all of my articles. Thanks for your help!

Introducing Pluggable Storage Architecture (PSA)

Pluggable storage architecture was one of the major enhancements introduced in vSphere 4. Functionally similar to Microsoft’s MPIO stack for Windows, PSA includes native multipathing support and allows vendors to plug in their own advanced features.
I find the VMware diagram confusing. Is mine more or less accurate and readable?
The ESX kernel (VMkernel) walks down through three layers when communicating with storage:
  1. In the top layer, VMware native NMP or third-party MPP software decides which SATP to use, or whether to use the native interface. MASK_PATH also operates at this layer.
  2. The SATP layer includes native generic path selection (active/active, active/passive) and standard ALUA, and also allows third-party plug-ins (SATPs) to override its behavior. The SATP monitors these paths, reports changes, and initiates fail-over on the array as needed.
  3. At the PSP layer, software decides which physical channel to use for I/O requests.
There are three types of PSA plugins for vSphere 4:
  1. Storage Array Type Plug-In (SATP)
  2. Path Selection Plug-in (PSP)
  3. A complete third-party multipathing software stack (MPP)
As is the case with VAAI, VMware includes a number of third-party plug-ins in the ESXi install. Users can simply activate many of these according to their needs, though some require additional fees and licensing.
Storage Array Type Plug-in (SATP) List
Storage Array Type Plug-Ins (SATPs) adapt the VMware Pluggable Storage Architecture multipathing solution to the specific characteristics of a storage array. This is very important, since each storage array design differs substantially in detail and support, especially when it comes to load balancing and failover between controllers, ports, and paths. So it was critical for VMware to develop a standard interface to communicate with arrays.
SATPs allow load balancing across multiple paths and intelligent path selection, and help ride out troubled conditions such as "chatter", when paths rapidly fail back and forth between controllers.
The SATP has critical tasks to perform in the PSA stack:
  1. Decide which method of communication to use with the storage (PSA or native)
  2. Monitor the health of the physical I/O channels or paths
  3. Report any changes in the state of the paths up the stack
  4. Perform actions required to fail over storage between controllers on the array
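The SATP's monitor/report/failover duties can be caricatured in a few lines (a toy sketch; the class, method, and path names are invented):

```python
# Toy sketch of the SATP's role in the PSA stack (illustrative only):
# track path health, report state changes up the stack, and fail over
# to a surviving path when the current one dies.

class ToySATP:
    def __init__(self, paths):
        self.state = {p: "active" for p in paths}
        self.events = []

    def report(self, path, new_state):
        self.state[path] = new_state
        self.events.append((path, new_state))  # report the change up the stack

    def failover(self):
        live = [p for p, s in self.state.items() if s == "active"]
        return live[0] if live else None       # pick a surviving path

satp = ToySATP(["vmhba1:C0:T0:L0", "vmhba2:C0:T0:L0"])
satp.report("vmhba1:C0:T0:L0", "dead")
print(satp.failover())  # vmhba2:C0:T0:L0
```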
VMware vSphere includes a variety of generic plugins for storage arrays. I’ve identified the following:
  • VMW_SATP_LOCAL – Local SATP for direct-attached devices
  • VMW_SATP_DEFAULT_AA — Generic for active/active arrays
  • VMW_SATP_DEFAULT_AP — Generic for active/passive arrays
  • VMW_SATP_ALUA — Asymmetric Logical Unit Access-compliant arrays
Although I have sometimes seen other SATP plug-ins mentioned, the following plug-ins are all that are listed in the VMware ESX Hardware Compatibility List.
  • VMW_SATP_LSI — LSI/NetApp arrays from Dell, HDS, IBM, Oracle, SGI
  • VMW_SATP_SVC — IBM SVC-based systems (SVC, V7000, Actifio)
  • VMW_SATP_CX — EMC/Dell CLARiiON  and Celerra (also VMW_SATP_ALUA_CX)
  • VMW_SATP_SYMM — EMC Symmetrix DMX-3/DMX-4/VMAX, Invista
  • VMW_SATP_INV — EMC Invista and VPLEX
  • VMW_SATP_EQL — Dell EqualLogic systems
EMC PowerPath and HDS HDLM also support a variety of storage arrays, but I would classify these as full MPP replacements for PSA, rather than SATP plug-ins.
You can see which SATP plug-ins are available using the following esxcli command:
esxcli nmp satp list
Path Selection Plug-in (PSP) List
In contrast to the diversity of VAAI and SATP plug-ins, the universe of path selection plug-ins is fairly small. Most storage arrays are supported with either Most Recently Used (MRU) or Fixed path selection approaches. Many also support Round Robin (RR) path selection. The only vendor with a specific PSP that is not also part of a full MPP (like EMC PowerPath or HDS HDLM) is Dell, which offers a special routed path selection plug-in for the EqualLogic iSCSI arrays.
  • VMW_PSP_MRU — Most-Recently Used (MRU) — Supports hundreds of storage arrays
  • VMW_PSP_FIXED — Fixed – Supports hundreds of storage arrays
  • VMW_PSP_RR — Round-Robin – Supports dozens of storage arrays
  • DELL_PSP_EQL_ROUTED — Dell EqualLogic iSCSI arrays
As mentioned, EMC PowerPath also offers path selection as a plug-in in addition to the full MPP stack. Many other vendors offer unique path selection plug-ins, over 100 in total, but these are not specifically called out in the VMware HCL apart from their existence. I would love to learn more about them, however.
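The behavior of the three generic policies can be sketched roughly as follows (illustrative only; the function names are invented):

```python
import itertools

# Toy sketch of the three generic path selection policies (illustrative only).

def fixed(paths, preferred):
    return preferred                  # always use the designated preferred path

def mru(paths, last_used):
    return last_used                  # stick with the most recently used path

def round_robin(paths):
    cycle = itertools.cycle(paths)    # rotate I/O across all active paths
    return [next(cycle) for _ in range(4)]

paths = ["vmhba1", "vmhba2"]
print(fixed(paths, "vmhba1"))   # vmhba1
print(mru(paths, "vmhba2"))     # vmhba2
print(round_robin(paths))       # ['vmhba1', 'vmhba2', 'vmhba1', 'vmhba2']
```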
You can see which PSP plug-ins are available using the following esxcli command:
esxcli nmp psp list

A Few Questions:

QUESTION NO:1
An administrator is planning a vSphere infrastructure with the following specific networking requirements:
• The ability to shape inbound (RX) traffic
• Support for Private VLANs (PVLANs)
• Support for LLDP (Link Layer Discovery Protocol)
What is the minimum vSphere Edition that will support these requirements?
A. vSphere Essentials Plus
B. vSphere Standard
C. vSphere Enterprise
D. vSphere Enterprise Plus
Answer: D
Explanation:

QUESTION NO:2
What two IT infrastructure components are virtualized by vSphere Essentials? (Choose two.)
A. Networks
B. Applications
C. Storage
D. Management
Answer: A,C
Explanation:

QUESTION NO:3
Which minor badge items make up the Efficiency badge score for an ESXi host in vCenter Operations Manager?
A. Workload, Anomalies, Faults
B. Workload, Stress, Density
C. Time Remaining, Capacity Remaining
D. Reclaimable Waste, Density
Answer: D
How to disable unexposed features, hardware devices, and other attack vectors for a virtual machine
To prevent the 30-50 second loss of connectivity during STP convergence, perform one of these options:

  • Set STP to Portfast on all switch ports that are connected to network adapters on an ESXi/ESX host
  • What is a fault domain? https://cormachogan.com/2015/04/20/vsan-6-0-part-8-fault-domains/
  • VM overrides?
  • VMware Converter?
  • SMBIOS? BIOS UUID?
  • VOMA?
  • NUMA?
  • VMCP: VM Component Protection, for APD and PDL protection.
  • vCloud Connector?
  • vCloud Air is a public cloud computing service built on vSphere from VMware. vCloud Air has three "infrastructure as a service" (IaaS) subscription service types: dedicated cloud, virtual private cloud, and disaster recovery. vCloud Air also offers a pay-as-you-go service named Virtual Private Cloud OnDemand.
