Tuesday, August 30, 2016

HBA Storage path redundancy status using PowerCLI

$views = Get-View -ViewType "HostSystem" -Property Name, Config.StorageDevice

$result = @()

foreach ($view in $views | Sort-Object -Property Name) {
    Write-Host "Checking" $view.Name

    # Only look at Fibre Channel HBAs in the SCSI topology
    $view.Config.StorageDevice.ScsiTopology.Adapter |
        Where-Object { $_.Adapter -like "*FibreChannelHba*" } |
        ForEach-Object {
            $hba = $_.Adapter.Split("-")[2]

            $active = 0
            $standby = 0
            $dead = 0

            $_.Target | ForEach-Object {
                $_.Lun | ForEach-Object {
                    $id = $_.ScsiLun

                    $multipathInfo = $view.Config.StorageDevice.MultipathInfo.Lun |
                        Where-Object { $_.Lun -eq $id }

                    # Count the paths in each state for this LUN
                    $a = [array]($multipathInfo.Path | Where-Object { $_.PathState -like "active" })
                    $s = [array]($multipathInfo.Path | Where-Object { $_.PathState -like "standby" })
                    $d = [array]($multipathInfo.Path | Where-Object { $_.PathState -like "dead" })

                    $active += $a.Count
                    $standby += $s.Count
                    $dead += $d.Count
                }
            }

            $result += "{0},{1},{2},{3},{4}" -f $view.Name.Split(".")[0], $hba, $active, $dead, $standby
        }
}

ConvertFrom-Csv -Header "VMHost", "HBA", "Active", "Dead", "Standby" -InputObject $result |
    Format-Table -AutoSize > "d:\temp\HBA\storagepaths $(Get-Date -Format yyyy-MM-dd-hhmmss).txt"

How to get NIC speed using PowerCLI


Connect-VIServer vcenterservername

$vmhosts = Get-VMHost | Get-View
foreach ($vmhost in $vmhosts) {
    Write-Output $vmhost.Name
    $i = 0
    # A foreach loop avoids the failure the old Do/Until loop hit on hosts
    # with no physical NICs
    foreach ($pnic in $vmhost.Config.Network.Pnic) {
        # LinkSpeed is $null when the link is down
        $speed = if ($pnic.LinkSpeed) { $pnic.LinkSpeed.SpeedMb } else { "link down" }
        Write-Output "Pnic$i $speed"
        $i++
    }
}

Configuring NTP server using PowerCLI

The scripts below execute against all the ESXi hosts managed by the vCenter Server.

1) Remove old NTP server settings & add new NTP server settings:

Connect-VIServer vcenterservername

$oldntpservers = '192.168.1.254'
$newntpservers = '192.168.2.254'

foreach ($vmhost in Get-VMHost) {
    # Stop the ntpd service
    $vmhost | Get-VMHostService | Where-Object { $_.Key -eq 'ntpd' } | Stop-VMHostService -Confirm:$false
    # Remove the old NTP servers
    $vmhost | Remove-VMHostNtpServer -NtpServer $oldntpservers -Confirm:$false
    # Add the new NTP servers
    $vmhost | Add-VMHostNtpServer -NtpServer $newntpservers
    # Start the ntpd service
    $vmhost | Get-VMHostService | Where-Object { $_.Key -eq 'ntpd' } | Start-VMHostService
}

2) Add new NTP server settings:

Connect-VIServer vcenterservername
Get-VMHost | Add-VMHostNtpServer 192.168.1.253
Get-VMHost | Add-VMHostNtpServer 192.168.1.254
Get-VMHost | Get-VMHostFirewallException | where {$_.Name -eq "NTP client"} | Set-VMHostFirewallException -Enabled:$true
Get-VMHost | Get-VmHostService | Where-Object {$_.key -eq "ntpd"} | Start-VMHostService

Get-VMhost | Get-VmHostService | Where-Object {$_.key -eq "ntpd"} | Set-VMHostService -policy "automatic"

Tuesday, June 30, 2015

How to run EMC grab on ESXi?


Download link for EMC GRAB for ESXi: ftp://ftp.emc.com/pub/emcgrab/ESXi/
Download link for EMC GRAB for ESX: ftp://ftp.emc.com/pub/emcgrab/ESX/

Steps:

1) Open cmd in Windows machine from where the ESX/ESXi server is accessible.

2) Execute the java -version command to verify that Java is installed. If the command prints the Java version information, Java is installed properly; otherwise, install Java first. You can download the latest JDK from www.oracle.com > Java > Java SE.

3) Change your current working directory to the directory where you have placed EMC host grab utility. Suppose you have placed the grab utility in the C:\ drive then the command would be:

C:\> cd C:\EMC-ESXi-GRAB-1.3.X

4) Use the command below to gather the EMC host grabs for a VMware host. Keep the ESXi server's IP address, username, and password, the IP addresses of SPA and SPB, and the credentials for all storage arrays attached to that host ready, as the command may prompt for these details.

C:\EMC-ESXi-GRAB-1.3.X> emcgrab.exe -host <IP_Address_of_ESXi> -user <username_of_ESXi> -password <password_of_ESXi> -vmsupport


5) It will ask you to enter some customer-site information such as name, address, and site ID. After you enter this information, the grab-gathering process starts (shown as a row of dots), and the command prompt returns after some time. Let the command run to completion.

6) The output is saved as a .zip file in a folder named "Output" in the same directory where you placed the ESXi host grab utility on your Windows machine. In this case the path would be "C:\EMC-ESXi-GRAB-1.3.x\Output".

Saturday, October 4, 2014

What's new in vSphere 5.5?
  • Hot-Pluggable SSD PCI Express (PCIe) Devices - ability to hot-add or hot-remove the solid-state disks.
  • Support for Reliable Memory Technology - ESXi runs in memory; if a memory error occurs, the hypervisor can crash and take its VMs with it. To protect against memory errors, ESXi takes advantage of hardware vendors' Reliable Memory technology.
  • Enhancements for CPU C-States.
  • Storage - Support for 62TB VMDK- vSphere 5.5 increases the maximum size of a virtual machine disk file (VMDK) to 62TB (note the maximum VMFS volume size is 64TB where the max VMDK file size is 62TB).  The maximum size for a Raw Device Mapping (RDM) has also been increased to 62TB.
  • Expanded vGPU Support - 5.1 was limited to Nvidia - now supports Nvidia & AMD GPUs.
  • Doubled Host-Level Configuration Maximums.
  • 16Gb End-to-End Support – In vSphere 5.5 16Gb end-to-end FC support is now available.  Both the HBAs and array controllers can run at 16Gb as long as the FC switch between the initiator and target supports it.
  • Graphics acceleration now possible on Linux Guest OS.
  • vSphere App HA - works in conjunction with vSphere HA monitoring and VM monitoring to improve application up-time. can be configured to restart application service when issue is detected. Can also reset VM if Application fails to start.
  • For hosts with different CPU vendors in a cluster:
    • Per-Virtual-Machine CPU masking - hide or show NX/XD bit (No Execute / Execute Disable)
    • VMware Enhanced vMotion Compatibility - on the hardware side, Intel and AMD put functions in their CPUs that allow the CPU ID value returned by the CPUs to be modified. Intel calls this functionality FlexMigration; AMD embedded it into the AMD-V virtualization extensions. On the software side, VMware created software that takes advantage of this hardware functionality to create a common CPU ID baseline for all servers within the cluster. Introduced in ESX/ESXi 3.5 Update 2.
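EVC can also be enabled from PowerCLI; a minimal sketch, assuming a placeholder cluster name and baseline (the -EVCMode parameter requires PowerCLI 6.5 R1 or later):

```powershell
Connect-VIServer vcenterservername

# Enable EVC on an existing cluster.
# "Cluster01" and the "intel-sandybridge" baseline are placeholder values;
# choose a baseline supported by every host's CPU in the cluster.
Set-Cluster -Cluster "Cluster01" -EVCMode "intel-sandybridge" -Confirm:$false
```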

How Storage vMotion works?

Storage vMotion enables live migration of running virtual machine disk files from one storage location to another with no downtime or service disruption.

Benefits:
  • Simplifies storage array migrations and storage upgrades.
  • Dynamically optimizes storage I/O performance.
  • Efficiently utilizes storage and manages capacity.
  • Allows manual balancing of the storage load.

Storage vMotion process:

  1. vSphere copies the non-volatile files that make up the VM: .vmx, .vswp, logs, and snapshots.
  2. vSphere starts a ghost or shadow VM on the destination datastore. Because its virtual disks have not been copied over yet, the ghost VM sits idle waiting for them.
  3. Storage vMotion creates a destination disk, then a mirror device - a driver that mirrors I/Os between source and destination.
  4. With I/O mirroring in place, vSphere makes a single-pass copy of the virtual disks from source to destination. As changes are made to the source, the mirror driver ensures they are also reflected at the destination.
  5. When the virtual disk copy completes, vSphere quickly suspends and resumes the VM to transfer control over to the ghost VM on the destination datastore.
  6. The files on the source datastore are deleted.

Virtual mode RDM storage vMotion - if you want to migrate only the vmdk mapping file - select “Same Format as source”.


Physical mode RDMs are not affected.
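The same migration can be started from PowerCLI; a minimal sketch, assuming placeholder VM and datastore names:

```powershell
Connect-VIServer vcenterservername

# Storage vMotion: relocate a running VM's files to another datastore.
# "MyVM" and "Datastore02" are placeholder names for this sketch.
Get-VM -Name "MyVM" | Move-VM -Datastore (Get-Datastore -Name "Datastore02")
```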

How VMware DRS works?

VMware DRS aggregates the computing capacity across a collection of servers and intelligently allocates the available resources among the virtual machines based on predefined rules. When a virtual machine experiences increased load, DRS evaluates its priority against those rules and, if warranted, redistributes virtual machines among the hosts.

VMware DRS allows you to control the placement of virtual machines on the hosts within the cluster by using affinity rules. By default, VMware DRS checks every 5 minutes whether the cluster's workload is balanced. DRS must also be enabled for resource pools to be created in a cluster.

DRS is also invoked by certain actions in the cluster:
  • adding or removing the ESXi host
  • changing resource settings on the VM

Automatic DRS mode determines the best possible distribution of virtual machines, while manual DRS mode provides recommendations for optimal placement of the virtual machines and leaves it to the system administrator to decide.

Manual – every time you power on a VM, the cluster prompts you to select the ESXi host where the VM should run; migrations are only recommended.

Partially Automated – DRS automatically selects the ESXi host at power-on and recommends migrations.

Fully Automated – DRS automatically selects the ESXi host at power-on and performs migrations automatically. The migration threshold is scaled from Conservative to Aggressive:
  • Apply priority 1 recommendations - affinity rules & host maintenance
  • Apply priority 1 & 2 recommendations - promise significant improvement to cluster load balance
  • Apply priority 1, 2 & 3 recommendations - promise at-least good improvement to cluster load balance
  • Apply priority 1, 2, 3 & 4 recommendations - promise moderate improvement to cluster load balance
  • Apply all recommendations - promise even a slight improvement to cluster load balance. 
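These automation levels can also be set from PowerCLI; a minimal sketch, assuming a placeholder cluster name:

```powershell
Connect-VIServer vcenterservername

# Enable DRS and set the automation level on a cluster.
# "Cluster01" is a placeholder; valid levels are Manual,
# PartiallyAutomated, and FullyAutomated.
Set-Cluster -Cluster "Cluster01" -DrsEnabled:$true `
    -DrsAutomationLevel FullyAutomated -Confirm:$false
```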

There are three major elements here:
  1. Migration Threshold
  2. Target host load standard deviation
  3. Current host load standard deviation

When you change the “Migration Threshold”, the “Target host load standard deviation” also changes. For example, a two-host cluster with the threshold set to three has a THLSD of 0.2, while a three-host cluster has a THLSD of 0.163.

While the cluster is imbalanced (current host load standard deviation > target host load standard deviation), DRS selects a VM to migrate based on specific criteria, simulates the move, re-computes the current host load standard deviation, and adds the move to the migration recommendation list. If the cluster is still imbalanced, the procedure repeats.
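The imbalance test hinges on the current host load standard deviation. As a simplified sketch in plain PowerShell (the load values are made-up examples; DRS's real metric is based on per-host entitlements, not these numbers):

```powershell
# Made-up per-host load values for illustration only.
$hostLoads = @(0.45, 0.60, 0.30)

# Standard deviation: sqrt of the average squared distance from the mean.
$mean = ($hostLoads | Measure-Object -Average).Average
$variance = ($hostLoads | ForEach-Object { [math]::Pow($_ - $mean, 2) } |
    Measure-Object -Average).Average
$stdDev = [math]::Sqrt($variance)

"Current host load standard deviation: {0:N3}" -f $stdDev
```

If this value exceeds the target host load standard deviation for the configured threshold, the cluster is considered imbalanced.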

How does DRS select the best VM to move?

For each VM, DRS checks whether a vMotion to each of the hosts that are less utilized than the source host would result in a less imbalanced cluster and would meet the cost-benefit and risk-analysis criteria.