Wednesday, August 7, 2013

How to telnet from ESX host

How to "telnet" from ESX host to check for port connectivity

If the ESX server does not have telnet installed, follow the steps below:

1) Put the ESX CDROM into the server.
2) Type:  "mount /mnt/cdrom"
3) Type:  "cd /mnt/cdrom/VMware/RPMS"
4) Type:  "rpm -Uvh telnet-0.17-26.EL3.3.i386.rpm"
5) Enable the Telnet outbound firewall port

(or)

Search for the RPM file "telnet-0.17-26.EL3.3.i386.rpm" and download the appropriate file for your OS.


Then run the command

"rpm -Uvh file_name.rpm"

for example,

"rpm -Uvh telnet-0.17-26.EL3.3.i386.rpm"

If telnet is already installed, ignore the above steps.
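To verify whether the telnet client is installed, run the command below; it prints the installed package version, or "package telnet is not installed" if it is missing.

"rpm -q telnet"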

To check port connectivity, run the command below:

telnet ip_address port_id

for example,

telnet 10.25.5.3 1918
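If the port is open and reachable, the output looks similar to the following (press Ctrl+] and then type "quit" to exit):

Trying 10.25.5.3...
Connected to 10.25.5.3.
Escape character is '^]'.

If the port is blocked or closed, telnet either reports "Connection refused" or hangs at "Trying ..." until it times out.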

Tuesday, August 6, 2013

vMotion of virtual machines fails at 82%

vMotion of virtual machines fails at 82% (how to fix)

vMotion of a virtual machine fails at 82% with the error “A general system error occurred: Source detected that destination failed to resume.” As a result, you won't be able to migrate the virtual machine.

The issue is caused by incorrect datastore information (UUID mismatch). 

To confirm this, run the command below on all ESX hosts in the cluster.

# vdf -h

The output appears similar to:

[ESX01 ~]# vdf -h
/vmfs/volumes/1dd794c6-cc279de7
600G 438G 161G 73% /vmfs/volumes/NFS01

[ESX02 ~]# vdf -h
/vmfs/volumes/36132c1c-6f72083e
600G 438G 161G 73% /vmfs/volumes/NFS01

[ESX03 ~]# vdf -h
/vmfs/volumes/1dd794c6-cc279de7
600G 438G 161G 73% /vmfs/volumes/NFS01

[ESX04 ~]# vdf -h
/vmfs/volumes/1dd794c6-cc279de7
600G 438G 161G 73% /vmfs/volumes/NFS01

[ESX05 ~]# vdf -h
/vmfs/volumes/1dd794c6-cc279de7
600G 438G 161G 73% /vmfs/volumes/NFS01

Compare the UUIDs from the command output on all ESX hosts.

Here, the second host, ESX02, sees the datastore with a different UUID compared to the other ESX hosts.

For ESX01, ESX03, ESX04 & ESX05, the UUID of the datastore is the same (shown below), so a virtual machine can be recognized on any of these hosts when it is vMotioned.

/vmfs/volumes/1dd794c6-cc279de7

And for ESX02 the UUID of the datastore is different (shown below).

/vmfs/volumes/36132c1c-6f72083e

To resolve this, run the “esxcfg-nas -l” command on one of the ESX hosts that is working correctly and make a note of the NFS path.

Then unmount the datastore from the faulty ESX host (ESX02) and remount the NFS datastore using the exact NFS path noted from the other ESX server. The vSphere Client steps are below; a service-console alternative is sketched after the numbered steps.

Steps to unmount & remount the datastore

1)    Change the cluster DRS setting to “Manual”. (Or) Put the ESX host into “maintenance mode” -> move it out of the ESX cluster -> exit “maintenance mode”.

2)    Make sure there are no powered-on virtual machines on ESX02.

3)    Log in to the ESX server ESX02 using the vSphere Client.

4)    Go to the “Configuration” tab.

5)    Select “Storage”.

6)    Right-click the "NFS01" datastore and click “Unmount”.

7)    Repeat step 6 for all the NFS datastores until no datastore is mounted.

8)    Then click “Add Storage”.

9)    Select “Network File System” -> click “Next”.

10) Use the output of the “esxcfg-nas -l” command run on one of the working ESX hosts, for example:

NFS01 is /vol/pecs_esx_nfs_vol01/esx_nfs_vol01_q from 10.xx.xx.xx mounted

Datastore Name: NFS01

Folder: /vol/pecs_esx_nfs_vol01/esx_nfs_vol01_q

Server: 10.xx.xx.xx

11) Fill in the properties in the “Add Storage” dialog box with the information above.

12) Click “Next” -> Click “Finish”

13) Repeat steps 8 through 12 to mount all the NFS datastores.

14) Once done, compare the output of “esxcfg-nas -l” from ESX02 with the other ESX servers; the output should match.

15) Perform a vMotion of a test virtual machine to ESX02; the migration should now succeed.

16) If the migration of the test virtual machine is successful, change the cluster DRS setting back to “Automatic”. (Or) Put the ESX host into “maintenance mode” -> move it back into the ESX cluster -> exit “maintenance mode”.
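As an alternative to the vSphere Client steps above, the unmount and remount can also be done from the ESX service console with esxcfg-nas. This is a minimal sketch assuming the datastore name, server and export path shown in the example output above; make sure no powered-on virtual machines are using the datastore first:

# esxcfg-nas -l
# esxcfg-nas -d NFS01
# esxcfg-nas -a -o 10.xx.xx.xx -s /vol/pecs_esx_nfs_vol01/esx_nfs_vol01_q NFS01
# esxcfg-nas -l

Here "-d" removes the NFS mount, "-a" adds it back, "-o" is the NFS server and "-s" is the exported share; the final "esxcfg-nas -l" should now show the same path as on the working hosts.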

How to configure Distributed Power Management on ESX server

Configure Distributed Power Management on ESX server

To configure Distributed Power Management, the IPMI / iLO settings first need to be configured and tested on each ESX server.

Configuration of IPMI / iLO Settings.

1) Go to the ESX server's Configuration tab & click on Power Management under the Software settings.


2) Click on Properties to configure the BMC Settings.

3) Provide the username, password, iLO IP address & iLO MAC address for the ESX host.
The IPMI / iLO configuration can be obtained from the iLO login screen (HP Onboard Administrator).

 

4) Also change the NIC speed to auto-negotiate, as some iLO cards support only 100 Mbps.


5) Check Wake-on-LAN compatibility for all network adapters on each ESX host.


6) Once all these settings are done, we can test the IPMI / iLO configuration.
Before testing the IPMI / iLO configuration, evacuate all the VMs running on the specific ESX server.


7) Once the VMs are evacuated from the ESX server, right-click the ESX server & click Enter Standby Mode.





8) Host Placed in Standby Mode.


9) Cluster Summary once the host is in DPM mode.


10) Power on host using IPMI / iLO Settings.


11) Once the power-on command is sent, the IPMI / iLO settings are used to power on the blade server.


12) Once the blade is powered on, the ESX server shows its normal status.
Now that the host is powered on, the cluster HA configuration shows a host failure tolerance of 1 host.


13) Now that the IPMI / iLO testing is successful, we can enable the DPM feature on the cluster.

14) Right-click the cluster & click Edit Settings.


15) Now go to Power Management. Prior to actual implementation, change the settings to Manual mode.

16) This enables the DPM feature but does not execute any recommendations; it only provides recommendations, which the administrator can execute manually.

17) Once the administrator is comfortable with the recommendations, the setting can be changed to Automatic & the DPM threshold adjusted according to requirements.


18) Now click on Host Options & check the status of the ESX servers.
At this point DPM shows as disabled because the cluster is not yet enabled for the DPM feature, but you can see the last standby exit time & result as Succeeded.


How to verify that the vCenter Server Agent Service is running on an ESX host

Verifying that the vCenter Server Agent Service is running on an ESX host

For troubleshooting purposes, it may be necessary to verify if the vCenter Server Agent Service (vmware-vpxa) is running. 

1) Log in to the ESX host using root credentials from a PuTTY session or directly from the console.

2) Type "ps -ef | grep vpxa" -> press Enter.

If vmware-vpxa is running, you see output similar to:

[root@server]# ps -ef | grep vpxa
root     24663     1  0 15:44 ?        00:00:00 /bin/sh /opt/vmware/vpxa/bin/vmware-watchdog -s vpxa -u 30 -q 5 /opt/vmware/vpxa/sbin/vpxa
root     26639 24663  0 21:03 ?        00:00:00 /opt/vmware/vpxa/vpx/vpxa
root     26668 26396  0 21:23 pts/3    00:00:00 grep vpxa

If vmware-vpxa is not running, you see output similar to:

[root@server]# ps -ef | grep vpxa
root     26709 26396  0 21:24 pts/3    00:00:00 grep vpxa
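If vmware-vpxa is not running, it can usually be restarted from the service console on classic ESX with the command below, after which you can re-run "ps -ef | grep vpxa" to confirm the agent processes are back:

# service vmware-vpxa restart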

Monday, August 5, 2013

vSphere ESX Patch release and its build number

vSphere ESX Patch release and its build number

Information about VMware vSphere ESX patch releases and their corresponding build numbers is given below.

You can get more information about the patch releases from the link below.

You need to log in to "my.vmware.com" to get access to the link.

https://my.vmware.com/group/vmware/patch#search
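To check which build a host is currently running, you can run the command below from the ESX service console and match the build number against the tables that follow (the sample output assumes an ESX 4.1 GA host; your version and build will differ):

# vmware -v
VMware ESX 4.1.0 build-260247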

ESX 4.1 release and build

Release Name | Release Date | Build Number
ESX410-201307001 | 7/31/2013 | 1198252
ESX410-201304001 | 4/30/2013 | 1050704
ESX410-201301001 | 1/31/2013 | 988178
ESX410-201211001 | 11/15/2012 | 874690
ESX 4.1 Update 3 | 8/30/2012 | 800380
ESX410-201206001 | 6/14/2012 | 721871
ESX410-201205001 | 5/3/2012 | 702113
ESX410-201204001 | 4/26/2012 | 659051
ESX410-201201001 | 1/30/2012 | 582267
ESX410-201112001 | 8/12/2011 | 538358
ESX 4.1 Update 2 | 10/27/2011 | 502767
ESX410-201107001 | 7/28/2011 | 433742
ESX410-201104001 | 4/28/2011 | 381591
ESX 4.1 Update 1 | 2/10/2011 | 348481
ESX410-201011001 | 11/29/2010 | 320137
ESX410-201010001 | 11/15/2010 | 320092
ESX 4.1 | 7/13/2010 | 260247

ESX 4.0 release and build

Release Name | Release Date | Build Number
ESX400-201305001 | 5/30/2013 | 1070634
ESX400-201302001 | 2/7/2013 | 989856
ESX400-201209001 | 9/14/2012 | 787047
ESX400-201206001 | 6/14/2012 | 721907
ESX400-201205001 | 5/3/2012 | 702116
ESX400-201203001 | 3/30/2012 | 660575
ESX400-201112401 | 12/13/2011 | 538074
ESX 4.0 Update 4 | 11/17/2011 | 504850
ESX400-201110001 | 10/13/2011 | 480973
ESX 4.0 Update 3 | 5/5/2011 | 398348
ESX400-201104001 | 4/28/2011 | 392990
ESX400-201103001 | 3/7/2011 | 360236
ESX400-201101001 | 1/4/2011 | 332073
ESX400-201009001 | 9/30/2010 | 294855
ESX 4 Update 2 | 6/10/2010 | 261974
ESX400-201005001 | 5/27/2010 | 256968
ESX400-201003001 | 4/1/2010 | 244038
ESX400-201002001 | 3/3/2010 | 236512
ESX400-200912001 | 1/5/2010 | 219382
ESX 4.0 Update 01a (re spin) | 12/9/2009 | 213128
ESX 4 Update 1 | 11/19/2009 | 208167
ESX400-200909001 | 9/24/2009 | 193498
ESX400-200907001 | 8/6/2009 | 181792
ESX400-200906001 | 7/9/2009 | 175625
ESX 4 | 5/21/2009 | 164009

ESX 3.5 release and build

Release Name | Release Date | Build Number
ESX350-201302401-SG | 2/21/2013 | 988599
ESX350-201206401-SG | 6/14/2012 | 725354
ESX350-201205401-SG | 5/3/2012 | 702112
ESX350-201203401-SG | 3/9/2012 | 604481
ESX350-201105401-SG | 6/2/2011 | 391406
ESX350-201012401-SG | 12/7/2010 | 317866
ESX350-201008401-SG | 8/31/2010 | 283373
ESX350-201006401-SG | 6/24/2010 | 259926
ESX350-201003402-BG | 3/29/2010 | 238493
ESX350-201003403-SG | 3/29/2010 | 227413
ESX350-201002411-BG | 2/16/2010 | 226117
ESX350-200912401-BG | 12/29/2009 | 213532
ESX 3.5 Update 5 | 12/3/2009 | 207095
ESX350-200910406-SG | 10/16/2009 | 199239
ESX350-200908405-BG | 8/31/2009 | 184236
ESX350-200907407-BG | 7/30/2009 | 176894
ESX350-200906403-BG | 6/30/2009 | 169697
ESX350-200905401-BG | 5/28/2009 | 163429
ESX350-200904401-BG | 4/29/2009 | 158874
ESX 3.5 Update 4 | 3/30/2009 | 153875
ESX350-200903411-BG | 3/20/2009 | 153840
ESX350-200901401-SG | 1/30/2009 | 143128
ESX350-200901407-BG | 1/30/2009 | 134105
ESX350-200811401-SG | 12/2/2008 | 130756
ESX 3.5 Update 3 | 11/6/2008 | 123630
ESX350-200809404-SG | | 120512
ESX350-200808413-SG | | 111764
ESX350-200808401-BG | | 113339
ESX 3.5 Update 2 (reissued) | 8/13/2008 | 110268
ESX350-200806812-BG | 8/12/2008 | 110181
ESX 3.5 Update 2 | 7/25/2008 | 103908
ESX350-200806401-BG | 6/16/2008 | 98103
ESX350-200805501-BG | 6/3/2008 | 95350
ESX350-200804401-BG | 4/30/2008 | 84374
ESX 3.5 Update 1 | 4/10/2008 | 82663
ESX 3.5 | 2/20/2008 | 64607

ESX 3.0 release and build

Release Name | Release Date | Build Number
ESX303-201102401-SG | 2/15/2011 | 312855
ESX303-201002201-UG | 3/8/2010 | 231127
ESX Server 3.0.3 | 8/8/2008 | 104629
ESX Server 3.0.2 Update 1 | 10/29/2007 | 61618
ESX Server 3.0.2 | 7/31/2007 | 52542
ESX Server 3.0.1 | 10/6/2006 | 32039
ESX Server 3.0.0 | 6/15/2006 | 27701