to see version
==============
vmware -v
vmware -lv
to patch
========
esxupdate
esxcli software
to see the patch logs
=====================
vi /var/log/vmware/esxupdate.log
configuration files for update
==============================
/etc/vmware/esxupdate/esxupdate.conf
to enable and disable firewall in esx 4.x branches
===============================
esxcfg-firewall --allowIncoming   (disable with --blockIncoming)
esxcfg-firewall --allowOutgoing   (disable with --blockOutgoing)
to install debug tools
======================
copy the link for the associated debug tools from buildweb, going into the build tree.
run
lwp-download http://build-squid.eng.vmware.com/build/saved/builds/bora-171294/publish/vmware-esx-debug-tools-4.0.0-0.4.171294.i386.rpm
rpm -ivh vmware-esx-debug-tools-4.0.0-0.4.171294.i386.rpm
to see hardware details
=================
smbiosdump
dmidecode
esxcli vaai device list
vim-cmd vmsvc/getallvms
vim-cmd vmsvc/power.on 96
esxcfg-scsidevs -l |grep Fibre |wc -l
esxcfg-scsidevs -l |grep iSCSI |wc -l
esxcli corestorage claimrule list
esxcli corestorage claimrule add --rule 103 --plugin=MASK_PATH --type=transport --transport=fc
esxcli corestorage claimrule load
esxcli corestorage claimrule run
TO CHANGE SSH CONFIG in ESX
====================
vi /etc/ssh/sshd_config
search for PermitRootLogin no
change to PermitRootLogin yes
service sshd restart
service network restart
If ESXi 3.0, do this before you follow the above steps
=================================
1) At the console of the ESXi host, press ALT-F1 to access the console window.
2) Enter unsupported in the console and then press Enter. You will not see the text you type in.
3) If you typed in unsupported correctly, you will see the Tech Support Mode warning and a password prompt. Enter the password for the root login.
4) You should then see the prompt of ~ #. Edit the file inetd.conf (enter the command vi /etc/inetd.conf).
5) Find the line that begins with #ssh and remove the #. Then save the file. If you're new to using vi, move the cursor down to the #ssh line, press the Insert key, move the cursor over one space, and hit backspace to delete the #. Then press ESC and type in :wq to save the file and exit vi. If you make a mistake, press the ESC key and type in :q! to quit vi without saving the file.
6) Once you've closed the vi editor, run the command /sbin/services.sh restart to restart the management services. You'll now be able to connect to the ESXi host with a SSH client.
For ESXi 3.5 Update 2 and ESXi 4.0, steps 5 & 6 become:
=====================================================================
5) Find the lines that begin with #ssh and remove the #. Then save the file. If you're new to using vi, move the cursor down to each #ssh line, press the Insert key, move the cursor over one space, and hit backspace to delete the #. Then press ESC and type in :wq to save the file and exit vi. If you make a mistake, press the ESC key and type in :q! to quit vi without saving the file. Note: there are two lines for SSH with ESXi 4.0 now - one for regular IP and the other for IPv6. You should remove the # from both.
6) Once you've closed the vi editor, you can either restart the host or restart the inetd process. To restart inetd run ps | grep inetd to determine the process ID for the inetd process. The output of the command will be something like 1299 1299 busybox inetd, and the process ID is 1299. Then run kill -HUP <process_id> (kill -HUP 1299 in this example) and you'll then be able to access the host via SSH.
for changing gateway
==================
edit /etc/sysconfig/network
ESX flavors
===========
classic  - DVD install
Embedded:
  thin  - CD install on HDD
  visor - USB install
To enter maintenance mode run the following command
vimsh -n -e /hostsvc/maintenance_mode_enter
or
vim-cmd hostsvc/maintenance_mode_enter
To exit maintenance mode run the following command
vimsh -n -e /hostsvc/maintenance_mode_exit
To check whether the host is in Maintenance Mode:
vimsh -n -e /hostsvc/runtimeinfo | grep inMaintenanceMode | awk '{print $3}'
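A runnable sketch of the awk extraction above, using a fabricated runtimeinfo line (the real value comes from vimsh on the host):

```shell
# Fabricated sample of the runtimeinfo line the grep above would match
line='   inMaintenanceMode = false,'

# Field 3 is the value; strip the trailing comma
state=$(echo "$line" | awk '{print $3}' | tr -d ',')
echo "$state"
```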
to see the storage adapters
=======================
esxcfg-scsidevs -a
Logout from the targets and relogin
# vmkiscsiadm -m node -u
# vmkiscsi-tool -D vmhba37
to stop firewall rules
=================
/etc/init.d/iptables stop
esxcfg-firewall --allowIncoming --allowOutgoing
to add DNS entries
===============
add in /etc/hosts
to enter tech support mode in ESXi
alt+F1, type unsupported provide root password
will get you the prompt
linux +VM
========
perlkit cmd for vmmigrate
./vmmigrate.pl --sourcehost wdc-tse-h30.wsl.vmware.com --targetdatastore Netapp_VMFS3_Repro --targethost wdc-tse-h27.wsl.vmware.com --vmname Testvm11 --targetpool TestRP --state poweredOn >>nohup.out &
repeated touch cmd
================
while [ 1 ]; do touch x; sleep 10; rm -rf x; done;
to install tools in linux vm. (spec. without runlevel6, no GUI)
===================
mount /dev/cdrom /media
cd /media/
rpm -ivh VMwareTools-4.0.0-164009.i386.rpm
/usr/bin/vmware-config-tools.pl
mkdir /iozone
scp root@10.131.10.109:/iozone/* /iozone/
vmware
cd /iozone
./iozone-loop.sh
• Run ls -lR from the vmfs volume of the NetApp LUN from the ESX host (recursive list)
• Try exercising lock options from ESX using vmkfstools 'reserve happening from both hosts'
to issue a LIP through the QLA HBA
==================================
echo "scsi-qlalip" > /proc/scsi/qla2xxx/<host_no>
to get vmid's starting with "Test" in name
================================
vim-cmd vmsvc/getallvms|grep "Test" | cut -f 1 -d " "
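The same pipeline can be exercised off-host against sample output; the getallvms rows below are fabricated for illustration:

```shell
# Fabricated `vim-cmd vmsvc/getallvms`-style output
out='Vmid Name File Guest OS Version Annotation
96 Testvm11 [Netapp_VMFS3_Repro] Testvm11/Testvm11.vmx rhel5Guest vmx-07
97 Prodvm01 [datastore1] Prodvm01/Prodvm01.vmx winXPProGuest vmx-07
98 Testvm12 [datastore1] Testvm12/Testvm12.vmx rhel5Guest vmx-07'

# Keep rows containing "Test", take field 1 (the vmid)
ids=$(printf '%s\n' "$out" | grep "Test" | cut -f 1 -d " ")
echo "$ids"
```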
to format with VMFS
================
fdisk -l
fdisk /dev/<devname>
Command (m for help): t
Selected partition 1
Hex code (type L to list codes):
Hex code (type L to list codes): fb
Changed system type of partition 1 to fb (VMware VMFS)
Command (m for help): w
iSCSI logouts and CHAP enablement <= 4.1
=============================
Logout from the targets and relogin
# vmkiscsiadm -m node -u
# vmkiscsi-tool -D vmhba37
Setup mutual CHAP on the initiator and target using the same CHAP name and password.
# vmkiscsi-tool -A -m CHAP -a "adapter adapassword 4" vmhba37
# vmkiscsi-tool -A -m CHAP -b 1 -a "equallogic equpassword 4" vmhba37
curl script triggering
===============
eg:
curl -kvv -u root:ca\$hc0w https://10.112.71.26
-k for ignoring security certificate
-vv for verbose
then browse the web interfaces, like
like https://10.112.71.26/mob
https://10.112.71.26/host
extended verbose logging lpfc qla emulex qlogic fc hba
=======================================================
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1005576
To enable verbose logging for Qlogic on ESXi,
====================================
esxcfg-module -s ql2xextended_error_logging=1 qla2xxx
To enable verbose logging for Emulex on ESXi
======================================
esxcfg-module -s lpfc_log_verbose=0x1851 lpfc820
Emulex
Emulex driver tuning is slightly more complicated, as there are many different parameters that can be enabled to fine tune logging. VMware recommends that you refer to Emulex documentation for more information on these parameters.
This table contains a partial list of parameters that can be enabled:
LOG_ELS 0x1 ELS events
LOG_DISCOVERY 0x2 Link discovery events
LOG_MBOX 0x4 Mailbox events
LOG_INIT 0x8 Initialization events
LOG_LINK_EVENT 0x10 Link events
LOG_FCP 0x40 FCP traffic history
LOG_NODE 0x80 Node table events
LOG_MISC 0x400 Miscellaneous events
LOG_SLI 0x800 SLI events
LOG_CHK_COND 0x1000 FCP Check condition flag
LOG_LIBDFC 0x2000 IOCTL events
LOG_ALL_MSG 0x7ffff LOG all messages
For complete list of parameters, see the Emulex® Drivers for VMware ESX/ESXi User Manual.
Note: The preceding link was correct as of July 25, 2012. If you find the link is broken, provide feedback and a VMware employee will update the link.
To enable all logging, run the command:
esxcfg-module -s lpfc_log_verbose=0x7ffff <driver_name>
Caution: The maximum verbosity level above generates significant logging output.
To enable a slightly limited verbose logging level, run the command:
esxcfg-module -s lpfc_log_verbose=0xc3 <driver_name>
To disable verbose logging, run the command:
esxcfg-module -s lpfc_log_verbose=0 <driver_name>
To enable logging in ESXi 5.0, run the esxcli system module parameters set command. For example, for an Emulex LightPulse Fibre Channel card, run the command:
esxcli system module parameters set -p lpfc0_log_verbose=1 -m <driver name>
userworld binaries
===============
/etc/vmware/UserWorldBinaries.txt
vi /etc/vmware/pci.ids
cat /etc/vmware/vmware-devices.map |grep piix
esxcfg-module -l
/proc/scsi/ has a folder for each loaded driver (ls /proc/scsi/)
for running IO
==============
/vmlibio/Storage/scripts/run_iozoneloop.pl
mkdir /vmlibio;
mount 10.17.4.57:/vmlibio /vmlibio;
cd /vmlibio/iscsi/tests-dev/scripts;
./run_bonnieloop.pl
for patching/updating 4.0 esxi boxes
===========================
esxupdate -b ESXi400-201003401-BG -m http://10.112.34.60/patch-depot/esx40patches/esx40-released/esx40-embedded/P05/metadata.zip update
if the patch depot is not available, use the following link (change according to build):
http://build-squid.eng.vmware.com/build/storage1/release/bora-347250/publish/ESXi40P05-HP-654880/vmw-ESXi-4.0.0-metadata.zip
wget http://build-squid.eng.vmware.com/build/storage1/release/bora-347250/publish/ESXi40P05-HP-654880/ESXi40P05-HP-654880.zip
unzip and install
for patching 3.5i boxes
=======================
cd to
C:\Program Files\VMware\VMware vSphere CLI\bin in your pc having vCLI installed
vihostupdate35 --server <ESX server IP> -i -b ESXe350-201102401-O-BG.zip   (e.g. --server 10.112.224.137)
eg:
PowerCLI C:\Program Files\VMware\VMware vSphere CLI\bin> perl 'C:\Program Files\VMware\VMware vSphere CLI\bin\vihostupda
te35.pl' --server blr-cpd-249 -i -b "C:\Documents and Settings\lbalachandran\My Documents\Downloads\ESXe350-201203401-O-
SG.zip "
Enter username: root
Enter password:
unpacking C:\Documents and Settings\lbalachandran\My Documents\Downloads\ESXe350-201203401-O-SG.zip ...
( skipping verification : ESXe350-201203401-O-SG/ESXe350-201203401-I-SG.zip.sig )
unpacking ESXe350-201203401-O-SG/ESXe350-201203401-I-SG.zip ...
( skipping verification : ESXe350-201203401-O-SG/ESXe350-201203402-T-BG.zip.sig )
unpacking ESXe350-201203401-O-SG/ESXe350-201203402-T-BG.zip ...
( skipping verification : ESXe350-201203401-O-SG/ESXe350-201203403-C-BG.zip.sig )
unpacking ESXe350-201203401-O-SG/ESXe350-201203403-C-BG.zip ...
Installing : ESXe350-201203401-I-SG
Copy to server : VMware-image.tar.gz ...
Copy to server : VMware-OEM-image.tar.gz ...
Copy to server : descriptor.xml ...
Copy to server : install.sh ...
Copy to server : contents.xml.sig ...
Copy to server : contents.xml ...
Removed ESXe350-201203401-I-SG Success
Installing : ESXe350-201203402-T-BG
Copy to server : VMware-tools.tar.gz ...
Copy to server : descriptor.xml ...
Copy to server : install.sh ...
Copy to server : contents.xml.sig ...
Copy to server : contents.xml ...
Removed ESXe350-201203402-T-BG Success
Installing : ESXe350-201203403-C-BG
Copy to server : VMware-viclient.tar.gz ...
Copy to server : descriptor.xml ...
Copy to server : install.sh ...
Copy to server : contents.xml.sig ...
Copy to server : contents.xml ...
Removed ESXe350-201203403-C-BG Success
The host needs to be rebooted for the new firmware to take effect.
Type 'yes' to continue:
yes
Rebooting host ...
to check/set service runlevels in 4.1 (chkconfig)
==========================
chkconfig --list
chkconfig --level 35 nfs on
to core dump vpxa
=================
Generate a live coredump on ESXi
To get a live core of a user-world (e.g. vpxa, hostd), execute:
vsish -e set /userworld/cartel/<cartel-id>/debug/livecore 1
This should start generating a zdump in /var/core (may take a few minutes to write out the core completely)
To get the vpxa cartel-id, execute:
ps -C | grep vpxa
The second number is the cartel-id.
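Grabbing the second field can be scripted with awk; the ps line below is a made-up example in the shape described above:

```shell
# Made-up example of a `ps | grep vpxa` output line: wid, cartel-id, name
ps_line='2884 2884 vpxa vpxa'

# The second whitespace-separated field is the cartel-id
cartel_id=$(echo "$ps_line" | awk '{print $2}')
echo "$cartel_id"
```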
check details in vm-support log
check in /tmp for each esxcfg-*** command output
also,
<vm-support-folder>/usr/lib/vmware/hostd/docroot/downloads/esxcfg-***.txt
get IP of a VM
==============
vim-cmd vmsvc/getallvms
vim-cmd vmsvc/get.guest <vmid> |grep ipAddress
to enable/disable advanced config option for emc navisphere registration
========================================================================
esxcfg-advcfg /Disk/EnableNaviReg -s 0
esxcfg-advcfg /Disk/EnableNaviReg -g
to get system uptime
=====================
esxcli system stats uptime
for loop in MN
===============
for i in $(seq 10 1 32); do vim-cmd vmsvc/snapshot.remove 4 s-$i; done
iscsi command set
===================
vmkiscsid -x "select * from ISID;"
vmkiscsi-tool vmhba33
vmkiscsi-tool vmhba33 -E -l
vmkiscsi-tool vmhba33 -V -l
vmkiscsi-tool vmhba33 -N
vmkiscsi-tool vmhba33 -N -l
vmkiscsi-tool vmhba36 -C
vmkiscsi-tool vmhba36 -S
vmkiscsi-tool vmhba36 -D
vmkiscsi-tool vmhba36 -E
vmkiscsid -x "select * from discovery"
esxcli iscsi session list
esxcli iscsi session remove -A vmhba36
esxcli iscsi adapter discovery rediscover -A vmhba36
~ # esxcli iscsi session remove -A vmhba36
~ # esxcli iscsi adapter discovery rediscover -A vmhba36
Rediscovery started
vmkiscsid --dump-db
table names
-----------
InitiatorNodes
ISID
Targets
discovery
ifaces
internal
nodes
route
route_vs_iface
to make config changes persist after reboot
-----------------------------------------------
esxcfg-boot -b is replaced by /sbin/auto-backup.sh in MN
esxtop for scsi reservations
=============================
If you want to see what host is issuing reservations, you can use esxtop
esxtop
and then following keystrokes
ufH<enter> will let you see a per disk RESV/s
An indirect way of finding the VM is to look for vdisks whose on-disk usage is changing. This can be done using
watch -n1 du ./list_of_active_vmdks
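A one-shot version of the watch/du idea, sketched with throwaway files instead of real vmdks (all paths here are illustrative):

```shell
# Create two fake "vmdks" to stand in for the active disks
tmpdir=$(mktemp -d)
printf 'aaaa' > "$tmpdir/a.vmdk"
printf 'bb'   > "$tmpdir/b.vmdk"

# Helper: print "size path" for each fake vmdk
sizes() { for f in "$tmpdir"/*.vmdk; do echo "$(wc -c < "$f") $f"; done; }

# Sample sizes, simulate guest I/O growing one vdisk, sample again
sizes > "$tmpdir/before"
printf 'cccc' >> "$tmpdir/a.vmdk"
sizes > "$tmpdir/after"

# Files whose on-disk usage changed between the two samples
changed=$(diff "$tmpdir/before" "$tmpdir/after" | awk '/^>/{print $3}')
echo "${changed##*/}"
```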
bugs directory mounting
=======================
esxcfg-nas -a -o bugs.eng.vmware.com -s bugs bugs
cd /vmfs/volumes/bugs/files/0/0/7/7/1/7/1/9/
vmkfstools rdm creation syntax
===============================
vmkfstools -i /vmfs/volumes/abc/New\ Virtual\ Machine/New\ Virtual\ Machine.vmdk -d rdm:/vmfs/devices/disks/naa.600601608dd026009091cedd5800e111 /vmfs/volumes/datastore1/new
vmkfstools -D /vmfs/volumes/THIN-NetApp-10GB/v1.vmdk for lock information about the vmdk
vmkfstools -w /vmfs/volumes/THIN-NetApp-10GB/v1.vmdk will write zeroes to the thin vmdk
vmkfstools -Ph -v 10 /vmfs/volumes/THIN-NetApp-10GB/v1.vmdk
details about the volume creation/block size and file, pointer/sub blocks statistics.. UUID...
Native-snapshot enabled or not ?
--breaklock -B
--chainConsistent -e
--eagerzero -k
--fix -x
--lock -L
--migratevirtualdisk -M
--parseimage -Y
--punchzero -K
--snapshotdisk -I
--verbose -v
vmkfstools --extendedstatinfo /vmfs/volumes/fc-target/winxp-32/winxp-32.vmdk
Capacity bytes: 8589934592
Used bytes: 8589934592
Unshared bytes: 8589934592
look at https://opengrok.eng.vmware.com/source/xref/esx50u2.perforce.1666/bora/apps/vmkfstools/fstools.c#WriteZeros
for all options of vmkfstools
vaai reclaim enable
===================
esxcli system settings advanced list --option /VMFS3/EnableBlockDelete
for setting/enabling VAAI space reclamation, set the int value of the advanced config option (1 enables, 0 disables):
esxcli system settings advanced set --int-value 0 --option /VMFS3/EnableBlockDelete
esx 4.1 iscsi vmkernel port binding
=====================================
esxcli swiscsi nic add -n vmk0 -d vmhba32
egrep example
=============
esxcfg-scsidevs -l | egrep "Display Name:|VAAI Status:"
to get the locks in a vmdk
vmkfstools -D ./Windows\ XP\ Home\ Edition.vmdk
to get the mac address of all the activehosts using a volume..
vmkfstools --activehosts /vmfs/volumes/datastore1\ \(7\)/
to get geometry of the disk
vmkfstools -g /vmfs/volumes/datastore1\ \(7\)/2gbsparse.vmdk
disk format checker and fixing
vmkfstools -x check /vmfs/volumes/datastore1\ \(7\)/2gbsparse.vmdk
vmkfstools -x repair /vmfs/volumes/datastore1\ \(7\)/2gbsparse.vmdk
Remove/delete a datastore.
===================
~ # vim-cmd hostsvc/datastore/remove
Insufficient arguments.
Usage: remove name
to get LBA block mapping for the vmdk and to understand thin/thick/EZT format.
==================================
vmkfstools -t0 /vmfs/volumes/datastore1\ \(7\)/2gbsparse.vmdk
in esx 4.1
esxcli vms vm list
to display the vm world id and cartel id of the vmx
iscsi
4.0 to do port binding
======================
~ # vmkiscsi-tool -V -a vmk1 vmhba37
Adding NIC vmk1 ...
Added successfully.
~ # vmkiscsi-tool -V -a vmk2 vmhba37
Adding NIC vmk2 ...
Added successfully.
in 4.0 to logout the sessions
=============================
vmkiscsiadm -m node -u
to see the port bound to a swiscsi hba in 4.0
=========================================
~ # esxcli swiscsi nic list --adapter vmhba37
snapshot revert create
=======================
vim-cmd vmsvc/snapshot.revert 272 suppressPowerOff 7 0
snapshot.revert vmid suppressPowerOff [snapshotLevel] [snapshotIndex]
vim-cmd vmsvc/snapshot.create 272 SNAP1 1 1 1
snapshot.create vmid [snapshotName] [snapshotDescription] [includeMemory] [quiesced]
for psod'ing ESX
=================
vsish -e set /reliability/crashMe/Panic 1
refer
https://wiki.eng.vmware.com/ESXPlatformQA/KL/Coredump#Steps_to_force_vmkernel_core_dump
for VAAI related statistics in esxtop
======================================
esxcli storage core device vaai status get
to get unmap settings
vsish -e get /config/VMFS3/intOpts/EnableBlockDelete
VAAI setting
/config/VMFS3/intOpts/HardwareAcceleratedLocking
to set
esxcli system settings advanced set --int-value 1 --option /VMFS3/EnableBlockDelete
to issue delete
vmkfstools -y 60
a. ESX needs to send down the block delete commands.
(To check if this working correctly run: #esxtop -> u -> f -> 'O' -> "enter". This will enable VAAI stats column on the far right of the screen (so disable other stats). You should see the number of successful and failed delete commands under columns "DELETE" "DELETE_F" respectively)
DEVICE DELETE DELETE_F MBDEL/s
to download in ESX 4.x
===========================
lwp-download http://10.112.34.60/patch-depot/esx41patches/esx41-released/esx41-classic/P02/ESX410-201104001.zip
wget is used in 5.x
vmfs creation in 4.x through command line
=========================================
fdisk /vmfs/devices/disks/naa.6006016012d021000c2991b0fcb8e011
'n' will create new partition..
create primary partition and
type 't' and 'fb' to change the partition type to VMFS (fb).
then "w" to write changes to the disk..
vmkfstools -C vmfs3 -S 1GB /vmfs/devices/disks/naa.6006016012d021000c2991b0fcb8e011\:1
snapshot
==========
vim-cmd vmsvc/snapshot.revert 16 suppressPowerOff 5 0
will revert the snapshot to 5th level 0th index
1
|_2
   |_3
      |_4
         |_5   <- you are here
if executed successfully it will display the snapshot tree..
note: a non-existing index just returns without printing any value.
to display the content of a change-tracking (-ctk) vmdk file:
/usr/lib/vmware/diskTest/bin/diskTool/diskTool -q -H dump rhel_clone_465_2_1-000007-ctk.vmdk
to set dhcp in linux vm
=======================
Edit/create the file /etc/sysconfig/network-scripts/ifcfg-eth0 to use DHCP.
Sample ifcfg-eth0 file:
DEVICE=eth0
USERCTL=no
ONBOOT=yes
BOOTPROTO=dhcp
BROADCAST=
NETWORK=
NETMASK=
IPADDR=
Activate the eth0 device by issuing /etc/rc.d/init.d/network restart
dhcp problem
============
Device eth0 does not seem to be present, delaying
initialization.
go to /sys/class/net
ls -l
see which "ethX" interface is listed there,
then create the matching "ifcfg-ethX" under /etc/sysconfig/network-scripts/ with a DEVICE=ethX entry.
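The interface-name check can be scripted; this sketch just lists /sys/class/net (on any Linux box the loopback `lo` appears alongside the ethX/renamed devices):

```shell
# List the interface names the kernel actually created
devs=$(ls /sys/class/net)
echo "$devs"

# An ifcfg file is then needed for each ethX shown, e.g.:
# /etc/sysconfig/network-scripts/ifcfg-eth1 with DEVICE=eth1
```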
esx code from any dbc machine
=============================
/build/trees/esx50u1/bora/modules/vmkernel
modules location in build-download site
========================================
http://build-download.eng.vmware.com/build/storage2/release/bora-469512/bora/build/esx/release/vmkmod-vmkernel64-signed/
has the modules... copy them to /usr/lib/vmware/vmkmod/
and use
vmkload_mod -l vscsi_checksumfilter
esxcfg-module -g vscsi_checksumfilter
append this to /etc/rc.local for STAF installation on visor
===========================================================
# Start STAF
esxcfg-nas -a -o blr-buildserver.eng.vmware.com -s /stautomation pa-group-stautomation
esxcfg-nas -r
tar -xvf /vmfs/volumes/pa-group-stautomation//Vmvisor/staf.tar
setsid /bin/STAFProc >/dev/null 2>&1 &
staf uninstall on linux servers
=================================
cd /usr/local/staf/
./STAFUninst
nfs client code
===============
/esx41u2.perforce.1666/bora/modules/vmkernel/nfsclient/
rpm contents query
==================
rpm -q --filesbypkg vmware-esx-vmkernel64-4.1.0-2.11.111111
vibcontent query
=================
vibauthor -i -v VMware_bootbank_esx-base-5.0.0-0.1.14.659302.vib
Avoid vmkernel log rotate on visor
==================================
If you want to increase the size of the messages log file and increase the number of log files to be kept before rotation takes place do the following:
1. Kill syslog
~ # ps ax|grep syslog
4368 4368 busybox syslogd
~ # kill -9 4368
~ # ps ax|grep syslog
~ #
2. Start syslogd with new options (-b specifies the number of log rotate files).
~ # syslogd -i -s 1024 -b 99 -S -O <logfile>   (size of log file is 1024 KB and it will keep 99 log files before rotating)
e.g. syslogd -i -s 2097151 -b 99 -S -O /vmfs/volumes/local_dedicated_vmfs_1/vmkernel-logs/messages
3. Verify that syslog comes up with the new options.
~ # ps ax|grep syslog
269206 269206 busybox syslogd
For redirecting the log to a datastore, add the following line to /etc/syslog.conf:
logfile=/vmfs/volumes/<datastore-name>/<folder-name>/messages
Note: got the above content from https://wiki.eng.vmware.com/wiki/index.php?title=Gnarkhede/Tips-Tricks
syslog installer
==============
syslog vc plugin installer https://wiki.eng.vmware.com/MNLogging
netdump installer location:
https://buildweb.eng.vmware.com/ob/590025/
disable enable vmknic /nic from host
==================================
while [ 1 ]; do esxcfg-vmknic -D iSCSI; sleep 600; esxcfg-vmknic -e iSCSI; sleep 600;esxcfg-vmknic -D iSCSI1; sleep 600; esxcfg-vmknic -e iSCSI1; sleep 600;done
p4web account
=============
qa-automation /mts-automation
U9NqLq11qjd2u7T
user: qa-automation
pass: u8Y9EDeEugY9u8A7UZy
http://bugzilla.eng.vmware.com/show_bug.cgi?id=927303
vmlibio
=======
qavmlib01
mount 10.17.4.57:/vmlibio /vmlibio
[root@blr-2nd-1-dhcp607 Common]# showmount -e 10.17.4.57
/vmlibrary
/vmlib3d (everyone)
/vmlibesxps 10.0.0.0/8
/vmlib1 /vmlib2
/vmlibst /vmlibappfvt /vmlibcbs /vmlibvmplayr mover,nasadmin1,chrom-http-rhel5
/vmlib-vtaf 10.0.0.0/8
/vmlibio
blr-cpd-253
============
all ESX on one server: Ctrl+R, choose controller management and choose the boot disk
VD 0 ESXi 5.0
VD 1 ESXi 4.1
VD 2 ESXi 5.1
STORAGE firmware information
============================
vsi node
/storage/scsifw/devices/eui.0017380012780046
adding vmkernel dhcp for ESX 3.x
================================
esxcfg-vswitch -A VMkernel vSwitch0
esxcfg-vmknic -a -i DHCP VMkernel
to know the type of traffic in scsi device/hba
==============================================
/> cat /storage/scsifw/adapters/vmhba32/info
Adapter information {
adapter:vmhba32
driver:ata_piix
channels:2
PCI bus:0
PCI slot:31
PCI function:2
max SG length:128
PAE capable:0
Underlying transport:Transport Type: 4 -> IDE
Adapter scan state:0
}
to check inodes occupied by files in the Visorfs
==================================================
to be run from / or dir where visorfs are mounted.
vdu -d 2
DRTS in 3.x as well as 4.x
--------------------------
#export RAW_DEVICE=/vmfs/devices/disks/vml.xxxxxxxxxxx (Do not set this while running the test on NAS volume)
#export VMTREE=/storage10/release/bora-142291/bora
#export VMBLD=release (release/beta depends on the build you are using is release or beta build respectively)
#export VMFS_VOLUME=/vmfs/volumes/DRTSstorage (DRTSstorage is the datastore assigned for the DRTS tests )
Run the diskRegressionTest.bash
#cd /storage10/release/bora-142291/bora/apps/diskTest/
#./diskRegressionTest.bash 2
To run the DRTS with the Quick COW, Deep COW or Big COW options, do the following:
Install esx debug tools.
#export QUICK_COW_TEST=1 or export DEEP_COW_TEST=1 or export BIG_COW_TEST=1
#export PROGDIR=/usr/lib/vmware/diskTest/bin
#export CMDDIR=/usr/lib/vmware/diskTest/cmdFiles
#export FUNCDIR=/usr/lib/vmware/diskTest/scripts
Then run diskRegressionTest.bash
#cd /storage10/release/bora-142291/bora/apps/diskTest/
#./diskRegressionTest.bash 2
few vmware advanced config
==========================
overrideDuplicateImageDetection
DataMover
Disk busreset/devicereset
vmknic setting for vmotion
CBRC enabling
scsi
migrate checksum calculation
COW max heap size.
VMFS tests
==========
/exit14/home/qa/vmfs/test-bin
storage
=======
BR. 1 BoilerMaker --> 8.0.1
RR.0 Rolling Rock --> 8.1
SN.0 Sierra Nevada --> 8.2
for viewing Files Ptr Blocks stats
====================================
vmkfstools -Ph -v 10 /vmfs/volumes/SAS-DS-1/
VMFS-5.54 file system spanning 2 partitions.
File system label (if any): SAS-DS-1
Mode: public
Capacity 273 GB, 240.0 GB available, file block size 1 MB
Volume Creation Time: Fri Jul 27 04:11:18 2012
Files (max/free): 130689/130649
Ptr Blocks (max/free): 64512/64462
Sub Blocks (max/free): 32697/32697
Secondary Ptr Blocks (max/free): 256/256
File Blocks (overcommit/used/overcommit %): 0/33748/0
Ptr Blocks (overcommit/used/overcommit %): 0/50/0
Sub Blocks (overcommit/used/overcommit %): 0/0/0
UUID: 501214e6-5a15aa70-29c7-002564fc6ddc
Partitions spanned (on "lvm"):
naa.5000c5001aaa6257:1
naa.5000c5001aaa6b4f:1
DISKLIB-LIB : Getting VAAI support status for /vmfs/volumes/SAS-DS-1/
Is Native Snapshot Capable: NO
solving GPT partition errors using dd on ESX 4.0 when parted and fdisk don't help
====================================================================================
fdisk /vmfs/devices/disks/naa.6006016078802c00ae2927c47eb7e111, then press p to print the partition table
Disk /vmfs/devices/disks/naa.6006016078802c00ae2927c47eb7e111: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
wipe the first 34 sectors (bs=512)
dd if=/dev/zero of=/vmfs/devices/disks/naa.6006016078802c00ae2927c47eb7e111 bs=512 count=34 conv=notrunc
wipe the last 34 sectors (bs=512; the seek offset is bytes/512 - 34, here 21474836480/512 - 34 = 41943006)
~ # dd if=/dev/zero of=/vmfs/devices/disks/naa.6006016078802c00ae2927c47eb7e111 bs=512 count=34 seek=41943006 conv=notrunc
dd if=/dev/zero of=/dev/sdr bs=512 count=34 conv=notrunc
dd if=/dev/zero of=/dev/sdr seek=$(($((21474836480/512))-34)) bs=512 count=34 conv=notrunc
dd if=/dev/zero of=$disk bs=512 count=34 conv=notrunc
dd if=/dev/zero of=$disk seek=$(($SIZE - 34)) bs=512 count=34
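The seek arithmetic can be checked in the shell; the numbers below reuse the 21474836480-byte disk from the example above:

```shell
# Last 34 sectors start at (disk size in bytes / 512) - 34
DISK_BYTES=21474836480
SEEK=$(( DISK_BYTES / 512 - 34 ))
echo "$SEEK"    # sector offset to pass to dd's seek=

# dd if=/dev/zero of=$disk bs=512 count=34 seek=$SEEK conv=notrunc   # (destructive; not run here)
```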
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1008886
egrep usage
============
esxcfg-mpath -l |egrep "NMP|sas"
cut usage
=========
esxcfg-scsidevs -c |grep 6008* |cut -f 1 -d ' '
naa.60080e50001b0eb4000001e24bad7080
naa.60080e50001b0eb4000003344e7d6dd6
naa.60080e50001b0eb4000003364e7d6e13
naa.60080e50001b0eb4000003384e7d6e44
naa.60080e50001b0eb40000033a4e7d6e66
naa.60080e50001b1006000003814e7d6a2a
naa.60080e50001b1006000003844e7d6c82
naa.60080e50001b1006000003864e7d6cb8
naa.60080e50001b1006000003884e7d6cea
naa.60080e50001b10060000038a4e7d6d27
nmp path selection policy (psp) setting
=======================================
esxcli storage nmp psp roundrobin deviceconfig get -d naa.60080e50001b10060000038a4e7d6d27
Nimbus
======
/mts/git/bin/nimbus-vcdeploy --list
/mts/git/bin/nimbus-vcdeploy --extendLease 60 Win-VC5.0
/mts/git/bin/nimbus-vcdeploy --kill <vmname>
https://wiki.eng.vmware.com/Nimbus/VC
/mts/git/bin/nimbusvc-clui
rbvcloudsh> help
List of commands:
destroy - Delete vApp
help - List of commands
ip - Show IP address of vApp's VMs
kill - PowerOff and Delete vApp
list - List all vApps
off - Power Off vApp
on - Power On vApp
quit - Quit
rdp - RDP (Remote Desktop) into VC vApp
rlui - Connect rlui to VC vApp
suspend - Suspend vApp
view - VM Remote Console of vApp
// - Switch to ruby mode
to get the vib/profile applied after reboot
===========================================
esxcli software vib get [cmd options]
Description:
get Displays detailed information about one or more installed VIBs
Cmd options:
--rebooting-image
Displays information for the ESXi image which becomes active after a reboot, or nothing if the pending-reboot image has not been created yet. If not specified, information from the current ESXi image in memory will be returned.
coredump on a live system (ESX 5.x)
==========================
localcli --plugin-dir /usr/lib/vmware/esxcli/int/ debug livedump perform
Extract the coredump with:
esxcfg-dumppart -C -D active
useful apps/binary in dbc server
================================
bld info 520743 will give details of any build number
-bash-4.1$ hpqc.py -h
usage: hpqc.py [-h] [-v] {testcase,testinstance,testset,testrun} ...
HPQC CLI
optional arguments:
-h, --help show this help message and exit
-v, --verbose verbose output
subcommands:
{testcase,testinstance,testset,testrun}
testcase help performing test case related operations
testinstance help performing test instances related operations
testset help performing test set related operations
testrun help performing testrun related operations
NOTE: set the following environment variables to use this tool 'HPQC_DOMAIN',
'HPQC_PROJECT' & 'HPQC_API_KEY'
under /build/apps/bin/
1_stage.pl gen-build.py l10n-util.py pylib
addbug.py generate-wer-mapping.cmd logconfig queueweb-poll
aggregate_and_alert.py generate-wer-mapping.py logger.py queueweb-poll.py
at.sh gen-suitebuild makeBldDepot.py README
autotriage gen-suitebuild.py makeDepot.py registerbugnum
autotriage.py getaccesslog.py make-review removeclients.py
autotriage_run get-branch-spec make-review-support resumeHungShell.py
bld get-branch-spec.cmd manageBuilds.pl reviewboard
bld-admin.pl get-branch-spec.py manage_service.py rpmsign.pl
bld.bat get-recommended-build-info.sh mesg rpmsign_testkey
blddownload getstablecln.py modbld runalltests
blddownload.cmd git my_waitforbuild.sh runintegrationtests
blddownload.py git-p4 newcygwin.cmd rununittests
bld.py git-p4.bat newcygwin.py scanp4.pl
bld-qls git_rb notify-poller scons
bld-qls.bat git_rb.py notify-poller.py send-email
bld_test git_rb_test NTLocal-daemon.py show-diffs
bld_test.cmd git_rb_test.py NTLocal-p4-2branch.pl show_me_dup_comp
bld_test.py gobuild NTLocal-p4.pl show_me_dup_comp.py
bora-floppy-updater.py gobuild-cachemgr opensource-build.pl sourcefile.py
bugnumreport gobuild-cachemgr.py p4 storage-broker
build-break-file-bug gobuild-cb p4_bmps_info storage-broker.py
build-break-file-bug.py gobuild-cb.cmd p4_bmps_info.bat storage-can-remove-component
build_event_listener gobuild-cb-getstablecln p4_bmps_info.py storage-can-remove-component.py
build_event_listener.py gobuild-cb-getstablecln.cmd p4-build-2branch.pl storage-collect-p4-build
build_event_publisher gobuild-cb-getstablecln.py p4-build.pl storage-collect-p4-build.py
build_event_publisher.py gobuild-cb-poll p4-build-wrapper storage-controller
build-graph gobuild-cb-poll.py p4-build-wrapper.py storage-controller.py
build-graph.py gobuild-cb.py p4.cmd storage-is-build-watched
build_mounts.pl gobuild-check-dependencies p4_crossport storage-is-build-watched.py
build_mounts.sh gobuild-check-dependencies.py p4_crossport.bat storage-manager
build_qatests_interface gobuild.cmd p4_crossport.py storage-manager.py
build_qatests_interface.bat gobuild-create-component p4-get storage_policy.cfg
build_restore_common.py gobuild-create-component.cmd p4-get.cmd storage_policy_helper.py
build_restore_db.py gobuild-create-component.py p4-get.py storage_policy.py
build_restore.py gobuild-deps p4_help storage_prioritize_buildtree
build_restore_reaper.py gobuild-deps.cmd p4_help.bat storage_prioritize_buildtree.py
build_restore_request.bat gobuild-deps.py p4_help.py storage_stats
build_restore_request.py gobuildd-install p4lockandsync storage_stats.py
cb_manager gobuildd-install.cmd p4lockandsync.cmd svs
cb_manager.py gobuildd-install.py p4lockandsync.py svs-clean.py
cbot-verify-config gobuild-graphs p4_login svs.cmd
cbot-verify-config.cmd gobuild-graphs.py p4_login.bat svs-submit
cbot-verify-config.py gobuild-graphs-templates p4_login.py svs-submit.py
check-accessed-files gobuild-list-components p4_logout svs-wrapper.py
check-accessed-files.cmd gobuild-list-components.cmd p4_logout.bat sync-toolchain
check-accessed-files.py gobuild-list-components.py p4_logout.py sync-toolchain.cmd
check-disk-space.py gobuild-lookup-component p4_manage_client sync-toolchain.py
cleanCOSCache.pl gobuild-lookup-component.cmd p4_manage_client.bat tests
cleanCurrentComponents.pl gobuild-lookup-component.py p4_manage_client.py toolchain
cleanloopback.py gobuild-manifest-create p4merge toolchain-client
config.l4p gobuild-manifest-create.cmd p4_pending toolchain-client.cmd
console-os-cache-rebuild.pl gobuild-manifest-create.py p4_pending.bat toolchain-client.py
console-os-start.sh gobuild-parallel-download p4_pending.py trim_client.py
convertcln.py gobuild-parallel-download.cmd p4ro unmount-loopback
convertCSets2XML.pl gobuild-parallel-download.py p4ro.bat unmount-loopback.py
cpandcs.pl gobuild-populate-cachemgr p4ro.py updatebug.py
cpandwait.pl gobuild-populate-cachemgr.py p4scan validateCS.pl
craftLbl4build.pl gobuild-populate-component p4scan.py verify-branch
csv-table.py gobuild-populate-component.cmd p4_test_submit verify-branch.py
data gobuild-populate-component.py p4_test_submit.bat virus-scan.bat
dbc gobuild-populate-deliverable p4_test_submit.py virus-scan.pl
dbc.bat gobuild-populate-deliverable.cmd p4tim vmsupport-net-analyzer.py
dbcCheckExec.sh gobuild-populate-deliverable.py p4tim.bat vmw_ldap_info
dbc_ibprofile.xml gobuild.py p4tim.data.template vmw_ldap_info.cmd
debug-uw-proxy gobuild-sandbox-deps p4tim.json.template vmw_ldap_info.py
debugzilla.py gobuild-sandbox-deps.py p4tim.py waitforbuild
detect-bad-succeeded-builds gobuild-sandbox-queue p4v waitforbuild2
detect-bad-succeeded-builds.py gobuild-sandbox-queue.cmd p4vinst.exe waitforbuild2.cmd
detect-long-builds gobuild-sandbox-queue.py p4-wrapper waitforbuild.cmd
detect-long-builds.py gobuild-target-lastchange p5 waitforbuild.py
distp4 gobuild-target-lastchange.cmd p5.bat winlock.py
distp4.py gobuild-target-lastchange.py passman xbuild
driver_versioning gobuild-update-deps passman.bat zerocopy-poll
errparse.cmd gobuild_update_deps.py passman.py zerocopy-poll.cmd
errparse.py gobuild-verify-components perllib zerocopy-poll-install
esx gobuild-verify-components.cmd post-review zerocopy-poll-install.cmd
esxsign gobuild-verify-components.py post-review.bat zerocopy-poll-install.py
fcache hpqc.py post-review.py zerocopy-poll.py
fcache.cmd isbldmachine processmonitor.py zerocopy-rewrite-url
fcache.py iscons proj-integ zerocopy-rewrite-url-pao
findGuilty.pl iscons.cmd proj-integ.py zerocopy-rewrite-url.py
G11n itCsvMgr.py publish-vai-catalog zerocopy-rewrite-url-w2
gcb l10n-util publish-vai-catalog.py
gcb_test l10n-util.cmd pxe-deploy
generate missing vmdk on your own
==================================
Create a dummy thin vmdk with the same adapter type and size as source-flat.vmdk.
Remove dummy-flat.vmdk, rename dummy.vmdk to source.vmdk, and edit its contents to point to source-flat; remove the thin-provisioned line.
If there are further problems, try changing the adapter type to buslogic/lsisas/lsilogic.
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1002511
Mask Paths
==============
You can prevent the ESX/ESXi host from accessing storage devices or LUNs or from using individual paths to a LUN. Use the vSphere CLI commands to mask the paths.
When you mask paths, you create claim rules that assign the MASK_PATH plug-in to the specified paths.
Procedure
1. Check what the next available rule ID is.
   esxcli corestorage claimrule list
   The claim rules that you use to mask paths should have rule IDs in the range of 101 - 200. If this command shows that rules 101 and 102 already exist, you can specify 103 for the rule to add.
2. Assign the MASK_PATH plug-in to a path by creating a new claim rule for the plug-in.
   esxcli corestorage claimrule add -P MASK_PATH
   For information on command-line options, see esxcli corestorage claimrule Options.
3. Load the MASK_PATH claim rule into your system.
   esxcli corestorage claimrule load
4. Verify that the MASK_PATH claim rule was added correctly.
   esxcli corestorage claimrule list
5. If a claim rule for the masked path already exists, remove the rule.
   esxcli corestorage claiming unclaim
6. Run the path claiming rules.
   esxcli corestorage claimrule run
After you assign the MASK_PATH plug-in to a path, the path state becomes irrelevant and is no longer maintained by the host. As a result, commands that display the masked path's information might show the path state as dead.
Example: Masking a LUN
In this example, you mask the LUN 20 on targets T1 and T2 accessed through storage adapters vmhba2 and vmhba3.
#esxcli corestorage claimrule list
#esxcli corestorage claimrule add -P MASK_PATH -r 109 -t location -A vmhba2 -C 0 -T 1 -L 20
#esxcli corestorage claimrule add -P MASK_PATH -r 110 -t location -A vmhba3 -C 0 -T 1 -L 20
#esxcli corestorage claimrule add -P MASK_PATH -r 111 -t location -A vmhba2 -C 0 -T 2 -L 20
#esxcli corestorage claimrule add -P MASK_PATH -r 112 -t location -A vmhba3 -C 0 -T 2 -L 20
#esxcli corestorage claimrule load
#esxcli corestorage claimrule list
#esxcli corestorage claiming unclaim -t location -A vmhba2
#esxcli corestorage claiming unclaim -t location -A vmhba3
# esxcli corestorage claimrule run
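Picking the next free rule ID in the 101 - 200 masking range can be scripted. A minimal sketch, assuming the rule number is the second whitespace-delimited field of the `claimrule list` output (verify the column layout on your build before relying on it):

```shell
#!/bin/sh
# next_mask_rule_id: read claimrule list output on stdin and print the
# first unused rule ID in the 101-200 range reserved for MASK_PATH rules.
next_mask_rule_id() {
    awk 'NR > 1 { used[$2] = 1 }          # skip the header row, mark used IDs
         END {
             for (id = 101; id <= 200; id++)
                 if (!(id in used)) { print id; exit }
         }'
}

# Canned example instead of a live host
# (on a host: esxcli corestorage claimrule list | next_mask_rule_id):
printf 'Rule Class  Rule  Class    Type\nMP          101   runtime  transport\nMP          102   runtime  transport\n' \
    | next_mask_rule_id    # prints 103
```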
to get vml mapping to naa id
============================
ls -alh /vmfs/devices/disks
storage test scripts
====================
http://build-squid.eng.vmware.com/build/storage60/release/bora-919276/bora/vmkernel/tests/storage/
SATP plugin example link and instructions..
==========================================
http://build-download.eng.vmware.com/build/storage2/release/bora-469512/bora/build/esx/release/vmkmod-vmkernel64-signed/vmw_satp_example
http://build-download.eng.vmware.com/build/storage2/release/bora-469512/bora/build/esx/release/vmkmod-vmkernel64-signed/vmw_satp_debug_example
//depot/documentation/CPD-Patch/MN-esx50/Patch02/ST Storage logs/Test execution instructions.docx
rescan
======
esxcfg-rescan -u <adapter name> should be used for retrieving the latest re-sized value of a LUN; the -A option just scans all adapters to discover new LUNs/paths and delete dead paths.
enabling SATA as ssd
====================
esxcli storage nmp satp rule add --satp VMW_SATP_LOCAL --device mpx.vmhba1:C0:T2:L0 --option=enable_ssd
Next you will need to reclaim your device so that the new rule is applied:
~ # esxcli storage core claiming reclaim -d mpx.vmhba1:C0:T2:L0
to disable or enable any service in ESXi
========================================
To disable the usbarbitrator service, use chkconfig just like with any other service:
chkconfig usbarbitrator off
to get epoch ID
===============
http://engweb.eng.vmware.com/bugs/files/0/0/9/7/0/6/8/7/qca.py.gz will give the changed extents relative to the previous epoch.
diskTool -H name.vmdk |grep -i epoch
The format is a 64-bit hex string with - and / separators, e.g. 52 db 01 42 8a 7a 52 30-dd 94 4e 3b ff 39 3f d0/71
Marking HDD as SSD:
===================
esxcli storage nmp satp rule add -s VMW_SATP_LOCAL -d mpx.vmhba1:C0:T2:L0 -o enable_ssd
esxcli storage core claiming reclaim -d mpx.vmhba1:C0:T2:L0
- See more at: http://www.virtuallyghetto.com/2013/08/quick-tip-marking-hdd-as-ssd-or-ssd-as.html#sthash.IwqBG9wF.dpuf
attach all detached luns
=========================
for i in `esxcli storage core device detached list|awk '{print $1}'`; do esxcli storage core device detached remove -d $i; done
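Note that the `detached list` output usually starts with a header row, so piping it straight through `awk '{print $1}'` feeds header words to the remove command. A hedged sketch of filtering only rows whose first field looks like a device ID (assuming the common naa./mpx./eui./t10. prefixes):

```shell
#!/bin/sh
# device_ids: pass through only lines whose first field looks like an
# ESXi device identifier (naa.*, mpx.*, eui.*, t10.*), dropping headers.
device_ids() {
    awk '$1 ~ /^(naa|mpx|eui|t10)\./ { print $1 }'
}

# Dry run against canned output; on a host, replace echo with
#   esxcli storage core device detached remove -d "$i"
printf 'Device ID             State\nnaa.600508b1001c5e45  off\nmpx.vmhba1:C0:T0:L0   off\n' \
    | device_ids \
    | while read -r i; do echo "would reattach $i"; done
```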
detach all luns except busy ones (busy devices will throw an error)
=======================================================
for i in `esxcfg-scsidevs -c|awk '{print $1}'`; do esxcli storage core device set -d $i --state=off; done;
list all detached luns.
=======================
esxcli storage core device detached list
esxcli storage vmfs snapshot list
esxcli storage vmfs extent list
esxcli storage filesystem list
esxcli storage filesystem automount
mount all the unmounted volumes
================================
for i in `esxcli storage filesystem list |awk '{print $2}'`; do esxcli storage filesystem mount -l $i; done
see the mounting status
=======================
esxcli storage filesystem list |awk '{print $2 "\t" $3}'
display stats from all the leaves under a vsi node with 8 lines after the pattern match
========================================================================================
~ # for i in $(vsish -e ls /storage/scsifw/paths ); do echo $i; vsish -e get /storage/scsifw/paths/${i}stats |grep -A 8 -i split-type; done
remove multiple disks from multiple vms.. delete
================================================
#vim-cmd /vmsvc/getallvms
# for i in $(seq 5 1 29); do for j in $(seq 1 1 16); do echo "$i $j"; vim-cmd /vmsvc/device.diskremove $i 0 $j a; done; done
for i in $(seq 1 1 25); do ls /vmfs/volumes/Sharedtests/linked_clone1$i/linked_clone1${i}_*;done
modulo operator for alternating operations
=================================================
let j=0;while [ 1 ]; do let j++; sleep 15; for i in $(seq 0 1 8);do echo $j; if [ `expr $j % 2` -eq 0 ]; then echo "mod inside"; esxcfg-mpath --path vmhba3:C0:T0:L$i -s off; esxcfg-mpath --path vmhba3:C0:T0:L$i -s active; else echo "mod outside"; esxcfg-mpath --path vmhba3:C0:T1:L$i -s off; esxcfg-mpath --path vmhba3:C0:T1:L$i -s active; fi; done; done
let j=0;while [ 1 ]; do let j++; sleep 15; for i in $(seq 0 1 8);do echo $j; if [ `expr $j % 2` -eq 0 ]; then echo "mod inside"; esxcfg-mpath --path vmhba2:C0:T0:L$i -s off; esxcfg-mpath --path vmhba2:C0:T0:L$i -s active; else echo "mod outside"; esxcfg-mpath --path vmhba2:C0:T1:L$i -s off; esxcfg-mpath --path vmhba2:C0:T1:L$i -s active; fi; done; done
To know whether the ESX is booted from pxe/local and to know the boot options and modules..
=========================================================================================
$ bootOption -p
Booted via (g)PXE : 1
# bootOption -h
bootOption
-r --raw Print w/o annotation
-a --all Print f,i,u,w,R boot options
-f --fsck FS Check Mode
-i --audit Audit Mode
-R --rollback Booted after a rollback
-p --pxeboot Booted via (g)PXE
-k --kernel Kernel used to boot
-m --modules List of modules used to boot
-o --options List of boot options used to boot
-b --bootOptionFiles List of boot option files to boot
-c --bootOptionFileContent Content of a boot option file used to boot
-C --bootOptionFilesContent Content of all boot option files used to boot
# bootOption -roC
vmbTrustedBoot=false tboot=0x0x101b000 debugLogToSerial=1 jumpstart.randomize autoPartition=TRUE autoPartitionOnlyOnceAndSkipSsd
bootOption -p
important ports (through https) in VC server/VCVA server
========================================
5480 - initial VCVA config..
9443 - NGC web client
443 - hostd
6501- rbd -Rule Based Deployment service
6502- autodeploy
heartbeat log
=============
2013-06-09T16:02:24.403Z cpu2:8236)HBX: 255: Reclaimed heartbeat for volume 5171da34-ccaf9c84-1c9e-d89d676a3530 (LUN15): [Timeout] [HB state abcdef02 offset 3784704 gen 189 stampUS 321599093148 uuid 51afbefc-aa484e98-b3e4-d89d676ac618 jrnl <fb 25335> dr$
native driver proc scsi node viewing
===============================
/usr/lib/vmware/vmkmgmt_keyval/vmkmgmt_keyval -i Emulex/lpfc
change all unlocked disks to thin
=================================
To convert the disks of powered-on VMs as well, include code to power off the vmx first and retry.
find /vmfs/volumes/ -maxdepth 5 -name '*.vmdk' -exec vmkfstools -K "{}" \;
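Before letting find punch zeros across every datastore, a dry run that only prints the vmkfstools commands is safer; a sketch that also skips the data extents (flat/delta files), since -K expects the descriptor vmdk:

```shell
#!/bin/sh
# punch_zero_dry_run DIR: print the vmkfstools -K command for each
# descriptor vmdk under DIR instead of running it. Flat/delta extents are
# excluded because -K operates on the descriptor file, not the data extent.
punch_zero_dry_run() {
    find "$1" -maxdepth 5 -name '*.vmdk' \
        ! -name '*-flat.vmdk' ! -name '*-delta.vmdk' \
        -exec echo vmkfstools -K "{}" \;
}

# On a host, review the output and then drop the echo:
#   punch_zero_dry_run /vmfs/volumes/
```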
to display multipath information
=============================
esxcli storage core path list
esxcli storage nmp device list
to know the firmware versions of the controllers in the ESX server
===================================================================
/usr/lib/vmware/vm-support/bin/swfw.sh |egrep -i "Description|VersionString"
list all the worlds used by stortage devices
============================================
esxcli storage core device world list
Get a list of the worlds that are currently using devices on the ESX host
identify the advanced hidden config options under VMFS in ESX
========================================================
ls the vsish node /config/VMFS3/intOpts/
vsish -e ls /config/VMFS3/intOpts/
get the IP of the VC server managing a particular host (vc ip)
==============================================================
cat /etc/vmware/vpxa/vpxa.cfg |grep serverIp
to display all the network connection from the ESX host
========================================================
esxcli network ip connection list
to identify all the VI clients actively managing/accessing the host
====================================================================
esxcli network ip connection list |grep 443, then identify which client IP has the highest number of connections to port 443
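Counting connections per client IP can be done in one pipeline; a sketch assuming the local address sits in column 4 and the remote address in column 5 in ip:port form (field numbers vary by release, so adjust to your output; IPv6 addresses would need different splitting):

```shell
#!/bin/sh
# top_clients: from `esxcli network ip connection list`-style rows, count
# connections to local port 443 per remote IP, busiest client first.
top_clients() {
    awk '$4 ~ /:443$/ { split($5, a, ":"); print a[1] }' \
        | sort | uniq -c | sort -rn
}

# Canned example in place of a live host
# (on a host: esxcli network ip connection list | top_clients):
printf 'tcp 0 0 10.0.0.5:443 10.0.0.9:51000 ESTABLISHED\ntcp 0 0 10.0.0.5:443 10.0.0.9:51001 ESTABLISHED\ntcp 0 0 10.0.0.5:443 10.0.0.7:52000 ESTABLISHED\n' \
    | top_clients
```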
to get the list of all running VMs in a host
===============================================
esxcli vm process list
SVA VSA specific optimization to VMFS resource cluster allocation for better performance
=====================================================================================
vsish -e get /config/VMFS3/intOpts/EnableSVAVMFS
Vmkernel Config Option {
Default value:0
Min value:0
Max value:1
Current value:0
hidden config option:1
Description:Enable SVA specific optimization to VMFS resource cluster allocation
}
binedit location:
=================
vmfs also has to be used from here.
//depot/vtaf/vtaf21/Test/Storage/StorageFVT/VOMA/vmfs3
vaai related
==============
esxcli storage vmfs unmap -n 2000 -l dummy
vmkfstools -y /vmfs/volumes/52e095fc-71fd56b4-c617-d4ae52e90109
to view vaai stats
esxtop
u
f
o
p
press all the letter keys marked with * after pressing "f"
get vaai status for volume
esxcli storage core device vaai status get
debug logging enablement
=========================
Here are the commands to turn on both libfc and libfcoe logging.
esxcli system module parameters set -p debug_logging=0xf -m libfc
esxcli system module parameters set -p debug_logging=0x2 -m libfcoe
TSC timer/ACPI timer
====================
~ # zcat var/log/boot.gz |grep "reference timer"
0:00:00:05.187 cpu0:32768)Timer: 900: TSC disabled as reference timer by config option
0:00:00:05.187 cpu0:32768)Timer: 4139: reference timer is 24-bit ACPI PM at 3579545 Hz
to disable TSC, provide the boot-time option timerEnableTSC=0
~ # esxcli system settings kernel list -o timerEnableACPI
Name Type Description Configured Runtime Default
--------------- ---- ----------------------------------------------- ---------- ------- -------
timerEnableACPI Bool Enable ACPI PM timer as system reference timer. TRUE TRUE TRUE
~ # esxcli system settings kernel list -o timerForceTSC
Name Type Description Configured Runtime Default
------------- ---- ----------------------------------------- ---------- ------- -------
timerForceTSC Bool Always use TSC as system reference timer. FALSE FALSE FALSE
~ # esxcli system settings kernel list -o timerForceACPI
Invalid Key Name: timerForceACPI
~ # esxcli system settings kernel list -o timerEnableTSC
Name Type Description Configured Runtime Default
-------------- ---- ------------------------------------- ---------- ------- -------
timerEnableTSC Bool Enable TSC as system reference timer. TRUE TRUE TRUE
~ #
mclock /no mclock
=================
Disable mClock scheduler
~ # localcli system settings advanced set -o /Disk/SchedulerWithReservation --int-value 0
esxcli system settings advanced list -o /Disk/SchedulerWithReservation
Path: /Disk/SchedulerWithReservation
Type: integer
Int Value: 0
Default Int Value: 1
Min Value: 0
Max Value: 1
String Value:
Default String Value:
Valid Characters:
Description: Disk IO scheduler (0:default 1:mclock)
10.112.68.24 vc FOR apd BUG
The field for the device is specifically reserved for indicating the path status of the device.
--> When the device has more than one path to the target (storage array), the path status is "on".
--> When all paths to the target are down (either off or dead), the device status is "dead".
--> If there is only one path to the target, the status is "degraded".
--> When a device is unmapped from the storage array while ESX is using it, the device status is "not connected".
--> If ESX fails to recognize the state of the device (none of the above scenarios apply), the device status is "unknown".
taking an ESXi live hostd dump without crashing
=============================
vmkbacktrace -w -n hostd
run commands by reading from a file.
====================================
cat cmds.txt|while read LINE; do echo $LINE; $LINE;sleep 4 ;done
to run repeatedly
===================
echo "cat cmds.txt|while read LINE; do echo $LINE; $LINE;sleep 4 ;done" >run.sh;
while [ 1 ]; do sh run.sh; done
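The bare `$LINE` expansion above word-splits but does not honor quotes or pipes; a slightly more robust sketch that runs each line through sh -c (names here are illustrative):

```shell
#!/bin/sh
# run_cmds FILE [DELAY]: echo and execute each non-empty, non-comment line
# of FILE as a shell command, sleeping DELAY seconds between commands.
run_cmds() {
    file=$1; delay=${2:-4}
    while IFS= read -r line; do
        case $line in ''|'#'*) continue ;; esac   # skip blanks and comments
        echo "+ $line"
        sh -c "$line"
        sleep "$delay"
    done < "$file"
}

# Repeat forever, like the run.sh wrapper above:
#   while :; do run_cmds cmds.txt; done
```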
get scsidevs storage details for all devices from vsish node
============================================================
~ # for i in `esxcfg-scsidevs -c |awk '{print $1}'`; do echo $i; vsish -e get /storage/scsifw/devices/$i/info |egrep -i "naa|dev|local|ssd"; done