NetApp
CLI
Aggregates
Show aggregates
storage aggregate show
Example output:
Aggregate           Size     Available  Used%  State   #Vols  Nodes        RAID Status
------------------  -------  ---------  -----  ------  -----  -----------  ----------------
netapp01_01_0_root  2.91TB   1.52TB     48%    online  1      netapp01-01  raid_dp, normal
netapp01_01_1_esxi  37.83TB  6.38TB     83%    online  13     netapp01-02  raid_dp, normal
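To show only specific columns, the generic -fields parameter can be used. A minimal sketch, assuming the availsize and percent-used field names (run storage aggregate show -fields ? to list the valid fields on your ONTAP version):
storage aggregate show -fields size,availsize,percent-used,state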
SnapMirror
Show snapmirror progress
snapmirror show -fields total-progress
Example output:
source-path                        destination-path                          total-progress
---------------------------------  ----------------------------------------  --------------
filersrv001_esxh:vol43_cluster003  filersrv002_esxh:vol43_cluster003_mirror  7.44GB
filersrv001_esxh:vol46_cluster001  filersrv002_esxh:vol46_cluster001_mirror  131.3GB
filersrv001_esxh:vol47_cluster001  filersrv003_exch:vol47_cluster001_mirror  -
filersrv001_esxh:vol49_cluster001  filersrv002_esxh:vol49_cluster001_mirror  -
filersrv001_esxh:vol51_cluster003  filersrv002_esxh:vol51_cluster003_mirror  104.1GB
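To follow a single relationship instead of all of them, it can be selected by destination path. A sketch, reusing one of the relationships from the output above:
snapmirror show -destination-path filersrv002_esxh:vol43_cluster003_mirror -fields status,total-progress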
Show snapmirror relationships in transferring state
snapmirror show -status transferring
Example output:
Source                             Destination                                Mirror        Relationship  Total     Last
Path                         Type  Path                                       State         Status        Progress  Healthy  Updated
---------------------------  ----  -----------------------------------------  ------------  ------------  --------  -------  --------------
filersrv001_esxh:vol43_cluster001
                             DP    filersrv002_esxh:vol43_cluster001_mirror   Snapmirrored  Transferring  8.19GB    false    06/19 15:14:34
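Relationships that need attention can be listed with the healthy field. A hedged sketch, assuming the -healthy query and the last-transfer-error field are available on your ONTAP version:
snapmirror show -healthy false -fields state,status,last-transfer-error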
Misc
Show number of inodes
volume show -vserver testsvm001 -volume testvolume01 -fields files
Example output:
vserver     volume        files
----------  ------------  --------
testsvm001  testvolume01  84999981
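To see how close the volume is to its inode limit, the used count can be requested next to the maximum. A sketch, assuming the files-used field name:
volume show -vserver testsvm001 -volume testvolume01 -fields files,files-used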
Increase number of inodes
volume modify -vserver testsvm001 -volume testvolume01 -files <number>
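For example, to raise the volume above to 100 million inodes (the value is illustrative; it must be larger than the number of inodes already in use):
volume modify -vserver testsvm001 -volume testvolume01 -files 100000000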
Cluster shutdown
Shutdown
1. SSH to the cluster management address
2. Set a maintenance window:
system node autosupport invoke -node * -type all -message "MAINT=8h Power Maintenance"
3. Get a list of all the SPs and their IP addresses:
system service-processor show -node * -fields address
4. Exit the cluster SSH session
5. Open SSH sessions to all SPs in the cluster using the admin credentials
6. Enter system console mode:
system console
7. In one of the SSH sessions, halt all the nodes in the cluster:
system node halt -node * -skip-lif-migration-before-shutdown true -ignore-quorum-warnings true -inhibit-takeover true
For clusters with SnapMirror Synchronous operating in StrictSync mode:
system node halt -node * -skip-lif-migration-before-shutdown true -ignore-quorum-warnings true -inhibit-takeover true -ignore-strict-sync-warnings true
8. Answer y to each confirmation prompt asking whether to halt node XXXXX
9. Wait until all SSH sessions are at the LOADER prompt (e.g. LOADER-A>)
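To boot the nodes again after the maintenance window, the standard LOADER command is boot_ontap, run from each node's console session (verify against the documented power-on procedure for your platform):
LOADER-A> boot_ontap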