=CLI=
==Aggregates==
===Show aggregates===
<pre>storage aggregate show</pre>
 
Example output:
<pre style="color: silver; background: black; width: 800px">
Aggregate     Size Available Used% State   #Vols  Nodes            RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
netapp01_01_0_root
            2.91TB    1.52TB   48% online       1 netapp01-01    raid_dp,
                                                                   normal
netapp01_01_1_esxi
           37.83TB    6.38TB   83% online      13 netapp01-02    raid_dp,
                                                                   normal
</pre>
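To narrow the output to specific columns, the -fields parameter can be used. A minimal sketch, assuming standard ONTAP 9 field names (verify with storage aggregate show -fields ? on your release):
<pre>storage aggregate show -fields size,availsize,percent-used</pre>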
 
==SnapMirror==
===Show SnapMirror progress===
<pre>snapmirror show -fields total-progress</pre>
 
Example output:
<pre style="color: silver; background: black; width: 800px">
source-path                            destination-path                              total-progress
-------------------------------------- --------------------------------------------- --------------
filersrv001_esxh:vol43_cluster003    filersrv002_esxh:vol43_cluster003_mirror    7.44GB
filersrv001_esxh:vol46_cluster001    filersrv002_esxh:vol46_cluster001_mirror    131.3GB
filersrv001_esxh:vol47_cluster001    filersrv003_exch:vol47_cluster001_mirror    -
filersrv001_esxh:vol49_cluster001    filersrv002_esxh:vol49_cluster001_mirror    -
filersrv001_esxh:vol51_cluster003    filersrv002_esxh:vol51_cluster003_mirror    104.1GB
</pre>
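To see the progress figure alongside the relationship state in one view, the fields can be combined (status and healthy are standard snapmirror show fields, but verify on your release):
<pre>snapmirror show -fields total-progress,status,healthy</pre>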
 
===Show SnapMirror relationships in a transferring state===
<pre>snapmirror show -status transferring</pre>
 
Example output:
<pre style="color: silver; background: black; width: 800px">
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
filersrv001_esxh:vol43_cluster001
            DP   filersrv002_esxh:vol43_cluster001_mirror
                              Snapmirrored
                                      Transferring   8.19GB    false   06/19 15:14:34
</pre>
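A related filter worth knowing: list only the relationships reporting as unhealthy, using the standard -healthy parameter:
<pre>snapmirror show -healthy false</pre>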
 
==Misc==
===Show number of inodes===
<pre>volume show -vserver testsvm001 -volume testvolume01 -fields files</pre>
Example output:
<pre style="color: silver; background: black; width: 800px">
vserver          volume        files
----------------- ------------- --------
testsvm001        testvolume01  84999981
</pre>
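To compare consumption against the limit, files-used can be requested alongside files (both should be valid volume fields on ONTAP 9, but verify on your release):
<pre>volume show -vserver testsvm001 -volume testvolume01 -fields files,files-used</pre>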
 
===Increase number of inodes===
<pre>volume modify -vserver testsvm001 -volume testvolume01 -files <number></pre>
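For example, to raise the limit on the volume above to 100 million inodes (an illustrative figure, not a sizing recommendation; the maximum is bounded by the volume size):
<pre>volume modify -vserver testsvm001 -volume testvolume01 -files 100000000</pre>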
 
==Cluster shutdown/startup==
===Shutdown===
1.  SSH to the cluster management address
 
2.  Set a maintenance window:
<pre>system node autosupport invoke -node * -type all -message "MAINT=8h Power Maintenance"</pre>
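If the work finishes early, the window can be ended with the matching end message (per NetApp's maintenance-window guidance; confirm the exact string on your release):
<pre>system node autosupport invoke -node * -type all -message "MAINT=end"</pre>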
 
3.  Get a list of all the SPs and their IP addresses:
<pre>system service-processor show -node * -fields address</pre>
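Example output (illustrative; the node names follow the aggregates above and the addresses are hypothetical):
<pre style="color: silver; background: black; width: 800px">
node        address
----------- -------------
netapp01-01 192.168.1.101
netapp01-02 192.168.1.102
2 entries were displayed.
</pre>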
 
4.  Exit the cluster SSH session
 
5.  Open SSH sessions to every SP in the cluster using the admin credentials
 
6.  Enter system console mode in each session:
<pre>system console</pre>
 
7.  In one of the SSH sessions, halt all the nodes in the cluster:
<pre>system node halt -node * -skip-lif-migration-before-shutdown true -ignore-quorum-warnings true -inhibit-takeover true</pre>
 
For clusters with SnapMirror Synchronous operating in StrictSync mode:
<pre>system node halt -node * -skip-lif-migration-before-shutdown true -ignore-quorum-warnings true -inhibit-takeover true -ignore-strict-sync-warnings true</pre>
 
8.  Answer y to each prompt asking you to confirm halting node XXXXX
 
9.  Wait until every SSH session is sitting at the LOADER prompt (LOADER-A> or LOADER-B>, depending on the controller)
 
===Startup===
1.  Ensure storage switches are online
 
2.  Power on the NetApp disk shelves
 
3.  Power up the NetApp controllers
 
4.  The nodes should boot automatically, but if you connect to an SP and the node is sitting at the LOADER prompt, you can type:
<pre>boot_ontap</pre>
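5.  Once all the nodes are back up, it is worth confirming cluster health before closing the change; these are standard ONTAP checks:
<pre>cluster show
storage failover show</pre>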
 
[[Category:netapp]]
[[Category:storage]]
[[Category:san]]
