
Netapp Clustered Ontap CLI Pocket Guide

On this page I will be constantly adding Netapp Clustered Data Ontap CLI commands as an easy-reference pocket guide

(Updated 31-December-2016)


set -privilege advanced (Enter into privilege mode)

set -privilege diagnostic (Enter into diagnostic mode)

set -privilege admin (Enter into admin mode)

system timeout modify 30 (Sets system timeout to 30 minutes)

system node run -node local sysconfig -a (Run sysconfig on the local node)

In clustered ontap the symbol ! means "other than", e.g. storage aggregate show -state !online (show all aggregates that are not online)
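The ! filter only exists inside the cluster shell, but on output you have saved to a file the same idea can be approximated with standard tools. A minimal sketch; the sample layout below is illustrative, not captured from a real system:

```shell
#!/bin/sh
# Illustrative "storage aggregate show" output (invented for the example)
cat > /tmp/aggr_show.txt <<'EOF'
aggr0_node1   1.2TB   0.9TB   75% online
aggr1_node1   5.0TB   4.8TB   96% offline
aggr0_node2   1.2TB   1.0TB   83% online
aggr1_node2   5.0TB   5.0TB  100% failed
EOF

# Equivalent of "-state !online": keep rows whose state column (field 5)
# is anything other than "online"
awk '$5 != "online" { print $1, $5 }' /tmp/aggr_show.txt
```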

node run -node -command sysstat -c 10 -x 3 (Run the sysstat performance tool from cluster mode)

system node image show (Show the running Data Ontap versions and which is the default boot)

dashboard performance show (Shows a summary of cluster performance including interconnect traffic)

node run * environment shelf (Shows information about the Shelves Connected including Model Number)

network options switchless-cluster show (Displays if nodes are setup for cluster switchless or switched – need to be in advanced mode)

network options switchless-cluster modify true (Sets the nodes to use cluster switchless, setting to false sets the node to use cluster switches – need to be in advanced mode)


security login unlock -username diag (Unlock the diag user)

security login password -username diag (Set a password for the diag user)

security login show -username diag (Show the diag user)


system configuration backup create -backup-name node1-backup -node node1 (Create a cluster backup from node1)

system configuration backup create -backup-name node1-backup -node node1 -backup-type node (Create a node backup of node1)

system configuration backup upload -node node1 -backup node1.7z -destination (Uploads a backup file to ftp)


To look at the logs within clustered ontap you must log in to the systemshell of a specific node as the diag user

set -privilege advanced

systemshell -node

username: diag


cd /mroot/etc/mlog

cat command-history.log | grep volume (searches the command-history.log file for the keyword volume)
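The same search can be reproduced outside the systemshell on a copied log file. A sketch with invented log lines (these are not real mlog entries), also dropping the redundant cat:

```shell
#!/bin/sh
# Illustrative command-history.log content (invented for the example)
cat > /tmp/command-history.log <<'EOF'
Thu Dec 01 10:01:02 admin ssh volume create -vserver svm1 -volume vol1
Thu Dec 01 10:05:40 admin ssh network interface show
Thu Dec 01 10:07:11 admin ssh volume modify -vserver svm1 -volume vol1
EOF

# Same idea as "cat command-history.log | grep volume"
grep volume /tmp/command-history.log
```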

exit (exits out of diag mode)


system coredump status (shows unsaved cores, saved cores and partial cores)

system coredump show (lists coredump files and panic dates)


system node image get -package http://webserver/ -replace-package true (Copies the firmware file from the webserver into the mroot directory on the node)

system node service-processor image update -node node1 -package -update-type differential (Installs the firmware package to node1)

system node service-processor show (Show the service processor firmware levels of each node in the cluster)

system node service-processor image update-progress show (Shows the progress of a firmware update on the Service Processor)

service-processor reboot-sp -node NODE1 (reboot the sp of node1)

Disk Shelves

storage shelf show (an 8.3 command that displays the loops and shelf information)


system node autosupport budget show -node local (In diag mode – displays current time and size budgets)

system node autosupport budget modify -node local -subsystem wafl -size-limit 0 -time-limit 10m (In diag mode – modification as per Netapp KB1014211)

system node autosupport show -node local -fields max-http-size,max-smtp-size (Displays max http and smtp sizes)

system node autosupport modify -node local -max-http-size 0 -max-smtp-size 8MB (modification as per Netapp KB1014211)


set -privilege advanced (required to be in advanced mode for the below commands)

cluster statistics show (shows statistics of the cluster – CPU, NFS, CIFS, FCP, Cluster Interconnect Traffic)

cluster ring show -unitname vldb (check if volume location database is in quorum)

cluster ring show -unitname mgmt (check if management application is in quorum)

cluster ring show -unitname vifmgr (check if virtual interface manager is in quorum)

cluster ring show -unitname bcomd (check if san management daemon is in quorum)
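The four ring checks above can be wrapped in one dry-run loop that just prints each command to run; nothing here talks to a cluster:

```shell
#!/bin/sh
# Emit the quorum check for each replication ring to a file (dry run only,
# the commands are printed, never executed)
for unit in vldb mgmt vifmgr bcomd; do
  echo "cluster ring show -unitname $unit"
done > /tmp/ring_checks.txt

cat /tmp/ring_checks.txt
```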

cluster unjoin (must be run in priv -set admin, disjoins a cluster node. Must also remove its cluster HA partner)

debug vreport show (must be run in priv -set diag, shows WAFL and VLDB consistency)

event log show -messagename scsiblade.* (show that cluster is in quorum)

cluster kernel-service show -list (in diag mode, displays in quorum information)

debug smdb table bcomd_info show (displays database master / secondary for bcomd)


system node rename -node -newname

system node reboot -node NODENAME -reason ENTER REASON (Reboot node with a given reason. NOTE: check ha policy)


system node run -node * options flexscale.enable on (Enabling Flash Cache on each node)

system node run -node * options flexscale.lopri_blocks on (Enabling Flash Cache caching of low-priority blocks on each node)

system node run -node * options flexscale.normal_data_blocks on (Enabling Flash Cache caching of normal data blocks on each node)

node run NODENAME stats show -p flexscale (flashcache configuration)

node run NODENAME stats show -p flexscale-access (display flash cache statistics)


storage aggregate modify -hybrid-enabled true (Change the AGGR to hybrid)

storage aggregate add-disks -disktype SSD (Add SSD disks to AGGR to begin creating a flash pool)

priority hybrid-cache set volume1 read-cache=none write-cache=none (Within node shell and diag mode disable read and write cache on volume1)


storage failover takeover -ofnode (Initiate a takeover of the named node)

storage failover giveback -ofnode (Initiate a giveback)

storage failover modify -node -enabled true (Enabling failover on one of the nodes enables it on the other)

storage failover show (Shows failover status)

storage failover modify -node -auto-giveback false (Disables auto giveback on this ha node)

storage failover modify -node -auto-giveback true (Enables auto giveback on this ha node)

aggregate show -node NODENAME -fields ha-policy (show SFO HA Policy for aggregate)


aggr create -aggregate -diskcount -raidtype raid_dp -maxraidsize 18 (Create an AGGR with X amount of disks, raid_dp and raidgroup size 18)

aggr offline | online (Make the aggr offline or online)

aggr rename -aggregate -newname

aggr relocation start -node node01 -destination node02 -aggregate-list aggr1 (Relocate aggr1 from node01 to node02)

aggr relocation show (Shows the status of an aggregate relocation job)

aggr show -space (Show used and used% for volume footprints and aggregate metadata)

aggregate show (show all aggregates size, used% and state)

aggregate add-disks -aggregate -diskcount (Adds a number of disks to the aggregate)

reallocate measure -vserver vmware -path /vol/datastore1 -once true (Test to see if the volume datastore1 needs to be reallocated or not)

reallocate start -vserver vmware -path /vol/datastore1 -force true -once true (Run reallocate on the volume datastore1 within the vmware vserver)


storage disk assign -disk 0a.00.1 -owner (Assign a specific disk to a node) OR

storage disk assign -count -owner (Assign unallocated disks to a node)

storage disk show -ownership (Show disk ownership to nodes)

storage disk show -state broken | copy | maintenance | partner | percent | reconstructing | removed | spare | unfail | zeroing (Show the state of a disk)

storage disk modify -disk NODE1:4c.10.0 -owner NODE1 -force-owner true (Force the change of ownership of a disk)

storage disk removeowner -disk NODE1:4c.10.0 -force true (Remove ownership of a drive)

storage disk set-led -disk Node1:4c.10.0 -action blink -time 5 (Blink the led of disk 4c.10.0 for 5 minutes. Use the blinkoff action to turn it off)


vserver setup (Runs the clustered ontap vserver setup wizard)

vserver create -vserver -rootvolume (Creates a new vserver)

vserver show (Shows all vservers in the system)

vserver show -vserver (Show information on a specific vserver)


volume create -vserver -volume -aggregate -size 100GB -junction-path /eng/p7/source (Creates a Volume within a vserver)

volume move -vserver -volume -destination-aggregate -foreground true (Moves a Volume to a different aggregate with high priority)

volume move -vserver -volume -destination-aggregate -cutover-action wait (Moves a Volume to a different aggregate with low priority but does not cutover)

volume move trigger-cutover -vserver -volume (Trigger a cutover of a volume move in waiting state)

volume move show (shows all volume moves currently active or waiting. NOTE: You can only do 8 volume moves at one time, more than 8 and they get queued)
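Because only 8 moves run concurrently, a long move list can be planned in batches of 8. A dry-run sketch that only writes the commands to a file and never executes them; svm1, aggr2 and the volume names are placeholders:

```shell
#!/bin/sh
# Plan volume moves in batches of 8 (the concurrent-move ceiling noted
# above). Dry run only: commands go to a file, nothing touches a cluster.
vols="vol01 vol02 vol03 vol04 vol05 vol06 vol07 vol08 vol09 vol10"

i=0
for v in $vols; do
  i=$((i + 1))
  echo "volume move -vserver svm1 -volume $v -destination-aggregate aggr2"
  # After every 8th move, pause until "volume move show" reports completion
  [ $((i % 8)) -eq 0 ] && echo "# wait here: volume move show until the batch completes"
done > /tmp/move_plan.txt

cat /tmp/move_plan.txt
```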

system node run -node vol size 400g (resize volume_name to 400GB) OR

volume size -volume -new-size 400g (resize volume_name to 400GB)

volume modify -vserver -filesys-size-fixed false -volume (Turn off fixed file sizing on volumes)

volume recovery-queue purge-all (An 8.3 command that purges the volume undelete cache)

volume show -vserver SVM1 -volume * -autosize true (Shows which volumes have autosize enabled)

volume show -vserver SVM1 -volume * -atime-update true (Shows which volumes have update access time enabled)

volume modify -vserver SVM1 -volume volume1 -atime-update false (Turns update access time off on the volume)


lun show -vserver (Shows all luns belonging to this specific vserver)

lun modify -vserver -space-allocation enabled -path (Turns on space allocation so you can run lun reclaims via VAAI)

lun geometry -vserver -path /vol/vol1/lun1 (Displays the lun geometry)


vserver nfs modify -v4.1-pnfs enabled (Enable pNFS. NOTE: Cannot coexist with NFSv4)


storage show adapter (Show Physical FCP adapters)

fcp adapter modify -node NODENAME -adapter 0e -state down (Take port 0e offline)

node run fcadmin config (Shows the config of the adapters – Initiator or Target)

node run fcadmin config -t target 0a (Changes port 0a from initiator to target – You must reboot the node)


vserver cifs create -vserver -cifs-server -domain (Enable Cifs)

vserver cifs share create -share-name root -path / (Create a CIFS share called root)

vserver cifs share show

vserver cifs show


vserver cifs options modify -vserver -smb2-enabled true (Enable SMB2.0 and 2.1)


volume snapshot create -vserver vserver1 -volume vol1 -snapshot snapshot1 (Create a snapshot on vserver1, vol1 called snapshot1)

volume snapshot restore -vserver vserver1 -volume vol1 -snapshot snapshot1 (Restore a snapshot on vserver1, vol1 called snapshot1)

volume snapshot show -vserver vserver1 -volume vol1 (Show snapshots on vserver1 vol1)

snap autodelete show -vserver SVM1 -enabled true (Shows which volumes have autodelete enabled)


volume create -vserver -volume vol10_mirror -aggregate -type DP (Create a destination Snapmirror Volume)

snapmirror create -vserver -source-path sysadmincluster://vserver1/vol10 -destination-path sysadmincluster://vserver1/vol10_mirror -type DP (Create a snapmirror relationship for sysadmincluster)

snapmirror initialize -source-path sysadmincluster://vserver1/vol10 -destination-path sysadmincluster://vserver1/vol10_mirror -type DP -foreground true (Initialize the snapmirror example)

snapmirror update -source-path vserver1:vol10 -destination-path vserver2:vol10_mirror -throttle 1000 (Snapmirror update and throttle to 1000KB/sec)

snapmirror modify -source-path vserver1:vol10 -destination-path vserver2:vol10_mirror -throttle 2000 (Change the snapmirror throttle to 2000)

snapmirror restore -source-path vserver1:vol10 -destination-path vserver2:vol10_mirror (Restore a snapmirror from destination to source)

snapmirror show (show snapmirror relationships and status)

NOTE: You can create snapmirror relationships between 2 different clusters by creating a peer relationship


snapmirror create -source-path vserver1:vol5 -destination-path vserver2:vol5_archive -type XDP -schedule 5min -policy backup-vspolicy (Create snapvault relationship with 5 min schedule using backup-vspolicy)

NOTE: Type DP (asynchronous), LS (load-sharing mirror), XDP (backup vault, snapvault), TDP (transition), RST (transient restore)
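The DP workflow above (create, initialize, then periodic updates) can be sketched as a dry-run script that only prints the command sequence in order; the vserver:volume paths reuse the examples above and nothing is executed against a cluster:

```shell
#!/bin/sh
# Dry run of the snapmirror DP workflow: write the command sequence to a
# file for review. Paths are the example names used in the text.
SRC="vserver1:vol10"
DST="vserver2:vol10_mirror"

cat > /tmp/sm_plan.txt <<EOF
snapmirror create -source-path $SRC -destination-path $DST -type DP
snapmirror initialize -source-path $SRC -destination-path $DST -foreground true
snapmirror update -source-path $SRC -destination-path $DST -throttle 1000
snapmirror show
EOF

cat /tmp/sm_plan.txt
```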


volume efficiency on -vserver SVM1 -volume volume1 (Turns Dedupe on for this volume)

volume efficiency start -vserver SVM1 -volume volume1 -dedupe true -scan-old-data true (Starts a volume efficiency dedupe job on volume1, scanning old data)

volume efficiency start -vserver SVM1 -volume volume1 -dedupe true (Starts a volume efficiency dedupe job on volume1, not scanning old data)

volume efficiency show -op-status !idle (This will display the running volume efficiency tasks)


network interface show (show network interfaces)

network interface modify -vserver vserver1 -lif cifs1 -address -netmask -force-subnet-association (Data Ontap 8.3 – forces the lif to use an IP address from the subnet range that has been setup)

network port show (Shows the status and information on current network ports)

network port modify -node * -port -mtu 9000 (Enable Jumbo Frames on interface vif_name)

network port modify -node * -port -flowcontrol-admin none (Disables Flow Control on port data_port_name)

network interface revert * (revert all network interfaces to their home port)

ifgrp create -node -ifgrp -distr-func ip -mode multimode (Create an interface group called vif_name on node_name)

network port ifgrp add-port -node -ifgrp -port (Add a port to vif_name)

net int failover-groups create -failover-group data__fg -node -port (Create a failover group – Complete on both nodes)

ifgrp show (Shows the status and information on current interface groups)

net int failover-groups show (Show Failover Group Status and information)

node run node1 ifstat -a (shows interface statistics such as crc errors)

node run node1 ifstat -z (clears interface statistics, optionally specify the interface name to clear for that specific interface)
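When ifstat output has been saved to a file, non-zero CRC counters can be pulled out with awk. The fragment below uses an invented layout for illustration; real ifstat output differs by platform:

```shell
#!/bin/sh
# Illustrative "ifstat -a" fragment (layout invented for the example)
cat > /tmp/ifstat.txt <<'EOF'
-- interface e0a --
RECEIVE
 Total frames:   123456 | Frames/second: 200 | CRC errors: 0
-- interface e0b --
RECEIVE
 Total frames:   987654 | Frames/second: 350 | CRC errors: 17
EOF

# Remember the current interface name, then report any CRC counter > 0
awk '
  /-- interface/ { iface = $3 }
  /CRC errors/   { n = $NF; if (n + 0 > 0) print iface, "CRC errors:", n }
' /tmp/ifstat.txt
```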


network interface show-routing-group (show routing groups for all vservers)

network routing-groups show -vserver vserver1 (show routing groups for vserver1)

network routing-groups route create -vserver vserver1 -routing-group -destination -gateway (Creates a default route on vserver1)

ping -lif-owner vserver1 -lif data1 -destination (ping via vserver1 using the data1 port)


services dns show (show DNS)


vserver services unix-user show

vserver services unix-user create -vserver vserver1 -user root -id 0 -primary-gid 0 (Create a unix user called root)

vserver name-mapping create -vserver vserver1 -direction win-unix -position 1 -pattern (.+) -replacement root (Create a name mapping from windows to unix)

vserver name-mapping create -vserver vserver1 -direction unix-win -position 1 -pattern (.+) -replacement sysadmin011 (Create a name mapping from unix to windows)

vserver name-mapping show (Show name-mappings)


vserver services nis-domain create -vserver vserver1 -domain vmlab.local -active true -servers (Create a nis-domain called vmlab.local pointing to the given NIS servers)

vserver modify -vserver vserver1 -ns-switch nis-file (Name Service Switch referencing a file)

vserver services nis-domain show


system services ntp server create -node -server (Adds an NTP server to node_name)

system services ntp config modify -enabled true (Enable ntp)

system node date modify -timezone (Sets the timezone, given as Area/Location, e.g. Australia/Sydney)

node date show (Show date on all nodes)


timezone -timezone Australia/Sydney (Sets the timezone for Sydney. Type ? after -timezone for a list)

date 201307090830 (Sets date for yyyymmddhhmm)
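The date setter takes a yyyymmddhhmm string; on any POSIX host you can build the current time in that layout with date and paste it into the command:

```shell
#!/bin/sh
# Build the yyyymmddhhmm string the nodeshell "date" setter expects,
# e.g. 201307090830 for 09-Jul-2013 08:30
stamp=$(date +%Y%m%d%H%M)
echo "date $stamp"
```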

date -node (Displays the date and time for the node)


ucadmin show -node NODENAME (Show CNA ports on specific node)

ucadmin modify -node NODENAME -adapter 0e -mode cna (Change adapter 0e from FC to CNA. NOTE: A reboot of the node is required)


statistics show-periodic -object volume -instance volumename -node node1 -vserver vserver1 -counter total_ops|avg_latency|read_ops|read_latency (Show the specific counters for a volume)

statistics show-periodic -object nfsv3 -instance vserver1 -counter nfsv3_ops|nfsv3_read_ops|nfsv3_write_ops|read_avg_latency|write_avg_latency (Shows the specific nfsv3 counters for a vserver)

sysstat -x 1 (Shows counters for CPU, NFS, CIFS, FCP, WAFL)


Removing a port from an ifgrp – To remove a port from an ifgrp, you must first shut down any sub-interfaces of that ifgrp. For example, if your ifgrp is named a0a and it has a VLAN on it called a0a-100, you must first shut down a0a-100; you will then be able to remove the port from the ifgrp.
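The ordered steps above can be sketched as a dry run that only prints the commands; node1, a0a, a0a-100 and e0c are example names, and deleting the VLAN is one way of taking the sub-interface down:

```shell
#!/bin/sh
# Dry run of the ifgrp port-removal order: VLAN sub-interface first,
# then the member port. Example names only; nothing is executed.
NODE=node1; IFGRP=a0a; VLAN=a0a-100; PORT=e0c

cat > /tmp/ifgrp_steps.txt <<EOF
network port vlan delete -node $NODE -vlan-name $VLAN
network port ifgrp remove-port -node $NODE -ifgrp $IFGRP -port $PORT
EOF

cat /tmp/ifgrp_steps.txt
```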

FCoE – If you run multiple 10Gb Converged Network Adapters connecting to Nexus 5k switches, it is not supported to run more than one link per port-channel per switch and use that port-channel or interface for a virtual fibre channel interface (VFC). For example, if you have 2 x CNAs in your Netapp Clustered Ontap FAS, connect e1a and e2a to Nexus Switch 1, connect e1b and e2b to Nexus Switch 2, and create one port-channel, you will get an error when you try to bind the interface to the VFC: "VFC cannot be bound to Port Channel as it has more than one member"

FCoE Lif Moves – To move an FCoE lif from its current home-port in clustered ontap, you must first offline the FCoE lif, perform the lif move, and then online the lif

If you have any technical questions about this tutorial or any other tutorials on this site, please open a new thread in the forums and the community will be able to help you out.

All the tutorials included on this site are performed in a lab environment to simulate a real world production scenario. As everything is done to provide the most accurate steps to date, we take no responsibility if you implement any of these steps in a production environment.
