Wednesday, August 27, 2014

SYSSTAT Howto: A Deployment and Configuration Guide for Linux Servers


SYSSTAT is a software application comprising several tools that offer advanced system performance monitoring. It provides the ability to create a measurable baseline of server performance, as well as the capability to accurately assess and reconstruct what led up to an issue or unexpected occurrence. In short, it lets you peel back layers of the system to see how it's doing... in a way it is the blinking light telling you what is going on, except it blinks to a file. SYSSTAT has broad coverage of performance statistics and will watch the following server elements:

  • Input/Output and transfer rate statistics (global, per device, per partition, per network filesystem and per Linux task / PID)
  • CPU statistics (global, per CPU and per Linux task / PID), including support for virtualization architectures
  • Memory and swap space utilization statistics
  • Virtual memory, paging and fault statistics
  • Per-task (per-PID) memory and page fault statistics
  • Global CPU and page fault statistics for tasks and all their children
  • Process creation activity
  • Interrupt statistics (global, per CPU and per interrupt, including potential APIC interrupt sources)
  • Extensive network statistics: network interface activity (number of packets and kB received and transmitted per second, etc.) including failures from network devices; network traffic statistics for IP, TCP, ICMP and UDP protocols based on SNMPv2 standards.
  • NFS server and client activity
  • Socket statistics
  • Run queue and system load statistics
  • Kernel internal tables utilization statistics
  • System and per Linux task switching activity
  • Swapping statistics
  • TTY device activity

Scope

This article covers a brief overview of how the SYSSTAT utility works, initial configuration, deployment and testing on Linux-based servers. It includes an optional system configuration guide for writing SYSSTAT data into a MySQL database. This article is not intended to be an in-depth explanation of the inner workings of SYSSTAT, nor a detailed manual on database storage operations.
Now... on to the interesting parts of SYSSTAT!

Overview

The SYSSTAT software application is composed of several utilities. Each utility has a specific function:
  • iostat reports CPU statistics and input/output statistics for devices, partitions and network filesystems.
  • mpstat reports individual or combined processor related statistics.
  • pidstat reports statistics for Linux tasks (processes): I/O, CPU, memory, etc.
  • sar collects, reports and saves system activity information (CPU, memory, disks, interrupts, network interfaces, TTY, kernel tables, NFS, sockets etc.)
  • sadc is the system activity data collector, used as a backend for sar.
  • sa1 collects and stores binary data in the system activity daily data file. It is a front end to sadc designed to be run from cron.
  • sa2 writes a summarized daily activity report. It is a front end to sar designed to be run from cron.
  • sadf displays data collected by sar in multiple formats (CSV, XML, etc.) This is useful to load performance data into a database, or import them in a spreadsheet to make graphs.
The four main components used in collection activities are sar, sa1, sa2 and cron. Sar is the system activity reporter. This tool will display interpreted results from the collected data. Sar is run interactively by an administrator via the command line. When a sar file is created, it is written into the /var/log/sa directory and named sar##. The ## is a numerical value that represents the day of the month (i.e. sar03 would be the third day of the month). The numerical value changes accordingly without system administrator intervention. There are many option flags to choose from to display data in a sar file to view information about server operations, such as CPU, network activity, NFS and sockets. These options can be viewed by reviewing the man pages of sar.
Sa1 is the internal mechanism that performs the actual statistical collection and writes the data to a binary file at specified times. Information is culled from the /proc directory, where the Linux kernel writes and maintains pertinent data while the operating system is running. Similar to sar, the binary file is written into /var/log/sa and named sa##. Again, the ## represents the day of the month (i.e. sa03 would be the third day of the month). Once more, the numerical value changes accordingly without system administrator intervention.
Sa2 is responsible for converting the sa1 binary file into a human readable format. Upon successful creation of the binary file sa## it becomes necessary to set up a cron task that will call the sa2 libraries to convert the sa1 binary file into the human-readable sar file. SYSSTAT utilizes the scheduled cron command execution to draw and record specified performance data based upon pre-defined parameters. It is not necessary to run the sa2 cron at the same time or as often as the sa1 cron. The sa2 function will create and write the sar file to the /var/log/sa directory.
How often SYSSTAT “wakes up” to record and what data is captured, is determined by your operational needs, regulatory requirements and purposes of the server being monitored. These logs can be rotated to a central logging server and stored for analysis at a later date if desired.

SYSSTAT Configuration

Now that you have the 40,000-foot overview of the components, onward to the nitty gritty of building out your SYSSTAT capabilities. The following is a suggested base configuration. You can tweak it for your environment, but I will step through a traditional setup. My testing environment utilized a SUSE 10 Linux server.
As a sidebar, every Linux-based server I have come across, installed or worked with has the SYSSTAT package deployed as part of the base server setup at installation. I would suggest, however, that you always look at the option to upgrade to the latest version of SYSSTAT. It will offer bug fixes and more performance monitoring elements, such as:
  • Autoconf support added.
  • Addition of a new command ("pidstat") aimed at displaying statistics for processes, threads and their children (CPU, memory, I/O, task switching activity...)
  • Better hotplug CPU support.
  • New VM paging metrics added to sar -B.
  • Added field tcp-tw (number of sockets in TIME_WAIT state) to sar -n SOCK.
  • iostat can now display the registered device name of device-mapper devices.
  • Timestamped comments can now be inserted into data files created by sadc.
  • XML Schema document added. Useful with sadf -x.
  • National Language Support improved: Added Danish, Dutch, Kirghiz, Vietnamese and Brazilian Portuguese translations.
  • Options -x and -X removed from sar. You should now use pidstat instead.
  • Some obsolete fields (super*, dquot* and rtsig*) were removed from sar -v. Added field pty-nr (number of pseudo-terminals).

Create Scheduled SYSSTAT Monitoring

First things first--we need to tell our machine to record sar data. The kernel needs to be aware that it is to run SYSSTAT to collect metrics. Depending upon your distribution, installation creates a basic cron file named sysstat.cron in /etc/sysstat/ for SUSE, or in /etc/cron.d/sysstat for Red Hat. If you use SUSE, it is recommended to create SYSSTAT's cron job as a soft link assignment in /etc/cron.d pointing to the sysstat.cron in /etc/sysstat/. I would also suggest that several gathering times be created based on times when a server has the potential to be more active. This will ensure collection of accurate statistics. What good is it to see that your server is never busy at 2 in the morning? Data collected during off-peak hours would skew later analysis and has the potential to cause erroneous interpretation.
 #Crontab for sysstat
 #Activity reports culled and updated every 10 minutes every day
 */10 *   * * *     root  /usr/lib/sa/sa1
 #Update log sar reports every day at 2330 hours
 30 23 * * *      root  /usr/lib/sa/sa2 -A
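If you went the SUSE route, the soft link itself is a one-liner; a minimal sketch, assuming the stock cron file lives at /etc/sysstat/sysstat.cron:
 # ln -s /etc/sysstat/sysstat.cron /etc/cron.d/sysstat.cron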
After the softlink has been created, restart the cron daemon to allow it to reload the new assignment:
 # rccron restart
Let's use a script to create a sar backup file and offload to a specified location:
 #!/bin/bash
 # Created 04-AUG-09 / Kryptikos

  # Script to back up the sar file, rename it with the current hostname and date, and offload it to a storage facility.

  # Cycle through directory once looking for pertinent sar files.

  for sarname in /var/log/sa/sar*
         do
           # Rename in place: hostname_sarXX_YYYYMMDD.bkup
           mv "$sarname" "/var/log/sa/${HOSTNAME}_$(basename "$sarname")_$(date +%Y%m%d).bkup"
         done

 # This section will need to be modified. It is a place holder for code to offload the designated sar backup
 # file just created to the localhost, a central logging host or database server. This is dependent upon your ops.
 # i.e. scp /var/log/<sarlogname> <username>@<host>:/<desired location>

 <insert some code to handle the transfer via scp / ssh / mv / nc / or other method>

  exit
SYSSTAT will now run and collect the sar log, rename it and then offload it to the location you prefer.
Well that's great you say, but what if you don't have time to comb through 30 days' worth of sar reports and just need a quick snapshot? Another neat thing you can do with this tool is capture real-time statistics of what is going on with your machine. For instance at the command line you can enter:
 # sar -n NFS 5 3
That will have SYSSTAT report all NFS activity at five-second intervals, three times, and report it back to the terminal.
 root@mymachine # sar -n NFS 5 3
 Linux 2.6.18 (mymachine)        08/04/2009

 02:50:39 PM    call/s retrans/s    read/s   write/s  access/s  getatt/s
 02:50:44 PM      0.00      0.00      0.00      0.00      0.00      0.00
 02:50:49 PM      0.00      0.00      0.00      0.00      0.00      0.00
 02:50:54 PM      0.00      0.00      0.00      0.00      0.00      0.00
 Average:         0.00      0.00      0.00      0.00      0.00      0.00
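You can also replay a previous day's recording instead of sampling live. A small sketch, assuming the collector has been writing into /var/log/sa as configured above; this pulls the CPU utilization recorded on the third of the month:
 # sar -u -f /var/log/sa/sa03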
So you have your sar data recording, and you now know how to use it for real-time checking. You are still left with quite a bit of data to comb through. It might be that you don’t need to look through your data until there is a problem and you’d like to track back at what point in time your server started having issues. The next section of the article deals with an advanced configuration for storage of the sar data for later retrieval.

MySQL Database Configuration

If you are running a server that does not carry a heavy user load, it is okay to have the sar data stay local on the box. However, with the large volumes of system performance data that will be collected from a Linux server farm running numerous applications, I would suggest establishing a database for storing the relevant SYSSTAT information. By utilizing a MySQL database, customized data may be reviewed at any time, allowing for the creation of reports, including charts, that are more granular in nature. It would also allow for analysis of cross sections of pertinent SYSSTAT data from multiple servers at one time. This alleviates the need for an administrator/engineer to review individual sar log files line by line while attempting to troubleshoot or identify issues. The use of a database decreases the time required to locate and diagnose root cause(s) of a server issue. This section covers database creation, setup and the methodology to import the recorded logs. It is recommended to install and utilize MySQL version 5.1 or later for utilization of enhanced features and increased performance.

Start the MySQL Daemon

It stands to reason that to run the MySQL daemon you must have already installed MySQL. If you have not installed MySQL, now's the perfect time to pause and grab the latest copy. You should be able to utilize your distribution's package manager to install the database; if not, it is just as easy (and the method I typically use) to simply pull a copy from http://dev.mysql.com/downloads/mysql/5.1.html#downloads. Once you have MySQL installed you can start it with the following command:
 # <location of mysql>/bin/mysqld_safe --user=mysql --local-infile=1
The option --user= tells the daemon to run as user mysql (must be a local POSIX account). It is not recommended to run the MySQL daemon as root. The option --local-infile=1 tells the daemon to enable LOAD DATA LOCAL INFILE, which allows pushing tab-delimited, csv and txt files into a database from a file stored locally on the MySQL server.
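If you would rather not pass these flags on every startup, the same settings can live in the server's option file; a minimal sketch (the /etc/my.cnf path may differ on your distribution):
 # /etc/my.cnf
 [mysqld]
 user         = mysql
 local-infile = 1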

Create the SYSSTAT Database

Getting down to business: now that the database is up and running, it is time to create the infrastructure we want to hang our sar data upon.
  1. Connect to the MySQL server from the command line:
    # <location of mysql>/bin/mysql --user=<username> -p
    Again, the user must exist on the MySQL server; it is not a POSIX user account. The -p prompts for a password to connect to the MySQL server.
  2. From the MySQL prompt check that the database does not already exist:
    mysql > SHOW databases;
  3. Create the database:
    mysql > CREATE DATABASE <name of database>;
  4. Grant privileges on the newly created database to a specified MySQL user account:
    mysql > GRANT ALL ON <name of database>.* TO '<mysql username>'@'localhost';
This grants the specified MySQL user full control over the database, but only when connecting from the localhost the MySQL daemon is running on. If you prefer to access from alternative locations for administrative purposes, execute the additional command:
mysql > GRANT ALL ON <name of database>.* TO '<mysql username>'@'%';
It is possible to control access at a granular level by certain networks or domains. For security, if a database is deployed, I would create a "workhorse" account to perform the upload. This workhorse account would only have privileges on the desired tables. In my case I chose to name my database "sysstat_collection".
Before:
 +------------------------------+
 | Database                     |
 +------------------------------+
 | information_schema           | 
 | menagerie                    | 
 | mysql                        | 
 | test                         | 
 +------------------------------+
 4 rows in set (0.00 sec)
After:
 +------------------------------+
 | Database                     |
 +------------------------------+
 | information_schema           | 
 | menagerie                    | 
 | mysql                        | 
 | sysstat_collection           | 
 | test                         | 
 +------------------------------+
 5 rows in set (0.00 sec)
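As a sketch of the workhorse idea mentioned above, a hypothetical account named sar_loader could be restricted to just what the nightly load needs; note that LOAD DATA requires the INSERT privilege on the target tables:
 mysql > CREATE USER 'sar_loader'@'localhost' IDENTIFIED BY '<password>';
 mysql > GRANT INSERT ON sysstat_collection.* TO 'sar_loader'@'localhost';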

Create the Necessary Tables

We now have our database, but we need to take it one more step. The database has to be "made-ready" to accept incoming sar data. This is done by building tables. Think of tables as a bookcase. Each block (table) will hold books (sar data). The easiest method to insert tables into your database is to create and utilize sql scripts. These scripts can be quickly invoked by the MySQL daemon and pushed inside the database. Each script should have a unique name that ends with the .sql extension. A basic SYSSTAT configuration would require 18 tables. I've written an example sql script for you to use:
Example SQL script: create_cpuutilization_table.sql
 # Created 04-AUG-09 / Kryptikos

  # Drop the CPU UTILIZATION table if it exists, then recreate it.

  DROP TABLE IF EXISTS cpuutilization;

  CREATE TABLE cpuutilization
 (
   hostname    VARCHAR(20),
   datestamp   DATE,
   time        VARCHAR(8),
   cpu         VARCHAR(3),
   pct_user    DECIMAL(10,2),
   pct_nice    DECIMAL(10,2),
   pct_system  DECIMAL(10,2),
   pct_iowait  DECIMAL(10,2),
   pct_steal   DECIMAL(10,2),
   pct_idle    DECIMAL(10,2)
 );
Breaking that down into understandable chunks, CPU utilization is one of the items SYSSTAT will record. SYSSTAT will stamp the kernel, hostname, date, time and then sar value in the string (see the output in the “real-time” example earlier in the article). I know what kernel version I am using so really all I am interested in is hostname (because I capture multiple servers to this database), date, time and cpu elements. Each sar value has its own elements. The quickest way to view this is to use the man pages of sar to see what values sar records.
Thinking back to the database table as a bookshelf, the above values hostname, datestamp, time, cpu, pct_user, pct_nice, etc. are the open shelves. You have to have the shelf before you can place a book. For each book type (sar element) you have, you need a shelf (table column).
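To make the pattern concrete, here is a sketch of a second script for the swapping statistics section; the column names mirror the sar -W fields (pswpin/s and pswpout/s), but check the sar man page for the exact values your version records:
Example SQL script: create_swappingstatistics_table.sql
  # Drop the SWAPPING STATISTICS table if it exists, then recreate it.

  DROP TABLE IF EXISTS swappingstatistics;

  CREATE TABLE swappingstatistics
 (
   hostname    VARCHAR(20),
   datestamp   DATE,
   time        VARCHAR(8),
   pswpin_s    DECIMAL(10,2),
   pswpout_s   DECIMAL(10,2)
 );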

Deploy the Tables into the MySQL Database

The great thing about .sql scripts is they can be invoked directly by the MySQL daemon. It is not necessary to log into and obtain a MySQL prompt from the server. You can simply feed the script file to the daemon which will parse and execute the commands on your behalf. Lather, rinse and repeat for each table you wish to insert into your database, or create a bash script to feed the .sql scripts all at once. The following is the command structure to execute the table creation script:
# /usr/local/mysql/bin/mysql -u <user> -p -D <database> < create_cpuutilization_table.sql
Again, the -u tells the daemon to run the script as the specified MySQL user account (not POSIX). The user must have privileges on the database to allow modification. The -p again prompts for the user password, and the -D specifies which database you want to execute the script contents upon. The redirect sign "<" feeds the script to the daemon. You can bypass the password prompt and stream your password directly in by changing -p to --password=<passwd value>.
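If you would rather feed everything at once, here is a minimal bash sketch (the directory, user and database names are placeholders):
 #!/bin/bash
 # Feed every table-creation script in the directory to the MySQL daemon.
 for script in /path/to/sqlscripts/*.sql
 do
   /usr/local/mysql/bin/mysql -u <user> --password=<passwd> -D <database> < "$script"
 done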

Loading SYSSTAT Logs Into the MySQL Database

Moving right along, once the cron job and backup script have run, it is necessary to format the data from the sar file in order to prepare it for loading into the SYSSTAT database. The elegance of sar is that it loads the data into tabulated columns in one large file and breaks apart sections by blank lines. By utilizing the stream editor (sed) and translate capabilities we can quickly parse the sar file into bite-size pieces ready to load into their respective tables. A few things are worth noting here. I recommend processing sar data before 0000 hours (midnight) to maintain time/date integrity. Second, in preparation I like to stash and load sar data from its own directory source, say /var/log/sysstatd/prepare, or from the /tmp directory. In the environment I work in I have numerous servers reporting and prefer one location where logs are stored for sar.
The script listed below is an example of formatting and uploading data into a database. Variables inside the script should be changed to fulfill operational requirements. Spaces are included to increase legibility in this article:
 #!/bin/bash
 # Created 04-AUG-09 / Kryptikos

 # This script will parse the sar file and prepare/format the data to make it
 # available to upload into a MySQL database. It will then call and upload the
 # data into the selected MySQL database and its respective tables.

 # Set miscellaneous variables needed.
 DATESTAMP=$(date '+%Y-%m-%d')
 WORKDIR=/tmp/sysstatdbprepare
 FMATDIR=/tmp/sysstatdbformatted

 # Begin main.

 # Change into work directory.

 cd $WORKDIR

 # Start preprocessing formatting.

 for file in `dir -d *`;
 do

 # Prepare and format designated hosts' sar log files to be loaded into MYSQL database:

 sed -n "/proc/,/cswch/ p" $file | sed "$ d" | tr -s [:blank:] | sed -n '1h;2,$H;${g;s/ /,/g;p}' | sed '/Average:/ d' | sed "s/^/$HOSTNAME,$DATESTAMP,/" | sed '$d' > "$FMATDIR"/"$file"_taskcreation.csv

 sed -n "/cswch/,/CPU/ p" $file | sed "$ d" | tr -s [:blank:] | sed -n '1h;2,$H;${g;s/ /,/g;p}' | sed '/Average:/ d' | sed "s/^/$HOSTNAME,$DATESTAMP,/" | sed '$d' > "$FMATDIR"/"$file"_systemswitchingactivity.csv

 sed -n "/user/,/INTR/ p" $file | sed "$ d" | tr -s [:blank:] | sed -n '1h;2,$H;${g;s/ /,/g;p}' | sed '/Average:/ d' | sed "s/^/$HOSTNAME,$DATESTAMP,/" | sed '$d' > "$FMATDIR"/"$file"_cpuutilization.csv

 sed -n "/INTR/,/CPU/ p" $file | sed "$ d" | tr -s [:blank:] | sed -n '1h;2,$H;${g;s/ /,/g;p}' | sed '/Average:/ d' | sed "s/^/$HOSTNAME,$DATESTAMP,/" | sed '$d' > "$FMATDIR"/"$file"_irqinterrupts.csv

 sed -n "/i000/,/pswpin/ p" $file | sed "$ d" | tr -s [:blank:] | sed -n '1h;2,$H;${g;s/ /,/g;p}' | sed '/Average:/ d' | sed "s/^/$HOSTNAME,$DATESTAMP,/" | sed '$d' > "$FMATDIR"/"$file"_inputactivityperprocperirq.csv

 sed -n "/pswpin/,/tps/ p" $file | sed "$ d" | tr -s [:blank:] | sed -n '1h;2,$H;${g;s/ /,/g;p}' | sed '/Average:/ d' | sed "s/^/$HOSTNAME,$DATESTAMP,/" | sed '$d' > "$FMATDIR"/"$file"_swappingstatistics.csv

 sed -n "/tps/,/frmpg/ p" $file | sed "$ d" | tr -s [:blank:] | sed -n '1h;2,$H;${g;s/ /,/g;p}' | sed '/Average:/ d' | sed "s/^/$HOSTNAME,$DATESTAMP,/" | sed '$d' > "$FMATDIR"/"$file"_iotransferrate.csv

 sed -n "/frmpg/,/TTY/ p" $file | sed "$ d" | tr -s [:blank:] | sed -n '1h;2,$H;${g;s/ /,/g;p}' | sed '/Average:/ d' | sed "s/^/$HOSTNAME,$DATESTAMP,/" | sed '$d' > "$FMATDIR"/"$file"_memorystatistics.csv

 sed -n "/TTY/,/IFACE/ p" $file | sed "$ d" | tr -s [:blank:] | sed -n '1h;2,$H;${g;s/ /,/g;p}' | sed '/Average:/ d' | sed "s/^/$HOSTNAME,$DATESTAMP,/" | sed '$d' > "$FMATDIR"/"$file"_ttydeviceactivity.csv

 sed -n "/IFACE/,/rxerr/ p" $file | sed "$ d" | tr -s [:blank:] | sed -n '1h;2,$H;${g;s/ /,/g;p}' | sed '/Average:/ d' | sed "s/^/$HOSTNAME,$DATESTAMP,/" | sed '$d' > "$FMATDIR"/"$file"_networkstatistics.csv

 sed -n "/rxerr/,/call/ p" $file | sed "$ d" | tr -s [:blank:] | sed -n '1h;2,$H;${g;s/ /,/g;p}' | sed '/Average:/ d' | sed "s/^/$HOSTNAME,$DATESTAMP,/" | sed '$d' > "$FMATDIR"/"$file"_networkstatisticserrors.csv

 sed -n "/call/,/scall/ p" $file | sed "$ d" | tr -s [:blank:] | sed -n '1h;2,$H;${g;s/ /,/g;p}' | sed '/Average:/ d' | sed "s/^/$HOSTNAME,$DATESTAMP,/" | sed '$d' > "$FMATDIR"/"$file"_networkstatisticsnfsclientactvty.csv

 sed -n "/scall/,/pgpgin/ p" $file | sed "$ d" | tr -s [:blank:] | sed -n '1h;2,$H;${g;s/ /,/g;p}' | sed '/Average:/ d' | sed "s/^/$HOSTNAME,$DATESTAMP,/" | sed '$d' > "$FMATDIR"/"$file"_networkstatisticsnfsserveractvty.csv

 sed -n "/pgpgin/,/kbmemfree/ p" $file | sed "$ d" | tr -s [:blank:] | sed -n '1h;2,$H;${g;s/ /,/g;p}' | sed '/Average:/ d' | sed "s/^/$HOSTNAME,$DATESTAMP,/" | sed '$d' > "$FMATDIR"/"$file"_pagingstatistics.csv

 sed -n "/kbmemfree/,/dentunusd/ p" $file | sed "$ d" | tr -s [:blank:] | sed -n '1h;2,$H;${g;s/ /,/g;p}' | sed '/Average:/ d' | sed "s/^/$HOSTNAME,$DATESTAMP,/" | sed '$d' > "$FMATDIR"/"$file"_memoryswapspaceutilization.csv

 sed -n "/dentunusd/,/totsck/ p" $file | sed "$ d" | tr -s [:blank:] | sed -n '1h;2,$H;${g;s/ /,/g;p}' | sed '/Average:/ d' | sed "s/^/$HOSTNAME,$DATESTAMP,/" | sed '$d' > "$FMATDIR"/"$file"_inodefilekerneltable.csv

 sed -n "/totsck/,/runq-sz/ p" $file | sed "$ d" | tr -s [:blank:] | sed -n '1h;2,$H;${g;s/ /,/g;p}' | sed '/Average:/ d' | sed "s/^/$HOSTNAME,$DATESTAMP,/" | sed '$d' > "$FMATDIR"/"$file"_networkstatisticssocket.csv

 sed -n "/runq-sz/,// p" $file | sed "$ d" | tr -s [:blank:] | sed -n '1h;2,$H;${g;s/ /,/g;p}' | sed '/Average:/ d' | sed "s/^/$HOSTNAME,$DATESTAMP,/" | sed '$d' > "$FMATDIR"/"$file"_queuelengthloadavgs.csv

 done

 # Kick off uploading formatted data into MYSQL database:

 # Change into format directory.

 cd $FMATDIR

 # Pushing data into MYSQL via -e flag.

 for file in `dir -d *`;
 do

 /usr/local/mysql/bin/mysql -u <MySQLuser> --password=<password> -D <database> -e "LOAD DATA LOCAL INFILE '/tmp/sysstatdbformatted/${file}' INTO TABLE `echo $file | sed 's/\.csv//g' | awk -F_ '{print $2}'` FIELDS TERMINATED BY ',' IGNORE 1 LINES;"

 done

 exit

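To honor the process-before-midnight advice above, the formatter/loader script can itself be driven from cron; a hypothetical entry (the script name and path are placeholders):
 #Format and load the day's sar data before midnight
 45 23 * * *      root  /usr/local/bin/sysstat_db_load.sh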
Wrap-Up and Overview

That's it, just a few scripts and a little bit of time to set up automation to inject your data into a database. If everything is syntactically correct in your scripts, you should now be able to log into the MySQL server and see the data loaded into the tables. I tend to use MySQL Administrator (GUI tool) to log in and look at the databases and tables. It is a bit quicker to use for checking.
The following overview points are suggested recommendations to implement SYSSTAT on your Linux server(s):
  • Schedule cron via soft link assignment in /etc/cron.d pointing to the sysstat.cron in /etc/sysstat/.
  • Schedule multiple sa1 function crons: record statistics more often during higher utilization times and less during off-peak hours with respect to server operation.
  • Utilize MYSQL database to store collected data for minimum of 30 - 45 days before purging records and restarting storage process.
  • If database option is not chosen: write sar files to a central logging server and rename with hostname and current date values.
  • If database option is not chosen: store renamed sar files for 25 - 30 days until purged by central logging server.
SYSSTAT can give you a wealth of information as to what is going on with your server. It gives you the chance to watch a historical trend of when your server is getting utilized, how heavy the use is and a host of other empirical data. It will allow you to focus and determine root cause if you suddenly find your server having issues. You are limited only by your imagination as to what you could use it for to complement troubleshooting. If you do end up using a database there are packages out there that will generate pretty graphs for easier interpretation, or you could even scribble up some PHP code and pull up the data via a web browser.
The thing I love about Linux is how I can continue to break things apart, learn how they work and then deploy based on my needs. If you have suggestions as to content, or have questions, please feel free to comment. Either way, I hope this helped a little bit for your environment.

Monday, August 25, 2014

NetApp Commandline Cheatsheet



This is a quick and dirty NetApp commandline cheatsheet of most of the common commands used. It is not exhaustive, so check out the man pages and NetApp documentation. I will be updating this document as I become more familiar with the NetApp application.
Server
Startup and Shutdown
Boot Menu
1) Normal Boot.
2) Boot without /etc/rc.
3) Change password.
4) Clean configuration and initialize all disks.
5) Maintenance mode boot.
6) Update flash from backup config.
7) Install new software first.
8) Reboot node.
Selection (1-8)?
  • Normal Boot - continue with the normal boot operation
  • Boot without /etc/rc - boot with only default options and disable some services
  • Change Password - change the storage system's password
  • Clean configuration and initialize all disks - cleans all disks and resets the filer to factory default settings
  • Maintenance mode boot - file system operations are disabled, limited set of commands
  • Update flash from backup config - restore the configuration information if corrupted on the boot device
  • Install new software first - use this if the filer does not include support for the storage array
  • Reboot node - restart the filer
startup modes
  • boot_ontap - boots the current Data ONTAP software release stored on the boot device
  • boot primary - boots the Data ONTAP release stored on the boot device as the primary kernel
  • boot_backup - boots the backup Data ONTAP release from the boot device
  • boot_diags - boots a Data ONTAP diagnostic kernel
Note: there are other options but NetApp will provide these as and when necessary
shutdown
halt [-t <mins>] [-f]
-t = shutdown after minutes specified
-f = used with HA clustering, means that the partner filer does not take over
restart
reboot [-t <mins>] [-s] [-r] [-f]

-t = reboot in specified minutes
-s = clean reboot but also power cycle the filer (like pushing the off button)
-r = bypasses the shutdown (not clean) and power cycles the filer
-f = used with HA clustering, means that the partner filer does not take over
System Privilege and System shell
Privilege
priv set [-q] [admin | advanced]
Note: by default you are in administrative mode

-q = quiet suppresses warning messages
Access the systemshell
## First obtain the advanced privileges
priv set advanced

## Then unlock and reset the diag users password
useradmin diaguser unlock
useradmin diaguser password

## Now you should be able to access the systemshell and use all the standard Unix
## commands
systemshell
login: diag
password: ********
Licensing and Version
licenses (commandline)
## display licenses
license

## Adding a license
license add <code1> <code2>
## Disabling a license
license delete <service>
Data ONTAP version
version [-b]

-b = include name and version information for the primary, secondary and diagnostic kernels and the firmware
Useful Commands
read the messages file
rdfile /etc/messages
write to a file
wrfile -a <file> <text>

# Examples
wrfile -a /etc/test1 This is line 6 # comment here
wrfile -a /etc/test1 "This is line \"15\"."
System Configuration
General information
sysconfig
sysconfig -v
sysconfig -a (detailed)
Configuration errors
sysconfig -c
Display disk devices
sysconfig -d
sysconfig -A
Display Raid group information
sysconfig -V
Display aggregates and plexes
sysconfig -r
Display tape devices
sysconfig -t
Display tape libraries
sysconfig -m
Environment Information
General information
environment status
Disk enclosures (shelves)
environment shelf [adapter]
environment shelf_power_status
Chassis
environment chassis all
environment chassis list-sensors
environment chassis Fans
environment chassis CPU_Fans
environment chassis Power
environment chassis Temperature
environment chassis [PS1|PS2]
Fibre Channel Information
Fibre Channel stats
fcstat link_status
fcstat fcal_stat
fcstat device_map
SAS Adapter and Expander Information
Shelf information
sasstat shelf
sasstat shelf_short
Expander information
sasstat expander
sasstat expander_map
sasstat expander_phy_state
Disk information
sasstat dev_stats
Adapter information
sasstat adapter_state
Statistical Information
System
stats show system
Processor
stats show processor
Disk
stats show disk
Volume
stats show volume
LUN
stats show lun
Aggregate
stats show aggregate
FC
stats show fcp
iSCSI
stats show iscsi
CIFS
stats show cifs
Network
stats show ifnet
Storage
Storage Commands
Display
storage show adapter
storage show disk [-a|-x|-p|-T]
storage show expander
storage show fabric
storage show fault
storage show hub
storage show initiators
storage show mc
storage show port
storage show shelf
storage show switch
storage show tape [supported]
storage show acp

storage array show
storage array show-ports
storage array show-luns
storage array show-config
Enable
storage enable adapter
Disable
storage disable adapter
Rename switch
storage rename <oldname> <newname>
Remove port
storage array remove-port <array_name> -p <WWPN>
Load Balance
storage load balance
Power Cycle
storage power_cycle shelf -h
storage power_cycle shelf start -c <channel name>
storage power_cycle shelf completed
Disks
Disk Information
Disk name
This is the physical disk itself; normally the disk will reside in a disk enclosure. The disk will have a pathname like 2a.17, depending on the type of disk enclosure:
  • 2a = SCSI adapter
  • 17 = disk SCSI ID
Any disks that are classed as spare will be used in any group to replace failed disks. They can also be assigned to any aggregate. Disks are assigned to a specific pool.
Disk Types
Data
Holds data stored within the RAID group
Spare
Does not hold usable data but is available to be added to a RAID group in an aggregate, also known as a hot spare
Parity
Stores data reconstruction information within the RAID group
dParity
Stores double-parity information within the RAID group, if RAID-DP is enabled
Disk Commands
Display
disk show
disk show <disk_name>

disk_list

sysconfig -r
sysconfig -d
## list all unassigned/assigned disks
disk show -n
disk show -a
Adding (assigning)
## Add a specific disk to pool1 the mirror pool
disk assign <disk_name> -p 1

## Assign all disks to pool 0; by default they are assigned to pool 0 if the "-p"
## option is not specified
disk assign all -p 0
Remove (spin down disk)
disk remove <disk_name>
Reassign
disk reassign -d <new_sysid>
Replace
disk replace start <disk_name> <spare_disk_name>
disk replace stop <disk_name>

Note: uses Rapid RAID Recovery to copy data from the specified file system to the specified spare disk, you can stop this process using the stop command
Zero spare disks
disk zero spares
fail a disk
disk fail <disk_name>
Scrub a disk
disk scrub start
disk scrub stop
Sanitize
disk sanitize start <disk list>
disk sanitize abort <disk_list>
disk sanitize status
disk sanitize release <disk_list>

Note: the release modifies the state of the disk from sanitize to spare. Sanitize requires a license. 
Maintenance
disk maint start -d <disk_list>
disk maint abort <disk_list>
disk maint list
disk maint status

Note: you can test the disk using maintenance mode
swap a disk
disk swap
disk unswap

Note: it stalls all SCSI I/O until you physically replace or add a disk; can be used on SCSI disks only.
Statistics
disk_stat <disk_name>
Simulate a pulled disk
disk simpull <disk_name>
Simulate a pushed disk
disk simpush -l
disk simpush <complete path of disk obtained from above command>

## Example
ontap1> disk simpush -l
The following pulled disks are available for pushing:
                         v0.16:NETAPP__:VD-1000MB-FZ-520:14161400:2104448

ontap1> disk simpush v0.16:NETAPP__:VD-1000MB-FZ-520:14161400:2104448
Aggregates
Aggregate States
Online
Read and write access to volumes is allowed
Restricted
Some operations, such as parity reconstruction, are allowed, but data access is not allowed
Offline
No access to the aggregate is allowed
Aggregate Status Values
32-bit
This aggregate is a 32-bit aggregate
64-bit
This aggregate is a 64-bit aggregate
aggr
This aggregate is capable of containing FlexVol volumes
copying
This aggregate is currently the target aggregate of an active copy operation
degraded
This aggregate contains at least one RAID group with a single disk failure that is not being reconstructed
double degraded
This aggregate contains at least one RAID group with a double disk failure that is not being reconstructed (RAID-DP aggregate only)
foreign
Disks that the aggregate contains were moved to the current storage system from another storage system
growing
Disks are in the process of being added to the aggregate
initializing
The aggregate is in the process of being initialized
invalid
The aggregate contains no volumes and none can be added. Typically this happens only after an aborted "aggr copy" operation
ironing
A WAFL consistency check is being performed on the aggregate
mirror degraded
The aggregate is mirrored and one of its plexes is offline or resynchronizing
mirrored
The aggregate is mirrored
needs check
A WAFL consistency check needs to be performed on the aggregate
normal
The aggregate is unmirrored and all of its RAID groups are functional
out-of-date
The aggregate is mirrored and needs to be resynchronized
partial
At least one disk was found for the aggregate, but two or more disks are missing
raid0
The aggregate consists of RAID 0 (no parity) RAID groups
raid4
The aggregate consists of RAID 4 RAID groups
raid_dp
The aggregate consists of RAID-DP RAID groups
reconstruct
At least one RAID group in the aggregate is being reconstructed
redirect
Aggregate reallocation or file reallocation with the "-p" option has been started on the aggregate; read performance will be degraded
resyncing
One of the mirrored aggregate's plexes is being resynchronized
snapmirror
The aggregate is a SnapMirror replica of another aggregate (traditional volumes only)
trad
The aggregate is a traditional volume and cannot contain FlexVol volumes.
verifying
A mirror operation is currently running on the aggregate
wafl inconsistent
The aggregate has been marked corrupted; contact technical support
Aggregate Commands
Displaying
aggr status
aggr status -r
aggr status <aggregate> [-v]
Check you have spare disks
aggr status -s
Adding (creating)
## Syntax - if no option is specified then the default is used
aggr create <aggr_name> [-f] [-m] [-n] [-t {raid0 |raid4 |raid_dp}] [-r raid_size] [-T disk_type] [-R rpm] [-L] [-B {32|64}] <disk_list>

## create aggregate called newaggr that can have a maximum of 8 RAID groups
aggr create newaggr -r 8 -d 8a.16 8a.17 8a.18 8a.19
## create aggregate called newfastaggr using 20 x 15000rpm disks
aggr create newfastaggr -R 15000 20

## create aggregate called newFCALaggr (note SAS and FC disks may be used)
aggr create newFCALaggr -T FCAL 15
Note:
-f = overrides the default behavior that does not permit disks in a plex to belong to different disk pools
-m = specifies the optional creation of a SyncMirror
-n = displays the results of the command but does not execute it
-r = maximum size (number of disks) of the RAID groups for this aggregate
-T = disk type ATA, SATA, SAS, BSAS, FCAL or LUN
-R = rpm which include 5400, 7200, 10000 and 15000
Remove (destroying)
aggr offline <aggregate>
aggr destroy <aggregate>
Unremoving (undestroying)
aggr undestroy <aggregate>
Rename
aggr rename <old name> <new name>
Increase size
## Syntax
aggr add <aggr_name> [-f] [-n] [-g {raid_group_name | new |all}] <disk_list>

## add an additional disk to aggregate pfvAggr, use "aggr status" to get group name
aggr status pfvAggr -r
aggr add pfvAggr -g rg0 -d v5.25

## Add 4 300GB disk to aggregate aggr1
aggr add aggr1 4@300
offline
aggr offline <aggregate>
online
aggr online <aggregate>
restricted state
aggr restrict <aggregate>
Change an aggregate's options
## to display the aggregate's options
aggr options <aggregate>

## change an aggregate's raid type
aggr options <aggregate> raidtype raid_dp

## change an aggregate's raid size
aggr options <aggregate> raidsize 4
show space usage
aggr show_space <aggregate>
Mirror
aggr mirror <aggregate>
Split mirror
aggr split <aggregate/plex> <new_aggregate>
Copy from one aggregate to another
## Obtain the status
aggr copy status

## Start a copy
aggr copy start <aggregate source> <aggregate destination>

## Abort a copy - obtain the operation number by using "aggr copy status"
aggr copy abort <operation number>

## Throttle the copy 10=full speed, 1=one-tenth full speed
aggr copy throttle <operation number> <throttle speed> 
Scrubbing (parity)
## Media scrub status
aggr media_scrub status
aggr scrub status

## start a scrub operation
aggr scrub start [ aggrname | plexname | groupname ]

## stop a scrub operation
aggr scrub stop [ aggrname | plexname | groupname ]

## suspend a scrub operation
aggr scrub suspend [ aggrname | plexname | groupname ]

## resume a scrub operation
aggr scrub resume [ aggrname | plexname | groupname ]
Note: Starts parity scrubbing on the named online aggregate. Parity scrubbing compares the data disks to the
parity disk(s) in their RAID group, correcting the parity disk’s contents as necessary. If no name is
given, parity scrubbing is started on all online aggregates. If an aggregate name is given, scrubbing is
started on all RAID groups contained in the aggregate. If a plex name is given, scrubbing is started on
all RAID groups contained in the plex.
Look at the following system options:
raid.scrub.duration 360
raid.scrub.enable on
raid.scrub.perf_impact low
raid.scrub.schedule
Verify (mirroring)
## verify status
aggr verify status

## start a verify operation
aggr verify start [ aggrname ]

## stop a verify operation
aggr verify stop [ aggrname ]

## suspend a verify operation
aggr verify suspend [ aggrname ]

## resume a verify operation
aggr verify resume [ aggrname ]
Note: Starts RAID mirror verification on the named online mirrored aggregate. If no name is given, then
RAID mirror verification is started on all online mirrored aggregates. Verification compares the data in
both plexes of a mirrored aggregate. In the default case, all blocks that differ are logged, but no changes
are made.
Media Scrub
aggr media_scrub status

Note: Prints the media scrubbing status of the named aggregate, plex, or group. If no name is given, then
status is printed for all RAID groups currently running a media scrub. The status includes a
percent-complete and whether it is suspended.
Look at the following system options:

raid.media_scrub.enable on
raid.media_scrub.rate 600
raid.media_scrub.spares.enable on
Volumes
Volume States
Online
Read and write access to this volume is allowed.
Restricted
Some operations, such as parity reconstruction, are allowed, but data access is not allowed.
Offline
No access to the volume is allowed.
Volume Status Values
access denied
The origin system is not allowing access. (FlexCache volumes only.)
active redirect
The volume's containing aggregate is undergoing reallocation (with the -p option specified). Read performance may be reduced while the volume is in this state.
connecting
The caching system is trying to connect to the origin system. (FlexCache volumes only.)
copying
The volume is currently the target of an active vol copy or snapmirror operation.
degraded
The volume's containing aggregate contains at least one degraded RAID group that is not being reconstructed after single disk failure.
double degraded
The volume's containing aggregate contains at least one degraded RAID-DP group that is not being reconstructed after double disk failure.
flex
The volume is a FlexVol volume.
flexcache
The volume is a FlexCache volume.
foreign
Disks used by the volume's containing aggregate were moved to the current storage system from another storage system.
growing
Disks are being added to the volume's containing aggregate.
initializing
The volume's containing aggregate is being initialized.
invalid
The volume does not contain a valid file system.
ironing
A WAFL consistency check is being performed on the volume's containing aggregate.
lang mismatch
The language setting of the origin volume was changed since the caching volume was created. (FlexCache volumes only.)
mirror degraded
The volume's containing aggregate is mirrored and one of its plexes is offline or resynchronizing.
mirrored
The volume's containing aggregate is mirrored.
needs check
A WAFL consistency check needs to be performed on the volume's containing aggregate.
out-of-date
The volume's containing aggregate is mirrored and needs to be resynchronized.
partial
At least one disk was found for the volume's containing aggregate, but two or more disks are missing.
raid0
The volume's containing aggregate consists of RAID0 (no parity) groups (array LUNs only).
raid4
The volume's containing aggregate consists of RAID4 groups.
raid_dp
The volume's containing aggregate consists of RAID-DP groups.
reconstruct
At least one RAID group in the volume's containing aggregate is being reconstructed.
redirect
The volume's containing aggregate is undergoing aggregate reallocation or file reallocation with the -p option. Read performance to volumes in the aggregate might be degraded.
rem vol changed
The origin volume was deleted and re-created with the same name. Re-create the FlexCache volume to reenable the FlexCache relationship. (FlexCache volumes only.)
rem vol unavail
The origin volume is offline or has been deleted. (FlexCache volumes only.)
remote nvram err
The origin system is experiencing problems with its NVRAM. (FlexCache volumes only.)
resyncing
One of the plexes of the volume's containing mirrored aggregate is being resynchronized.
snapmirrored
The volume is in a SnapMirror relationship with another volume.
trad
The volume is a traditional volume.
unrecoverable
The volume is a FlexVol volume that has been marked unrecoverable; contact technical support.
unsup remote vol
The origin system is running a version of Data ONTAP that does not support FlexCache volumes or is not compatible with the version running on the caching system. (FlexCache volumes only.)
verifying
RAID mirror verification is running on the volume's containing aggregate.
wafl inconsistent
The volume or its containing aggregate has been marked corrupted; contact technical support.
General Volume Operations (Traditional and FlexVol)
Displaying
vol status
vol status -v (verbose)
vol status -l (display language)
Remove (destroying)
vol offline <vol_name>
vol destroy <vol_name>
Rename
vol rename <old_name> <new_name>
online
vol online <vol_name>
offline
vol offline <vol_name>
restrict
vol restrict <vol_name>
decompress
vol decompress status
vol decompress start <vol_name>
vol decompress stop <vol_name>
Mirroring
vol mirror volname [-n][-v victim_volname][-f][-d <disk_list>]
Note:
Mirrors the currently-unmirrored traditional volume volname, either with the specified set of disks or with the contents of another unmirrored traditional volume victim_volname, which will be destroyed in the process.

The vol mirror command fails if either the chosen volname or victim_volname are flexible volumes. Flexible volumes require that any operations having directly to do with their containing aggregates be handled via the new aggr command suite.
Change language
vol lang <vol_name> <language>
Change maximum number of files
## Display maximum number of files
maxfiles <vol_name>

## Change maximum number of files
maxfiles <vol_name> <max_num_files>
Change root volume
vol options <vol_name> root
Media Scrub
vol media_scrub status [volname|plexname|groupname -s disk-name][-v]

Note: Prints the media scrubbing status of the named aggregate, volume, plex, or group. If no name is given, then
status is printed for all RAID groups currently running a media scrub. The status includes a
percent-complete and whether it is suspended.
Look at the following system options:

raid.media_scrub.enable on
raid.media_scrub.rate 600
raid.media_scrub.spares.enable on
FlexVol Volume Operations (only)
Adding (creating)
## Syntax
vol create vol_name [-l language_code] [-s {volume|file|none}] <aggr_name> size{k|m|g|t}
## Create a 200MB volume using the english character set
vol create newvol -l en aggr1 200M

## Create 50GB flexvol volume
vol create vol1 aggr0 50g
additional disks
## add an additional disk to aggregate flexvol1, use "aggr status" to get group name
aggr status flexvol1 -r
aggr add flexvol1 -g rg0 -d v5.25
Resizing
vol size <vol_name> [+|-] n{k|m|g|t}

## Increase flexvol1 volume by 100MB
vol size flexvol1 + 100m
Automatically resizing
vol autosize vol_name [-m size {k|m|g|t}] [-I size {k|m|g|t}] on

## automatically grow by 10MB increments to a max of 500MB
vol autosize flexvol1 -m 500m -I 10m on
Determine free space and Inodes
df -Ah
df -I
Determine size
vol size <vol_name>
automatic free space preservation
vol options <vol_name> try_first [volume_grow|snap_delete]
Note:
If you specify volume_grow, Data ONTAP attempts to increase the volume's size before deleting any Snapshot copies. Data ONTAP increases the volume size based on specifications you provided using the vol autosize command.

If you specify snap_delete, Data ONTAP attempts to create more free space by deleting Snapshot copies, before increasing the size of the volume. Data ONTAP deletes Snapshot copies based on the specifications you provided using the snap autodelete command.
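For instance, a minimal usage sketch (the volume name is hypothetical):

## try growing flexvol1 before deleting any Snapshot copies
vol options flexvol1 try_first volume_grow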
display a FlexVol volume's containing aggregate
vol container <vol_name>
Cloning
vol clone create clone_vol [-s none|file|volume] -b parent_vol [parent_snap]

vol clone split start
vol clone split stop
vol clone split estimate
vol clone split status
Note: The vol clone create command creates a flexible volume named clone_vol on the local filer that is a clone of a "backing" flexible volume named parent_vol. A clone is a volume that is a writable snapshot of another volume. Initially, the clone and its parent share the same storage; more storage space is consumed only as one volume or the other changes.
Copying
vol copy start [-S|-s snapshot] <vol_source> <vol_destination>
vol copy status

vol copy abort <operation number>
vol copy throttle <operation_number> <throttle value 10-1>
## Example - Copies the nightly snapshot named nightly.1 on volume vol0 on the local filer
## to the volume vol0 on the remote filer named toaster1.
vol copy start -s nightly.1 vol0 toaster1:vol0
Note: Copies all data, including snapshots, from one volume to another. If the -S flag is used, the command copies all snapshots in the source volume to the destination volume. To specify a particular snapshot to copy, use the -s flag followed by the name of the snapshot. If neither the -S nor -s flag is used in the command, the filer automatically creates a distinctively-named snapshot at the time the vol copy start command is executed and copies only that snapshot to the destination volume.

The source and destination volumes must either both be traditional volumes or both be flexible volumes. The vol copy command will abort if an attempt is made to copy between different volume types.

The source and destination volumes can be on the same filer or on different filers. If the source or destination volume is on a filer other than the one on which the vol copy start command was entered, specify the volume name in the filer_name:volume_name format.
Traditional Volume Operations (only)
adding (creating)
vol|aggr create vol_name -v [-l language_code] [-f] [-m] [-n] [-v] [-t {raid4|raid_dp}] [-r raidsize] [-T disk-type] [-R rpm] [-L] disk-list

## create traditional volume using aggr command
aggr create tradvol1 -l en -t raid4 -d v5.26 v5.27

## create traditional volume using vol command
vol create tradvol1 -l en -t raid4 -d v5.26 v5.27

## Create traditional volume using 20 disks, each RAID group can have 10 disks
vol create vol1 -r 10 20
additional disks
vol add volname [-f][-n][-g <raidgroup>] { ndisks[@size] | -d <disk_list> }

## add another disk to the already existing traditional volume
vol add tradvol1 -d v5.28
splitting
aggr split <volname/plexname> <new_volname>
Scrubbing (parity)
## The newer "aggr scrub" command is preferred

vol scrub status [volname|plexname|groupname][-v]

vol scrub start [volname|plexname|groupname][-v]
vol scrub stop [volname|plexname|groupname][-v]

vol scrub suspend [volname|plexname|groupname][-v]
vol scrub resume [volname|plexname|groupname][-v]

Note: Print the status of parity scrubbing on the named traditional volume, plex or RAID group. If no name is provided, the status is given on all RAID groups currently undergoing parity scrubbing. The status includes a percent-complete as well as the scrub’s suspended status (if any). 
Verify (mirroring)
## The newer "aggr verify" command is preferred

## verify status
vol verify status

## start a verify operation
vol verify start [ aggrname ]

## stop a verify operation
vol verify stop [ aggrname ]

## suspend a verify operation
vol verify suspend [ aggrname ]

## resume a verify operation
vol verify resume [ aggrname ]
Note: Starts RAID mirror verification on the named online mirrored aggregate. If no name is given, then
RAID mirror verification is started on all online mirrored aggregates. Verification compares the data in
both plexes of a mirrored aggregate. In the default case, all blocks that differ are logged, but no changes
are made.
FlexCache Volumes
FlexCache Consistency
Delegations
You can think of a delegation as a contract between the origin system and the caching volume; as long as the caching volume has the delegation, the file has not changed. Delegations are used only in certain situations.

When data from a file is retrieved from the origin volume, the origin system can give a delegation for that file to the caching volume. Before that file is modified on the origin volume, whether due to a request from another caching volume or due to direct client access, the origin system revokes the delegation for that file from all caching volumes that have that delegation.
Attribute cache timeouts
When data is retrieved from the origin volume, the file that contains that data is considered valid in the FlexCache volume as long as a delegation exists for that file. If no delegation exists, the file is considered valid for a certain length of time, specified by the attribute cache timeout.

If a client requests data from a file for which there are no delegations, and the attribute cache timeout has been exceeded, the FlexCache volume compares the file attributes of the cached file with the attributes of the file on the origin system.
write operation proxy
If a client modifies a file that is cached, that operation is passed back, or proxied through, to the origin system, and the file is ejected from the cache.

When the write is proxied, the attributes of the file on the origin volume are changed. This means that when another client requests data from that file, any other FlexCache volume that has that data cached will re-request the data after the attribute cache timeout is reached.
FlexCache Status Values
access denied
The origin system is not allowing FlexCache access. Check the setting of the flexcache.access option on the origin system.
connecting
The caching system is trying to connect to the origin system.
lang mismatch
The language setting of the origin volume was changed since the FlexCache volume was created.
rem vol changed
The origin volume was deleted and re-created with the same name. Re-create the FlexCache volume to reenable the FlexCache relationship.
rem vol unavail
The origin volume is offline or has been deleted.
remote nvram err
The origin system is experiencing problems with its NVRAM.
unsup remote vol
The origin system is running a version of Data ONTAP that either does not support FlexCache volumes or is not compatible with the version running on the caching system.
FlexCache Commands
Display
vol status
vol status -v <flexcache_name>

## How to display the options available and what they are set to
vol help options
vol options <flexcache_name>
Display free space
df -L
Adding (Create)
## Syntax
vol create <flexcache_name> <aggr> [size{k|m|g|t}] -S origin:source_vol

## Create a FlexCache volume called flexcache1 with autogrow in aggr1 aggregate with the source volume vol1
## on storage netapp1 server
vol create flexcache1 aggr1 -S netapp1:vol1
Removing (destroy)
vol offline <flexcache_name>
vol destroy <flexcache_name>
Automatically resizing
vol options <flexcache_name> flexcache_autogrow [on|off]
Eject file from cache
flexcache eject <path> [-f]
Statistics
## Client stats
flexcache stats -C <flexcache_name>

## Server stats
flexcache stats -S <volume_name> -c <client>

## File stats
flexcache fstat <path>
FlexClone Volumes
FlexClone Commands
Display
vol status
vol status <flexclone_name> -v

df -Lh
adding (create)
## Syntax
vol clone create clone_name [-s {volume|file|none}] -b parent_name [parent_snap]

## create a flexclone called flexclone1 from the parent flexvol1
vol clone create flexclone1 -b flexvol1
Removing (destroy)
vol offline <flexclone_name>
vol destroy <flexclone_name>
splitting
## Determine the free space required to perform the split
vol clone split estimate <flexclone_name>

## Double check you have the space
df -Ah

## Perform the split
vol clone split start <flexclone_name>

## Check up on its status
vol clone split status <flexclone_name>

## Stop the split
vol clone split stop <flexclone_name>
log file
/etc/log/clone

The clone log file records the following information:
• Cloning operation ID
• The name of the volume in which the cloning operation was performed
• Start time of the cloning operation
• End time of the cloning operation
• Parent file/LUN and clone file/LUN names
• Parent file/LUN ID
• Status of the clone operation: successful, unsuccessful, or stopped and some other details
Deduplication
Deduplication Commands
start/restart deduplication operation
sis start -s <path>

sis start -s /vol/flexvol1

## Use previous checkpoint
sis start -sp <path>
stop deduplication operation
sis stop <path>
schedule deduplication
sis config -s <schedule> <path>

sis config -s mon-fri@23 /vol/flexvol1

Note: schedule lists the days and hours of the day when deduplication runs. The schedule can be of the following forms:
  • day_list[@hour_list]
    If hour_list is not specified, deduplication runs at midnight on each scheduled day.
  • hour_list[@day_list]
    If day_list is not specified, deduplication runs every day at the specified hours.
  • -
    A hyphen (-) disables deduplication operations for the specified FlexVol volume.
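A couple of sketches of the forms above (the volume name is hypothetical):

## day_list@hour_list form - run Monday to Friday at 11pm
sis config -s mon-fri@23 /vol/flexvol1

## hour_list form - run at midnight and noon every day
sis config -s 0,12 /vol/flexvol1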
enabling
sis on <path>
disabling
sis off <path>
status
sis status -l <path>
Display saved space
df -s <path>
QTrees
QTree Commands
Display
qtree status [-i] [-v]

Note:
The -i option includes the qtree ID number in the display.
The -v option includes the owning vFiler unit, if the MultiStore license is enabled.
adding (create)
## Syntax - by default the wafl.default_qtree_mode option is used
qtree create path [-m mode]

## create a news qtree in the /vol/users volume using 770 as permissions
qtree create /vol/users/news -m 770
Remove
rm -Rf <directory>
Rename
mv <old_name> <new_name>
convert a directory into a qtree directory
## Move the directory to a different directory
mv /n/joel/vol1/dir1 /n/joel/vol1/olddir

## Create the qtree
qtree create /n/joel/vol1/dir1

## Move the contents of the old directory back into the new QTree
mv /n/joel/vol1/olddir/* /n/joel/vol1/dir1

## Remove the old directory name
rmdir /n/joel/vol1/olddir
Stats:
qtree stats [-z] [vol_name]

Note:
-z = zero stats
Change the security style:
## Syntax
qtree security path {unix | ntfs | mixed}
## Change the security style of /vol/users/docs to mixed
qtree security /vol/users/docs mixed
Quotas
Quota Commands
Quotas configuration file:
/mroot/etc/quotas
Example quota file:
##                                           hard limit | thres |soft limit
##Quota Target       type                    disk  files| hold  |disk  file
##-------------      -----                   ----  -----  ----- ----- ----
*                    tree@/vol/vol0           -     -      -     -     -     # monitor usage on all qtrees in vol0
/vol/vol2/qtree      tree                    1024K  75k    -     -     -     # enforce qtree quota using kb
tinh                 user@/vol/vol2/qtree1   100M   -      -     -     -     # enforce users quota in specified qtree
dba                  group@/vol/ora/qtree1   100M   -      -     -     -     # enforce group quota in specified qtree

# * = default user/group/qtree 
# - = placeholder, no limit enforced, just enable stats collection

Note: there are many permutations, so check out the documentation
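
A minimal workflow sketch tying the quotas file to the commands below (volume and qtree names are assumed from the example file above):

# append a rule to the quotas file, activate quotas on the volume, then verify
wrfile -a /mroot/etc/quotas "tinh  user@/vol/vol2/qtree1  100M  -  -  -  -"
quota on vol2
quota report /vol/vol2/qtree1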
Displaying:
quota report [<path>]
Activating:
quota on [-w] <vol_name>

Note:
-w = return only after the entire quotas file has been scanned 
Deactivating:
quota off [-w] <vol_name>
Reinitializing:
quota off [-w] <vol_name>
quota on [-w] <vol_name>
Resizing:
quota resize <vol_name>

Note: this command rereads the quotas file
Deleting:
edit the quotas file, then run:

quota resize <vol_name>
Log messaging:
quota logmsg
LUNs, igroups and LUN mapping
LUN configuration
Display:
lun show
lun show -m
lun show -v
Initialize/configure LUNs and mapping:
lun setup
Note: follow the prompts to create and configure LUNs
Create:
lun create -s 100m -t windows /vol/tradvol1/lun1
Destroy:
lun destroy [-f] /vol/tradvol1/lun1
Note: the "-f" flag forces the destroy
Resize:
lun resize <lun_path> <size>
lun resize /vol/tradvol1/lun1 75m
Restart block protocol access:
lun online /vol/tradvol1/lun1
Stop block protocol access:
lun offline /vol/tradvol1/lun1
Map a LUN to an initiator group:
lun map /vol/tradvol1/lun1 win_hosts_group1 0
lun map -f /vol/tradvol1/lun2 linux_host_group1 1

lun show -m
Note: use "-f" to force the mapping
Remove LUN mapping:
lun show -m
lun offline /vol/tradvol1/lun1
lun unmap /vol/tradvol1/lun1 win_hosts_group1 0
Display or zero read/write statistics for a LUN:
lun stats /vol/tradvol1/lun1
Comments:
lun comment /vol/tradvol1/lun1 "10GB for payroll records"
Check all LUN/igroup/FCP settings for correctness:
lun config_check -v
Manage LUN cloning:
# Create a Snapshot copy of the volume containing the LUN to be cloned
snap create tradvol1 tradvol1_snapshot_08122010
# Create the LUN clone
lun clone create /vol/tradvol1/clone_lun1 -b /vol/tradvol1/lun1 tradvol1_snapshot_08122010
Show the maximum possible size of a LUN on a given volume or qtree:
lun maxsize /vol/tradvol1
Move (rename) a LUN:
lun move /vol/tradvol1/lun1 /vol/tradvol1/windows_lun1
Display/change LUN serial number:
lun serial -x /vol/tradvol1/lun1
Manage LUN properties:
lun set reservation /vol/tradvol1/hpux/lun0
Configure NAS file-sharing properties:
lun share <lun_path> {none | read | write | all}
Manage LUN and snapshot interactions:
lun snap usage -s <volume> <snapshot>
igroup configuration
Display:
igroup show
igroup show -v
igroup show iqn.1991-05.com.microsoft:xblade
Create (iSCSI):
igroup create -i -t windows win_hosts_group1 iqn.1991-05.com.microsoft:xblade
Create (FC):
igroup create -f -t windows win_hosts_group1 <initiator_wwpn>
Destroy:
igroup destroy win_hosts_group1
Add initiators to an igroup:
igroup add win_hosts_group1 iqn.1991-05.com.microsoft:laptop
Remove initiators from an igroup:
igroup remove win_hosts_group1 iqn.1991-05.com.microsoft:laptop
Rename:
igroup rename win_hosts_group1 win_hosts_group2
Set OS type:
igroup set win_hosts_group1 ostype windows
Enabling ALUA:
igroup set win_hosts_group1 alua yes

Note: ALUA defines a standard set of SCSI commands for discovering and managing multiple paths to LUNs on Fibre Channel and iSCSI SANs. ALUA enables the initiator to query the target about path attributes, such as primary path and secondary path. It also enables the target to communicate events back to the initiator. As long as the host supports the ALUA standard, multipathing software can be developed to support any array. Proprietary SCSI commands are no longer required.
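
A short sketch of verifying and enabling ALUA on an existing igroup (igroup name reused from the examples above):

# check whether ALUA is enabled on the igroup
igroup show -v win_hosts_group1
# enable it if required
igroup set win_hosts_group1 alua yes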
iSCSI commands
Display:
iscsi initiator show
iscsi session show [-t]
iscsi connection show -v
iscsi security show
Status:
iscsi status
Start:
iscsi start
Stop:
iscsi stop
Stats:
iscsi stats
Nodename:
iscsi nodename

# to change the name
iscsi nodename <new_name>
Interfaces:
iscsi interface show

iscsi interface enable e0b
iscsi interface disable e0b
Portals:
iscsi portal show

Note: Use the iscsi portal show command to display the target IP addresses of the storage system. The storage system's target IP addresses are the addresses of the interfaces used for the iSCSI protocol.
Access lists:
iscsi interface accesslist show

Note: you can add or remove interfaces from the list
Port Sets
Display:
portset show
portset show portset1
igroup show linux-igroup1
Create:
portset create -f portset1 SystemA:4b
Destroy:
igroup unbind linux-igroup1 portset1
portset destroy portset1
Add:
portset add portset1 SystemB:4b
Remove:
portset remove portset1 SystemB:4b
Binding:
igroup bind linux-igroup1 portset1
igroup unbind linux-igroup1 portset1
FCP service
Display:
fcp show adapter -v
Daemon status:
fcp status
Start:
fcp start
Stop:
fcp stop
Stats:
fcp stats -i <interval> [-c count] [-a | adapter]
fcp stats -i 1
Target expansion adapters:
fcp config <adapter> [down|up]

fcp config 4a down
Target adapter speed:
fcp config <adapter> speed [auto|1|2|4|8]
fcp config 4a speed 8
Set WWPN #:
fcp portname set [-f] <adapter> <wwpn>
fcp portname set -f 1b 50:0a:09:85:87:09:68:ad
Swap WWPN #:
fcp portname swap [-f] <adapter1> <adapter2>
fcp portname swap -f 1a 1b
Change WWNN:
# display the nodename
fcp nodename

fcp nodename [-f] <nodename>
fcp nodename 50:0a:09:80:82:02:8d:ff
Note: The WWNN of a storage system is generated by a serial number in its NVRAM, but it is stored on disk. If you ever replace a storage system chassis and reuse it in the same Fibre Channel SAN, it is possible, although extremely rare, that the WWNN of the replaced storage system is duplicated. In this unlikely event, you can change the WWNN of the storage system.
WWPN aliases - display:
fcp wwpn-alias show
fcp wwpn-alias show -a my_alias_1
fcp wwpn-alias show -w 10:00:00:00:c9:30:80:2f
WWPN aliases - create:
fcp wwpn-alias set [-f] <alias> <wwpn>

fcp wwpn-alias set my_alias_1 10:00:00:00:c9:30:80:2f
WWPN aliases - remove:
fcp wwpn-alias remove [-a alias ... | -w wwpn]
fcp wwpn-alias remove -a my_alias_1
fcp wwpn-alias remove -w 10:00:00:00:c9:30:80:2f
Snapshotting and Cloning
Snapshot and Cloning commands
Display clones:
snap list
Create clone:
# Create a LUN
lun create -s 10g -t solaris /vol/tradvol1/lun1
# Create a Snapshot copy of the volume containing the LUN to be cloned
snap create tradvol1 tradvol1_snapshot_08122010
# Create the LUN clone
lun clone create /vol/tradvol1/clone_lun1 -b /vol/tradvol1/lun1 tradvol1_snapshot_08122010
Destroy clone:
# Display the Snapshot copies used by the clone
lun snap usage tradvol1 tradvol1_snapshot_08122010
# Delete all the LUNs in the active file system that are displayed by the lun snap usage command
lun destroy /vol/tradvol1/clone_lun1
# Delete all the Snapshot copies that are displayed by the lun snap usage command, in the order they appear
snap delete tradvol1 tradvol1_snapshot_08122010
Clone dependency:
vol options <vol_name> snapshot_clone_dependency on
vol options <vol_name> snapshot_clone_dependency off
Note: Prior to Data ONTAP 7.3, the system automatically locked all backing Snapshot copies when Snapshot copies of LUN clones were taken. Starting with Data ONTAP 7.3, you can enable the system to only lock backing Snapshot copies for the active LUN clone. If you do this, when you delete the active LUN clone, you can delete the base Snapshot copy without having to first delete all of the more recent backing Snapshot copies.

This behavior is not enabled by default; use the snapshot_clone_dependency volume option to enable it. If this option is set to off, you will still be required to delete all subsequent Snapshot copies before deleting the base Snapshot copy. If you enable this option, you are not required to rediscover the LUNs. If you perform a subsequent volume snap restore operation, the system restores whichever value was present at the time the Snapshot copy was taken.
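
A minimal sketch of the workflow this option enables (volume, clone, and snapshot names are assumed from the examples above):

# lock only the active clone's backing Snapshot copy
vol options tradvol1 snapshot_clone_dependency on
# after the clone is destroyed, the base Snapshot copy can be deleted directly
lun destroy /vol/tradvol1/clone_lun1
snap delete tradvol1 tradvol1_snapshot_08122010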
Restoring a snapshot:
snap restore -s payroll_lun_backup.2 -t vol /vol/payroll_lun
Splitting the clone:
lun clone split start <lun_path>

lun clone split status <lun_path>
Stop clone splitting:
lun clone split stop <lun_path>
Delete snapshot copy:
snap delete <vol_name> <snapshot_name>

snap delete -a -f <vol-name>
Disk space usage:
lun snap usage tradvol1 mysnap
Use volume copy to copy LUNs:
vol copy start -S source:source_volume dest:dest_volume

vol copy start -S /vol/vol0 filerB:/vol/vol1
The estimated rate of change of data between Snapshot copies in a volume:
snap delta /vol/tradvol1 tradvol1_snapshot_08122010
The estimated amount of space freed if you delete the specified Snapshot copies:
snap reclaimable /vol/tradvol1 tradvol1_snapshot_08122010
File Access using NFS
Export Options
actual=<path>     - Specifies the actual file system path corresponding to the exported file system path.
anon=<uid>|<name> - Specifies the effective user ID (or name) of all anonymous or root NFS client users that access the file system path.
nosuid            - Disables setuid and setgid executables and mknod commands on the file system path.
ro | ro=clientid  - Specifies which NFS clients have read-only access to the file system path.
rw | rw=clientid  - Specifies which NFS clients have read-write access to the file system path.
root=clientid     - Specifies which NFS clients have root access to the file system path. If you specify the root= option, you must specify at least one NFS client identifier. To exclude NFS clients from the list, prepend the NFS client identifiers with a minus sign (-).
sec=sectype       - Specifies the security types that an NFS client must support to access the file system path. To apply the security types to all types of access, specify the sec= option once. To apply the security types to specific types of access (anonymous, non-super user, read-only, read-write, or root), specify the sec= option at least twice, once before each access type to which it applies (anon, nosuid, ro, rw, or root, respectively).
The security type can be one of the following:
none  - No security. Data ONTAP treats all of the NFS client's users as anonymous users.
sys   - Standard UNIX (AUTH_SYS) authentication. Data ONTAP checks the NFS credentials of all of the NFS client's users, applying the file access permissions specified for those users in the NFS server's /etc/passwd file. This is the default security type.
krb5  - Kerberos(tm) Version 5 authentication. Data ONTAP uses data encryption standard (DES) key encryption to authenticate the NFS client's users.
krb5i - Kerberos(tm) Version 5 integrity. In addition to authenticating the NFS client's users, Data ONTAP uses message authentication codes (MACs) to verify the integrity of the NFS client's remote procedure requests and responses, thus preventing "man-in-the-middle" tampering.
krb5p - Kerberos(tm) Version 5 privacy. In addition to authenticating the NFS client's users and verifying data integrity, Data ONTAP encrypts NFS arguments and results to provide privacy.

Examples:
rw=10.45.67.0/24
ro,root=@trusted,rw=@friendly
rw,root=192.168.0.80,nosuid
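
As a sketch of the multi-sec= form described above (values assumed, not from the original), an export that grants read-only access over AUTH_SYS but read-write access only with Kerberos v5:

sec=sys,ro,sec=krb5,rw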
Export Commands
Displaying:
exportfs
exportfs -q <path>
Create:
# create export in memory and write to /etc/exports (use default options)
exportfs -p /vol/nfs1

# create export in memory only using specific options
exportfs -io sec=none,rw,root=192.168.0.80,nosuid /vol/nfs1
Remove:
# Memory only
exportfs -u <path>

# Memory and /etc/exports
exportfs -z <path>
Export all:
exportfs -a
Check access:
exportfs -c 192.168.0.80 /vol/nfs1
Flush:
exportfs -f
exportfs -f <path>
Reload:
exportfs -r
Storage path:
exportfs -s <path>
Write exports to a file:
exportfs -w <path/export_file>
Fencing:
# Suppose /vol/vol0 is exported with the following export options:

   -rw=pig:horse:cat:dog,ro=duck,anon=0

# The following command enables fencing of cat from /vol/vol0
exportfs -b enable save cat /vol/vol0

# cat moves to the front of the ro= list for /vol/vol0:

   -rw=pig:horse:dog,ro=cat:duck,anon=0
Stats:
nfsstat

File Access using CIFS
Useful CIFS options
Change the security style:
options wafl.default_security_style {ntfs | unix | mixed}
Timeout:
options cifs.idle_timeout time
Performance:
options cifs.oplocks.enable on

Note: Under some circumstances, if a process has an exclusive oplock on a file and a second process attempts to open the file, the first process must invalidate cached data and flush writes and locks. The client must then relinquish the oplock and access to the file. If there is a network failure during this flush, cached write data might be lost.
CIFS Commands
Useful files:
/etc/cifsconfig_setup.cfg
/etc/usermap.cfg
/etc/passwd
/etc/cifsconfig_share.cfg


Note: use "rdfile" to read the file
CIFS setup:
cifs setup

Note: you will be prompted to answer a number of questions based on what requirements you need.
Start:
cifs restart
Stop:
cifs terminate

# terminate a specific client
cifs terminate <client_name>|<IP Address>
Sessions:
cifs sessions
cifs sessions <user>
cifs sessions <IP Address>

# Authentication
cifs sessions -t

# Changes
cifs sessions -c

# Security Info
cifs sessions -s
Broadcast message:
cifs broadcast * "message"
cifs broadcast <client_name> "message"
Permissions:
cifs access <share> <user|group> <permission>

# Examples
cifs access sysadmins -g wheel "Full Control"
cifs access -delete releases ENGINEERING\mary
Note: rights can be Unix-style combinations of r w x - or NT-style "No Access", "Read", "Change", and "Full Control"
Stats:
cifs stat <interval>
cifs stat <user>
cifs stat <IP Address>
Create a share:
# create a volume in the normal way
# then using qtrees set the style of the volume {ntfs | unix | mixed}
# Now you can create your share
cifs shares -add TEST /vol/flexvol1/TEST -comment "Test Share " -forcegroup workgroup -maxusers 100
Change share characteristics:
cifs shares -change sharename {-browse | -nobrowse} {-comment desc | -nocomment} {-maxusers userlimit | -nomaxusers} {-forcegroup groupname | -noforcegroup} {-widelink | -nowidelink} {-symlink_strict_security | -nosymlink_strict_security} {-vscan | -novscan} {-vscanread | -novscanread} {-umask mask | -noumask} {-no_caching | -manual_caching | -auto_document_caching | -auto_program_caching}

# example
cifs shares -change <sharename> -novscan
Home directories:
# Display home directories
cifs homedir

# Add a home directory
wrfile -a /etc/cifs_homedir.cfg /vol/TEST

# check it
rdfile /etc/cifs_homedir.cfg

# Display for a Windows Server
net view \\<Filer IP Address>

# Connect
net use * \\192.168.0.75\TEST

Note: make sure the directory exists
Domain controller:
# add a domain controller
cifs prefdc add lab 10.10.10.10 10.10.10.11
# delete a domain controller
cifs prefdc delete lab

# List domain information
cifs domaininfo
# List the preferred controllers
cifs prefdc print

# Re-establishing the connection
cifs resetdc
Change the filer's domain password:
cifs changefilerpwd
Tracing permission problems:
sectrace add [-ip ip_address] [-ntuser nt_username] [-unixuser unix_username] [-path path_prefix] [-a]
# Examples
sectrace add -ip 192.168.10.23
sectrace add -unixuser foo -path /vol/vol0/home4 -a
# To remove
sectrace delete all
sectrace delete <index>

# Display tracing
sectrace show

# Display error code status
sectrace print-status <status_code>
sectrace print-status 1:51544850432:32:78 

File Access using FTP
Useful Options
Enable:
options ftpd.enable on
Disable:
options ftpd.enable off
File locking:
options ftpd.locking delete
options ftpd.locking none

Note: To prevent users from modifying files while the FTP server is transferring them, you can enable FTP file locking. Otherwise, you can disable FTP file locking. By default, FTP file locking is disabled.
Authentication style:
options ftpd.auth_style {unix | ntlm | mixed}
Bypassing of FTP traverse checking:
options ftpd.bypass_traverse_checking on
options ftpd.bypass_traverse_checking off

Note: If the ftpd.bypass_traverse_checking option is set to off, when a user attempts to access a file using FTP, Data ONTAP checks the traverse (execute) permission for all directories in the path to the file. If any of the intermediate directories does not have the "x" (traverse) permission, Data ONTAP denies access to the file. If the ftpd.bypass_traverse_checking option is set to on, when a user attempts to access a file, Data ONTAP does not check the traverse permission for the intermediate directories when determining whether to grant or deny access to the file.
Restricting FTP users to a specific directory:
options ftpd.dir.restriction on
options ftpd.dir.restriction off
Restricting FTP users to their home directories or a default directory:
options ftpd.dir.override ""
Maximum number of connections:
options ftpd.max_connections <n>
options ftpd.max_connections_threshold <n>
Idle timeout value:
options ftpd.idle_timeout <n>{s|m|h}
Anonymous logins:
options ftpd.anonymous.enable on
options ftpd.anonymous.enable off

# specify the name for the anonymous login
options ftpd.anonymous.name username

# create the directory for the anonymous login
options ftpd.anonymous.home_dir homedir
FTP Commands
Log files:
/etc/log/ftp.cmd
/etc/log/ftp.xfer

# specify the max number of logfiles (default is 6) and size
options ftpd.log.nfiles 10
options ftpd.log.filesize 1G

Note: use rdfile to view
Restricting access:
/etc/ftpusers

Note: use rdfile and wrfile to access /etc/ftpusers
Stats:
ftp stat

# to reset
ftp stat -z
File Access using HTTP
HTTP Options
Enable:
options httpd.enable on
Disable:
options httpd.enable off
Enabling or disabling the bypassing of HTTP traverse checking:
options httpd.bypass_traverse_checking on
options httpd.bypass_traverse_checking off

Note: this is similar to the FTP version
Root directory:
options httpd.rootdir /vol0/home/users/pages
Host access:
options httpd.access host=Host1 AND if=e3
options httpd.admin.access host!=Host1
HTTP Commands
Log files:
/etc/log/httpd.log

# use the below to change the logfile format
options httpd.log.format alt1

Note: use rdfile to view
Redirects:
redirect /cgi-bin/* http://cgi-host/*
Pass rule:
pass /image-bin/*
Fail rule:
fail /usr/forbidden/*
MIME types:
/etc/httpd.mimetypes

Note: use rdfile and wrfile to edit
Interface firewall:
ifconfig f0 untrusted
Stats:
httpstat [-dersta]

# reset the stats
httpstat -z[derta]
Network Interfaces
Display:
ifconfig -a
ifconfig <interface>
IP address:
ifconfig e0 <IP Address>
ifconfig e0a <IP Address>

# Remove a IP Address
ifconfig e3 0
Subnet mask:
ifconfig e0a netmask <subnet_mask_address>
Broadcast:
ifconfig e0a broadcast <broadcast_address>
Media type:
ifconfig e0a mediatype 100tx-fd
Maximum transmission unit (MTU):
ifconfig e8 mtusize 9000
Flow control:
ifconfig <interface_name> flowcontrol <value>

# example
ifconfig e8 flowcontrol none
Note: value is the flow control type. You can specify the following values for the flowcontrol option:

none    - No flow control
receive - Able to receive flow control frames
send    - Able to send flow control frames
full    - Able to send and receive flow control frames

The default flowcontrol type is full.
Trusted/untrusted:
ifconfig e8 untrusted

Note: You can specify whether a network interface is trustworthy or untrustworthy. When you specify an interface as untrusted (untrustworthy), any packets received on the interface are likely to be dropped.
HA pair:
ifconfig e8 partner <IP Address>

## You must enable takeover on interface failures by entering the following commands:
options cf.takeover.on_network_interface_failure enable
ifconfig interface_name {nfo|-nfo}
nfo   — Enables negotiated failover
-nfo  — Disables negotiated failover
Note: In an HA pair, you can assign a partner IP address to a network interface. The network interface takes over this IP address when a failover occurs
Alias:
# Create alias
ifconfig e0 alias 192.0.2.30

# Remove alias
ifconfig e0 -alias 192.0.2.30
Block/unblock protocols:
# Block
options interface.blocked.cifs e9
options interface.blocked.cifs e0a,e0b

# Unblock
options interface.blocked.cifs ""
Stats:
ifstat
netstat

Note: there are many options to both of these commands, so I will leave them to the man pages
Bring an interface up/down:
ifconfig <interface> up
ifconfig <interface> down
Routing
Default route:
# using wrfile and rdfile, edit the /etc/rc file with the line below
route add default 192.168.0.254 1

# the full /etc/rc file will look something like this:
hostname netapp1
ifconfig e0 192.168.0.10 netmask 255.255.255.0 mediatype 100tx-fd
route add default 192.168.0.254 1
routed on
Enable/disable fast path:
options ip.fastpath.enable {on|off}

Note:
on   — Enables fast path
off  — Disables fast path
Enable/disable routing daemon:
routed {on|off}

Note:
on   — Turns on the routed daemon
off  — Turns off the routed daemon
Display routing table:
netstat -rn
route -s
routed status
Add to routing table:
route add 192.168.0.15 gateway.com 1
Hosts and DNS
Hosts:
# use wrfile and rdfile to read and edit the /etc/hosts file; it basically uses the same rules as a Unix
# hosts file
nsswitch file:
# use wrfile and rdfile to read and edit the /etc/nsswitch.conf file; it basically uses the same rules as a
# Unix nsswitch.conf file
DNS:
# use wrfile and rdfile to read and edit the /etc/resolv.conf file; it basically uses the same rules as a
# Unix resolv.conf file

options dns.enable {on|off}

Note:
on   — Enables DNS
off  — Disables DNS
Domain name:
options dns.domainname <domain>
DNS cache:
options dns.cache.enable
options dns.cache.disable

# To flush the DNS cache
dns flush

# To see dns cache information
dns info
DNS updates:
options dns.update.enable {on|off|secure}
Note:
on     — Enables dynamic DNS updates
off    — Disables dynamic DNS updates
secure — Enables secure dynamic DNS updates
Time-to-live (TTL):
options dns.update.ttl <time>
# Example
options dns.update.ttl 2h

Note: time can be set in seconds (s), minutes (m), or hours (h), with a minimum value of 600 seconds
and a maximum value of 24 hours
VLAN
Create:
vlan create [-g {on|off}] <ifname> <vlanid>

# Create VLANs with identifiers 10, 20, and 30 on the interface e4 of a storage system by using the following command:
vlan create e4 10 20 30
# Configure the VLAN interface e4-10 by using the following command
ifconfig e4-10 192.168.0.11 netmask 255.255.255.0
Add:
vlan add e4 40 50
Delete:
# Delete a specific VLAN
vlan delete e4 30

# Delete all VLANs on an interface
vlan delete e4
Enable/disable GVRP on a VLAN:
vlan modify -g {on|off} <ifname>
Stat:
vlan stat <interface_name> <vlan_id>

# Examples
vlan stat e4
vlan stat e4 10
Interface Groups
Create (single-mode):
# To create a single-mode interface group, enter the following command:
ifgrp create single SingleTrunk1 e0 e1 e2 e3
# To configure an IP address of 192.168.0.10 and a netmask of 255.255.255.0 on the single-mode interface group SingleTrunk1
ifconfig SingleTrunk1 192.168.0.10 netmask 255.255.255.0
# To specify the interface e1 as preferred
ifgrp favor e1
Create (multi-mode):
# To create a static multimode interface group, comprising interfaces e0, e1, e2, and e3 and using MAC
# address load balancing
ifgrp create multi MultiTrunk1 -b mac e0 e1 e2 e3
# To create a dynamic multimode interface group, comprising interfaces e0, e1, e2, and e3 and using IP
# address based load balancing
ifgrp create lacp MultiTrunk1 -b ip e0 e1 e2 e3
Create second-level interface group:
# To create two interface groups and a second-level interface group. In this example, IP address load
# balancing is used for the multimode interface groups.
ifgrp create multi Firstlev1 e0 e1
ifgrp create multi Firstlev2 e2 e3
ifgrp create single Secondlev Firstlev1 Firstlev2
# To enable failover to a multimode interface group with higher aggregate bandwidth when one or more of
# the links in the active multimode interface group fail
options ifgrp.failover.link_degraded on
Note: You can create a second-level interface group by using two multimode interface groups. Second-level interface groups enable you to provide a standby multimode interface group in case the primary multimode interface group fails.
Create second-level interface group in an HA pair:
# Use the following commands to create a second-level interface group in an HA pair. In this example,
# IP-based load balancing is used for the multimode interface groups.

# On StorageSystem1:
ifgrp create multi Firstlev1 e1 e2
ifgrp create multi Firstlev2 e3 e4
ifgrp create single Secondlev1 Firstlev1 Firstlev2

# On StorageSystem2 :
ifgrp create multi Firstlev3 e5 e6
ifgrp create multi Firstlev4 e7 e8
ifgrp create single Secondlev2 Firstlev3 Firstlev4

# On StorageSystem1:
ifconfig Secondlev1 partner Secondlev2

# On StorageSystem2 :
ifconfig Secondlev2 partner Secondlev1
Favoured/non-favoured interface:
# select a favoured interface
ifgrp favor e3
# select a non-favoured interface
ifgrp nofavor e3
Add:
ifgrp add MultiTrunk1 e4
Delete:
ifconfig MultiTrunk1 down
ifgrp delete MultiTrunk1 e4

Note: You must configure the interface group to the down state before you can delete a network interface
from the interface group
Destroy:
ifconfig <ifgrp_name> down
ifgrp destroy <ifgrp_name>

Note: You must configure the interface group to the down state before you can destroy it
Enable/disable an interface group:
ifconfig <ifgrp_name> up
ifconfig <ifgrp_name> down
Status:
ifgrp status [ifgrp_name]
Stat:
ifgrp stat [ifgrp_name] [interval]
Diagnostic Tools
Useful options
Ping throttling:
# Throttle ping
options ip.ping_throttle.drop_level <packets_per_second>

# Disable ping throttling
options ip.ping_throttle.drop_level 0
Forged ICMP attacks:
options ip.icmp_ignore_redirect.enable on

Note: You can disable ICMP redirect messages to protect your storage system against forged ICMP redirect attacks.
Useful Commands
netdiag - The netdiag command continuously gathers and analyzes statistics, and performs diagnostic tests. These diagnostic tests identify and report problems with your physical network or transport layers and suggest remedial action.
ping    - You can use the ping command to test whether your storage system can reach other hosts on your network.
pktt    - You can use the pktt command to trace the packets sent and received in the storage system's network.
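
A minimal usage sketch of these tools (the interface name and target address are assumed):

# basic reachability test
ping 192.168.0.254
# trace packets on interface e0a, dump the captured packets to a trace file, then stop
pktt start e0a
pktt dump e0a
pktt stop e0a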