NetApp Advanced Drive Partitioning Setup


How to Set Up NetApp Advanced Drive Partitioning on a FAS2552

Advanced Drive Partitioning is a new feature introduced in clustered Data ONTAP 8.3. It allows the internal drives of entry-level and all-flash systems to be partitioned in order to save physical disk space.

Traditionally, for a new setup, we would have to allocate a minimum of three disks per node so the system could create a root aggregate and install the operating system. For example, if we are using 900 GB SAS disks across the board, three of them would be allocated to node 1 and another three to node 2 for their root aggregates. That was quite a waste, especially on entry-level systems.

Advanced Drive Partitioning allows the system to split each physical disk into two partitions: one for the root aggregate and a second for data.
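To put a rough number on the saving, here is a back-of-the-envelope sketch. The disk size and the "three whole disks per root aggregate" figure come from the example above; treating the root partitions as negligible overhead is a simplifying assumption.

```python
# Rough illustration of the capacity tied up by dedicated root disks on a
# two-node system without Advanced Drive Partitioning.
DISK_SIZE_GB = 900        # 900 GB SAS disks, as in the example above
ROOT_DISKS_PER_NODE = 3   # whole disks consumed per root aggregate without ADP
NODES = 2

# Without ADP, six whole disks serve nothing but the two root aggregates.
raw_gb_lost_without_adp = ROOT_DISKS_PER_NODE * NODES * DISK_SIZE_GB
print(f"Raw capacity tied up in dedicated root disks: {raw_gb_lost_without_adp} GB")
```

With ADP, the root aggregates instead live on small partitions carved from disks that also hold data partitions, so those six disks' worth of raw capacity becomes available for data.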

In this tutorial we will look at how to configure Advanced Drive Partitioning on a NetApp FAS2552. The system is brand new out of the box and does not contain any user data.

WARNING: Do not partition a live system unless you are prepared to lose all of your data. All volumes, aggregates and disk ownership will be removed.

Configuration of Advanced Drive Partitioning

We are going to start with a few checks:

  • Ensure Data ONTAP 8.3 or later is installed as the system image (you can see this during boot)

NetApp Data ONTAP 8.3P1
Copyright (C) 1992-2015 NetApp.
All rights reserved.

  • Do you currently have any drives owned by nodes, or are any drives partitioned?

In our case we had drives assigned to the nodes, so I’m going to show you the process for removing all disk ownership.

Let’s start with Node 1

As the system boots, look out for the following text and press Ctrl-C for the boot menu:

*******************************

Press Ctrl-C for Boot Menu.

*******************************

At the boot menu, select option 5 for a maintenance mode boot:

(1) Normal Boot.
(2) Boot without /etc/rc.
(3) Change password.
(4) Clean configuration and initialize all disks.
(5) Maintenance mode boot.
(6) Update flash from backup config.
(7) Install new software first.
(8) Reboot node.
Selection (1-8)? 5

Once you arrive at the following prompt:

*>

Type in aggr status to list any existing aggregates:

*> aggr status
           Aggr State      Status                Options
          aggr0 online     raid_dp, aggr         root, nosnap=on
                           64-bit

If your system does not contain any aggregates, you can skip this step. If it does contain an aggregate, we are going to remove it by typing:

*> aggr offline aggr0

Type yes when asked: Are you sure want to take the root aggregate (aggr0) offline?

*> aggr destroy aggr0

Type yes when asked: Are you sure you want to destroy this aggregate?

Next, we are going to see how the physical disks are laid out by typing disk show.

If we only see disks in the format port.shelfID.diskID (for example 0b.00.11), then we do not have any partitioned disks.

If we see disks in the format port.shelfID.diskIDP1 or port.shelfID.diskIDP2, the P1 and P2 suffixes indicate Partition 1 and Partition 2 (for example 0b.00.11P1 and 0b.00.11P2).
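The naming rule above is easy to check mechanically. The following is a hypothetical helper (not a NetApp tool) that classifies names from the disk show output under the format just described:

```python
import re

# Names like 0b.00.11 are whole disks; names like 0b.00.11P1 / 0b.00.11P2
# are partitions 1 and 2 of that disk.
DISK_NAME = re.compile(r"^(?P<port>\w+)\.(?P<shelf>\d+)\.(?P<disk>\d+)(?:P(?P<part>[12]))?$")

def parse_disk(name):
    """Return (base_name, partition); partition is None for an unpartitioned disk."""
    m = DISK_NAME.match(name)
    if not m:
        raise ValueError(f"unrecognised disk name: {name}")
    base = f"{m['port']}.{m['shelf']}.{m['disk']}"
    part = int(m["part"]) if m["part"] else None
    return base, part

print(parse_disk("0b.00.11"))    # ('0b.00.11', None)
print(parse_disk("0b.00.11P2"))  # ('0b.00.11', 2)
```

If any name comes back with a partition number, the disk still needs unpartitioning in the next step.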

If your disks do not contain any partitions you can skip this step.

We will now unpartition each disk by typing:

*> disk unpartition 0b.00.11 (this will remove partitions 1 and 2 from this disk)

Repeat the above command for all partitioned disks

Type disk show once again to ensure all partitions have been removed; if they have, move on to the next step.

Now we will remove disk ownership from each disk by typing the following command:

*> disk remove_ownership 0b.00.11

Type y for yes when the following question appears: Volumes must be taken offline. Are all impacted volumes offline(y/n)??

Do this for every disk, and once you have finished, type disk show again and ensure you get the following output:
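Since the command has to be repeated per disk, it can help to generate the full list up front and paste it in. This is a hypothetical convenience sketch; the disk names below are examples, so substitute the names from your own disk show output:

```python
# Build the maintenance-mode commands to run by hand, one per disk.
disks = ["0b.00.11", "0b.00.12", "0b.00.13"]  # example names, replace with yours

commands = [f"disk remove_ownership {d}" for d in disks]
for cmd in commands:
    print(cmd)
```

Each printed line is then entered at the *> prompt, answering y to the offline-volumes question as before.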

disk show: No disks match option show.

Once you are finished on Node 1, type:

*> halt

and let the system reboot into the Loader-A> prompt.

Now repeat the above process on Node 2 – boot into maintenance mode, remove aggregates, unpartition disks, remove disk ownership and halt the system.

Return back to Node 1 and type:

Loader-A> boot_ontap

Press Ctrl-C when you see Press Ctrl-C for Boot Menu

*******************************

Press Ctrl-C for Boot Menu.

*******************************
^CBoot Menu will be available.

When the boot menu appears we are going to select option 4 this time:

Please choose one of the following:

(1) Normal Boot.
(2) Boot without /etc/rc.
(3) Change password.
(4) Clean configuration and initialize all disks.
(5) Maintenance mode boot.
(6) Update flash from backup config.
(7) Install new software first.
(8) Reboot node.
Selection (1-8)? 4

Type yes when asked Zero disks, reset config and install a new file system?

Type yes when asked This will erase all the data on the disks, are you sure?

Node 1 will now zero and partition all odd-numbered disks (1, 3, 5, 7, 9, and so on). During the disk zero task you will see a stream of dots on the screen.

Now plug your console cable into Node 2. You must complete the steps below before Node 1 finishes its disk zero and partitioning tasks.

You should be at the Loader-B> prompt. Type:

Loader-B> boot_ontap

Press Ctrl-C when you see Press Ctrl-C for Boot Menu.

We will repeat the same process as Node 1. When the boot menu appears we will select option 4.

Type yes when asked Zero disks, reset config and install a new file system?

Type yes when asked This will erase all the data on the disks, are you sure?

Node 2 will now zero and partition all even-numbered disks (0, 2, 4, 6, 8, and so on). During the disk zero task you will see a stream of dots on the screen.

Once the disk zero task has finished on each node, each disk will be auto-partitioned:

[localhost:raid.autoPart.start:notice]: System has started auto-partitioning 10 disks.

Once the partitioning has completed, this will be the default disk partition layout.

[Diagram: default Advanced Drive Partitioning layout]
The numbers along the top (0–23) indicate the disk number. P1 = Partition 1 (the clustered Data ONTAP root aggregate) and P2 = Partition 2 (used for data aggregates).
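The default ownership split described above (odd-numbered disks zeroed and partitioned by Node 1, even-numbered by Node 2) can be sketched for a fully populated 24-drive FAS2552. This mirrors the diagram, not any NetApp API:

```python
# Default partition ownership layout for disks 0-23.
disks = range(24)

node1 = [d for d in disks if d % 2 == 1]  # odd disks, zeroed/partitioned by Node 1
node2 = [d for d in disks if d % 2 == 0]  # even disks, zeroed/partitioned by Node 2

print("Node 1:", node1)
print("Node 2:", node2)
```

Each node ends up with twelve disks' worth of partitions, which is the starting point for the aggregate layout options below.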

You should now be at the cluster setup wizard. Once you have completed the cluster setup wizard on both nodes, we can decide how we would like to set up our data aggregates.

  • Option 1: leave the layout as in the diagram above, where the odd-numbered disks 1–23 back a data aggregate for Node 1 (with disk 23’s data partition kept as a spare) and the even-numbered disks 0–22 back a data aggregate for Node 2 (with disk 22’s data partition kept as a spare).
  • Option 2: unassign the data partitions from Node 2, assign their ownership to Node 1 and add them to Node 1’s existing data aggregate, as seen in the diagram below.

[Diagram: data partitions reassigned to Node 1]

Disclaimer:
All the tutorials included on this site are performed in a lab environment that simulates a real-world production scenario. While every effort is made to provide the most accurate steps to date, we take no responsibility if you implement any of these steps in a production environment.