
How To: Unjoin NetApp Nodes from a Cluster

Let me paint you a word picture:

You've upgraded to a shiny new AFF - it's all racked, stacked, cabled, and ready to rock. You've moved your volumes onto the new storage, your workloads are performing beautifully (of course), and it's time to put your old NetApp gear out to pasture.

We're going to learn how to unjoin nodes from an existing cluster. But wait! There are several prerequisites that must be met before the actual cluster unjoin can be done.


  • Ensure that you have either moved volumes to your new aggregates or offlined and deleted any unused volumes.
  • Offline and delete the aggregates from the old nodes.
  • Re-home data LIFs, or disable and delete them if they are not in use.
  • Disable and delete the intercluster LIFs for the old nodes (and remove them from any Cluster Peering relationships).
  • Remove the old nodes' ports from any Broadcast Domains or Failover Groups that they may be members of. (Example commands for these five steps are sketched after the epsilon walkthrough below.)
  • Move epsilon to one of the new nodes (let's assume nodes 3 and 4 are the new nodes in this scenario).

labnetapp01::> set -priv advanced
labnetapp01::*> cluster show -epsilon *
Node                 Health  Eligibility   Epsilon
-------------------- ------- ------------  ------------
node1               true    true          true
node2               true    true          false
node3               true    true          false
node4               true    true          false
4 entries were displayed.

We can see that node 1 has epsilon currently, so let's disable it on node 1 and move it to one of the new nodes.

labnetapp01::*> cluster modify -node node1 -epsilon false 

labnetapp01::*> cluster modify -node node3 -epsilon true

We can verify that epsilon was moved by running the "cluster show -epsilon *" command again:

labnetapp01::*> cluster show -epsilon *
Node                 Health  Eligibility   Epsilon
-------------------- ------- ------------  ------------
node1               true    true          false
node2               true    true          false
node3               true    true          true
node4               true    true          false
4 entries were displayed.
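
As promised in the prerequisite list above, here's a rough sketch of what those cleanup steps might look like at the CLI. The SVM, volume, aggregate, LIF, peer cluster, and port names below (svm1, vol_old, aggr1_node1, nfs_lif01, and so on) are placeholders - substitute whatever actually exists in your environment and treat these as examples rather than a copy/paste recipe.

Move volumes onto the new aggregates, or unmount/offline/delete anything you no longer need:

labnetapp01::> volume move start -vserver svm1 -volume vol_data01 -destination-aggregate aggr1_node3
labnetapp01::> volume unmount -vserver svm1 -volume vol_old
labnetapp01::> volume offline -vserver svm1 -volume vol_old
labnetapp01::> volume delete -vserver svm1 -volume vol_old

Offline and delete the old nodes' aggregates once they're empty:

labnetapp01::> storage aggregate offline -aggregate aggr1_node1
labnetapp01::> storage aggregate delete -aggregate aggr1_node1

Re-home data LIFs onto the new nodes (or down and delete them if they're unused):

labnetapp01::> network interface modify -vserver svm1 -lif nfs_lif01 -home-node node3 -home-port e0d
labnetapp01::> network interface revert -vserver svm1 -lif nfs_lif01

Remove any cluster peering relationships that depend on the old nodes, then down and delete their intercluster LIFs:

labnetapp01::> cluster peer delete -cluster remote_cluster
labnetapp01::> network interface modify -vserver labnetapp01 -lif node1_icl01 -status-admin down
labnetapp01::> network interface delete -vserver labnetapp01 -lif node1_icl01

Finally, pull the old nodes' ports out of any broadcast domains and failover groups:

labnetapp01::> network port broadcast-domain remove-ports -ipspace Default -broadcast-domain Default -ports node1:e0c,node1:e0d,node2:e0c,node2:e0d
labnetapp01::> network interface failover-groups remove-targets -vserver svm1 -failover-group fg_data -targets node1:e0c,node2:e0c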

  • Move the cluster replication ring master to one of the new nodes by disabling cluster eligibility on the old nodes. First, check where the ring master currently lives:
labnetapp01::*> cluster ring show
Node      UnitName Epoch    DB Epoch DB Trnxs Master    Online
--------- -------- -------- -------- -------- --------- ---------
node1     mgmt     1        1        1068     node1     master
node1     vldb     1        1        98       node1     master
node1     vifmgr   1        1        350      node1     master
node1     bcomd    1        1        56       node1     master
node1     crs      1        1        88       node1     master
node2     mgmt     1        1        1068     node1     secondary
node2     vldb     1        1        98       node1     secondary
node2     vifmgr   1        1        350      node1     secondary
node2     bcomd    1        1        56       node1     secondary
node2     crs      1        1        88       node1     secondary

node3     mgmt     1        1        1068     node1     secondary
node3     vldb     1        1        98       node1     secondary
node3     vifmgr   1        1        350      node1     secondary
node3     bcomd    1        1        56       node1     secondary
node3     crs      1        1        88       node1     secondary
node4     mgmt     1        1        1068     node1     secondary
node4     vldb     1        1        98       node1     secondary
node4     vifmgr   1        1        350      node1     secondary
node4     bcomd    1        1        56       node1     secondary
node4     crs      1        1        88       node1     secondary
20 entries were displayed.

In order to force the cluster ring master to move to a different node, we need to set eligibility to false for node1 and node2.

labnetapp01::*> system node modify -node node1 -eligibility false

Then we'll do the same thing for node2.

labnetapp01::*> system node modify -node node2 -eligibility false

You'll probably get a bunch of email alerts at this point - don't panic. After these commands have run, you can see the result by running the "cluster ring show" command again.

labnetapp01::*> cluster ring show
Node      UnitName Epoch    DB Epoch DB Trnxs Master    Online
--------- -------- -------- -------- -------- --------- ---------
node1     mgmt     1        1        1068     node3     offline
node1     vldb     1        1        98       node3     offline
node1     vifmgr   1        1        350      node3     offline
node1     bcomd    1        1        56       node3     offline
node1     crs      1        1        88       node3     offline
node2     mgmt     1        1        1068     node3     offline
node2     vldb     1        1        98       node3     offline
node2     vifmgr   1        1        350      node3     offline
node2     bcomd    1        1        56       node3     offline
node2     crs      1        1        88       node3     offline

node3     mgmt     1        1        1068     node3     master
node3     vldb     1        1        98       node3     master
node3     vifmgr   1        1        350      node3     master
node3     bcomd    1        1        56       node3     master
node3     crs      1        1        88       node3     master
node4     mgmt     1        1        1068     node3     secondary
node4     vldb     1        1        98       node3     secondary
node4     vifmgr   1        1        350      node3     secondary
node4     bcomd    1        1        56       node3     secondary
node4     crs      1        1        88       node3     secondary
20 entries were displayed.

  • Now we'll need to disable SFO (storage failover) for the two old nodes. 
labnetapp01::*> storage failover modify -node node1 -enabled false
labnetapp01::*> storage failover modify -node node2 -enabled false

  • Verify that storage failover is disabled by running "storage failover show". You'll see a value of "False" under the "Possible" column. 
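
A quick way to check just that field is something like the following (a hedged sketch - the output here is illustrative, and "-fields possible" simply narrows the display to that one column):

labnetapp01::*> storage failover show -fields possible
node   possible
------ --------
node1  false
node2  false
node3  true
node4  true
4 entries were displayed.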
  • Now we're finally ready to do the actual cluster unjoin! After waiting for some time, you'll see a success message if nothing went catastrophically wrong. 
labnetapp01::*> cluster unjoin -node node1

Warning: This command will unjoin node "node1" from the cluster. You must unjoin the failover partner as well. After the node is successfully unjoined, erase its configuration and initialize all
         disks by using the "Clean configuration and initialize all disks (4)" option from the boot menu.
Do you want to continue? {y|n}: y
[Job 47561] Cleaning cluster database
[Job 47561] Job succeeded: Cluster unjoin succeeded

labnetapp01::*> cluster unjoin -node node2

Warning: This command will unjoin node "node2" from the cluster. You must unjoin the failover partner as well. After the node is successfully unjoined, erase its configuration and initialize all
         disks by using the "Clean configuration and initialize all disks (4)" option from the boot menu.
Do you want to continue? {y|n}: y
[Job 47561] Cleaning cluster database
[Job 47561] Job succeeded: Cluster unjoin succeeded
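
As a final sanity check before powering anything off, the cluster should now report only the two remaining nodes (illustrative output):

labnetapp01::*> cluster show
Node                  Health  Eligibility
--------------------- ------- ------------
node3                 true    true
node4                 true    true
2 entries were displayed.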

That's it for the cluster unjoin. After this process, you'll be ready to power down and physically remove the old storage. Thanks for reading!

Disclaimer: This is a lab - I am not responsible for breaking your stuff if you run this in production and something gets borked.
