
NetApp ONTAP 9.3 Simulator Deployment – Part 2


In Part 1 of the deployment guide, we learned how to deploy a single NetApp simulator on ESXi. For this post, I’d like to detail the process of adding a second simulator to the cluster. We did a lot of the legwork for the first node’s deployment, so adding the second node is a much shorter process.



1. Go through the steps to deploy the OVA package again, just as you did for the first simulator (refer to Part 1, steps 7-11, if needed).

2. Once the OVA has been deployed, boot the second simulator. This time, however, we'll need to interrupt the boot process and drop to the VLOADER prompt to change the system ID and serial number of the second simulator (two systems in the same cluster can't share a serial number or system ID).

Press the space bar (or any key other than Enter) when you see the "Hit [Enter] to boot immediately, or any other key for command prompt. Booting in 10 seconds..." message. You'll land at a VLOADER> prompt.

3. Run the setenv SYS_SERIAL_NUM 4034389-06-2 and setenv bootarg.nvram.sysid 4034389062 commands to set the serial number and system ID, respectively.

You can run the printenv SYS_SERIAL_NUM and printenv bootarg.nvram.sysid commands to verify that the values were updated accordingly.

4. Enter the boot command at the VLOADER prompt to boot the node.
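Putting steps 3 and 4 together, the whole VLOADER session looks like this (using the example serial number and system ID from step 3, which just need to differ from the first node's values):

    VLOADER> setenv SYS_SERIAL_NUM 4034389-06-2
    VLOADER> setenv bootarg.nvram.sysid 4034389062
    VLOADER> printenv SYS_SERIAL_NUM
    VLOADER> printenv bootarg.nvram.sysid
    VLOADER> boot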

5. When you see “Press Ctrl-C for Boot Menu”, press Ctrl-C.

6. At the Boot Menu, choose option 4.
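The exact menu text can vary slightly between ONTAP releases, but the Boot Menu looks roughly like this; option 4 is the clean-configuration-and-initialize option:

    Please make a selection:

    (1) Normal Boot.
    (2) Boot without /etc/rc.
    (3) Change password.
    (4) Clean configuration and initialize all disks.
    (5) Maintenance mode boot.
    (6) Update flash from backup config.
    (7) Install new software first.
    (8) Reboot node.
    Selection (1-8)?  4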

7. Hit y when asked if you want to zero the disks, reset the config, and install a new filesystem. Hit y again to confirm. The node will reboot. As with the first node, this process will take several minutes.
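The two confirmation prompts in this step look roughly like this (wording approximate, from memory):

    Zero disks, reset config and install a new file system?: y
    This will erase all the data on the disks, are you sure?: y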

8. Type yes to enable AutoSupport and hit Enter.

9. Hit Enter to accept e0c as the node management interface and fill in the IP/netmask details for the second node.
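For steps 8 and 9, the node setup prompts look approximately like the following. The IP addressing shown here is just an example from my lab network; substitute values that fit your environment:

    Type yes to confirm and continue {yes}: yes
    Enter the node management interface port [e0c]:
    Enter the node management interface IP address: 192.168.1.102
    Enter the node management interface netmask: 255.255.255.0
    Enter the node management interface default gateway: 192.168.1.1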

As with the first node, we'll be using the CLI to join the secondary node to the cluster.


10. Type join when asked whether you want to create a new cluster or join an existing cluster.
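The cluster setup wizard prompt for this step looks roughly like the following:

    Do you want to create a new cluster or join an existing cluster? {create, join}:
    join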

11. Accept the system defaults at the next screen. The cluster interfaces will be created.

12. Hit Enter to join the existing cluster (in my environment, the cluster name is labcluster1).
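For reference, steps 11 and 12 in my lab looked approximately like this; the wizard detects the cluster created in Part 1 and offers it as the default, so pressing Enter at each prompt is enough:

    System Defaults:
    Private cluster network ports [e0a,e0b].
    Cluster port MTU values will be set to 9000.
    Cluster interface IP addresses will be automatically generated.

    Do you want to use these defaults? {yes, no} [yes]:

    Enter the name of the cluster you would like to join [labcluster1]: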

13. Assign all unowned disks to the second node by running the storage disk assign -all true -node <second node name> command.
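If you'd like to confirm the assignment worked, you can list unowned disks before and after. The node name below is just an example and assumes the second node came up as labcluster1-02:

    labcluster1::> storage disk show -container-type unassigned
    labcluster1::> storage disk assign -all true -node labcluster1-02
    labcluster1::> storage disk show -container-type unassigned
    There are no entries matching your query.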

14. Add the licenses for the second node using the keys from the text file that you used during the setup of the first node.
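License keys are added with the same command used on the first node; the code below is just a placeholder for one of the simulator keys from that file, and multiple keys can be supplied as a comma-separated list:

    labcluster1::> system license add -license-code AAAAAAAAAAAAAAAAAAAAAAAAAAAA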

15. The setup of the second node is complete. Run a cluster show command to verify that the cluster is healthy.
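If everything went well, both nodes should report as healthy and eligible. In my lab (the node names below assume the default naming based on the cluster name), the output looks something like this:

    labcluster1::> cluster show
    Node                  Health  Eligibility
    --------------------- ------- ------------
    labcluster1-01        true    true
    labcluster1-02        true    true
    2 entries were displayed.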

That concludes the process for adding a second node to the NetApp Simulator environment! My simulators are running the GA version of 9.3 at the moment, but there have been some updates to 9.3 since its release - I'll cover upgrading the cluster to the latest 9.3 release in a future post.

Thanks for reading!
