
NetApp Cloud Insights Preview, Part 1: Installing the Acquisition Unit



For those of you who aren't familiar with NetApp Cloud Insights, it is an infrastructure monitoring tool that is currently available as a Public Preview. It is designed to provide, well, insight into the often diverse sets of storage and networking components in use across your entire environment - everything from on-premises ONTAP deployments to public cloud offerings from Amazon, Microsoft, and others.

I registered for the preview recently and received my welcome email last week. I am planning a series of posts to cover my experiences with Cloud Insights and share what I learn with other people for whom a vendor-agnostic SaaS monitoring solution might be a good fit.

As a disclaimer: my experiences will be limited to what is in use in my environment, so the public cloud features likely won't be covered in great detail - it will primarily be on-premises ONTAP and VMware monitoring data that I'm looking at.

However, the links below have a lot of good information about the offering in general if you'd like more detail on the things that I gloss over. If you'd like to sign up for the preview yourself, there is a link for that below as well.

Cloud Insights product page
Tech Field Day video with James Holden
Register for Cloud Insights Preview

Now, with all of the introduction out of the way, on to the first step in getting your Cloud Insights deployment up and running - the Acquisition Unit (AU). It is a small application that runs on a Linux server (CentOS 7 in my case), connects to your various services (via Data Collectors configured in Cloud Insights), and sends inventory and performance data to the Cloud Insights servers.

When you have been accepted into the preview program, you will get an email with instructions on how to log into your tenant environment via NetApp Cloud Central. Once you have that email, you will be able to log in and download the .zip file that contains the Acquisition Unit software.

The installation is very straightforward and consists of unzipping a file and running the installation script contained within. You'll accept a EULA (which I totally read in its entirety) and then be prompted for several key pieces of information - server name, port, acquisition user password - all of which are provided in your welcome email. The installation took me roughly 10 minutes total and was relatively painless.

One quick note: I did not see any specific system requirements for the Acquisition Unit. I had originally provisioned my virtual machine with 1 vCPU and 2GB of RAM, and the acquisition service would not start because (evidently) Java failed to allocate sufficient memory. After I bumped the VM to 4GB of RAM, all was well, so I'd suggest at least 1 vCPU and 4GB of RAM for your virtual machine.

Edit: As Rafael kindly pointed out (thank you!) in the comments section, here are the AU requirements:

- CPU: 4 cores
- RAM: 16GB
- Disk: 40GB
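If you want to sanity-check a VM against those numbers before installing, a quick pre-flight script along these lines works on CentOS 7 (this is my own sketch, not part of the AU installer; the thresholds come from the requirements above):

```shell
# Pre-flight check against the documented AU requirements:
# 4 CPU cores, 16GB RAM, 40GB disk.
cores=$(nproc)
mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
mem_gb=$((mem_kb / 1024 / 1024))
disk_gb=$(df -BG --output=avail / | tail -1 | tr -dc '0-9')

echo "CPU cores: ${cores} (need 4)"
echo "RAM: ${mem_gb}GB (need 16)"
echo "Free disk on /: ${disk_gb}GB (need 40)"

[ "$cores" -ge 4 ]  || echo "WARNING: fewer than 4 cores"
[ "$mem_gb" -ge 16 ] || echo "WARNING: less than 16GB RAM"
[ "$disk_gb" -ge 40 ] || echo "WARNING: less than 40GB free disk"
```

Note that MemTotal reports a bit less than the provisioned RAM, so a VM with exactly 16GB may trip the RAM warning - treat the output as a guide rather than a hard gate.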

Once your Acquisition Unit is installed, you can make sure the service is running by issuing the sudo oci-service.sh status acquisition command. You should see a message like this if all is well:

[user@au-server ~]$ sudo oci-service.sh status acquisition
OnCommand (R) Insight
acquisition is running
[user@au-server ~]$

From there, you should be able to see the AU in Cloud Insights as well under Admin -> Data Collectors -> Acquisition Units. The status should say "OK" assuming your AU is able to communicate with the Cloud Insights servers.

Now you can install Data Collectors, which are designed to connect to your on-premises or cloud services and collect inventory and performance data (bet you couldn't have guessed their function by their name!). That is accomplished via the Admin -> Data Collectors -> Available Data Collectors screen. At the time of writing, there are 49 available data collectors.

If you click on one of the boxes, you'll be able to install that particular Data Collector (or be given a choice when a vendor offers multiple Collectors). I'll be using the NetApp ONTAP Management Software collector in my example, but the logic is the same for other Data Collectors.

I am using an existing service account with read-only access to my NetApp clusters, so if you have one of those, great. If not, you'll probably want to create one to use for monitoring use cases like this one. 
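If you don't already have a read-only account, you can create one from the ONTAP CLI along these lines (a sketch only - `ci_monitor` is a placeholder name, and you should confirm the applications your ONTAP version expects for management-software access):

```
security login create -user-or-group-name ci_monitor -application ontapi -authentication-method password -role readonly
security login create -user-or-group-name ci_monitor -application http -authentication-method password -role readonly
```

The built-in readonly role is enough for inventory and performance collection, and keeping monitoring on its own account makes it easy to audit or revoke later.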

Once you click on the Data Collector, you'll see a screen to put in all of the relevant information for your NetApp deployment:


Once you have all the information added, click on Test Configuration to validate that the AU is able to connect to the management IP and run a test command. You'll see a message indicating whether the test passed.

From there, you can see the status of all of your installed Data Collectors by going to Admin -> Data Collectors -> Installed Data Collectors. There you can see high-level information about the polling status of each Data Collector - the default polling interval appears to be 20 minutes and is configurable on a per-Data Collector basis.

Once the Data Collectors have been polled successfully for inventory and performance data, you can begin to build some dashboards to get some use out of all the data. I'll be covering dashboards in a future blog post, so keep an eye out for a post with pretty graphs in the next few weeks.

So far, I'm pretty excited about what Cloud Insights has been able to offer, even just for on-premises ONTAP and VMware deployments. Hopefully I'll get to dig in even deeper and find out how I can use it in my environment to accelerate troubleshooting and get VM-level monitoring data to correlate with storage performance data. 

Thanks for reading and as always, stay tuned for future posts!
