Prerequisites

As mentioned in Part 1 of this blog series, we need to run Prometheus and Grafana in our Kubernetes environment on Amazon EKS. The recommended way is to use kube-prometheus, an open source project. Not only does this simplify the deployment, it also adds several more components, such as the Prometheus Node Exporter, which monitors Linux host metrics and is typically used in a Kubernetes environment.

Clone the https://github.com/coreos/kube-prometheus repository from GitHub, but do not create any manifests just yet.
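Assuming git is installed locally, the clone step looks like this:

```shell
# Clone the kube-prometheus project; do not apply any manifests yet
git clone https://github.com/coreos/kube-prometheus.git
cd kube-prometheus
```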

Components included in this package: the Prometheus Operator, highly available Prometheus and Alertmanager, the Prometheus node-exporter, kube-state-metrics, the Prometheus Adapter for the Kubernetes Metrics APIs, and Grafana.

Note:

This tutorial assumes that the manifests which bring up the relevant resources for the Prometheus Operator are still located in the manifests folder.

Please adjust accordingly if changes have been made as the repository is experimental and subject to change.

 

Create the Couchbase ServiceMonitor

The ServiceMonitor tells Prometheus to monitor a Service resource that defines the endpoints Prometheus scrapes for incoming metrics provided by the couchbase-exporter. This file, couchbase-serviceMonitor.yaml, should be placed in the kube-prometheus/manifests directory.
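The original post shows the manifest here; below is a minimal sketch of what couchbase-serviceMonitor.yaml could look like. The resource name, label values, and namespaces are assumptions and must match your own cluster; the numbered comments correspond to the legend that follows.

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: couchbase            # hypothetical name
  namespace: default         # (1) could also live in the monitoring namespace
spec:
  endpoints:
    - port: metrics          # (2) matches the port *name* in the Service
      interval: 5s           # (3) how often Prometheus scrapes this endpoint
  namespaceSelector:
    matchNames:
      - default              # (4) namespace of the Service and the Couchbase cluster
  selector:
    matchLabels:
      app: couchbase         # (5) selects our metrics Service by label
```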

Legend:

  1. You may wish to include our Couchbase ServiceMonitor in the monitoring namespace along with the other ServiceMonitors. For the examples in this tutorial we have left it in the default namespace for ease of use.
  2. The port can be a string value and will work for different port numbers of the service as long as the name matches.
  3. interval tells Prometheus how often to scrape the endpoint.
  4. Here we want to match the namespace of the Service we will be creating in the next step. Note that the namespace our Service will be running in must be the same namespace as the Couchbase cluster we wish to scrape metrics from.
  5. Similar to the namespaceSelector, this is a simple labelSelector that will select the Service we will be creating.

Create the Couchbase Metrics Service

The Service will define the port that we described in our ServiceMonitor at spec.endpoints[0].port earlier. This file, couchbase-service.yaml, should be placed in the kube-prometheus/manifests directory.
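Again, a minimal sketch of what couchbase-service.yaml could look like; the names and labels are assumptions, and the numbered comments correspond to the legend below.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: couchbase-metrics    # hypothetical name
  namespace: default         # (1) same namespace as the Couchbase cluster
  labels:
    app: couchbase           # matched by the ServiceMonitor's selector
spec:
  ports:
    - name: metrics          # matches spec.endpoints[0].port in the ServiceMonitor
      port: 9091             # (2) the port the couchbase-exporter exports on
      protocol: TCP
  selector:
    app: couchbase           # (3) add e.g. couchbase_cluster: cb-example for granularity
```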

Legend:

  1. As mentioned previously, make sure that the Service is in the same namespace as the Couchbase cluster that you wish to scrape metrics from, otherwise no pods will be selected and no endpoints will be displayed in Prometheus Targets. Also make sure this value matches up with spec.namespaceSelector in the ServiceMonitor.
  2. Keep this port as its default value of 9091 as this is the port the Couchbase Exporter will be exporting to.
  3. A further level of granularity can be added to your selector in the scenario where you have more than one Couchbase cluster running in the same namespace.

Prometheus Dynamic Service discovery

Prometheus discovers the monitoring endpoints dynamically by matching the labels on the ServiceMonitor to the Services that specify the cluster and endpoints (port 9091 in our case).

Create the Manifests

Follow the specific commands given in the GitHub README to bring up our created resources along with the other provided default manifests.

Components such as Prometheus, Alertmanager, Node Exporter, and Grafana should then start up, and we can confirm this by inspecting the pods in the monitoring namespace.

Let’s begin.

Create the Kubernetes namespace and CRDs
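Following the kube-prometheus README convention at the time of writing (the repository is subject to change, so adjust the paths if needed):

```shell
# Create the monitoring namespace and the Prometheus Operator CRDs first
kubectl create -f manifests/setup
```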

Wait a few minutes before the next step; it may be necessary to run the command multiple times for all components to be created successfully.

Create the remaining resources
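Again per the kube-prometheus README, this brings up everything else, including the two Couchbase files we placed in the manifests directory:

```shell
# Create Prometheus, Alertmanager, Grafana, the exporters,
# and our Couchbase ServiceMonitor and Service
kubectl create -f manifests/
```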

Check monitoring namespaces

All pods in the monitoring namespace should reach the Running state.
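One way to inspect the pods in the monitoring namespace:

```shell
kubectl get pods -n monitoring
```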

Check that our ServiceMonitor has been created.
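Since this tutorial keeps the ServiceMonitor in the default namespace:

```shell
kubectl get servicemonitors
```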

Check that our Service has been created.
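The Service lives in the same namespace as the Couchbase cluster (default in this tutorial):

```shell
kubectl get svc
```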

In the above output, we see not only the services but also the ports. We will use this information to forward these ports, as we did with the Couchbase Administration UI, in order to access these services.

To check that all is working correctly with the Prometheus Operator deployment, run the following command to view the logs:
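Assuming the operator's Deployment is named prometheus-operator, as it is in kube-prometheus:

```shell
kubectl logs -f deployment/prometheus-operator -n monitoring
```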

Port forwarding

We have already forwarded the Couchbase Admin UI port 8091 from one Couchbase node previously, but here it is again, this time from the Service point of view.

In addition to that port, we strictly need only access to the Grafana service on port 3000. However, let's access the Prometheus service on port 9090 as well, so that we can look at all the metrics from the different exporters and try a little PromQL, the Prometheus Query Language.

The three ports above should be sufficient. However, there is an additional advantage to looking at the metrics from each individual service. The Couchbase exporter exposes the Couchbase metrics on port 9091, so we can forward that port as well. Note that you really only need access to Grafana.
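A sketch of the port-forward commands: the Grafana and Prometheus service names are kube-prometheus defaults, but the Couchbase service names are assumptions, so substitute the names shown by kubectl get svc.

```shell
# Couchbase Admin UI (service name assumed)
kubectl port-forward svc/cb-example-ui 8091 &
# Grafana and Prometheus, as deployed by kube-prometheus
kubectl port-forward -n monitoring svc/grafana 3000 &
kubectl port-forward -n monitoring svc/prometheus-k8s 9090 &
# Optional: the couchbase-exporter metrics Service we created (name assumed)
kubectl port-forward svc/couchbase-metrics 9091 &
```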

Check out Prometheus Targets

Access: http://localhost:9090/targets

All Prometheus targets should be UP. There are quite a few of these since Kube-Prometheus deployed a bunch of exporters.

Check out the raw Couchbase Metrics

Access: http://localhost:9091/metrics

This output is useful as you can rapidly search through the list.

Try a basic PromQL query

In the above UI, click on Graph first.

The drop-down gives you the list of metrics scraped. This is the complete list of all the metrics scraped by all the exporters, which is a pretty daunting list. One method to narrow the list down to just the Couchbase metrics is, of course, to access the 9091 endpoint as previously described.
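Another method is a regex matcher on the metric name inside Prometheus itself. Assuming the Couchbase exporter's per-node bucket metrics share the cbpernodebucket_ prefix used later in this post, a query like this lists them all:

```
{__name__=~"cbpernodebucket_.*"}
```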

Check out Grafana

Access: http://localhost:3000

The default user ID and password are admin/admin.

The kube-prometheus deployment of Grafana already has the Prometheus data source defined and a large set of default dashboards. Let's check out the default Node dashboard.

Build a Sample Grafana Dashboard to Monitor Couchbase Metrics

We will not be building a complete Dashboard, but a small sample with a few panels to show how it’s done. This dashboard will monitor the number of items in a bucket and the number of GET and SET operations.

Note: Please have the pillow-fight application running as described in Part 1. This will generate the operations which we are interested in monitoring.

Prometheus Metrics

Access: http://localhost:9091/graph

We are interested in the current items in a bucket. There are a couple of metrics which supply that, cluster-wide and per node. Let's use the per-node metric and allow Prometheus to handle all aggregations, as per best practices. Another advantage is that we can then show the current items in the bucket on a per-node basis, just to check whether our data set is skewed.

Let’s take a look at one element:
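A representative element might look like the following; the label values here are illustrative, not actual output from the post:

```
cbpernodebucket_curr_items{bucket="default",endpoint="metrics",instance="10.0.1.12:9091",job="couchbase-metrics",namespace="default",node="cb-example-0000.cb-example.default.svc:8091",pod="cb-example-0000",service="couchbase-metrics"}  7823
```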

In the above example, we are interested in these labels: bucket; part of node (the middle part, cb-example, which is the cluster name); and pod. We are also interested in service in order to filter. These will help us design a dashboard where we can view the metrics by bucket, node, or cluster.

The Sample Dashboard

Let's create a new blank sample dashboard.

Adding variables

Since we want the metrics per bucket, node and cluster, let’s add these variables so that they can be selected in a drop box.

The above example creates the variable bucket. Note the Query and Regex expression. 

Let’s create 2 more so that we have 3 variables:

The Query does not change for these 3, but here are the Regex expressions:
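As a sketch, the following Query and Regex values are assumptions consistent with the metric and labels used below, not necessarily the post's exact settings:

```
Query (same for all three variables): cbpernodebucket_curr_items

bucket  regex: /bucket="([^"]*)"/
node    regex: /pod="([^"]*)"/
cluster regex: /node="[^".]*\.([^".]*)\./
```

Grafana evaluates the Query against Prometheus and applies the Regex to each result to extract the distinct values offered in the drop-down.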

Creating a Panel

Create 3 Panels for Current Items, GETs and SETs

You can duplicate each panel and edit them. These are the queries:

Items Panel: sum(cbpernodebucket_curr_items{bucket=~"$bucket",pod=~"$node"}) by (bucket)

GETs Panel: sum(cbpernodebucket_cmd_get{bucket=~"$bucket",pod=~"$node"}) by (bucket)

SETs Panel: sum(cbpernodebucket_cmd_set{bucket=~"$bucket",pod=~"$node"}) by (bucket)

The completed sample Grafana Dashboard

This is how our final sample dashboard looks.

Clean Up

Finally clean up your deployment:
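Following the kube-prometheus README teardown (our two Couchbase files live in manifests/ and are removed by the same command):

```shell
kubectl delete --ignore-not-found=true -f manifests/ -f manifests/setup
```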

Resources:

Posted by Prasad Doddi

Prasad is a Senior Product Manager for Couchbase Supportability, Manageability and Tools. Prior to Couchbase, he worked at IBM in various departments including Development, QA, Support and Technical Sales. Prasad holds a master’s degree in Chem. Engg. from Clarkson University, NY.
