
This blog is part of a multi-part series that shows how to run your applications on Kubernetes. It uses Couchbase, an open source NoSQL distributed document database, as the Docker container.

This fourth part will show:

  • How to set up and start a Kubernetes cluster on Azure
  • Run a Docker container in the Kubernetes cluster
  • Expose a Pod on Kubernetes as a Service
  • Shut down the cluster

[Figure: Kubernetes cluster on Azure running a Couchbase container]

Many thanks to @colemickens for helping me through this recipe.

Install and Configure Azure CLI

Azure CLI is a command-line interface to develop, deploy, and manage Azure applications. It is needed to install the Kubernetes cluster on Azure.

  1. Install Node.
  2. Install the Azure CLI.
  3. Sign up for a free trial at https://azure.microsoft.com/en-us/free/.
  4. Log in to Azure using the azure login command.
  5. Get account information using the azure account show command.

    Note the values shown in place of XXX and YYY. These will be used to configure the Kubernetes cluster. A sketch of these commands follows this list.
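
A minimal sketch of these commands, assuming the Node-based azure-cli package from that era (exact install steps vary by platform):

    # Node.js can be installed from https://nodejs.org or a package manager.
    # Install the Azure CLI (the Node-based "azure" command) globally:
    npm install -g azure-cli

    # Log in to Azure (follows a device-login flow in the browser)
    azure login

    # Show the current subscription; note the subscription ID and tenant ID
    azure account show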

Start Kubernetes Cluster

  1. Download Kubernetes 1.2.4 and extract it.
  2. Configure the Kubernetes cluster for Azure (a sketch of these commands follows this list):

    Make sure to specify the appropriate values for XXX and YYY from the previous command. AZURE_SUBSCRIPTION_ID and AZURE_TENANT_ID are specific to Azure. These values can also be edited in cluster/azure/config-default.sh.
  3. Start the Kubernetes cluster:

    It starts four nodes of Standard_A1 size. Each node gives you 1 core, 1.75 GB RAM, and a 40 GB HDD.
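
A rough sketch of these steps; the download URL and the KUBERNETES_PROVIDER value are assumptions, while the AZURE_* variables and config-default.sh come from the notes above:

    # Download and extract the Kubernetes 1.2.4 release
    wget https://github.com/kubernetes/kubernetes/releases/download/v1.2.4/kubernetes.tar.gz
    tar xzf kubernetes.tar.gz
    cd kubernetes

    # Target the Azure provider scripts and pass the values noted earlier
    # (these can also be edited in cluster/azure/config-default.sh)
    export KUBERNETES_PROVIDER=azure
    export AZURE_SUBSCRIPTION_ID=XXX
    export AZURE_TENANT_ID=YYY

    # Bring up the cluster
    cluster/kube-up.sh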

Run Docker Container in Kubernetes Cluster on Azure

Now that the cluster is up and running, get a list of all the nodes:
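
For example:

    # List the nodes registered with the cluster
    kubectl get nodes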

Four instances are created as shown: one for the master node and three for the worker nodes. Azure Portal shows all the created artifacts in the Resource Group:
[Figure: Azure Portal showing the Kubernetes resource group]

More details about the created nodes are available:

[Figure: Azure Portal showing details of the created nodes]
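
Node details can also be inspected from the CLI, for example:

    # Show capacity, conditions, and addresses for every node
    kubectl describe nodes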

Create a Couchbase pod:
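
A sketch of the command; the Deployment name couchbase is an assumption, chosen to match the Service created later:

    # Create a Deployment (Kubernetes 1.2+) running the Couchbase image
    kubectl run couchbase --image=arungupta/couchbase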

Notice how the image name can be specified on the CLI. Pre-1.2 versions of Kubernetes created a Replication Controller with this command, as explained in Kubernetes on Amazon Web Services and Kubernetes on Google Cloud. Kubernetes 1.2 introduced Deployments, so this command creates a Deployment instead. Deployments simplify application deployment and management, including versioning, multiple simultaneous rollouts, aggregated status across all pods, application availability, and rollback.

The pod uses the arungupta/couchbase Docker image, which provides a pre-configured Couchbase server. Any Docker image can be specified here. The status of the pod can be watched:
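
For example:

    # Watch the pod status until it reaches Running
    kubectl get pods --watch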

Get more details about the pod:
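
A sketch; the pod name is generated, so look it up first:

    # Find the generated pod name, then describe it
    kubectl get pods
    kubectl describe pod <couchbase-pod-name>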

Expose Pod on Kubernetes as Service

Now that the pod is running, how do you access the Couchbase server? You need to expose the Deployment as a Service outside the Kubernetes cluster. Typically, this would be done using the command:
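
A sketch of that command, assuming the Deployment name couchbase and Couchbase's Web Console port 8091:

    # Expose the Deployment through a cloud load balancer
    kubectl expose deployment couchbase --port=8091 --target-port=8091 --type=LoadBalancer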

But Azure does not support --type=LoadBalancer at this time. This feature is being worked on and will hopefully be available in the near future. So in the meantime, we'll expose the Service as:
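
A sketch without the load balancer, which creates a default ClusterIP Service:

    # Expose the Deployment as a Service reachable inside the cluster
    kubectl expose deployment couchbase --port=8091 --target-port=8091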

Now proxy to this Service using the kubectl proxy command:
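
Port 9999 matches the URL used below:

    # Proxy the Kubernetes API (and Services) to localhost on port 9999
    kubectl proxy --port=9999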

And now this exposed Service is accessible at http://127.0.0.1:9999/api/v1/proxy/namespaces/default/services/couchbase/index.html. This shows the login screen of the Couchbase Web Console:
[Figure: Couchbase Web Console login screen]

Shutdown Kubernetes Cluster

Finally, shut down the cluster using the cluster/kube-down.sh script.
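
For example, from the same kubernetes directory with the same environment variables still set:

    # Tear down the cluster created by kube-up.sh
    cluster/kube-down.sh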

This script shuts down the cluster, but the Azure resource group needs to be explicitly removed. This can be done by selecting the Resource Group at portal.azure.com:
[Figure: deleting the Resource Group in the Azure Portal]

This is filed as #26601.

Further references …

Enjoy!

Author

Posted by Arun Gupta, VP, Developer Advocacy, Couchbase

Arun Gupta is the vice president of developer advocacy at Couchbase. He has built and led developer communities for 10+ years at Sun, Oracle, and Red Hat. He has deep expertise in leading cross-functional teams to develop and execute strategy, planning and execution of content, marketing campaigns, and programs. Prior to that he led engineering teams at Sun and is a founding member of the Java EE team. Gupta has authored more than 2,000 blog posts on technology. He has extensive speaking experience in more than 40 countries on myriad topics and is a JavaOne Rock Star for three years in a row. Gupta also founded the Devoxx4Kids chapter in the US and continues to promote technology education among children. An author of several books on technology, an avid runner, a globe trotter, a Java Champion, a JUG leader, NetBeans Dream Team member, and a Docker Captain, he is easily accessible at @arungupta.

One Comment

  1. I made good progress on trying this but hit a problem at the very end. I can manually hit the pools endpoint on the proxied connection and authenticate, but the web ui isn’t trying to use the proxied URL and is instead just trying 127.0.0.1:9999/pools (instead of the full proxied url). Did I miss a step? As an aside, I was able to use the --type=LoadBalancer which is now apparently available.
