When I started out using MongoDB in 2012 as the ops and architecture guy, I had a few key problems with how it is architected. As time has passed, I’ve observed others hitting similar challenges. A fundamental problem is that, by design, MongoDB does not fully utilize the servers/instances it is given. I hate provisioning three nice servers/instances only to have the application use a fraction of their resources, but that is exactly what MongoDB does. You can only write to the primary in a replica set. To scale reads, you have to read from the secondary nodes, and those reads are only eventually consistent. If you must have full consistency, you can only interact with the primary node of a replica set.

Couchbase, on the other hand, lets you scale out, up, or both. You won’t have servers underutilized by design. All the servers in the cluster are contributing to the performance of the application AND they still deliver high availability.

So let’s dive in and see how both of these databases handle this.

Scaling with MongoDB

At the root of the problem is MongoDB’s “master-slave” architecture. If you have three physical servers, for example, and you run a MongoDB cluster as a replica set (with the recommended one mongod instance per server), the application can only write to one of those three servers: the one running the primary. The other two servers in the replica set are purely slaves (“secondaries” in MongoDB terms) and are only there in case the primary goes down. You could certainly read data off those two secondaries to scale reads, but then your reads would no longer be strongly consistent. So, in the MongoDB-recommended approach, those secondary servers are underutilized (outside of background DB maintenance tasks) and barely loved by the application.
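
To make this concrete, here is a minimal sketch using the pymongo driver, assuming a hypothetical three-node replica set named rs0 on hosts db1 through db3. The routing behavior, not the names, is the point.

```python
# Sketch: how reads and writes route in a MongoDB replica set (pymongo).
# Host names and the replica set name "rs0" are hypothetical.
from pymongo import MongoClient, ReadPreference

client = MongoClient("mongodb://db1:27017,db2:27017,db3:27017/?replicaSet=rs0")

# Every write goes to the single primary, no matter how many servers exist.
client.mydb.orders.insert_one({"sku": "abc123", "qty": 2})

# Reads default to the primary too. To put the secondaries to work you must
# opt in to a weaker read preference and accept possibly stale data.
stale_ok = client.mydb.get_collection(
    "orders", read_preference=ReadPreference.SECONDARY_PREFERRED
)
doc = stale_ok.find_one({"sku": "abc123"})  # may lag behind the primary
```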

As the lonely secondary servers sit there, they must be configured with the same resources as the primary, since either may be called on to become the primary. In the meantime, they are hardly utilized by your application. Then let’s say your web site is chugging along fantastically, but database performance is starting to suffer and writes are taking longer than your SLA allows. Now it’s time to scale! With MongoDB, that means adding at minimum another four servers, and best practice calls for six. According to MongoDB’s best practices, you add another replica set (Shard #2): that’s three servers. Then there are the config servers that hold the metadata about which shard each chunk of data lives on; best practice calls for three more servers.

Well, at least with all those new servers you’ll have a lot more throughput for writes, correct? Not so fast! Even after adding six more servers, you are only up to two servers you can write to, not nine: the original primary (replica set #1) and the new primary (replica set #2). All you have really done is significantly amp up the complexity and maintenance overhead of your cluster. If you need another shard for more write performance, that is yet another three servers. And your application still does not get to play with two of the three servers in each replica set.
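
To ground this, here is a hedged sketch of what bringing Shard #2 online looks like from the driver, using pymongo against a mongos router. Every host, database, and shard-key name here is hypothetical.

```python
# Sketch: adding a shard to a MongoDB cluster via admin commands (pymongo).
# All names are hypothetical; these commands run against a mongos router.
from pymongo import MongoClient

mongos = MongoClient("mongodb://mongos1:27017")

# Shard #2 is an entire new three-node replica set you provisioned first.
mongos.admin.command("addShard", "rs2/db4:27017,db5:27017,db6:27017")

# Sharding must also be enabled per database and per collection, with a
# well-chosen shard key, before the new servers take any write load.
mongos.admin.command("enableSharding", "mydb")
mongos.admin.command("shardCollection", "mydb.orders", key={"sku": "hashed"})
```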

One way to get around the underutilization of bare-metal servers that I have observed is to put lots and lots of mongod processes on the same server, against MongoDB’s own recommendations. Performance issues aside, and even if virtual servers are used, what happens when one of those servers goes down? How many primaries and secondaries will you lose? Have you kept track of where all the primaries and secondaries are, to make sure each replica set is spread across the servers in the cluster well enough to withstand this? Talk about amping up your complexity; holy smokes, this does exactly that!
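
If you do go down that road, the bookkeeping falls on you. The following hypothetical script, using pymongo and the replSetGetStatus command, is the sort of thing you end up writing just to know which box each primary currently lives on.

```python
# Sketch: auditing primary placement when several mongod processes share
# servers. Hosts, ports, and replica set names are hypothetical.
from pymongo import MongoClient

replica_sets = {
    "rs0": "mongodb://db1:27017,db2:27017,db3:27017/?replicaSet=rs0",
    "rs1": "mongodb://db1:27018,db2:27018,db3:27018/?replicaSet=rs1",
}

# Walk every replica set and report which host:port its primary is on,
# so you can spot two primaries piling up on the same physical server.
for name, uri in replica_sets.items():
    status = MongoClient(uri).admin.command("replSetGetStatus")
    for member in status["members"]:
        if member["stateStr"] == "PRIMARY":
            print(f"{name}: primary is {member['name']}")
```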

Scaling with Couchbase

Let’s look at the same three-server scenario I discussed above, but this time with Couchbase in mind. Your application can read and write across all three servers’ worth of disks, memory, network, CPU cores, and so on. In Couchbase, your data is evenly distributed across the three servers and replicated between them. The Couchbase SDK your application uses knows where your data lives, where each Couchbase service runs in the cluster, and how to reach them all. Now take the same scaling scenario as above and consider your options in Couchbase. You can quickly add another server to the cluster with two clicks in the Admin Console OR with one CLI command.
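
Here is a minimal sketch of what that looks like from the application side, using the Couchbase Python SDK (the 2.x-era API). The host and bucket names are hypothetical.

```python
# Sketch: Couchbase Python SDK (2.x-era API). The client fetches the
# cluster map, so each operation goes directly to the server that owns
# the document's vBucket. Host and bucket names are hypothetical.
from couchbase.bucket import Bucket

# Bootstrap against any node; the client learns the full topology.
bucket = Bucket("couchbase://db1,db2,db3/orders")

# This write lands on whichever of the three servers owns the key's
# vBucket. All nodes take reads AND writes; none sit idle as standbys.
bucket.upsert("order::1001", {"sku": "abc123", "qty": 2})

# Reads served by the owning node are consistent by default.
result = bucket.get("order::1001")
print(result.value)
```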

Compare this to the steps required to add a replica set with MongoDB. With Couchbase, when you need more capacity, you just add another server or two, then rebalance. Next week, if you need more capacity, you add more and rebalance again. See how easy that is? Your applications are always using all of the servers in the cluster. You do not even have to change your application, and you can add one or more servers any time you wish with no downtime. The bottom line is that you are not wasting servers or server resources. You can use every bit of each server if you want to, and scale out horizontally. You can develop your application against a single Couchbase node on your laptop and deploy it against a production cluster of any size with no extra effort.
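
For the curious, here is roughly what those two steps look like scripted against Couchbase’s REST API with Python’s requests library. The hosts and credentials are hypothetical, and this is a sketch of the workflow (the same operations the Admin Console and couchbase-cli perform), not a hardened script.

```python
# Sketch: scale-out workflow against the Couchbase REST API.
# Hosts and credentials are hypothetical.
import requests

admin = ("Administrator", "password")
base = "http://db1:8091"

# Step 1: introduce the new server to the cluster.
requests.post(
    f"{base}/controller/addNode",
    auth=admin,
    data={"hostname": "db4", "user": "Administrator", "password": "password"},
).raise_for_status()

# Step 2: rebalance. vBuckets move onto the new node while the cluster
# keeps serving traffic. knownNodes must list every node's internal id.
nodes = requests.get(f"{base}/pools/default", auth=admin).json()["nodes"]
known = ",".join(n["otpNode"] for n in nodes)
requests.post(
    f"{base}/controller/rebalance",
    auth=admin,
    data={"knownNodes": known, "ejectedNodes": ""},
).raise_for_status()
```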

Couchbase does have replica data as well, of course, but that replica data is used only for high availability. Why give up consistency for performance, as in MongoDB, when you can get both in Couchbase? And when it comes to lowering complexity when you do want to use VMs with Couchbase, you can use Rack/Zone Awareness.

Below is a quick visual comparison of what we are talking about here. Notice that in the MongoDB cluster with all those servers, there are only three servers you can write to out of the fourteen listed. 

[Figure: side-by-side comparison of a sharded MongoDB cluster (fourteen servers, three writable) and a Couchbase cluster]

Compare this with the Couchbase side and you see how much simpler, cleaner, and easier to manage it really is. For more information along these lines, see Manuel Hurtado’s blog post that contrasts setting up a production-ready cluster in Couchbase vs. MongoDB. There are also independent, third-party architectural comparisons that address the problem, such as this one.

Summary

If it is not obvious by now, MongoDB has some serious problems letting an application fully utilize all those expensive servers/instances. Yes, there are some workarounds, but most ops people will advise against stacking mongod processes like that; even with virtualization, going that dense is just not a good idea. On top of all of that, this is not something MongoDB is likely to fix any time soon, as it is baked into their core architecture. Couchbase, on the other hand, is architected to fully utilize every server you give it, spreading the data and load evenly across the cluster. All that with ease of management and the ability to operate at any scale.

Server utilization is an important issue because the combined costs of database licenses, hardware, management overhead, and facilities can quickly spiral out of control. To put some numbers around this, management overhead aside, it costs over $700 per year just to power (not cool) an average commodity server in your own datacenter. And if you go to the cloud for capacity, total cost can approach $8,000-$10,000 per instance, per year. This is truly an example where architectural decisions have significant cost and complexity implications. So if you are up for it, go download Couchbase, load it up, and see how easy it is to squeeze the power out of those servers with Couchbase.

Author

Posted by Kirk Kirkconnell, Senior Solutions Engineer, Couchbase

Kirk Kirkconnell was a Senior Solutions Engineer at Couchbase working with customers in multiple capacities to assist them in architecting, deploying, and managing Couchbase. His expertise is in operations, hosting, and support of large-scale application and database infrastructures.
