The first part of this series gives an overview of the considerations that should be taken into account when sizing a Couchbase Server 2.0 cluster for production.  The second part takes a deeper look at specific use cases and scenarios.  The third covers some hardware considerations, and the fourth discusses the metrics to monitor for sizing and for deciding when to grow the cluster.

When looking to deploy a Couchbase cluster, perhaps the most common (and important) question that comes up is: How many nodes do I need?

While there are obviously many variables that go into this, the basic idea is to evaluate the overall performance and capacity requirements for your workload and dataset, and then divide that into the hardware and resources you have available.  For a high-level and visual discussion on these factors as well as general practices for running a Couchbase cluster in production, see one of my recent Couchbase Conference presentations: Couchbase Server 2.0 in Production 24×7

The sizing of your Couchbase cluster is going to be critical to its stability and performance.  Your application wants as many reads as possible served from cache, and enough IO capacity to handle its writes.  There needs to be enough capacity in all the various areas to support everything else the system is doing while maintaining the required level of performance.

Throughout this discussion, I will refer to 5 determining factors one should be aware of when sizing a Couchbase cluster: RAM, disk (IO and space), CPU, network bandwidth and overall data distribution.

  1. RAM:  Frequently one of the most crucial areas to size correctly, RAM is what enables Couchbase to be so fast.  Cached documents allow reads to be served at very low latency and high throughput, and available RAM does the same for writes.

We will soon have an updated sizing calculator, but the amount of RAM needed will be an aggregation of: your active and replica data sets, the metadata required for tracking all of these (about 64 bytes of overhead for each item, plus the key lengths), how much of the total dataset needs to be cached in RAM (your “working set”), and any overhead needed for the OS to do a good job of caching disk IO.
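As a rough, illustrative sketch of that aggregation (not the official calculator), here is how you might put it together in Python; the item counts, sizes, working-set fraction and headroom figure are all made-up placeholders to substitute with your own numbers:

```python
# Back-of-envelope RAM estimate for a Couchbase cluster (illustrative numbers only).

num_items       = 50_000_000   # active documents
avg_key_bytes   = 40           # average key length
avg_value_bytes = 2_048        # average document size
replicas        = 1            # replica copies kept in the cluster
metadata_bytes  = 64           # approximate per-item metadata overhead
working_set_pct = 0.30         # fraction of the data that must stay cached
headroom_pct    = 0.30         # spare RAM for the OS to cache disk IO, plus growth

copies = 1 + replicas

# Metadata (plus keys) for every active and replica item is always resident in RAM.
metadata_ram = num_items * copies * (metadata_bytes + avg_key_bytes)

# Only the working set of the document values needs to be cached.
cached_values_ram = num_items * copies * avg_value_bytes * working_set_pct

total_ram = (metadata_ram + cached_values_ram) * (1 + headroom_pct)
print(f"Estimated cluster RAM: {total_ram / 2**30:.1f} GiB")
```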

With Couchbase Server 2.0, we have reduced the per-item metadata overhead.  We’ve also greatly enhanced the “ejection” algorithm to use an NRU (not recently used) bit which is designed to make the object-caching layer more intelligent about what data should be kept in RAM based upon the application workload.

When taking advantage of our new view/index feature, you’ll also want to reserve RAM for the OS to do its best job at caching disk requests.  Much more on that in part 2.

  2. Disk: The requirements of your disk subsystem are broken down into two components: size and IO.
  • Size:  Couchbase Server 2.0 is going to have increased disk size requirements over 1.8.   This is typically not a problem since disk space is considered “cheap”, but it is important to take into consideration.

The switch from an in-place-update disk format to an append-only one means that every write (insert/update/delete) will create a new entry in the file(s).  This brings immense advantages in terms of reliability, performance, consistency of that performance and much improved warmup/startup times.  However, it also means that, left unchecked, the disk files would grow unbounded.

With 1.8, there was sometimes a performance penalty due to fragmentation of the on-disk files under workloads with frequent updates/deletes.  In 2.0, not only is that fragmentation penalty gone, we also have a built-in automatic compaction process that ensures only the relevant copies of data are left around and reduces the size of the on-disk files themselves.

Depending on workload, your required disk size may range anywhere from 2-3x your total dataset size (active and replica data combined) due to the append-only disk format.  Heavier update/delete workloads will increase the size more dramatically than insert and read heavy workloads.  In reality, the size is likely to grow and shrink significantly over the course of time as the automatic compaction process runs. The 2-3x number comes more from this need to expand rather than your data actually taking up more space on disk.
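To make that rule of thumb concrete, here is a minimal sketch; the dataset size and growth factor below are placeholder assumptions:

```python
# Rough disk-space estimate for the append-only format (illustrative only).

active_data_gb = 200    # size of the active dataset on disk
replicas       = 1
append_factor  = 3      # plan for 2-3x growth between compaction runs

disk_needed_gb = active_data_gb * (1 + replicas) * append_factor
print(f"Plan for roughly {disk_needed_gb} GB of data-file space across the cluster")
```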

There are also new space requirements when taking advantage of the view/indexing feature of Couchbase Server 2.0.  Again, much more on that in part 2.

  • IO: This will be a combination of your sustained write rate, the need for compacting the database files and anything else that requires disk access.  Couchbase Server will automatically buffer writes to the database in RAM and eventually persist them to disk.  Because of this, the software can accommodate much higher write rates than a disk is able to handle.  However, sustaining these writes will eventually require enough IO to get it all down to disk.
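As a back-of-envelope illustration of the sustained-write component (all inputs are made-up assumptions; compaction and other disk activity add to this):

```python
# Sustained write throughput the disks must eventually absorb (illustrative only).

writes_per_sec = 10_000   # sustained inserts/updates/deletes cluster-wide
avg_doc_bytes  = 2_048
replicas       = 1        # replica copies are also persisted on their nodes

mb_per_sec = writes_per_sec * avg_doc_bytes * (1 + replicas) / 2**20
print(f"Cluster-wide sustained write rate: ~{mb_per_sec:.0f} MB/s, before compaction overhead")
```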

You can easily configure the thresholds and scheduling of the compaction process to control when it kicks in (and when it doesn’t kick in) but keep in mind that the successful completion of compaction is critical to keeping the disk size in check.

I will discuss the specifics much more in part 2, but when taking advantage of the new features of Couchbase Server 2.0 such as views/indexing and cross-data center replication (XDCR), disk IO becomes much more critical to size correctly.

Lastly, you’ll want to ensure you have enough disk space and IO to take backups, as well as to handle anything else outside of Couchbase that needs space or accesses the disk.   Where possible, it is our recommended best practice to use the available configuration options to place the data files, indexes and the installation/config directories on separate drives/devices so that IO and space are allocated effectively.

  3. CPU: While typically not much of a concern with Couchbase Server 1.8, the new features of Couchbase Server 2.0 do require an increased amount of CPU.  Our object-based caching layer still allows for an extremely high throughput of requests at low latencies even when the CPU is taxed, but it will be important to ensure there is enough processing power to keep the rest of the system running smoothly.  We generally recommend at least 4 cores, with one extra core per bucket that is being replicated with XDCR and one extra core per design document (related to views); see the sketch just below this list.
  4. Network:  In most situations, this is not the determining factor for sizing a Couchbase cluster.  However, it is always important to understand what is going on at the network level and ensure that there is enough bandwidth between your application and the Couchbase nodes as well as between nodes themselves to serve the traffic.  With XDCR, it’s also important to ensure enough bandwidth between clusters (often spread geographically over a WAN).
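To put the CPU core guideline above into numbers, a tiny sketch (the bucket and design-document counts are made-up placeholders):

```python
# Rule-of-thumb per-node core count (placeholder inputs).

base_cores       = 4
xdcr_buckets     = 2   # buckets being replicated with XDCR
design_documents = 3   # design documents containing views

cores_per_node = base_cores + xdcr_buckets + design_documents
print(f"Recommend at least {cores_per_node} cores per node")
```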

The network is almost always the final bottleneck impacting latency.  For example, you should expect individual document reads/writes into/out of cache to have 99th-percentile response times of around 1-2ms in Amazon AWS, 500-900us on a standard gig-e network and sub-200us on a 10G network.  All of this indicates that Couchbase Server itself is able to provide extremely low latencies; it is the network that usually tacks on the additional time.  Keep in mind that your mileage may vary, and use these numbers for general comparison purposes only.

  5. Data distribution: Regardless of everything else, it’s always important to make sure that your data is safe.  Couchbase is by design a distributed system and can provide very linear scaling and data redundancy only when allowed to do so effectively.

At the extreme, a single node may be able to meet all of your RAM/disk/CPU/network requirements.  However, that creates an obvious single point of failure.

Adding a second node allows for replication but is still not ideal.  On one hand, the failure of either node still leaves you with a single point of failure.  On the other, the eventual need to scale is better served by having more nodes in the cluster, since each one then has less and less data to move.

It’s for these reasons that we always recommend at least 3 nodes in a cluster regardless of the other sizing factors.

There will always be one factor that dictates the highest level of requirement, and therefore “determines” the number of nodes needed.  For example, a relatively small dataset with very high write workload will demand more nodes due to disk throughput requirements rather than overall dataset size.  On the other hand, a read-heavy workload on a relatively large dataset will demand more nodes due to RAM requirements.
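One way to picture this is a small sketch that derives a node count from each factor independently and then takes whichever is largest; the per-node capacities and cluster-wide totals below are placeholder assumptions (the totals could come from the earlier estimates):

```python
import math

# The resource that demands the most nodes "determines" the cluster size (illustrative).

cluster_ram_gb  = 480    # total RAM required (e.g. from the RAM estimate above)
cluster_disk_gb = 1200   # total data-file space required
cluster_io_mb_s = 80     # sustained disk write rate required

ram_per_node_gb  = 64
disk_per_node_gb = 500
io_per_node_mb_s = 40

nodes_for_ram  = math.ceil(cluster_ram_gb / ram_per_node_gb)    # 8
nodes_for_disk = math.ceil(cluster_disk_gb / disk_per_node_gb)  # 3
nodes_for_io   = math.ceil(cluster_io_mb_s / io_per_node_mb_s)  # 2

# Never fewer than 3 nodes, for replication and data distribution.
node_count = max(nodes_for_ram, nodes_for_disk, nodes_for_io, 3)
print(f"Suggested node count: {node_count}")
```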

All of the above factors benefit from, and scale linearly with, adding more nodes.  Not only are the application reads and writes sharded evenly across the nodes, but things like compaction, rebalance, view/index updates, and XDCR are also heavily advantaged by having more nodes.  Each node only has to operate on its local dataset, so disk- and CPU-intensive processes can all happen in parallel.

While everyone’s requirements and resource availability vary, you should always err on the side of more, smaller nodes as opposed to forcing Couchbase into a scale-up architecture of a few very expensive nodes.

Finally, guidelines and calculators can only do so much when providing sizing numbers on paper.  It is best to test out the behavior of your specific application at varying levels of scale and constantly monitor the system to ensure it starts and stays sized appropriately.

Part 2 of this series goes into much more depth on the considerations involved when deploying (or upgrading to) a Couchbase Server 2.0 cluster.

Till next time…

Author

Posted by Perry Krug

Perry Krug is an Architect in the Office of the CTO focused on customer solutions. He has been with Couchbase for over 8 years and has been working with high-performance caching and database systems for over 12 years.
