It’s official. The leading NoSQL database vendors – DataStax, MongoDB, and Couchbase – have locked horns.

In the past 30 days,

  1. Couchbase published a benchmark on March 19th.
  2. MongoDB responded by publishing a benchmark on March 31st.
  3. DataStax responded by publishing a benchmark on April 13th.

It’s important to perform competitive benchmarks. When we sponsor a benchmark, we learn how well our database performs in a specific scenario with a specific configuration. When the competition sponsors a benchmark, we learn how well it performs in a different scenario with a different configuration. That being said, we all owe it to the community to ensure benchmarks represent real-world scenarios with proper configurations. And for that, we must be transparent.

While we’re happy DataStax recognized the importance of benchmarks, this benchmark does not provide an accurate representation of Couchbase Server performance. The lack of transparency in this benchmark raises a number of unanswered questions that create uncertainty about its validity.

1) Why was the cache misconfigured for Couchbase Server?

Couchbase Server supports two options for cache optimization – “value” ejection, the default option, and “full” ejection. This benchmark enabled the wrong cache optimization, “full” ejection, for this scenario – significantly limiting the read performance of Couchbase Server.

Couchbase Server caches the metadata and value when a document is inserted.

When “value” ejection is enabled, and there is not enough memory, Couchbase Server will eject the value of other documents but retain their metadata.

When “full” ejection is enabled, and there is not enough memory, Couchbase Server will eject both the metadata and the value of other documents.

We would recommend “value” ejection for the scenario in this benchmark because there was enough memory to cache the metadata for all 500M documents. This would have improved read performance by eliminating the unnecessary disk IO otherwise required to determine where data is stored on disk.
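For illustration, here’s a minimal sketch (Java 8) of creating a bucket with “value” ejection through the cluster REST API. The host, credentials, bucket name, and quota are placeholders, and the parameter names follow the Couchbase Server 3.0 bucket-creation endpoint as we understand it:

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class CreateBucket {
    public static void main(String[] args) throws Exception {
        // POST to the bucket-creation endpoint on any cluster node.
        URL url = new URL("http://127.0.0.1:8091/pools/default/buckets");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Authorization", "Basic " + Base64.getEncoder()
                .encodeToString("Administrator:password".getBytes(StandardCharsets.UTF_8)));
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        conn.setDoOutput(true);

        // evictionPolicy=valueOnly keeps all metadata in memory and ejects
        // only values under pressure -- the right choice when the metadata
        // for the whole data set fits in RAM, as it did in this benchmark.
        String body = "name=ycsb&bucketType=couchbase&ramQuotaMB=1024"
                + "&authType=sasl&saslPassword=&evictionPolicy=valueOnly";
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("HTTP " + conn.getResponseCode());
    }
}
```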

Couchbase Server and MongoDB are both capable of better read performance. Why was their read performance so poor in this benchmark?

The numbers don’t add up. How did Couchbase Server perform worse with a larger percentage of the data in memory and much smaller documents? The documents in the Couchbase benchmark were 1K; in the DataStax benchmark they were 100 bytes.

Couchbase Server Performance in (1) the Couchbase Benchmark and (2) the DataStax Benchmark

Benchmark                  Ops/Second/Core   Data per Node   Memory per Node   Nodes   Cores per Node
(1) Couchbase-sponsored    4,600             32GB            10GB              9       8
(2) DataStax-sponsored     375               50GB            23GB              8       4
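To put those numbers in perspective (assuming throughput scales linearly with cores): 4,600 ops/second/core across 9 nodes with 8 cores each works out to roughly 331K ops/second in the Couchbase-sponsored benchmark, while 375 × 8 nodes × 4 cores works out to roughly 12K ops/second in the DataStax-sponsored one. And the DataStax benchmark had the larger share of data in memory: 23GB of memory for 50GB of data (about 46%) versus 10GB for 32GB (about 31%).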

2) Why was the Couchbase Server YCSB client based on an outdated client library?

The GitHub repository shows that the YCSB client for Couchbase Server used in this benchmark was based on an old client library built for Couchbase Server 2.5. It should have been based on the current client library for Couchbase Server 3.0. The old client waited a minimum of 10 milliseconds before checking whether the data had been written to disk; the current client waits as little as 10 microseconds.

The write performance of Couchbase Server would have been better with a second-generation client.
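To illustrate the difference, here’s a minimal sketch of a durable write with the second-generation (2.x) Java SDK, the generation built for Couchbase Server 3.0. The endpoint, bucket name, and document are placeholders, not the benchmark’s actual client code:

```java
import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.Cluster;
import com.couchbase.client.java.CouchbaseCluster;
import com.couchbase.client.java.PersistTo;
import com.couchbase.client.java.document.JsonDocument;
import com.couchbase.client.java.document.json.JsonObject;

public class DurableWrite {
    public static void main(String[] args) {
        Cluster cluster = CouchbaseCluster.create("127.0.0.1");
        Bucket bucket = cluster.openBucket("default");

        // PersistTo.MASTER blocks until the document has been written to
        // disk on the active node. The 2.x SDK polls for persistence at a
        // much finer interval than the old client's 10 ms minimum.
        JsonObject user = JsonObject.create().put("name", "user1");
        bucket.insert(JsonDocument.create("user1", user), PersistTo.MASTER);

        cluster.disconnect();
    }
}
```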

3) Why was replication disabled for Cassandra?

DataStax configured Cassandra without replication – a configuration no one would recommend in the real world. It’s an unacceptable level of risk: if a node fails, its data is unavailable. Did DataStax disable replication to improve the write performance of Cassandra?

Cassandra is eventually consistent by default, while Couchbase Server and MongoDB enforce strong consistency by default. If Cassandra had been configured to replicate data, there would have been consistency issues. And if strong consistency had been enforced via quorum reads and writes, performance would have suffered.
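For comparison, here’s a sketch (ours, not DataStax’s configuration) of what a replicated, quorum-consistent setup looks like with the DataStax Java driver. The contact point is a placeholder, and the schema mirrors YCSB’s usertable:

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;

public class ReplicatedQuorumWrite {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect();

        // Store every row on three nodes so a single node failure does
        // not make data unavailable.
        session.execute("CREATE KEYSPACE IF NOT EXISTS ycsb WITH replication = "
                + "{'class': 'SimpleStrategy', 'replication_factor': 3}");
        session.execute("CREATE TABLE IF NOT EXISTS ycsb.usertable "
                + "(y_id text PRIMARY KEY, field0 text)");

        // QUORUM on reads and writes (R + W > N) gives strong consistency,
        // at the cost of extra latency per operation.
        SimpleStatement insert = new SimpleStatement(
                "INSERT INTO ycsb.usertable (y_id, field0) VALUES ('user1', 'value')");
        insert.setConsistencyLevel(ConsistencyLevel.QUORUM);
        session.execute(insert);

        cluster.close();
    }
}
```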

Couchbase Server replicates data to two nodes by default – a document is stored on the node that owns it and on two replicas. MongoDB replicates data to multiple nodes – a document is stored on the primary node and on secondary nodes. This benchmark states neither the number of replicas configured for Couchbase Server nor the number of secondary nodes deployed for MongoDB. We just don’t know, and this is why transparency is important: without that information, we are unable to reproduce the performance tests.

4) Were Cassandra writes durable?

Couchbase Server and MongoDB were configured for durable writes. It’s not clear whether Cassandra was. If it was not, Cassandra would have had a significant performance advantage at the expense of durability. By default, Cassandra writes are not durable: the commit log is synced to disk periodically (every 10 seconds), so an acknowledged write can be lost if the node fails before the next sync.
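For reference, commit log durability is controlled in cassandra.yaml. This is a sketch of the relevant settings as we understand the Cassandra 2.x defaults, not the configuration DataStax actually used, which the benchmark does not disclose:

```yaml
# Default: the commit log is buffered and fsynced every 10 seconds,
# so writes acknowledged between syncs can be lost in a crash.
commitlog_sync: periodic
commitlog_sync_period_in_ms: 10000

# Durable alternative: fsync before acknowledging, batching writes
# that arrive within a small window.
# commitlog_sync: batch
# commitlog_sync_batch_window_in_ms: 2
```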

5) Why wasn’t replication used to achieve durability?

We recommend enabling replication to achieve durability. Data replicated to multiple nodes is durable because it will not be lost if a node fails. In this benchmark, the data was stored on a single node, so it was not durable unless it was written to disk. However, a distributed database like Cassandra or Couchbase Server can store the data on multiple nodes… and that’s what we recommend.
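As a sketch of what we mean (again with the 2.x Java SDK; endpoint, bucket, and document are placeholders), durability via replication is a per-operation choice in Couchbase Server:

```java
import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.Cluster;
import com.couchbase.client.java.CouchbaseCluster;
import com.couchbase.client.java.ReplicateTo;
import com.couchbase.client.java.document.JsonDocument;
import com.couchbase.client.java.document.json.JsonObject;

public class ReplicatedWrite {
    public static void main(String[] args) {
        Cluster cluster = CouchbaseCluster.create("127.0.0.1");
        Bucket bucket = cluster.openBucket("default");

        // ReplicateTo.ONE blocks until the document has been copied to at
        // least one replica, so it survives the loss of the active node
        // without waiting on disk IO.
        JsonObject user = JsonObject.create().put("name", "user1");
        bucket.upsert(JsonDocument.create("user1", user), ReplicateTo.ONE);

        cluster.disconnect();
    }
}
```

PersistTo and ReplicateTo can also be combined in a single call when both disk durability and replica durability are required.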

Extra Credit: Defend MongoDB

That’s right, we’re defending MongoDB.

First, we’re surprised DataStax configured MongoDB with range-based partitioning knowing the keys were incremental (“user1”, “user2”, “user3”). With range-based partitioning, incremental keys concentrate all inserts on a single shard. MongoDB 2.4 introduced hash-based partitioning to avoid exactly this problem.
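For illustration, switching to hash-based partitioning is a one-time admin command. This is a sketch with the 2.x MongoDB Java driver, assuming a mongos router on localhost and a hypothetical ycsb.usertable collection:

```java
import com.mongodb.BasicDBObject;
import com.mongodb.DB;
import com.mongodb.MongoClient;

public class HashedSharding {
    public static void main(String[] args) throws Exception {
        MongoClient mongo = new MongoClient("127.0.0.1"); // mongos router
        DB admin = mongo.getDB("admin");

        // Shard on a hashed _id so incremental keys ("user1", "user2", ...)
        // spread evenly across shards instead of piling onto one chunk.
        admin.command(new BasicDBObject("enableSharding", "ycsb"));
        admin.command(new BasicDBObject("shardCollection", "ycsb.usertable")
                .append("key", new BasicDBObject("_id", "hashed")));

        mongo.close();
    }
}
```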

Second, we’re surprised DataStax published a benchmark that includes a release candidate of MongoDB. We believe benchmarks should be limited to generally available releases – not milestone, candidate, alpha, or beta releases. The Couchbase benchmark includes MongoDB 3.0 because we waited for the GA release.

Third, we don’t believe this benchmark represents MongoDB read performance well. After all, MongoDB executed up to 74K operations per second in the Couchbase benchmark – far more than the 2K ops/second demonstrated in this one. We don’t know why, and that’s because this benchmark lacks transparency.

Conclusion: To be of value, benchmarks need to be transparent and properly configured

It’s important to vendors, prospects, and the community that benchmarks be easy to reproduce. For that to happen, published benchmarks must include all of the client configuration and all of the database configuration. Given the lack of transparency, it would be difficult for us to reproduce this benchmark.

That being said, it’s difficult to benchmark databases. It’s not enough to know how to configure your own database; you have to know how to configure the other databases as well. After benchmarking Cassandra, we learned how to configure it better in subsequent benchmarks. We hope DataStax has learned how to better configure Couchbase Server.

If needed, we’re more than happy to help with their next benchmark.


Author

Posted by Shane Johnson, Director, Product Marketing, Couchbase

Shane K Johnson was the Director of Product Marketing at Couchbase. Prior to Couchbase, he held various roles in development and evangelism with a background in Java and distributed systems. He has consulted with organizations in the financial, retail, telecommunications, and media industries to design and implement architectures that relied on distributed systems for data and analysis.
