Apache Cassandra™ is a leading NoSQL database platform for modern applications. It offers continuous availability, high scalability and performance, strong security, and operational simplicity, all while lowering total cost of ownership, which has made it a proven choice for both technical and business stakeholders. Compared with other database platforms such as HBase, MongoDB, Redis, and MySQL, the linearly scalable Apache Cassandra™ delivers higher performance under heavy workloads.

The following benchmark tests provide a graphical, ‘at a glance’ view of how these platforms compare under different scenarios. When selecting a database, it is critically important to understand your use case and find the right fit. Below you will find the following three benchmarks, which look at write/read performance and performance at scale:

  • University of Toronto Benchmark

  • Netflix: Benchmarking Apache Cassandra Scalability

  • End Point Benchmark Configuration and Results

University of Toronto NoSQL Database Performance

In 2012, engineers at the University of Toronto conducted a thorough benchmarking analysis of several NoSQL platforms, including Apache Cassandra, HBase, MySQL, Redis, and Voldemort. The testing examined performance under a variety of workloads.

For a look at the details behind this analysis, as well as a complete write-up of the benchmark configurations used, the white paper Solving Big Data Challenges for Enterprise Application Performance Management provides all of the insight from this test. Overall, their results identified Apache Cassandra as the “clear winner throughout our experiments”.

A summary of throughput and latency results is available here:

  • Throughput for workload Read/Write

  • Throughput for workload Read/Scan/Write

  • Read latency for workload Read/Write

  • Write latency for workload Read/Write

If this benchmarking data from the University of Toronto is interesting, take a 10-minute Cassandra walkthrough to learn more.

Netflix: Benchmarking Apache Cassandra Scalability

Netflix ran a test designed to validate their tooling and automation scalability, as well as the performance characteristics of Cassandra. The results of their testing are provided below. For a more thorough write-up of the Netflix testing process, including configuration settings and commentary, visit their tech blog post titled Benchmarking Cassandra Scalability on AWS – Over a million writes per second.


End Point Benchmark Configuration and Results Summary

End Point, a database and open source consulting company, benchmarked the top NoSQL databases — Apache Cassandra, Apache HBase, and MongoDB — using a variety of different workloads on Amazon Web Services EC2 instances. This is an industry-standard platform for hosting horizontally scalable services such as the NoSQL databases that were tested. In order to minimize the effect of AWS CPU and I/O variability, End Point performed each test 3 times on 3 different days. New EC2 instances were used for each test run to further reduce the impact of any “lame instance” or “noisy neighbor” effect on any one test.

A summary of the workload analysis is available below. For a review of the entire testing process, including testing environment configuration details, see the NoSQL database benchmarking white paper by End Point.


Goals for the Tests

  1. Select workloads that are typical of modern applications
  2. Use data volumes that are representative of ‘big data’ datasets that exceed the RAM capacity for each node
  3. Ensure that all data written was done in a manner that allowed no data loss (i.e. durable writes), which is what most production environments require
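In Cassandra, write durability is controlled at the keyspace level via the durable_writes option, which determines whether writes go through the commit log. As a minimal sketch of how a benchmark keyspace might enable this (the keyspace name and replication settings here are illustrative assumptions, not End Point's actual configuration):

```
-- Hypothetical keyspace for a benchmark run; the 'ycsb' name and the
-- replication factor are illustrative, not taken from the benchmark.
CREATE KEYSPACE ycsb
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3}
  AND durable_writes = true;  -- true is the default; false bypasses the commit log
```

Leaving durable_writes at its default of true is what satisfies the no-data-loss requirement above; disabling it trades durability for write speed.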

Tested Workloads

The following workloads were included in the benchmark:

  1. Read-mostly workload, based on YCSB’s provided workload B: 95% read to 5% update ratio
  2. Read/write combination, based on YCSB’s workload A: 50% read to 50% update ratio
  3. Read-modify-write, based on YCSB workload F: 50% read to 50% read-modify-write
  4. Mixed operational and analytical: 60% read, 25% update, 10% insert, and 5% scan
  5. Insert-mostly combined with read: 90% insert to 10% read ratio
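These mixes map directly onto YCSB workload property files. As a sketch, the read-mostly mix (workload 1 above) corresponds to settings like the following; the operation proportions come from the ratios listed, while the record and operation counts are illustrative placeholders, not the benchmark's actual values:

```
# Read-mostly mix, modeled on YCSB's stock workloadb: 95% reads, 5% updates.
workload=com.yahoo.ycsb.workloads.CoreWorkload
readproportion=0.95
updateproportion=0.05
scanproportion=0
insertproportion=0
# recordcount/operationcount below are illustrative, not End Point's values.
recordcount=1000000
operationcount=10000000
requestdistribution=zipfian
```

The other workloads differ only in these proportion settings (e.g. the 50/50 read/update mix sets readproportion and updateproportion to 0.5 each).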


Throughput Results