2. Agenda
1. This presentation covers performance benchmarking for
Cassandra-based systems
2. Discuss benchmarking in general
3. Define an approach
4. Explore gotchas and things to look out for
5. Hear from you! (Prizes for best benchmarking stories)
3. Benchmarking
• Benchmark testing is the process of load testing a
component or an entire end-to-end IT system to determine
the performance characteristics of the application.
4. Benchmarking Properties
• Should be repeatable
• Should capture performance measurements from
successive runs
• Ideally there should be low variance between successive
tests
• Should highlight improvements or degradations caused by
system changes
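The repeatability property above can be checked with a quick calculation across runs. A minimal sketch in Python, using hypothetical throughput numbers:

```python
import statistics

# Hypothetical throughput results (ops/sec) from five successive
# runs of the same benchmark on the same hardware.
runs = [10450, 10320, 10510, 10390, 10480]

mean = statistics.mean(runs)
stdev = statistics.stdev(runs)
cv = stdev / mean  # coefficient of variation: run-to-run spread

# A small CV (say, under 5%) suggests the benchmark is repeatable;
# a large one means noise may be drowning out real changes.
print(f"mean={mean:.0f} ops/s, stdev={stdev:.0f}, cv={cv:.2%}")
```

If the CV is high, fix the environment (warm-up, background jobs, shared hardware) before trusting any comparison between builds.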
5. Modern Systems
• More often than not distributed.
• Many different types of system components
• Complex performance constraints
• What is Easily Measured? Network, CPU, Memory, I/O
Utilisation
• More Difficult: tech-specific factors, e.g. in Cassandra
the impact of compaction and read performance
6. Justification for Benchmarking
• Simple:
• Will the system keep performing as the number of users grows?
• Complex:
• Cost Reduction
• Optimisation
• Growth Projection
• TCO
8. Caveats
• The more information you have the better…
• Any investment in systemic testing is generally a good
investment
• Simplify the goals/outcomes for business
• Automate as much as possible and formalise test
procedure to ensure adherence to quality measures.
• Be as interested in percentiles as in mean values
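Why percentiles matter can be shown with a small sketch; the latency samples below are hypothetical:

```python
import statistics

# Hypothetical request latencies in ms: mostly fast, a few slow outliers.
latencies = [5] * 95 + [250] * 5

mean = statistics.mean(latencies)
cuts = statistics.quantiles(latencies, n=100)  # 99 percentile cut points
p50 = cuts[49]  # median
p99 = cuts[98]  # 99th percentile

# The mean hides the tail entirely; p99 exposes it.
print(f"mean={mean:.2f} ms, p50={p50} ms, p99={p99} ms")
```

Here the mean is about 17 ms, yet one request in twenty takes 250 ms; only the percentile view reveals what the slowest users actually experience.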
9. Requirements
• Discover resource constraints
• Discover modes of failure
• Guarantee operation outside of usual parameters
• Ensure SLAs are being met
• Ensure operation over longer periods is consistent.
10. Basic Approach
• Distinguish component benchmark from system
benchmark.
• Component benchmarks are important; they define a basic
SLA for inter-component operations.
• A system is the sum of all its parts, not just each
component: component performance does not imply system
performance.
• Take corrective action from the bottom up (network,
hardware, compute resources) as well as from the top
down (API design, data access patterns).
11. Holistic Approach
• The system exists to service business requirements, work
backwards from them.
• Define our benchmark from user perspective.
• Technical goals + business goals must align.
• The system must function in its entirety; it is not sufficient
to performance test each component in isolation.
12. 1. Define a Basic Traffic Model
• Example - Simple Storefront
• GET /product/list (50%)
• GET /product/{id} (20%)
• POST /product/{id}/order (20%)
• GET /orders/list (10%)
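A traffic model like the one above can be driven by weighted random sampling. A minimal Python sketch, where the endpoint names and weights come straight from the example storefront model:

```python
import random

# The basic traffic model above, as endpoint weights.
TRAFFIC_MODEL = {
    "GET /product/list": 0.50,
    "GET /product/{id}": 0.20,
    "POST /product/{id}/order": 0.20,
    "GET /orders/list": 0.10,
}

def next_request(rng: random.Random) -> str:
    """Pick the next endpoint to hit according to the traffic model."""
    endpoints = list(TRAFFIC_MODEL)
    weights = list(TRAFFIC_MODEL.values())
    return rng.choices(endpoints, weights=weights, k=1)[0]

rng = random.Random(42)  # seeded for repeatable runs
sample = [next_request(rng) for _ in range(10_000)]
share = sample.count("GET /product/list") / len(sample)
print(f"GET /product/list share: {share:.1%}")  # should be close to 50%
```

Seeding the generator keeps the workload repeatable between runs, which matters for the low-variance property discussed earlier.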
13. 2. Define a User Profile
• User Type 1
• Browse heavy
• GET /product/list (70%)
• GET /product/{id} (20%)
• POST /product/{id}/order (5%)
• GET /orders/list (5%)
• User Type 2
• Compulsive buyers
• GET /product/list (30%)
• GET /product/{id} (20%)
• POST /product/{id}/order (30%)
• GET /orders/list (20%)
14. Peak Periods?
• Adding an hourly activity model allows for a more useful
benchmark.
• Can be expressed as active user count.
• Very simple to assign a probability to the number of each
type of user on the system at that time.
• E.g. 20% type 1, 80% type 2.
• The ideal circumstance is to use real data for these
models if any is available.
• Distributed load drivers coordinate to meet the hourly user
count.
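The hourly user count and type mix described above might be simulated along these lines; the hour buckets, user counts, and the 20%/80% split are illustrative, not real data:

```python
import random

# Hypothetical hourly profile: active users per hour of day.
HOURLY_ACTIVE_USERS = {9: 200, 12: 800, 18: 1500, 21: 600}
# E.g. 20% type 1 (browse heavy), 80% type 2 (compulsive buyers).
USER_TYPE_MIX = {"browser": 0.20, "buyer": 0.80}

def spawn_users(hour: int, rng: random.Random) -> list:
    """Assign a user type to each simulated session for this hour."""
    count = HOURLY_ACTIVE_USERS.get(hour, 0)
    return rng.choices(
        list(USER_TYPE_MIX), weights=list(USER_TYPE_MIX.values()), k=count
    )

rng = random.Random(7)
users = spawn_users(18, rng)  # peak hour in this made-up profile
buyer_share = users.count("buyer") / len(users)
print(f"{len(users)} users at 18:00, buyer share: {buyer_share:.1%}")
```

Distributed load drivers would each run a share of the hourly count, coordinating so the total matches the model.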
18. Considerations
• Cassandra’s append-only write path means writes are
consistently fast, given sufficient resources
• Compaction has a different impact depending on the
strategy you use (STCS lighter than LCS).
• Pending compactions tend to back up more during load-
oriented testing
• Reads have a significant impact depending on:
• Spread of column mutations across SSTables
• Compaction strategy (STCS less efficient for above than LCS)
• No. of reads for same row key (whether we are exercising the key
cache or not)
• Our consistency level (same for writes)
19. Common Issues
• Poor query design (unbounded queries, abuse of ALLOW
FILTERING), anti-patterns.
• Poor capacity planning: disk, memory, CPU, etc.
• Many failed requests on coordinators may lead to
resources being over-used for hinted handoff.
• If a node is memory constrained you may get JVM pauses
due to garbage collection
• Poor network connectivity and incorrect consistency
levels may lead to more timeouts.
• It is possible to have hotspots in Cassandra if you have
not modelled keys correctly.
20. What to collect during test?
• Read / Write latency per CF (nodetool cfstats)
• No. of reads / writes (nodetool cfstats)
• No. of pending compactions
• Thread Pool usage, especially pending tasks (nodetool tpstats)
• Correlate with
• Disk i/o
• CPU
• Memory usage
• Visualise as much as possible and use overlays for
correlation.
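Pending counts from `nodetool tpstats` can be scraped for correlation with OS metrics. A rough sketch that parses captured output; the sample text and column layout here are illustrative and vary between Cassandra versions:

```python
# Captured output of `nodetool tpstats` (illustrative sample; in a real
# harness you would collect this via subprocess at regular intervals).
SAMPLE_TPSTATS = """\
Pool Name                    Active   Pending      Completed   Blocked
ReadStage                         2         0        1250342         0
MutationStage                     4        37        9821345         0
CompactionExecutor                1        12          48211         0
"""

def pending_by_pool(tpstats_output: str) -> dict:
    """Map thread-pool name -> pending task count.

    Assumes single-token pool names and the column order shown above.
    """
    result = {}
    for line in tpstats_output.splitlines()[1:]:  # skip the header row
        parts = line.split()
        if len(parts) >= 3 and parts[1].isdigit():
            result[parts[0]] = int(parts[2])
    return result

pending = pending_by_pool(SAMPLE_TPSTATS)
print(pending)
```

Logging these numbers with timestamps makes it easy to overlay them on disk I/O, CPU, and memory graphs and spot which pool backs up first under load.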
21. Points to Remember
• Latency reported by Cassandra is internal, so it is only
useful for telling whether Cassandra I/O is performing
adequately. Graph it to get the most value, or use OpsCenter.
• Add metrics at every tier in your system, make sure it is
possible to correlate the above number with latency in
other parts of the system.
• Soak testing is critical with Cassandra, as empty-system
performance may be very different once disk utilisation and
compaction requirements grow.
• Experiment with settings for easy gains. Some CFs may
benefit from RowCache.