DSE Version: 6.0

Ring

In this unit we will learn about the ring, also known as the Apache Cassandra cluster. The ring is the heart of Apache Cassandra, and since Apache Cassandra is a clustering system, it is an important concept to understand.

Transcript: 

Hopefully you got your head around the single node and are ready to move on to the heart of Apache Cassandra: the ring, also known as the Apache Cassandra cluster. Because Apache Cassandra is a clustering system, this is an important concept to understand. You could run Apache Cassandra on a single node, but why would you? If you did, you would be missing out on all the cool things Apache Cassandra can do as a cluster! And what is the problem with a single node? Well, it would have to be a pretty big machine to handle the whole load, which means spending more money on bigger and bigger hardware. What we are talking about here is horizontal scaling.

If you are looking at my single node here, for a while, he’s handling the load just fine.  Reads, writes, everything is dandy.

But as the load increases, our poor little node is cracking under the pressure.  When this happens, the system falls apart. I am sure you’ve been in a situation like this before and it isn’t pretty.

So how does Apache Cassandra handle this situation?  We scale by adding more nodes. Easy, huh? If you need more capacity, you just add more nodes!  In a cloud environment, the cool thing is that you only need to rent more servers to handle the additional load.

So how does this Apache Cassandra cluster work? Let's say I have data coming into the cluster that needs to live on a particular node. What is the process there? Let me show you an example. Let's say I have partition 59 with some data. The cool thing is that you can write that data to any node in the cluster. That node becomes the coordinator node. The job of the coordinator is to send that data to the node that stores that partition. Just FYI, any node can act as a coordinator in the cluster. But how does it work? How does the coordinator know which node the data should go to?

What happens is that each node is responsible for a range of data. This is called a token range. Knowing which token range each node owns, the coordinator can send the data to the correct node: it receives the data from the client, sends it to the node that owns that partition, and then sends an acknowledgment back to the client.
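
You can see this mapping for yourself: in cqlsh, the token() function returns the ring position the partitioner assigns to each partition key. The same query shows up in the exercise below:

-- returns the token for each row's partition key (tag) alongside the tag itself
SELECT token(tag), tag FROM videos_by_tag;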

So how much data are you going to store in this cluster? A lot, I am betting. In the previous example, we broke the cluster's token range into 100 values. This was for demonstration purposes only, to make token ranges a little easier to understand. A real cluster needs many more values than 100.

In actuality, the range goes from negative 2 to the 63rd (-2^63) all the way around to 2 to the 63rd minus 1 (2^63 - 1). That should be plenty for you to store all the data you could possibly want to store. It's a really, REALLY big number.

How are the token values distributed across the ring? This is where the partitioner comes into play. The partitioner determines how you are going to distribute your data across the ring. If this gets messed up, you end up with hotspots in your cluster: the data is distributed unevenly, and some of your nodes are overloaded while other nodes are happy as clams. This is bad. Eventually your nodes will blow up and everyone will be unhappy. The best course of action here is to use the right partitioner for the job. In this case that is Murmur3 (or the older MD5-based partitioner), which distributes data across the ring in a random but even fashion. Murmur3 is the default.
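
For reference, the default partitioner is already set for you in cassandra.yaml, and you normally never change this line:

# Murmur3 hashes each partition key to a 64-bit token, spreading data evenly around the ring
partitioner: org.apache.cassandra.dht.Murmur3Partitioner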

Let's talk about when a node joins a cluster. This is a really cool thing when it comes to cluster operations. What makes it cool? No downtime. The cluster stays up and running while a new node joins. So what actually happens when a node joins the ring? The first thing it does is gossip out to the seed nodes, just a little shout out to say "Hey! I'm new here, what do I need to do?" The node calculates where it fits in the ring (this can be manual or automatic), and then the other nodes stream their data down to the new node. There are four states for each node: joining, leaving, up, and down. During the joining phase, the node is already receiving data, but it is not fully joined yet, so it is not ready for reads. What's neat is that the node joins while the cluster stays online.
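
Where does a new node find those seed nodes? They are listed in cassandra.yaml. A minimal single-seed entry looks like this (127.0.0.1 here is an assumption matching the lab environment used below):

seed_provider:
    # the new node contacts these addresses first to gossip and learn the ring
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "127.0.0.1"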

Now if you have all this data flying around your cluster, your drivers can really come in handy. When a driver connects to an Apache Cassandra cluster, it participates in the exchange of all this data. When the driver connects, it gets the lay of the land, learns the token ranges, and stores that information locally. The driver is then aware of which token range belongs to which node and which replicas. So why is it important for the client to know what is going on in the cluster when it comes to token ranges? Well, one of the driver policies is called the token aware policy: when data comes in, the driver says "oh, this node owns this token range, so I will send this data right to it." What's neat about that is that it eliminates the extra hop through a coordinator, making the system more efficient. There are other policies as well, such as round robin, which does rely on a coordinator, and the DC-aware round robin policy, which makes sure requests stay within the local data center. Apache Cassandra is topology aware; your driver should be as well.
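
Just to make that concrete, here is a minimal sketch of a token-aware connection using the DataStax Python driver. This is an illustration rather than part of the exercise; the contact point and data center name are assumptions matching the lab environment below.

from cassandra.cluster import Cluster
from cassandra.policies import TokenAwarePolicy, DCAwareRoundRobinPolicy

# TokenAwarePolicy wraps a child policy and sends each request straight to a
# replica that owns the partition's token range; DCAwareRoundRobinPolicy keeps
# traffic in the named local data center.
cluster = Cluster(
    contact_points=['127.0.0.1'],
    load_balancing_policy=TokenAwarePolicy(DCAwareRoundRobinPolicy(local_dc='Cassandra'))
)
session = cluster.connect()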

What does this all mean for scaling? Well, back to our single server environment. The single server can get overloaded, which is bad all the way around. At this point you have a choice. You could get bigger and bigger servers, which means downtime as you offload data and move everything to the bigger server, or! You can go back to what Apache Cassandra does best: you can add more nodes as the load grows. Take a look at this graph. As you add more nodes, you get more capacity. This is a read-modify-write load, and the same holds true for a read-mostly load or a read/write-balanced load. Just by adding nodes to the cluster, all of these behave about the same when it comes to scaling with Apache Cassandra. To sum up: you add more nodes, you get more capacity, which rocks. Being able to add more capacity online? No downtime? Wow.

We are always talking about this, but it's still important. Vertical scaling is still a thing: you can beef up the servers you have to add capacity. However, horizontal scaling is key here, and you always want efficiency. You can see here that two nodes can do a lot on their own, but if you double that, you get twice the capacity, and so on and so on. Every time you double the number of nodes, you double the capacity. Production clusters of 10, 100, or 1,000 nodes are built to match the load they are put against. A cool feature of Apache Cassandra is the ability to scale as you need to.

That being said, the opposite is true as well. If you scaled up your cluster for a time of increased workload, such as Black Friday, you can scale back down by decommissioning nodes when they are not needed anymore. This will save you money in the long run.
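
Decommissioning is a single nodetool command run on the node you want to remove; it streams that node's data to the remaining replicas before the node leaves the ring. For example, to retire node2 from the two-node cluster you build in the exercise below, you would run:

/home/ubuntu/node2/resources/cassandra/bin/nodetool decommission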


Exercise 7: Ring

In this exercise, you will:

  • Understand the Apache Cassandra™ token ring
  • Create a two-node cluster

One of the secrets to the performance of Apache Cassandra™ is its use of a ring that keeps track of tokens. This ring enables Apache Cassandra™ to know exactly which nodes contain which partitions. The ring also eliminates any single point of failure.

Steps

1) First, we will shut down your current node. Do so by executing the following command:

/home/ubuntu/node/resources/cassandra/bin/nodetool stopdaemon

Wait for the node to terminate before continuing.

2) Delete the /home/ubuntu/node folder by executing the following commands:

cd /home/ubuntu

rm -rf node

3) To make a two-node cluster, we will extract the DataStax Enterprise™ tarball twice, making two folders: node1 and node2. In your terminal, execute the following commands within the /home/ubuntu directory:

tar -xf dse-6.0.0-bin.tar.gz

mv dse-6.0.0 node1

labwork/config_node 1

tar -xf dse-6.0.0-bin.tar.gz

mv dse-6.0.0 node2

labwork/config_node 2

4) Open the /home/ubuntu/node2/resources/cassandra/conf/cassandra.yaml file in vi, nano or any other text editor.

5) Change the initial_token value and set it to 9223372036854775807 (note you need a space between the colon and the value). This value is 2^63 - 1, the maximum token, so this node will manage the second half of the token range: the positive tokens.

# initial_token allows you to specify tokens manually. While you can use it with
# vnodes (num_tokens > 1, above) -- in which case you should provide a
# comma-separated list -- it's primarily used when adding nodes to legacy clusters
# that do not have vnodes enabled.
initial_token: 9223372036854775807

Save the changes to the file and exit the text editor.

6) In your terminal, start the first node via the /home/ubuntu/node1/bin/dse cassandra command. Wait for the node to start up. Once it has done so, press Enter to get back to the prompt.

7) Run the /home/ubuntu/node1/resources/cassandra/bin/nodetool status command to verify the node is working properly.

ubuntu@ds201-node1:~$ /home/ubuntu/node1/resources/cassandra/bin/nodetool status
Datacenter: Cassandra
=====================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address    Load       Owns    Host ID                               Token                                    Rack
UN  127.0.0.1  114.83 KiB ?       e2ba30fc-1589-4ae4-8f98-69051151c44f  0                                        rack1

The UN indicates Up/Normal, meaning the node is ready to go. Load indicates current disk space usage. Owns indicates how much of the ring this node is responsible for (it is the only node in the ring at the moment). Token should be 0 (the same value set in the cassandra.yaml file). We discuss racks later in the course.

8) Start the second node via the /home/ubuntu/node2/bin/dse cassandra command. This node will take longer to bootstrap and join the cluster. Wait for the second node to finish bootstrapping before continuing.

9) Use /home/ubuntu/node1/resources/cassandra/bin/nodetool status again to view the current state of the cluster. Notice both nodes are now up and normal. If not, please consult your instructor.

10) Now let's recreate the two tables we made in previous exercises and import their data. Start cqlsh and execute the commands that follow:
CREATE KEYSPACE killrvideo WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1 };

USE killrvideo;

CREATE TABLE videos (id uuid, added_date timestamp, title text, PRIMARY KEY ((id)));

COPY videos(id, added_date, title) FROM '/home/ubuntu/labwork/data-files/videos.csv' WITH HEADER=TRUE;

CREATE TABLE videos_by_tag (tag text, video_id uuid, added_date timestamp, title text, PRIMARY KEY ((tag), added_date, video_id)) WITH CLUSTERING ORDER BY (added_date DESC);

COPY videos_by_tag(tag, video_id, added_date, title) FROM '/home/ubuntu/labwork/data-files/videos-by-tag.csv' WITH HEADER=TRUE;

11) Now let's determine which nodes own which partitions in the videos_by_tag table. Execute the following query:

SELECT token(tag), tag FROM videos_by_tag;

  • How many partitions are there?
  • On which node does each partition reside?

12) You can refresh your memory as to which nodes own which token ranges by running the following in the terminal:

/home/ubuntu/node1/resources/cassandra/bin/nodetool ring

13) You can further prove which partition resides on which node by executing the following nodetool commands in the terminal:

/home/ubuntu/node1/resources/cassandra/bin/nodetool getendpoints killrvideo videos_by_tag 'cassandra'

/home/ubuntu/node1/resources/cassandra/bin/nodetool getendpoints killrvideo videos_by_tag 'datastax'

getendpoints returns the IP addresses of the node(s) which store the partitions with the respective partition key value (the last argument in single quotes: cassandra and datastax respectively). Notice we must also supply the keyspace and table name we are interested in since we set replication on a per-keyspace basis. There is more on replication to come later in this course.
