Cassandra

Building a stream processing pipeline with Kafka, Storm and Cassandra – Part 3: Using CoreOS

In part 2 of this series, we learned about Docker and how you can use it to deploy the individual components of a stream processing pipeline by containerizing them. In the process, we also saw that it can get a little complicated. This part will show how to tie all the components together using CoreOS. We already introduced CoreOS in part 1 of this series, so go back and take a look if you need to familiarize yourself with it.
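As a taste of what that tying together looks like, here is a minimal sketch of a fleet unit file that could schedule the Cassandra container from part 2 onto a machine in a CoreOS cluster. The unit name, image tag and port mapping are illustrative assumptions, not the series' exact configuration.

    # cassandra@.service: a hypothetical fleet unit wrapping the Docker container
    # from part 2. Submit it with `fleetctl submit cassandra@.service` and start
    # instances with `fleetctl start cassandra@1.service`; fleet then schedules
    # each instance onto a machine in the CoreOS cluster.
    [Unit]
    Description=Cassandra node %i
    After=docker.service
    Requires=docker.service

    [Service]
    TimeoutStartSec=0
    # Remove any stale container from a previous run (the leading "-" ignores failure).
    ExecStartPre=-/usr/bin/docker rm -f cassandra-%i
    ExecStart=/usr/bin/docker run --name cassandra-%i -p 9042:9042 cassandra:2.1
    ExecStop=/usr/bin/docker stop cassandra-%i

    [X-Fleet]
    # Never place two Cassandra instances on the same machine.
    Conflicts=cassandra@*.service

The [X-Fleet] Conflicts line is what spreads the work across the cluster: fleet places each Cassandra instance on a different CoreOS machine.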


Building a stream processing pipeline with Kafka, Storm and Cassandra – Part 2: Using Docker Containers

In case you missed it, part 1 of this series introduced the applications that we’re going to use and explained how they work individually. In this post, we’ll see how to run Zookeeper, Kafka, Storm and Cassandra clusters inside Docker containers on a single host, using Ubuntu 14.04 LTS as the base operating system. Docker is a software platform for packaging and deploying applications, which then run on the host operating system in their own isolated environments.
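To make that concrete, here is a minimal sketch of how one containerized component might be started on the Ubuntu host. The image tag and port mapping are assumptions for illustration, not necessarily the exact commands used in the post.

    # Pull the official Cassandra image and start one node as a detached container;
    # 9042 is Cassandra's native CQL port. (Illustrative only: the image tag and
    # options are assumptions, not the post's exact setup.)
    docker pull cassandra:2.1
    docker run -d --name cassandra -p 9042:9042 cassandra:2.1

    # Check that the container is up and inspect its startup logs.
    docker ps
    docker logs cassandra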


Building a stream processing pipeline with Kafka, Storm and Cassandra – Part 1: Introducing the components

When done right, computer clusters are very powerful tools. They can bring great advantages in speed, scalability and availability, but the extra power comes at the cost of additional complexity. If you don’t stay on top of that complexity, you’ll soon get bogged down and risk losing the benefits that clustering brings. In this three-part series, we’re going to explain how you can simplify the setup and operation of a computing cluster.
