So the weekend is here. Fantastic. What are we up to on our Saturday? Well, we’re welcoming our friends from Crate.io for a Kubernetes hackathon at the Endocode HQ. Having already integrated with Mesos and Docker, Crate is kick-starting its advanced support for Google’s orchestration tool with this hackathon. There will be introductory talks about Crate and Kubernetes during the day, plus food, drinks and cluster hacking on Google Compute Engine, Kubernetes and Crate.
Alan Kay: “Simple things should be simple, complex things should be possible.”

Since Google opened up the Kubernetes (k8s) project, it has never been easier to run your own application in a cloud, public or private. And because it is both easy and very useful, we will show you how to do it. You can run it on nearly every public cloud or, with minimal changes, in your own private data center.
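To give a taste of how simple the basic case is, here is a minimal Kubernetes manifest for running a single container. The name, labels and image are illustrative placeholders, not part of any specific deployment from the post:

```yaml
# pod.yaml — a minimal sketch; "my-app" and the image are placeholders
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  containers:
    - name: my-app
      image: nginx        # any container image works here
      ports:
        - containerPort: 80
```

A `kubectl create -f pod.yaml` then schedules the container onto a node, and the same manifest works unchanged on any cloud or data center that hosts a Kubernetes cluster.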
In part 2 of this series, we learned about Docker and how you can use it to deploy the individual components of a stream processing pipeline by containerizing them. In the process, we also saw that it can get a little complicated. This part will show how to tie all the components together using CoreOS. We already introduced CoreOS in part 1 of this series, so go back and take a look if you need to familiarize yourself.
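On CoreOS, services are typically tied together with fleet, which schedules systemd units across the machines in the cluster. A minimal unit for one pipeline component might look like the sketch below; the unit name and the container image are illustrative assumptions, not the exact files used in the series:

```ini
# zookeeper.service — a fleet/systemd unit sketch; the image name is an assumption
[Unit]
Description=ZooKeeper container
After=docker.service
Requires=docker.service

[Service]
TimeoutStartSec=0
ExecStartPre=-/usr/bin/docker rm -f zookeeper
ExecStart=/usr/bin/docker run --name zookeeper -p 2181:2181 jplock/zookeeper
ExecStop=/usr/bin/docker stop zookeeper
```

Running `fleetctl start zookeeper.service` would then place the unit on one of the CoreOS machines, and similar units for Kafka, Storm and Cassandra can be scheduled alongside it.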
Building a stream processing pipeline with Kafka, Storm and Cassandra – Part 2: Using Docker Containers
In case you missed it, part 1 of this series introduced the applications that we’re going to use and explained how they work individually. In this post, we’ll see how to run Zookeeper, Kafka, Storm and Cassandra clusters inside Docker containers on a single host. We’re going to use Ubuntu 14.04 LTS as the base operating system.

Introducing Docker

Docker is a software platform for packaging and deploying applications, which then run on a host operating system in their own isolated environments.
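To make the packaging idea concrete, here is a minimal, hypothetical Dockerfile in the spirit of what we’ll be building — it layers a small script on top of the Ubuntu 14.04 base image; the file names are placeholders:

```dockerfile
# A minimal sketch; run.sh is an illustrative placeholder
FROM ubuntu:14.04

# Install a dependency inside the image, isolated from the host
RUN apt-get update && apt-get install -y curl

# Copy an application script into the image
COPY run.sh /opt/run.sh

# The command the container executes on start
CMD ["/bin/bash", "/opt/run.sh"]
```

With this in place, `docker build -t my-app .` builds the image and `docker run my-app` starts it in its own isolated environment on the host.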
Building a stream processing pipeline with Kafka, Storm and Cassandra – Part 1: Introducing the components
When done right, computer clusters are very powerful tools. They can bring great advantages in speed, scalability and availability. But the extra power comes at the cost of additional complexity. If you don’t stay on top of that complexity, you’ll soon become bogged down by it all and risk losing all the benefits that clustering brings. In this three-part series, we’re going to explain how you can simplify the setup and operation of a computing cluster.
Introduction

Computer clusters have been with us in one form or another for quite a few years now, but several trends have come together to make them incredibly important today. Low-cost commodity hardware, ubiquitous fast networking, and solid distributed systems software have all helped usher in today’s era of server farms and “Big Data”, which make clusters a critical tool. In a cluster, multiple computers (or nodes) are connected to each other through a network.