Upgraded to Kafka 1.1.0
pgraff committed Jun 3, 2018
1 parent 14d3c44 commit 49375de
Showing 16 changed files with 100 additions and 76 deletions.
21 changes: 11 additions & 10 deletions labs/01-Verify-Installation/docker-compose.yml
@@ -1,12 +1,13 @@
 version: '2'
 services:
-  zookeeper:
-    image: zookeeper:3.4.9
-  kafka:
-    image: wurstmeister/kafka:0.10.1.1
-    environment:
-      HOSTNAME_COMMAND: "echo $HOSTNAME"
-      KAFKA_ADVERTISED_PORT: 9092
-      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
-    depends_on:
-      - zookeeper
+  zookeeper:
+    image: wurstmeister/zookeeper:3.4.6
+    ports:
+      - 2181:2181
+  kafka:
+    image: wurstmeister/kafka:1.1.0
+    environment:
+      KAFKA_LISTENERS: PLAINTEXT://kafka:9092
+      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
+    ports:
+      - 9092:9092
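A quick way to confirm that the upgrade took effect is to check the client jar bundled inside the container; the wurstmeister image presumably unpacks the distribution under `/opt/kafka`, which is why the hard-coded `/opt/kafka_2.11-0.10.1.1` paths are replaced throughout this commit. A minimal sketch, assuming the compose file above is in the current directory:

```
# Start the lab 01 stack in the background.
docker-compose up -d

# The bundled client jar name should now report 1.1.0.
docker-compose exec kafka ls /opt/kafka/libs | grep kafka-clients
# expected output (roughly): kafka-clients-1.1.0.jar
```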
16 changes: 8 additions & 8 deletions labs/01-Verify-Installation/hello-world-kafka.md
@@ -13,7 +13,7 @@ In this lab, you will install Kafka with Docker and verify it is working by crea

 One of the easiest ways to get started with Kafka is through the use of [Docker](https://www.docker.com). Docker allows the deployment of applications inside software containers, which are self-contained execution environments with their own isolated CPU, memory, and network resources. [Install Docker by following the directions appropriate for your operating system.](https://www.docker.com/products/overview) Make sure that you can run both the `docker` and `docker-compose` commands from the terminal.

-## Alias
+## [OPTIONAL] Alias

 Because we use docker and docker-compose, the commands to run the Kafka CLI are absurdly long.

@@ -28,9 +28,9 @@ You may want to alias these commands. In Linux and Mac, you can simply create al
 For example, if you run bash, you can open the `~/.bash_profile` file with your favorite editor and enter something like this:

 ```
-alias ktopics='docker-compose exec kafka /opt/kafka_2.11-0.10.1.1/bin/kafka-topics.sh'
-alias kconsole-producer='docker-compose exec kafka /opt/kafka_2.11-0.10.1.1/bin/kafka-console-producer.sh'
-alias kconsole-consumer='docker-compose exec kafka /opt/kafka_2.11-0.10.1.1/bin/kafka-console-consumer.sh'
+alias ktopics='docker-compose exec kafka /opt/kafka/bin/kafka-topics.sh'
+alias kconsole-producer='docker-compose exec kafka /opt/kafka/bin/kafka-console-producer.sh'
+alias kconsole-consumer='docker-compose exec kafka /opt/kafka/bin/kafka-console-consumer.sh'
 ```

 When you start new shells, you can now simply run:
@@ -72,28 +72,28 @@ You are now running inside the container and all the commands should work (and a
 3. Open an additional terminal window in the lesson directory, `labs/01-Verify-Installation`. We are going to create a topic called `helloworld` with a single partition and one replica:

 ```
-$ docker-compose exec kafka /opt/kafka_2.11-0.10.1.1/bin/kafka-topics.sh --create --zookeeper zookeeper:2181 --replication-factor 1 --partitions 1 --topic helloworld
+$ docker-compose exec kafka /opt/kafka/bin/kafka-topics.sh --create --zookeeper zookeeper:2181 --replication-factor 1 --partitions 1 --topic helloworld
 ```

 4. You can now see the topic that was just created with the `--list` flag:

 ```
-$ docker-compose exec kafka /opt/kafka_2.11-0.10.1.1/bin/kafka-topics.sh --list --zookeeper zookeeper:2181
+$ docker-compose exec kafka /opt/kafka/bin/kafka-topics.sh --list --zookeeper zookeeper:2181
 helloworld
 ```

 5. Normally you would use the Kafka API from within your application to produce messages, but Kafka comes with a command-line _producer_ client that can be used for testing purposes. Each line from standard input will be treated as a separate message. Type a few messages and leave the process running.

 ```
-$ docker-compose exec kafka /opt/kafka_2.11-0.10.1.1/bin/kafka-console-producer.sh --broker-list kafka:9092 --topic helloworld
+$ docker-compose exec kafka /opt/kafka/bin/kafka-console-producer.sh --broker-list kafka:9092 --topic helloworld
 Hello world!
 Welcome to Kafka.
 ```

 6. Open another terminal window in the lesson directory. In this window, we can use Kafka's command-line _consumer_, which will output the messages to standard out.

 ```
-$ docker-compose exec kafka /opt/kafka_2.11-0.10.1.1/bin/kafka-console-consumer.sh --bootstrap-server kafka:9092 --topic helloworld --from-beginning
+$ docker-compose exec kafka /opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server kafka:9092 --topic helloworld --from-beginning
 Hello world!
 Welcome to Kafka.
 ```
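For readers following the lab, the produce/consume round trip above can also be scripted non-interactively; this is an illustrative sketch rather than part of the lab text, and it assumes the stack from the compose file above is running (`-T` disables the pseudo-TTY so stdin can be piped into the container):

```
# Publish one message to the helloworld topic from a pipe.
echo "Hello world!" | docker-compose exec -T kafka \
  /opt/kafka/bin/kafka-console-producer.sh --broker-list kafka:9092 --topic helloworld

# Read a single message back and exit instead of tailing the topic.
docker-compose exec kafka \
  /opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server kafka:9092 \
  --topic helloworld --from-beginning --max-messages 1
```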
2 changes: 1 addition & 1 deletion labs/02-Publish-And-Subscribe/consumer/pom.xml
@@ -12,7 +12,7 @@
 <dependency>
   <groupId>org.apache.kafka</groupId>
   <artifactId>kafka-clients</artifactId>
-  <version>0.10.1.1</version>
+  <version>1.1.0</version>
 </dependency>
 <dependency>
   <groupId>com.google.guava</groupId>
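After bumping the dependency it can be worth checking that Maven actually resolves the new client version; a minimal sketch, assuming a standard Maven installation and run from `labs/02-Publish-And-Subscribe/consumer`:

```
# Print the resolved org.apache.kafka:kafka-clients version from the dependency tree.
mvn -q dependency:tree -Dincludes=org.apache.kafka:kafka-clients
```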
6 changes: 3 additions & 3 deletions labs/02-Publish-And-Subscribe/docker/docker-compose.yml
@@ -1,17 +1,17 @@
 version: '2'
 services:
   zookeeper:
-    image: zookeeper:3.4.9
+    image: wurstmeister/zookeeper:3.4.6
     ports:
       - 2181:2181
   kafka:
-    image: wurstmeister/kafka:0.10.1.1
+    image: wurstmeister/kafka:1.1.0
     ports:
       - 9092:9092
       - 7203:7203
     environment:
       KAFKA_ADVERTISED_HOST_NAME: [INSERT IP ADDRESS HERE]
-      KAFKA_ADVERTISED_PORT: 9092
+      # KAFKA_ADVERTISED_HOST_NAME: localhost
       KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
     depends_on:
       - zookeeper
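The `[INSERT IP ADDRESS HERE]` placeholder still has to be replaced with the host's LAN address before starting the stack. A hedged sketch for looking it up (the interface name `en0` is an assumption and varies by machine):

```
# macOS: IPv4 address of the primary interface (often en0).
ipconfig getifaddr en0

# Linux: list the host's addresses and pick a non-loopback one.
hostname -I
```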
20 changes: 12 additions & 8 deletions labs/02-Publish-And-Subscribe/producer.md
@@ -24,17 +24,17 @@ All the directory references in this lab is relative to where you expended the l
 version: '2'
 services:
   zookeeper:
-    image: zookeeper:3.4.9
+    image: wurstmeister/zookeeper:3.4.6
     ports:
       - 2181:2181
   kafka:
-    image: wurstmeister/kafka:0.10.1.1
+    image: wurstmeister/kafka:1.1.0
     ports:
       - 9092:9092
       - 7203:7203
     environment:
       KAFKA_ADVERTISED_HOST_NAME: [INSERT IP ADDRESS HERE]
-      KAFKA_ADVERTISED_PORT: 9092
+      # KAFKA_ADVERTISED_HOST_NAME: localhost
       KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
     depends_on:
       - zookeeper
@@ -56,9 +56,9 @@ All the directory references in this lab is relative to where you expended the l
 inet6 fe80::cc8a:c5ff:fe43:b670%awdl0 prefixlen 64 scopeid 0x8
 inet6 fe80::7df6:ec93:ffea:367a%utun0 prefixlen 64 scopeid 0xa
 ```

 In this case, the IP address to use is `10.0.1.4`. Make sure you *do not* use `127.0.0.1` because that will not work correctly.
-
+
 On Windows, you can use the following command:

 ```
@@ -80,6 +80,10 @@ All the directory references in this lab is relative to where you expended the l
 Save the `docker-compose.yml` file after making this modification.

+> We have noticed on some configurations of Windows and Linux that the use of `KAFKA_ADVERTISED_HOST_NAME` does not work properly (the Kafka clients can't connect).
+> We've not found the source of this problem, but in many of the cases we've seen, the use of `localhost` instead of the host IP may work.
+> Note, though, that using `localhost` prevents you from running multiple Kafka brokers on the same machine.

 1. Start the Kafka and Zookeeper processes using Docker Compose:

 ```
@@ -89,14 +93,14 @@ All the directory references in this lab is relative to where you expended the l
 1. Open an additional terminal window in the lesson directory, `docker/`. We are going to create two topics that will be used in the Producer program. Run the following commands:

 ```
-$ docker-compose exec kafka /opt/kafka_2.11-0.10.1.1/bin/kafka-topics.sh --create --zookeeper zookeeper:2181 --replication-factor 1 --partitions 1 --topic user-events
-$ docker-compose exec kafka /opt/kafka_2.11-0.10.1.1/bin/kafka-topics.sh --create --zookeeper zookeeper:2181 --replication-factor 1 --partitions 1 --topic global-events
+$ docker-compose exec kafka /opt/kafka/bin/kafka-topics.sh --create --zookeeper zookeeper:2181 --replication-factor 1 --partitions 1 --topic user-events
+$ docker-compose exec kafka /opt/kafka/bin/kafka-topics.sh --create --zookeeper zookeeper:2181 --replication-factor 1 --partitions 1 --topic global-events
 ```

 1. List the topics to double check they were created without any issues.

 ```
-$ docker-compose exec kafka /opt/kafka_2.11-0.10.1.1/bin/kafka-topics.sh --list --zookeeper zookeeper:2181
+$ docker-compose exec kafka /opt/kafka/bin/kafka-topics.sh --list --zookeeper zookeeper:2181
 global-events
 user-events
 ```
@@ -107,7 +111,7 @@ All the directory references in this lab is relative to where you expended the l
 <dependency>
   <groupId>org.apache.kafka</groupId>
   <artifactId>kafka-clients</artifactId>
-  <version>0.10.1.1</version>
+  <version>1.1.0</version>
 </dependency>
 <dependency>
   <groupId>com.google.guava</groupId>
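Once both topics exist, `kafka-topics.sh --describe` shows their partition counts and leaders, which is a handy extra check that the upgraded broker registered them correctly; a sketch, run from the lab's `docker/` directory:

```
# Describe the two topics created above.
docker-compose exec kafka /opt/kafka/bin/kafka-topics.sh \
  --describe --zookeeper zookeeper:2181 --topic user-events
docker-compose exec kafka /opt/kafka/bin/kafka-topics.sh \
  --describe --zookeeper zookeeper:2181 --topic global-events
```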
2 changes: 1 addition & 1 deletion labs/02-Publish-And-Subscribe/producer/pom.xml
@@ -12,7 +12,7 @@
 <dependency>
   <groupId>org.apache.kafka</groupId>
   <artifactId>kafka-clients</artifactId>
-  <version>0.10.1.1</version>
+  <version>1.1.0</version>
 </dependency>
 <dependency>
   <groupId>com.google.guava</groupId>
(another changed file; name not captured in this view)
@@ -12,7 +12,7 @@
 <dependency>
   <groupId>org.apache.kafka</groupId>
   <artifactId>kafka-clients</artifactId>
-  <version>0.10.1.1</version>
+  <version>1.1.0</version>
 </dependency>
 <dependency>
   <groupId>com.google.guava</groupId>
(another changed file; name not captured in this view)
@@ -11,6 +11,7 @@ services:
       - 7203:7203
     environment:
       KAFKA_ADVERTISED_HOST_NAME: [INSERT IP ADDRESS HERE]
+      # KAFKA_ADVERTISED_HOST_NAME: localhost
       KAFKA_ADVERTISED_PORT: 9092
       KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
     depends_on:
(another changed file; name not captured in this view)
@@ -12,7 +12,7 @@
 <dependency>
   <groupId>org.apache.kafka</groupId>
   <artifactId>kafka-clients</artifactId>
-  <version>0.10.1.1</version>
+  <version>1.1.0</version>
 </dependency>
 <dependency>
   <groupId>com.google.guava</groupId>
(another changed file; name not captured in this view)
@@ -43,8 +43,8 @@ $ docker-compose up
 Next we'll simply create the topics. Open a new terminal in the `docker` directory.

 ```
-$ docker-compose exec kafka /opt/kafka_2.11-0.10.1.1/bin/kafka-topics.sh --create --zookeeper zookeeper:2181 --replication-factor 1 --partitions 1 --topic device-heartbeat
-$ docker-compose exec kafka /opt/kafka_2.11-0.10.1.1/bin/kafka-topics.sh --create --zookeeper zookeeper:2181 --replication-factor 1 --partitions 1 --topic device-event
+$ docker-compose exec kafka /opt/kafka/bin/kafka-topics.sh --create --zookeeper zookeeper:2181 --replication-factor 1 --partitions 1 --topic device-heartbeat
+$ docker-compose exec kafka /opt/kafka/bin/kafka-topics.sh --create --zookeeper zookeeper:2181 --replication-factor 1 --partitions 1 --topic device-event
 ```

 ## Build and run the device simulator
@@ -87,7 +87,7 @@ To see the messages, let's run our usual console consumer. In a new terminal `cd`

 ```
 $ cd docker
-$ docker-compose exec kafka /opt/kafka_2.11-0.10.1.1/bin/kafka-console-consumer.sh --bootstrap-server kafka:9092 --topic device-heartbeat
+$ docker-compose exec kafka /opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server kafka:9092 --topic device-heartbeat
 ```

 After a few seconds you should start to see heartbeat messages being produced. E.g.:
@@ -151,7 +151,7 @@ In a new shell, go to the `docker` directory.

 ```
 $ cd docker
-$ docker-compose exec kafka /opt/kafka_2.11-0.10.1.1/bin/kafka-console-consumer.sh --bootstrap-server kafka:9092 --topic device-event
+$ docker-compose exec kafka /opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server kafka:9092 --topic device-event
 ```

 It may take some time before you see online or offline messages (watch the device simulator and you'll notice the randomness of the heartbeat production).
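Instead of opening one console consumer per topic, the console consumer can also subscribe to a topic pattern; the following is an optional sketch (not part of the lab text) using the `--whitelist` regex option accepted by the 1.1.0 console consumer:

```
# Follow both device topics from a single consumer.
docker-compose exec kafka /opt/kafka/bin/kafka-console-consumer.sh \
  --bootstrap-server kafka:9092 --whitelist 'device-.*'
```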
41 changes: 41 additions & 0 deletions labs/06-Streaming/docker/docker-compose-with-spark.yml
@@ -0,0 +1,41 @@
+version: '2'
+services:
+  zookeeper:
+    image: zookeeper:3.4.9
+    ports:
+      - 2181:2181
+  kafka:
+    image: wurstmeister/kafka:0.10.1.1
+    ports:
+      - 9092:9092
+      - 7203:7203
+    environment:
+      KAFKA_ADVERTISED_HOST_NAME: [INSERT IP ADDRESS HERE]
+      # KAFKA_ADVERTISED_HOST_NAME: localhost
+      KAFKA_ADVERTISED_PORT: 9092
+      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
+    depends_on:
+      - zookeeper
+  master:
+    image: singularities/spark:2.0
+    command: start-spark master
+    hostname: master
+    volumes:
+      - ./spark:/app
+    environment:
+      HDFS_USER: root
+    depends_on:
+      - kafka
+  worker:
+    image: singularities/spark:2.0
+    command: start-spark worker master
+    environment:
+      SPARK_WORKER_CORES: 1
+      SPARK_WORKER_MEMORY: 2g
+      HDFS_USER: root
+    links:
+      - master
+    volumes:
+      - ./spark:/app
+    depends_on:
+      - master
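Since the Spark services now live in this separate file, `docker-compose up` will not start them by default; presumably they are brought up explicitly with the `-f` flag, along these lines:

```
# Start Kafka, Zookeeper and the Spark master/worker defined in the file above.
docker-compose -f docker-compose-with-spark.yml up
```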
27 changes: 2 additions & 25 deletions labs/06-Streaming/docker/docker-compose.yml
@@ -5,37 +5,14 @@ services:
     ports:
       - 2181:2181
   kafka:
-    image: wurstmeister/kafka:0.10.1.1
+    image: wurstmeister/kafka:1.1.0
     ports:
       - 9092:9092
       - 7203:7203
     environment:
       KAFKA_ADVERTISED_HOST_NAME: [INSERT IP ADDRESS HERE]
+      # KAFKA_ADVERTISED_HOST_NAME: localhost
       KAFKA_ADVERTISED_PORT: 9092
       KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
     depends_on:
       - zookeeper
-  master:
-    image: singularities/spark:2.0
-    command: start-spark master
-    hostname: master
-    volumes:
-      - ./spark:/app
-    environment:
-      HDFS_USER: root
-    depends_on:
-      - kafka
-  worker:
-    image: singularities/spark:2.0
-    command: start-spark worker master
-    environment:
-      SPARK_WORKER_CORES: 1
-      SPARK_WORKER_MEMORY: 2g
-      HDFS_USER: root
-    links:
-      - master
-    volumes:
-      - ./spark:/app
-    depends_on:
-      - master
-
2 changes: 1 addition & 1 deletion labs/06-Streaming/iot-kafka-solution/gps-pump/pom.xml
@@ -12,7 +12,7 @@
 <dependency>
   <groupId>org.apache.kafka</groupId>
   <artifactId>kafka-clients</artifactId>
-  <version>0.10.1.1</version>
+  <version>1.1.0</version>
 </dependency>
 <dependency>
   <groupId>com.google.guava</groupId>
6 changes: 3 additions & 3 deletions labs/06-Streaming/iot-kafka-solution/processor/pom.xml
@@ -12,12 +12,12 @@
 <dependency>
   <groupId>org.apache.kafka</groupId>
   <artifactId>kafka-clients</artifactId>
-  <version>0.10.1.1</version>
+  <version>1.1.0</version>
 </dependency>
 <dependency>
   <groupId>org.apache.kafka</groupId>
   <artifactId>kafka-streams</artifactId>
-  <version>0.10.1.1</version>
+  <version>1.1.0</version>
 </dependency>
 <dependency>
   <groupId>com.google.guava</groupId>
@@ -76,7 +76,7 @@

 <!-- (optional) name for binary executable, if not set will just -->
 <!-- make the regular jar artifact executable -->
-<programFile>wordcounter</programFile>
+<programFile>parking-processor</programFile>
 </configuration>

 <executions>
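With the executable renamed from `wordcounter` to `parking-processor`, the processor needs to be rebuilt for the new program file to appear; a hedged sketch (the exact output location depends on the Maven plugin configuration, which is not shown in this diff):

```
# Rebuild the stream processor; run from labs/06-Streaming/iot-kafka-solution/processor.
mvn clean package

# Look for the renamed executable among the build outputs.
find target -name 'parking-processor*'
```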