Merge pull request apache#2763 from druid-io/b-docs
clean up for extensions docs
fjy committed Mar 31, 2016
2 parents 5f9240f + 14dbc43 commit 2fc5918
Showing 9 changed files with 23 additions and 1 deletion.
2 changes: 2 additions & 0 deletions docs/content/development/extensions-contrib/azure.md
@@ -4,6 +4,8 @@ layout: doc_page

# Microsoft Azure

To use this extension, make sure to [include](../../operations/including-extensions.html) the `druid-azure-extensions` extension.
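
A minimal sketch of what including the extension looks like, assuming extensions are loaded through the `druid.extensions.loadList` property described in the linked page (the property name is an assumption here, not part of this diff):

```
# Assumed common.runtime.properties entry; see the including-extensions
# page for the authoritative mechanism.
druid.extensions.loadList=["druid-azure-extensions"]
```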

## Deep Storage

[Microsoft Azure Storage](http://azure.microsoft.com/en-us/services/storage/) is another option for deep storage. This requires some additional Druid configuration.
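
A minimal configuration sketch, assuming the `druid.storage.*`/`druid.azure.*` property names from this extension's configuration reference; the account, key, and container values are placeholders:

```
# Sketch only: select Azure deep storage and point it at a storage account.
druid.storage.type=azure
druid.azure.account=<azure-storage-account>
druid.azure.key=<azure-storage-key>
druid.azure.container=<azure-container-name>
```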
2 changes: 2 additions & 0 deletions docs/content/development/extensions-contrib/cassandra.md
@@ -4,6 +4,8 @@ layout: doc_page

# Apache Cassandra

To use this extension, make sure to [include](../../operations/including-extensions.html) the `druid-cassandra-storage` extension.

[Apache Cassandra](http://www.datastax.com/what-we-offer/products-services/datastax-enterprise/apache-cassandra) can also
be leveraged for deep storage. This requires some additional Druid configuration, as well as setting up the necessary
schema within a Cassandra keyspace.
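
A minimal sketch, assuming the extension's `c*` storage type and `druid.storage.host`/`druid.storage.keyspace` properties (values are placeholders; the keyspace schema must be created separately, as noted above):

```
# Sketch only: select Cassandra deep storage.
druid.storage.type=c*
druid.storage.host=localhost:9160
druid.storage.keyspace=druid
```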
4 changes: 4 additions & 0 deletions docs/content/development/extensions-contrib/distinctcount.md
@@ -4,6 +4,10 @@ layout: doc_page

# DistinctCount aggregator

To use this extension, make sure to [include](../../operations/including-extensions.html) the `druid-distinctcount` extension.

Additionally, follow these steps:

(1) First, use single-dimension hash-based partitioning to partition the data by a dimension, for example visitor_id. This ensures that all rows with a particular value of that dimension go into the same segment; otherwise the aggregator may over count.
(2) Second, use distinctCount to calculate the exact distinct count, making sure queryGranularity divides segmentGranularity exactly, or else the result will be wrong. (A hedged example of both steps follows below.)

There are some limitations. When used with groupBy, the number of groupBy keys in each segment should not exceed maxIntermediateRows, or the result will be wrong. When used with topN, numValuesPerPass should not be too big; if it is, distinctCount will use a great deal of memory and can crash the JVM.
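
A minimal sketch of both steps, assuming the `hashed` partitions spec with `partitionDimensions` and the `distinctCount` aggregator type named above; the target size, dimension name, and metric name are placeholders.

Step (1), a tuningConfig excerpt that hash-partitions batch ingestion by visitor_id:

```json
"partitionsSpec": {
    "type": "hashed",
    "targetPartitionSize": 5000000,
    "partitionDimensions": ["visitor_id"]
}
```

Step (2), an aggregation excerpt that computes the exact distinct count of visitor_id:

```json
"aggregations": [
    { "type": "distinctCount", "name": "uv", "fieldName": "visitor_id" }
]
```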
2 changes: 2 additions & 0 deletions docs/content/development/extensions-contrib/graphite.md
@@ -4,6 +4,8 @@ layout: doc_page

# Graphite Emitter

To use this extension, make sure to [include](../../operations/including-extensions.html) the `graphite-emitter` extension.

## Introduction

This extension emits Druid metrics to a Graphite Carbon server.
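
A minimal sketch, assuming the emitter is selected via `druid.emitter` and the Carbon endpoint is set with `druid.emitter.graphite.*` properties (hostname and port below are placeholders):

```
# Sketch only: route Druid metrics to a Carbon listener.
druid.emitter=graphite
druid.emitter.graphite.hostname=carbon.example.com
druid.emitter.graphite.port=2003
```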
2 changes: 2 additions & 0 deletions docs/content/development/extensions-contrib/kafka-simple.md
@@ -4,6 +4,8 @@ layout: doc_page

# Kafka Simple Consumer

To use this extension, make sure to [include](../../operations/including-extensions.html) the `druid-kafka-eight-simpleConsumer` extension.

## Firehose

This is an experimental firehose that ingests data from Kafka using the Kafka simple consumer API. Currently, this firehose only works inside standalone realtime nodes.
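
A hedged sketch of the firehose block inside a realtime spec, assuming this extension registers a `kafka-0.8-v2` firehose type (the type string, broker list, partition ids, and feed name here are assumptions/placeholders):

```json
"firehose": {
    "type": "kafka-0.8-v2",
    "brokerList": ["localhost:9092"],
    "partitionIdList": ["0"],
    "clientId": "druid-example",
    "feed": "wikipedia"
}
```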
6 changes: 6 additions & 0 deletions docs/content/development/extensions-contrib/parquet.md
@@ -1,5 +1,11 @@
---
layout: doc_page
---

# Parquet

To use this extension, make sure to [include](../../operations/including-extensions.html) both the `druid-avro-extensions` and `druid-parquet-extensions` extensions.

This extension enables Druid to ingest and understand the Apache Parquet data format in offline (batch) ingestion.

## Parquet Hadoop Parser
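
A hedged sketch of the Hadoop batch ingestion pieces involved, assuming the extension provides a `parquet` parser type and a `DruidParquetInputFormat` input format (the class name and path below are assumptions/placeholders):

```json
"ioConfig": {
    "type": "hadoop",
    "inputSpec": {
        "type": "static",
        "inputFormat": "io.druid.data.input.parquet.DruidParquetInputFormat",
        "paths": "hdfs://namenode/path/to/file.parquet"
    }
}
```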
2 changes: 2 additions & 0 deletions docs/content/development/extensions-contrib/rabbitmq.md
@@ -4,6 +4,8 @@ layout: doc_page

# RabbitMQ

To use this extension, make sure to [include](../../operations/including-extensions.html) the `druid-rabbitmq` extension.

## Firehose

#### RabbitMQFirehose
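
A hedged sketch of the firehose spec shape, assuming a `rabbitmq` firehose type with separate `connection` and `config` sections (all names and values here are assumptions/placeholders):

```json
"firehose": {
    "type": "rabbitmq",
    "connection": {
        "host": "localhost",
        "port": "5672",
        "username": "druid",
        "password": "druid-password",
        "virtualHost": "druid-vhost"
    },
    "config": {
        "exchange": "test-exchange",
        "queue": "druidtest",
        "routingKey": "#"
    }
}
```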
2 changes: 2 additions & 0 deletions docs/content/development/extensions-contrib/rocketmq.md
@@ -4,4 +4,6 @@ layout: doc_page

# RocketMQ

To use this extension, make sure to [include](../../operations/including-extensions.html) the `druid-rocketmq` extension.

Original author: [https://github.com/lizhanhui](https://github.com/lizhanhui).
2 changes: 1 addition & 1 deletion docs/content/development/extensions.md
@@ -45,7 +45,7 @@ If you'd like to take on maintenance for a community extension, please post on [
|druid-cloudfiles-extensions|Rackspace Cloudfiles deep storage and firehose.|[link](../development/extensions-contrib/cloudfiles.html)|
|druid-distinctcount|DistinctCount aggregator.|[link](../development/extensions-contrib/distinctcount.html)|
|druid-kafka-eight-simpleConsumer|Kafka ingest firehose (low level consumer).|[link](../development/extensions-contrib/kafka-simple.html)|
|druid-parquet-extensions|Support for data in Apache Parquet data format.|[link](../development/extensions-contrib/parquet.html)|
|druid-parquet-extensions|Support for data in Apache Parquet data format. Requires druid-avro-extensions to be loaded.|[link](../development/extensions-contrib/parquet.html)|
|druid-rabbitmq|RabbitMQ firehose.|[link](../development/extensions-contrib/rabbitmq.html)|
|druid-rocketmq|RocketMQ firehose.|[link](../development/extensions-contrib/rocketmq.html)|
|graphite-emitter|Graphite metrics emitter.|[link](../development/extensions-contrib/graphite.html)|
