
Commit

Merge pull request LinkedInAttic#9 from RallySoftware/property-cleanup
Cleaning up property names from example properties and README
kengoodhope committed Feb 11, 2013
2 parents b593ad5 + 7e47520 commit e925b11
Showing 2 changed files with 9 additions and 8 deletions.
README.md: 6 changes (3 additions, 3 deletions)
@@ -95,10 +95,10 @@ Here is an abbreviated list of commonly used parameters.
* kafka.max.pull.hrs=1
* Events with a timestamp older than this will be discarded.
* kafka.max.historical.days=3
* Max bytes pulled for a topic-partition in a single run
* kafka.max.pull.megabytes.per.topic=4096
* Max minutes for each mapper to pull messages
* kafka.max.pull.minutes.per.task=-1
* Decoder class for Kafka Messages to Avro Records
-* kafka.message.decoder.class=
+* camus.message.decoder.class=
* If the whitelist has values, only whitelisted topics are pulled. Nothing on the blacklist is pulled
* kafka.blacklist.topics=
* kafka.whitelist.topics=
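To see how these parameters fit together, here is a minimal, hedged sketch of a camus.properties fragment. The property names are taken from the README list above and the decoder class from the example configuration below; the topic names are hypothetical placeholders, not values from this repository.

```properties
# Illustrative sketch only: property names come from the README excerpt above;
# topic names are hypothetical placeholders.

# Limits on how much data a single run will pull
kafka.max.pull.hrs=1
# Events with a timestamp older than this will be discarded
kafka.max.historical.days=3
# Max data pulled for a topic-partition in a single run, in megabytes
kafka.max.pull.megabytes.per.topic=4096
# Max minutes for each mapper to pull messages (-1 means no limit)
kafka.max.pull.minutes.per.task=-1

# Decoder for Kafka messages to Avro records (note the renamed camus.* prefix)
camus.message.decoder.class=com.linkedin.batch.etl.kafka.coders.LatestSchemaKafkaAvroMessageDecoder

# If the whitelist has values, only whitelisted topics are pulled;
# nothing on the blacklist is pulled
kafka.whitelist.topics=hypothetical_topic_a,hypothetical_topic_b
kafka.blacklist.topics=
```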
camus-example/src/main/resources/camus.properties: 11 changes (6 additions, 5 deletions)
@@ -12,13 +12,13 @@ zookeeper.broker.topics=/brokers/topics
zookeeper.broker.nodes=/brokers/ids

# Concrete implementation of the Encoder class to use (used by Kafka Audit, and thus optional for now)
-#kafka.message.encoder.class=com.linkedin.batch.etl.kafka.coders.DummyKafkaMessageEncoder
+#camus.message.encoder.class=com.linkedin.batch.etl.kafka.coders.DummyKafkaMessageEncoder

# Concrete implementation of the Decoder class to use
-kafka.message.decoder.class=com.linkedin.batch.etl.kafka.coders.LatestSchemaKafkaAvroMessageDecoder
+camus.message.decoder.class=com.linkedin.batch.etl.kafka.coders.LatestSchemaKafkaAvroMessageDecoder

# Used by avro-based Decoders to use as their Schema Registry
-kafka.message.decoder.schema.registry.class=com.linkedin.camus.example.DummySchemaRegistry
+kafka.message.coder.schema.registry.class=com.linkedin.camus.example.DummySchemaRegistry

# all files in this dir will be added to the distributed cache and placed on the classpath for hadoop tasks
# hdfs.default.classpath.dir=
@@ -29,8 +29,9 @@ mapred.map.tasks=30
kafka.max.pull.hrs=1
# events with a timestamp older than this will be discarded.
kafka.max.historical.days=3
# max bytes pulled for a topic-partition in a single run
kafka.max.pull.megabytes.per.topic=4096
# Max minutes for each mapper to pull messages (-1 means no limit)
kafka.max.pull.minutes.per.task=-1


# if whitelist has values, only whitelisted topics are pulled. nothing on the blacklist is pulled
kafka.blacklist.topics=
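For anyone adapting the example configuration to their own Avro schemas, the decoder and schema-registry properties above would point at a custom registry instead of the example's DummySchemaRegistry. A hedged sketch, where com.example.MySchemaRegistry is a hypothetical class name standing in for your own implementation:

```properties
# Decoder for Kafka messages to Avro records, as configured in the example above
camus.message.decoder.class=com.linkedin.batch.etl.kafka.coders.LatestSchemaKafkaAvroMessageDecoder

# Schema registry consulted by the avro-based decoders.
# com.example.MySchemaRegistry is a hypothetical placeholder for your own implementation;
# the example project ships com.linkedin.camus.example.DummySchemaRegistry.
kafka.message.coder.schema.registry.class=com.example.MySchemaRegistry
```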
