[hotfix][docs] Fix some typos in the documentation.
This closes apache#5039.
ggevay authored and zentol committed Nov 21, 2017
1 parent 80cd586 commit 52599ff
Showing 4 changed files with 8 additions and 8 deletions.
4 changes: 2 additions & 2 deletions docs/dev/connectors/kafka.md
@@ -537,7 +537,7 @@ chosen by passing appropriate `semantic` parameter to the `FlinkKafkaProducer011
* `Semantic.NONE`: Flink will not guarantee anything. Produced records can be lost or they can
be duplicated.
* `Semantic.AT_LEAST_ONCE` (default setting): similar to `setFlushOnCheckpoint(true)` in
- `FlinkKafkaProducer010`. his guarantees that no records will be lost (although they can be duplicated).
+ `FlinkKafkaProducer010`. This guarantees that no records will be lost (although they can be duplicated).
* `Semantic.EXACTLY_ONCE`: uses Kafka transactions to provide exactly-once semantic.

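Aside: the semantic above is selected through the producer's constructor. Below is a minimal sketch, assuming a local broker, a hypothetical topic name, and the Flink 1.4-era four-argument `FlinkKafkaProducer011` constructor that takes the `Semantic` as its last parameter; it is an illustration, not the connector documentation's own snippet.

```java
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011;
import org.apache.flink.streaming.util.serialization.KeyedSerializationSchemaWrapper;

public class ExactlyOnceProducerSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(5000); // transactional commits happen on checkpoints

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // assumption: local broker

        DataStream<String> stream = env.fromElements("a", "b", "c");

        // The desired Semantic is passed as the last constructor argument.
        stream.addSink(new FlinkKafkaProducer011<>(
                "my-topic", // hypothetical topic name
                new KeyedSerializationSchemaWrapper<>(new SimpleStringSchema()),
                props,
                FlinkKafkaProducer011.Semantic.EXACTLY_ONCE));

        env.execute("Exactly-once Kafka producer sketch");
    }
}
```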
<div class="alert alert-warning">
@@ -579,7 +579,7 @@ un-finished transaction. In other words after following sequence of events:
3. User committed `transaction2`

Even if records from `transaction2` are already committed, they will not be visible to
- the consumers until `transaction1` is committed or aborted. This hastwo implications:
+ the consumers until `transaction1` is committed or aborted. This has two implications:

* First of all, during normal working of Flink applications, user can expect a delay in visibility
of the records produced into Kafka topics, equal to average time between completed checkpoints.
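Aside: the visibility delay described above matters for downstream consumers running with `isolation.level=read_committed`. A sketch using the plain Kafka 0.11 client follows; the broker address, group id, and topic name are assumptions.

```java
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ReadCommittedConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // assumption: local broker
        props.setProperty("group.id", "example-group");           // hypothetical group id
        props.setProperty("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.setProperty("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        // Only records of committed transactions are returned; an open earlier
        // transaction blocks visibility of everything after it in the partition.
        props.setProperty("isolation.level", "read_committed");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic")); // hypothetical topic
            ConsumerRecords<String, String> records = consumer.poll(1000); // 0.11-era poll(long)
            for (ConsumerRecord<String, String> record : records) {
                System.out.println(record.value());
            }
        }
    }
}
```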
4 changes: 2 additions & 2 deletions docs/dev/stream/operators/windows.md
@@ -29,7 +29,7 @@ programmer can benefit to the maximum from its offered functionality.

The general structure of a windowed Flink program is presented below. The first snippet refers to *keyed* streams,
while the second to *non-keyed* ones. As one can see, the only difference is the `keyBy(...)` call for the keyed streams
- and the `window(...)` which becomes `windowAll(...)` for non-keyed streams. These is also going to serve as a roadmap
+ and the `window(...)` which becomes `windowAll(...)` for non-keyed streams. This is also going to serve as a roadmap
for the rest of the page.

**Keyed Windows**
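(The structure snippets themselves are collapsed in this diff view. For orientation, a minimal sketch of both variants, assuming tuple-typed input and a 5-second tumbling processing-time window:)

```java
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

public class KeyedWindowSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        DataStream<Tuple2<String, Integer>> input =
                env.fromElements(Tuple2.of("a", 1), Tuple2.of("a", 2), Tuple2.of("b", 3));

        // Keyed variant: keyBy(...) followed by window(...)
        input.keyBy(0)
             .window(TumblingProcessingTimeWindows.of(Time.seconds(5)))
             .sum(1)
             .print();

        // Non-keyed variant: windowAll(...) without keyBy(...), runs with parallelism 1
        input.windowAll(TumblingProcessingTimeWindows.of(Time.seconds(5)))
             .sum(1)
             .print();

        env.execute("Window structure sketch");
    }
}
```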
@@ -1383,7 +1383,7 @@ and then calculating the top-k elements within the same window in the second ope

Windows can be defined over long periods of time (such as days, weeks, or months) and therefore accumulate very large state. There are a couple of rules to keep in mind when estimating the storage requirements of your windowing computation:

- 1. Flink creates one copy of each element per window to which it belongs. Given this, tumbling windows keep one copy of each element (an element belongs to exactly window unless it is dropped late). In contrast, sliding windows create several of each element, as explained in the [Window Assigners](#window-assigners) section. Hence, a sliding window of size 1 day and slide 1 second might not be a good idea.
+ 1. Flink creates one copy of each element per window to which it belongs. Given this, tumbling windows keep one copy of each element (an element belongs to exactly one window unless it is dropped late). In contrast, sliding windows create several of each element, as explained in the [Window Assigners](#window-assigners) section. Hence, a sliding window of size 1 day and slide 1 second might not be a good idea.

2. `ReduceFunction`, `AggregateFunction`, and `FoldFunction` can significantly reduce the storage requirements, as they eagerly aggregate elements and store only one value per window. In contrast, just using a `ProcessWindowFunction` requires accumulating all elements.

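(Aside: point 2 above can be illustrated with a `ReduceFunction`, which aggregates eagerly so that only one running value per key and window is stored; the input shape and window size below are assumptions:)

```java
import org.apache.flink.api.common.functions.ReduceFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

public class EagerAggregationSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        DataStream<Tuple2<String, Integer>> input =
                env.fromElements(Tuple2.of("a", 1), Tuple2.of("a", 2), Tuple2.of("b", 3));

        // The ReduceFunction is applied as elements arrive, so only one running
        // value per key and window is kept in state, not the full element buffer.
        input.keyBy(0)
             .window(TumblingProcessingTimeWindows.of(Time.minutes(1)))
             .reduce(new ReduceFunction<Tuple2<String, Integer>>() {
                 @Override
                 public Tuple2<String, Integer> reduce(Tuple2<String, Integer> a,
                                                       Tuple2<String, Integer> b) {
                     return Tuple2.of(a.f0, a.f1 + b.f1);
                 }
             })
             .print();

        env.execute("Eager window aggregation sketch");
    }
}
```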
2 changes: 1 addition & 1 deletion docs/ops/production_ready.md
@@ -32,7 +32,7 @@ important and need **careful considerations** if you plan to bring your Flink jo
Flink provides out-of-the-box defaults to make usage and adoption of Flink easier. For many users and scenarios, those
defaults are good starting points for development and completely sufficient for "one-shot" jobs.

- However, once you are planning to bring a Flink appplication to production the requirements typically increase. For example,
+ However, once you are planning to bring a Flink application to production the requirements typically increase. For example,
you want your job to be (re-)scalable and to have a good upgrade story for your job and new Flink versions.

In the following, we present a collection of configuration options that you should check before your job goes into production.
6 changes: 3 additions & 3 deletions flink-streaming-java/src/main/java/org/apache/flink/streaming/api/environment/StreamExecutionEnvironment.java
@@ -747,7 +747,7 @@ public final <OUT> DataStreamSource<OUT> fromElements(Class<OUT> type, OUT... da
* elements, it may be necessary to manually supply the type information via
* {@link #fromCollection(java.util.Collection, org.apache.flink.api.common.typeinfo.TypeInformation)}.
*
- * <p>Note that this operation will result in a non-parallel data stream source, i.e. a data stream source with a
+ * <p>Note that this operation will result in a non-parallel data stream source, i.e. a data stream source with
* parallelism one.
*
* @param data
@@ -784,7 +784,7 @@ public <OUT> DataStreamSource<OUT> fromCollection(Collection<OUT> data) {
* Creates a data stream from the given non-empty collection.
*
* <p>Note that this operation will result in a non-parallel data stream source,
- * i.e., a data stream source with a parallelism one.
+ * i.e., a data stream source with parallelism one.
*
* @param data
* The collection of elements to create the data stream from
@@ -843,7 +843,7 @@ public <OUT> DataStreamSource<OUT> fromCollection(Iterator<OUT> data, Class<OUT>
* {@link #fromCollection(java.util.Iterator, Class)} does not supply all type information.
*
* <p>Note that this operation will result in a non-parallel data stream source, i.e.,
- * a data stream source with a parallelism one.
+ * a data stream source with parallelism one.
*
* @param data
* The iterator of elements to create the data stream from
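(Aside: a minimal sketch of using these factory methods, assuming integer elements and supplying `TypeInformation` explicitly; note the resulting source runs with parallelism one:)

```java
import java.util.Arrays;
import java.util.List;

import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class FromCollectionSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        List<Integer> data = Arrays.asList(1, 2, 3);

        // For generic or otherwise non-inferable element types, the type
        // information can be supplied explicitly, as the Javadoc suggests.
        DataStreamSource<Integer> source =
                env.fromCollection(data, TypeInformation.of(Integer.class));

        // The resulting source is non-parallel, i.e. it has parallelism one.
        source.print();

        env.execute("fromCollection sketch");
    }
}
```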