[hotfix][docs]: Fix minor grammar and spelling mistakes
docs: Incomplete sentence
docs: use consistent voice

chore: Connectors document review

chore: Reword SplitEnumerator

[hotfix][docs] Use consistent intro sentence

Apply grammar fixes from code review

Co-authored-by: Matthias Pohl <[email protected]>
RyanSkraba and XComp committed Oct 14, 2022
1 parent d03c334 commit 7d28828
Showing 8 changed files with 35 additions and 35 deletions.
README.md — 2 changes: 1 addition & 1 deletion
@@ -116,7 +116,7 @@ Don’t hesitate to ask!

Contact the developers and community on the [mailing lists](https://flink.apache.org/community.html#mailing-lists) if you need any help.

-[Open an issue](https://issues.apache.org/jira/browse/FLINK) if you found a bug in Flink.
+[Open an issue](https://issues.apache.org/jira/browse/FLINK) if you find a bug in Flink.


## Documentation
docs/content/_index.md — 2 changes: 1 addition & 1 deletion
@@ -30,7 +30,7 @@ under the License.
# Apache Flink Documentation

{{< center >}}
-**Apache Flink** is a framework and distributed processing engine for stateful computations over *unbounded* and *bounded* data streams. Flink has been designed to run in *all common cluster environments* perform computations at *in-memory* speed and at *any scale*.
+**Apache Flink** is a framework and distributed processing engine for stateful computations over *unbounded* and *bounded* data streams. Flink has been designed to run in *all common cluster environments*, perform computations at *in-memory* speed and at *any scale*.
{{< /center >}}

{{< columns >}}
@@ -346,7 +346,7 @@ public static HBaseTableSchema fromDataType(DataType physicalRowType) {

// ------------------------------------------------------------------------------------

-/** An class contains information about rowKey, such as rowKeyName, rowKeyType, rowKeyIndex. */
+/** A class containing information about rowKey, such as rowKeyName, rowKeyType, rowKeyIndex. */
private static class RowKeyInfo implements Serializable {
private static final long serialVersionUID = 1L;
final String rowKeyName;
@@ -46,7 +46,7 @@
import static org.apache.flink.util.Preconditions.checkState;

/**
-* The @builder class for {@link KafkaSource} to make it easier for the users to construct a {@link
+* The builder class for {@link KafkaSource} to make it easier for the users to construct a {@link
* KafkaSource}.
*
* <p>The following example shows the minimum setup to create a KafkaSource that reads the String
@@ -68,10 +68,10 @@
* #setStartingOffsets(OffsetsInitializer)}.
*
* <p>By default the KafkaSource runs in an {@link Boundedness#CONTINUOUS_UNBOUNDED} mode and never
-* stops until the Flink job is canceled or fails. To let the KafkaSource run in {@link
-* Boundedness#CONTINUOUS_UNBOUNDED} but stops at some given offsets, one can call {@link
+* stops until the Flink job is canceled or fails. To let the KafkaSource run as {@link
+* Boundedness#CONTINUOUS_UNBOUNDED} yet stop at some given offsets, one can call {@link
* #setUnbounded(OffsetsInitializer)}. For example the following KafkaSource stops after it consumes
-* up to the latest partition offsets at the point when the Flink started.
+* up to the latest partition offsets at the point when the Flink job started.
*
* <pre>{@code
* KafkaSource<String> source = KafkaSource
@@ -197,7 +197,7 @@ public KafkaSourceBuilder<OUT> setKafkaSubscriber(KafkaSubscriber kafkaSubscribe
}

/**
-* Specify from which offsets the KafkaSource should start consume from by providing an {@link
+* Specify from which offsets the KafkaSource should start consuming from by providing an {@link
* OffsetsInitializer}.
*
* <p>The following {@link OffsetsInitializer}s are commonly used and provided out of the box.
@@ -235,16 +235,16 @@ public KafkaSourceBuilder<OUT> setStartingOffsets(
}

/**
-* By default the KafkaSource is set to run in {@link Boundedness#CONTINUOUS_UNBOUNDED} manner
-* and thus never stops until the Flink job fails or is canceled. To let the KafkaSource run as
-* a streaming source but still stops at some point, one can set an {@link OffsetsInitializer}
-* to specify the stopping offsets for each partition. When all the partitions have reached
-* their stopping offsets, the KafkaSource will then exit.
+* By default the KafkaSource is set to run as {@link Boundedness#CONTINUOUS_UNBOUNDED} and thus
+* never stops until the Flink job fails or is canceled. To let the KafkaSource run as a
+* streaming source but still stop at some point, one can set an {@link OffsetsInitializer} to
+* specify the stopping offsets for each partition. When all the partitions have reached their
+* stopping offsets, the KafkaSource will then exit.
*
-* <p>This method is different from {@link #setBounded(OffsetsInitializer)} that after setting
-* the stopping offsets with this method, {@link KafkaSource#getBoundedness()} will still return
-* {@link Boundedness#CONTINUOUS_UNBOUNDED} even though it will stop at the stopping offsets
-* specified by the stopping offsets {@link OffsetsInitializer}.
+* <p>This method is different from {@link #setBounded(OffsetsInitializer)} in that after
+* setting the stopping offsets with this method, {@link KafkaSource#getBoundedness()} will
+* still return {@link Boundedness#CONTINUOUS_UNBOUNDED} even though it will stop at the
+* stopping offsets specified by the stopping offsets {@link OffsetsInitializer}.
*
* <p>The following {@link OffsetsInitializer} are commonly used and provided out of the box.
* Users can also implement their own {@link OffsetsInitializer} for custom behaviors.
@@ -276,15 +276,15 @@ public KafkaSourceBuilder<OUT> setUnbounded(OffsetsInitializer stoppingOffsetsIn
}

/**
-* By default the KafkaSource is set to run in {@link Boundedness#CONTINUOUS_UNBOUNDED} manner
-* and thus never stops until the Flink job fails or is canceled. To let the KafkaSource run in
-* {@link Boundedness#BOUNDED} manner and stops at some point, one can set an {@link
-* OffsetsInitializer} to specify the stopping offsets for each partition. When all the
-* partitions have reached their stopping offsets, the KafkaSource will then exit.
+* By default the KafkaSource is set to run as {@link Boundedness#CONTINUOUS_UNBOUNDED} and thus
+* never stops until the Flink job fails or is canceled. To let the KafkaSource run as {@link
+* Boundedness#BOUNDED} and stop at some point, one can set an {@link OffsetsInitializer} to
+* specify the stopping offsets for each partition. When all the partitions have reached their
+* stopping offsets, the KafkaSource will then exit.
*
-* <p>This method is different from {@link #setUnbounded(OffsetsInitializer)} that after setting
-* the stopping offsets with this method, {@link KafkaSource#getBoundedness()} will return
-* {@link Boundedness#BOUNDED} instead of {@link Boundedness#CONTINUOUS_UNBOUNDED}.
+* <p>This method is different from {@link #setUnbounded(OffsetsInitializer)} in that after
+* setting the stopping offsets with this method, {@link KafkaSource#getBoundedness()} will
+* return {@link Boundedness#BOUNDED} instead of {@link Boundedness#CONTINUOUS_UNBOUNDED}.
*
* <p>The following {@link OffsetsInitializer} are commonly used and provided out of the box.
* Users can also implement their own {@link OffsetsInitializer} for custom behaviors.
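For readers skimming this diff, the builder described in the javadoc above is typically wired up as follows. This is a minimal sketch, not part of the commit: the bootstrap servers, topic, and group id are placeholders, and it assumes the public KafkaSourceBuilder methods (setStartingOffsets, setUnbounded, setValueOnlyDeserializer) behave as documented.

```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class KafkaSourceSketch {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        KafkaSource<String> source =
                KafkaSource.<String>builder()
                        .setBootstrapServers("broker-1:9092")   // placeholder address
                        .setTopics("input-topic")               // placeholder topic
                        .setGroupId("demo-group")               // placeholder group id
                        // Start from the earliest available offsets.
                        .setStartingOffsets(OffsetsInitializer.earliest())
                        // Stay CONTINUOUS_UNBOUNDED, but stop once the offsets that were
                        // the latest at job start have been consumed (see setUnbounded above).
                        .setUnbounded(OffsetsInitializer.latest())
                        .setValueOnlyDeserializer(new SimpleStringSchema())
                        .build();

        DataStream<String> lines =
                env.fromSource(source, WatermarkStrategy.noWatermarks(), "Kafka Source");
        lines.print();
        env.execute("KafkaSource sketch");
    }
}
```

With setUnbounded, KafkaSource#getBoundedness() still reports CONTINUOUS_UNBOUNDED; swapping in setBounded(OffsetsInitializer.latest()) would instead report BOUNDED, as the javadoc above explains.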
@@ -28,8 +28,8 @@
import java.io.Serializable;

/**
-* A interface for users to specify the start position of a pulsar subscription. Since it would be
-* serialized into split. The implementation for this interface should be well considered. I don't
+* An interface for users to specify the start position of a pulsar subscription. Since it would be
+* serialized into split, the implementation for this interface should be well considered. I don't
* recommend adding extra internal state for this implementation.
*
* <p>This class would be used only for {@link SubscriptionType#Exclusive} and {@link
@@ -33,9 +33,9 @@
import java.io.Serializable;

/**
-* A interface for users to specify the stop position of a pulsar subscription. Since it would be
-* serialized into split. The implementation for this interface should be well considered. I don't
-* recommend adding extra internal state for this implementation.
+* An interface for users to specify the stop position of a pulsar subscription. Since it would be
+* serialized into split, the implementation for this interface should be well considered. It's not
+* recommended to add extra internal state for this implementation.
*/
@PublicEvolving
@FunctionalInterface
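As a companion to the start/stop cursor javadoc above, here is a hedged sketch of passing stateless cursors to the Pulsar source builder. The service/admin URLs, topic, and subscription name are placeholders; the builder methods shown (setStartCursor, setUnboundedStopCursor, and friends) are assumed from the Flink Pulsar connector documentation of this period and are not part of this commit.

```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.pulsar.source.PulsarSource;
import org.apache.flink.connector.pulsar.source.enumerator.cursor.StartCursor;
import org.apache.flink.connector.pulsar.source.enumerator.cursor.StopCursor;
import org.apache.flink.connector.pulsar.source.reader.deserializer.PulsarDeserializationSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

import org.apache.pulsar.client.api.SubscriptionType;

public class PulsarSourceSketch {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        PulsarSource<String> source =
                PulsarSource.builder()
                        .setServiceUrl("pulsar://localhost:6650")              // placeholder URL
                        .setAdminUrl("http://localhost:8080")                  // placeholder URL
                        // Stateless cursors, in line with the javadoc's advice to avoid
                        // extra internal state in cursor implementations.
                        .setStartCursor(StartCursor.earliest())
                        .setUnboundedStopCursor(StopCursor.latest())
                        .setTopics("persistent://public/default/demo-topic")   // placeholder topic
                        .setDeserializationSchema(
                                PulsarDeserializationSchema.flinkSchema(new SimpleStringSchema()))
                        .setSubscriptionName("demo-subscription")              // placeholder name
                        .setSubscriptionType(SubscriptionType.Exclusive)
                        .build();

        env.fromSource(source, WatermarkStrategy.noWatermarks(), "Pulsar Source").print();
        env.execute("PulsarSource sketch");
    }
}
```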
@@ -90,7 +90,7 @@ SplitEnumerator<SplitT, EnumChkT> restoreEnumerator(

/**
* Creates the serializer for the {@link SplitEnumerator} checkpoint. The serializer is used for
-* the result of the {@link SplitEnumerator#snapshotState} method.
+* the result of the {@link SplitEnumerator#snapshotState(long)} method.
*
* @return The serializer for the SplitEnumerator checkpoint.
*/
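Since the hunk above concerns the enumerator checkpoint serializer, a small sketch may help. It assumes a hypothetical checkpoint type that is simply the list of not-yet-assigned split ids and relies only on the SimpleVersionedSerializer contract (getVersion, serialize, deserialize); the class name and the checkpoint model are illustrative, not part of this commit.

```java
import org.apache.flink.core.io.SimpleVersionedSerializer;

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

/** Serializes an enumerator checkpoint modeled as a list of unassigned split ids. */
class SplitIdsCheckpointSerializer implements SimpleVersionedSerializer<List<String>> {

    private static final int CURRENT_VERSION = 1;

    @Override
    public int getVersion() {
        return CURRENT_VERSION;
    }

    @Override
    public byte[] serialize(List<String> checkpoint) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(bytes)) {
            out.writeInt(checkpoint.size());
            for (String splitId : checkpoint) {
                out.writeUTF(splitId);
            }
        }
        return bytes.toByteArray();
    }

    @Override
    public List<String> deserialize(int version, byte[] serialized) throws IOException {
        if (version != CURRENT_VERSION) {
            throw new IOException("Unrecognized checkpoint version: " + version);
        }
        try (DataInputStream in = new DataInputStream(new ByteArrayInputStream(serialized))) {
            int count = in.readInt();
            List<String> splitIds = new ArrayList<>(count);
            for (int i = 0; i < count; i++) {
                splitIds.add(in.readUTF());
            }
            return splitIds;
        }
    }
}
```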
@@ -27,8 +27,8 @@
import java.util.List;

/**
-* A interface of a split enumerator responsible for the followings: 1. discover the splits for the
-* {@link SourceReader} to read. 2. assign the splits to the source reader.
+* The interface for a split enumerator responsible for discovering the source splits, and assigning
+* them to the {@link SourceReader}.
*/
@Public
public interface SplitEnumerator<SplitT extends SourceSplit, CheckpointT>
@@ -52,10 +52,10 @@ public interface SplitEnumerator<SplitT extends SourceSplit, CheckpointT>
void handleSplitRequest(int subtaskId, @Nullable String requesterHostname);

/**
-* Add a split back to the split enumerator. It will only happen when a {@link SourceReader}
+* Add splits back to the split enumerator. This will only happen when a {@link SourceReader}
* fails and there are splits assigned to it after the last successful checkpoint.
*
-* @param splits The split to add back to the enumerator for reassignment.
+* @param splits The splits to add back to the enumerator for reassignment.
* @param subtaskId The id of the subtask to which the returned splits belong.
*/
void addSplitsBack(List<SplitT> splits, int subtaskId);
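To make the reworded contract above concrete, here is a hedged sketch of a trivial enumerator over a fixed split list: it assigns a split whenever a reader asks for one and re-queues splits handed back through addSplitsBack after a reader failure. DemoSplit, DemoSplitEnumerator, and the queue-based bookkeeping are hypothetical names for illustration; only the SplitEnumerator and SplitEnumeratorContext methods used here are assumed from the public API.

```java
import org.apache.flink.api.connector.source.SourceSplit;
import org.apache.flink.api.connector.source.SplitEnumerator;
import org.apache.flink.api.connector.source.SplitEnumeratorContext;

import javax.annotation.Nullable;
import java.io.IOException;
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

/** A hypothetical split type; only the split id is tracked here. */
class DemoSplit implements SourceSplit {
    private final String id;

    DemoSplit(String id) {
        this.id = id;
    }

    @Override
    public String splitId() {
        return id;
    }
}

/** Minimal enumerator sketch: hands out splits on request and re-queues returned ones. */
class DemoSplitEnumerator implements SplitEnumerator<DemoSplit, List<DemoSplit>> {

    private final SplitEnumeratorContext<DemoSplit> context;
    private final Queue<DemoSplit> pending;

    DemoSplitEnumerator(SplitEnumeratorContext<DemoSplit> context, List<DemoSplit> initialSplits) {
        this.context = context;
        this.pending = new ArrayDeque<>(initialSplits);
    }

    @Override
    public void start() {
        // Split discovery would be scheduled here; this sketch uses a fixed split list.
    }

    @Override
    public void handleSplitRequest(int subtaskId, @Nullable String requesterHostname) {
        DemoSplit next = pending.poll();
        if (next != null) {
            context.assignSplit(next, subtaskId);
        } else {
            context.signalNoMoreSplits(subtaskId);
        }
    }

    @Override
    public void addSplitsBack(List<DemoSplit> splits, int subtaskId) {
        // Splits come back only if a reader failed after they were assigned to it
        // past the last successful checkpoint; keep them around for reassignment.
        pending.addAll(splits);
    }

    @Override
    public void addReader(int subtaskId) {
        // No-op: this enumerator waits for explicit split requests.
    }

    @Override
    public List<DemoSplit> snapshotState(long checkpointId) {
        return new ArrayList<>(pending);
    }

    @Override
    public void close() throws IOException {
        // Nothing to clean up in this sketch.
    }
}
```

The snapshotState result here is just the pending split list, which is the kind of object an enumerator checkpoint serializer like the earlier sketch would persist.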
