[FLINK-20651] Fix formatting that doesn't work with google-java-format/checkstyle
aljoscha authored and zentol committed Dec 28, 2020
1 parent 640ddd6 commit ce427e7
Showing 60 changed files with 319 additions and 189 deletions.
@@ -229,12 +229,14 @@ boolean shouldRunFetchTask() {
* <p>The correctness can be thought of in the following way. The purpose of wake up
* is to let the fetcher thread go to the very beginning of the running loop.
* There are three major events in each run of the loop.
*
* <ol>
* <li>pick a task (blocking)
* <li>assign the task to runningTask variable.
* <li>run the runningTask. (blocking)
* </ol>
* We don't need to worry about things after step 3 because there is no blocking point
*
* <p>We don't need to worry about things after step 3 because there is no blocking point
* anymore.
*
* <p>We always first set the wakeup flag when waking up the fetcher, then use the
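The wakeup protocol described in this Javadoc is easier to follow next to code. The following is a minimal, self-contained sketch of such a loop; it is not the actual SplitFetcher implementation, and the queue, flag, and method names are assumptions made for illustration.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicBoolean;

/** Illustrative fetcher loop: (1) take a task (blocking), (2) remember it, (3) run it (blocking). */
public class FetcherLoopSketch implements Runnable {

    private final BlockingQueue<Runnable> taskQueue = new LinkedBlockingQueue<>();
    private final AtomicBoolean wakeup = new AtomicBoolean(false);
    private volatile Runnable runningTask;
    private volatile boolean closed;

    public void enqueue(Runnable task) {
        taskQueue.add(task);
    }

    @Override
    public void run() {
        while (!closed) {
            try {
                Runnable task = taskQueue.take();   // step 1: pick a task (blocking)
                runningTask = task;                 // step 2: publish the running task
                if (!wakeup.getAndSet(false)) {     // a pending wakeup skips this run
                    task.run();                     // step 3: run the task (blocking)
                }
            } catch (InterruptedException e) {
                // A wakeup interrupt simply sends the thread back to the top of the loop.
            } finally {
                runningTask = null;
            }
        }
    }

    /** Sets the wakeup flag first, then interrupts whichever blocking step is active. */
    public void wakeUp(Thread fetcherThread) {
        wakeup.set(true);
        fetcherThread.interrupt();
    }

    public void close(Thread fetcherThread) {
        closed = true;
        wakeUp(fetcherThread);
    }
}
```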
@@ -151,9 +151,8 @@ public CassandraSink<IN> setParallelism(int parallelism) {
/**
* Turns off chaining for this operator so thread co-location will not be
* used as an optimization.
* <p/>
* <p/>
* Chaining can be turned off for the whole
*
* <p>Chaining can be turned off for the whole
* job by {@link org.apache.flink.streaming.api.environment.StreamExecutionEnvironment#disableOperatorChaining()}
* however it is not advised for performance considerations.
*
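A usage sketch of the two chaining controls this Javadoc mentions. Only disableOperatorChaining() and disableChaining() come from the excerpt; the host, query, and stand-in source are assumptions.

```java
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.cassandra.CassandraSink;

public class ChainingSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Job-wide: turn chaining off for every operator in the job.
        env.disableOperatorChaining();

        // Stand-in source; a real job would compute these results.
        DataStream<Tuple2<String, Long>> results = env.fromElements(Tuple2.of("word", 1L));

        // Per-operator: turn chaining off only for the Cassandra sink.
        CassandraSink.addSink(results)
                .setHost("127.0.0.1") // placeholder host
                .setQuery("INSERT INTO example.wordcount (word, count) VALUES (?, ?);")
                .build()
                .disableChaining();

        env.execute("chaining sketch");
    }
}
```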
@@ -230,12 +230,14 @@ private static Collection<Path> splitsToPaths(Collection<FileSourceSplit> splits
/**
* The generic base builder. This builder carries a <i>SELF</i> type to make it convenient to
* extend this for subclasses, using the following pattern.
*
* <pre>{@code
* public class SubBuilder<T> extends AbstractFileSourceBuilder<T, SubBuilder<T>> {
* ...
* }
* }</pre>
* That way, all return values from builder method defined here are typed to the sub-class
*
* <p>That way, all return values from builder method defined here are typed to the sub-class
* type and support fluent chaining.
*
* <p>We don't make the publicly visible builder generic with a SELF type, because it leads to
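The SELF-typed builder pattern referenced above, shown as a standalone sketch unrelated to Flink's actual builder hierarchy; every class and method name here is made up for illustration.

```java
// A base builder whose fluent setters return the concrete subclass type (SELF),
// so chained calls keep the subtype and its extra methods available.
abstract class AbstractBuilderSketch<T, SELF extends AbstractBuilderSketch<T, SELF>> {

    protected String path;

    @SuppressWarnings("unchecked")
    protected SELF self() {
        return (SELF) this;
    }

    public SELF setPath(String path) {
        this.path = path;
        return self(); // typed to the subclass, not to this base class
    }

    public abstract T build();
}

final class CsvBuilderSketch<T> extends AbstractBuilderSketch<T, CsvBuilderSketch<T>> {

    private char delimiter = ',';

    public CsvBuilderSketch<T> setDelimiter(char delimiter) {
        this.delimiter = delimiter;
        return this;
    }

    @Override
    public T build() {
        throw new UnsupportedOperationException("sketch only");
    }
}

// Usage: the base-class setter still returns CsvBuilderSketch<T>, so chaining stays fluent:
//     new CsvBuilderSketch<String>().setPath("/tmp/in").setDelimiter(';').build();
```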
@@ -46,12 +46,14 @@
* accessed via the Flink's {@link FileSystem} class.
*
* <p>Start building a file source via one of the following calls:
*
* <ul>
* <li>{@link FileSource#forRecordStreamFormat(StreamFormat, Path...)}</li>
* <li>{@link FileSource#forBulkFileFormat(BulkFormat, Path...)}</li>
* <li>{@link FileSource#forRecordFileFormat(FileRecordFormat, Path...)}</li>
* </ul>
* This creates a {@link FileSource.FileSourceBuilder} on which you can configure all the
*
* <p>This creates a {@link FileSource.FileSourceBuilder} on which you can configure all the
* properties of the file source.
*
* <h2>Batch and Streaming</h2>
@@ -71,6 +73,7 @@
* <p>The reading of each file happens through file readers defined by <i>file formats</i>.
* These define the parsing logic for the contents of the file. There are multiple classes that
* the source supports. Their interfaces trade off simplicity of implementation against flexibility/efficiency.
*
* <ul>
* <li>A {@link StreamFormat} reads the contents of a file from a file stream. It is the simplest
* format to implement, and provides many features out-of-the-box (like checkpointing logic)
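A sketch of building and consuming such a file source via one of the calls listed above, assuming the TextLineFormat stream format and a hypothetical input path; exact class names can differ between Flink versions.

```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.connector.file.src.FileSource;
import org.apache.flink.connector.file.src.reader.TextLineFormat;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class FileSourceSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Read text lines from all files under the (hypothetical) input directory.
        FileSource<String> source =
                FileSource.forRecordStreamFormat(new TextLineFormat(), new Path("/data/input"))
                        .build();

        DataStream<String> lines =
                env.fromSource(source, WatermarkStrategy.noWatermarks(), "file-source");

        lines.print();
        env.execute("file source sketch");
    }
}
```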
@@ -56,7 +56,8 @@
* .setDeserializer(KafkaRecordDeserializer.valueOnly(StringDeserializer.class))
* .build();
* }</pre>
* The bootstrap servers, group id, topics/partitions to consume, and the record deserializer
*
* <p>The bootstrap servers, group id, topics/partitions to consume, and the record deserializer
* are required fields that must be set.
*
* <p>To specify the starting offsets of the KafkaSource, one can call
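The builder snippet above is truncated. A fuller sketch covering the required fields might look like the following; the bootstrap-server, group-id, topic, and offset setter names and import paths are assumptions based on the surrounding Javadoc, while the deserializer line comes from the excerpt.

```java
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.connector.kafka.source.reader.deserializer.KafkaRecordDeserializer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class KafkaSourceSketch {
    public static KafkaSource<String> buildSource() {
        return KafkaSource.<String>builder()
                .setBootstrapServers("broker-1:9092,broker-2:9092") // placeholder brokers
                .setGroupId("my-consumer-group")
                .setTopics("input-topic")
                .setStartingOffsets(OffsetsInitializer.earliest())
                .setDeserializer(KafkaRecordDeserializer.valueOnly(StringDeserializer.class))
                .build();
    }
}
```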
@@ -34,7 +34,8 @@
* <li>generated ids to abort will never clash with ids to abort from different subtasks
* <li>generated ids to use will never clash with ids to abort from different subtasks
* </ul>
* In other words, any particular generated id will always be assigned to one and only one subtask.
*
* <p>In other words, any particular generated id will always be assigned to one and only one subtask.
*/
@Internal
public class TransactionalIdsGenerator {
@@ -26,18 +26,21 @@
*
* <p>Note, one Kafka partition can contain multiple Flink partitions.
*
* <p>Cases:
* # More Flink partitions than kafka partitions
* <p>There are a couple of cases to consider.
*
* <h3>More Flink partitions than kafka partitions</h3>
* <pre>
* Flink Sinks: Kafka Partitions
* 1 ----------------&gt; 1
* 2 --------------/
* 3 -------------/
* 4 ------------/
* </pre>
* Some (or all) kafka partitions contain the output of more than one flink partition
*
* <p>Fewer Flink partitions than Kafka
* <p>Some (or all) kafka partitions contain the output of more than one flink partition
*
* <h3>Fewer Flink partitions than Kafka</h3>
*
* <pre>
* Flink Sinks: Kafka Partitions
* 1 ----------------&gt; 1
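To make the partitioning contract concrete, here is a sketch of a custom partitioner. Round-robin is chosen only for illustration (the default FlinkFixedPartitioner instead pins each subtask to one partition); the class and field names are assumptions, while the partition(...) parameters follow FlinkKafkaPartitioner.

```java
import org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner;

/** Spreads each Flink subtask's output round-robin over the available Kafka partitions. */
public class RoundRobinPartitionerSketch<T> extends FlinkKafkaPartitioner<T> {

    private int nextPartition;

    @Override
    public void open(int parallelInstanceId, int parallelInstances) {
        // Start at a different offset per subtask so subtasks do not all hit partition 0 first.
        this.nextPartition = parallelInstanceId;
    }

    @Override
    public int partition(T record, byte[] key, byte[] value, String targetTopic, int[] partitions) {
        nextPartition = (nextPartition + 1) % partitions.length;
        return partitions[nextPartition];
    }
}
```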
@@ -52,7 +52,7 @@
* a {@link FlinkKafkaShuffleConsumer} together into a {@link FlinkKafkaShuffle}.
* Here is an example how to use a {@link FlinkKafkaShuffle}.
*
* <p><pre>{@code
* <pre>{@code
* StreamExecutionEnvironment env = ... // create execution environment
* DataStream<X> source = env.addSource(...) // add data stream source
* DataStream<Y> dataStream = ... // some transformation(s) based on source
@@ -91,7 +91,7 @@
* decoupled to three regions: `KafkaShuffleProducer', `KafkaShuffleConsumer' and `KafkaShuffleConsumerReuse'
* through `PERSISTENT DATA` as shown below. If any region fails the execution, the other two keep progressing.
*
* <p><pre>
* <pre>
* source -> ... KafkaShuffleProducer -> PERSISTENT DATA -> KafkaShuffleConsumer -> ...
* |
* | ----------> KafkaShuffleConsumerReuse -> ...
@@ -29,7 +29,9 @@
* <pre>{@code
* TypeInformation<Tuple2<String, Long>> info = TypeInformation.of(new TypeHint<Tuple2<String, Long>>(){});
* }</pre>
* or
*
* <p>or
*
* <pre>{@code
* TypeInformation<Tuple2<String, Long>> info = new TypeHint<Tuple2<String, Long>>(){}.getTypeInfo();
* }</pre>
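A usage sketch showing where such a TypeHint typically ends up: handing Flink the output type of a lambda whose tuple type is erased at runtime. The stream contents and job name are made up for illustration.

```java
import org.apache.flink.api.common.typeinfo.TypeHint;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class TypeHintUsageSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        DataStream<String> words = env.fromElements("to", "be", "or", "not", "to", "be");

        // The lambda's Tuple2<String, Long> output type is erased at runtime,
        // so we pass the type explicitly via a TypeHint.
        DataStream<Tuple2<String, Long>> counts =
                words.map(w -> Tuple2.of(w, 1L))
                        .returns(new TypeHint<Tuple2<String, Long>>() {});

        counts.print();
        env.execute("type hint sketch");
    }
}
```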
@@ -41,13 +41,15 @@ public interface KeySelector<IN, KEY> extends Function, Serializable {
* User-defined function that deterministically extracts the key from an object.
*
* <p>For example for a class:
*
* <pre>
* public class Word {
* String word;
* int count;
* }
* </pre>
* The key extractor could return the word as
*
* <p>The key extractor could return the word as
* a key to group all Word objects by the String they contain.
*
* <p>The code would look like this
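The example the Javadoc introduces ("The code would look like this") is cut off in this excerpt. A sketch of what such a key extractor could look like, with the Word POJO inlined for completeness:

```java
import org.apache.flink.api.java.functions.KeySelector;

public class WordKeySelector implements KeySelector<WordKeySelector.Word, String> {

    /** The POJO from the Javadoc above. */
    public static class Word {
        public String word;
        public int count;
    }

    @Override
    public String getKey(Word value) {
        return value.word; // group all Word objects by the String they contain
    }
}
```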
@@ -17,7 +17,7 @@
*/


/**
/*
* This file is based on source code from the Hadoop Project (http://hadoop.apache.org/), licensed by the Apache
* Software Foundation (ASF) under the Apache License, Version 2.0. See the NOTICE file distributed with this work for
* additional information regarding copyright ownership.
48 changes: 25 additions & 23 deletions flink-core/src/main/java/org/apache/flink/core/fs/FileSystem.java
@@ -121,34 +121,36 @@
*
* <h3>Examples</h3>
*
* <ul>
* <li>For <b>fault-tolerant distributed file systems</b>, data is considered persistent once
* it has been received and acknowledged by the file system, typically by having been replicated
* to a quorum of machines (<i>durability requirement</i>). In addition the absolute file path
* must be visible to all other machines that will potentially access the file (<i>visibility
* requirement</i>).
* <h4>Fault-tolerant distributed file systems</h4>
*
* <p>Whether data has hit non-volatile storage on the storage nodes depends on the specific
* guarantees of the particular file system.
* <p>For <b>fault-tolerant distributed file systems</b>, data is considered persistent once
* it has been received and acknowledged by the file system, typically by having been replicated
* to a quorum of machines (<i>durability requirement</i>). In addition the absolute file path
* must be visible to all other machines that will potentially access the file (<i>visibility
* requirement</i>).
*
* <p>The metadata updates to the file's parent directory are not required to have reached
* a consistent state. It is permissible that some machines see the file when listing the parent
* directory's contents while others do not, as long as access to the file by its absolute path
* is possible on all nodes.</li>
* <p>Whether data has hit non-volatile storage on the storage nodes depends on the specific
* guarantees of the particular file system.
*
* <li>A <b>local file system</b> must support the POSIX <i>close-to-open</i> semantics.
* Because the local file system does not have any fault tolerance guarantees, no further
* requirements exist.
* <p>The metadata updates to the file's parent directory are not required to have reached
* a consistent state. It is permissible that some machines see the file when listing the parent
* directory's contents while others do not, as long as access to the file by its absolute path
* is possible on all nodes.
*
* <p>The above implies specifically that data may still be in the OS cache when considered
* persistent from the local file system's perspective. Crashes that cause the OS cache to lose
* data are considered fatal to the local machine and are not covered by the local file system's
* guarantees as defined by Flink.
* <h4>Local file systems</h4>
*
* <p>That means that computed results, checkpoints, and savepoints that are written only to
* the local filesystem are not guaranteed to be recoverable from the local machine's failure,
* making local file systems unsuitable for production setups.</li>
* </ul>
* <p>A <b>local file system</b> must support the POSIX <i>close-to-open</i> semantics.
* Because the local file system does not have any fault tolerance guarantees, no further
* requirements exist.
*
* <p>The above implies specifically that data may still be in the OS cache when considered
* persistent from the local file system's perspective. Crashes that cause the OS cache to lose
* data are considered fatal to the local machine and are not covered by the local file system's
* guarantees as defined by Flink.
*
* <p>That means that computed results, checkpoints, and savepoints that are written only to
* the local filesystem are not guaranteed to be recoverable from the local machine's failure,
* making local file systems unsuitable for production setups.
*
* <h2>Updating File Contents</h2>
*
@@ -61,7 +61,7 @@
* Note that this code realizes both the byte order swapping and the reinterpret cast access to
* get a long from the byte array.
*
* <p><pre>
* <pre>
* [Verified Entry Point]
* 0x00007fc403e19920: sub $0x18,%rsp
* 0x00007fc403e19927: mov %rbp,0x10(%rsp) ;*synchronization entry
@@ -36,7 +36,7 @@
*
* <p>This program implements the following SQL equivalent:
*
* <p><pre>{@code
* <pre>{@code
* SELECT
* c_custkey,
* c_name,
@@ -40,7 +40,7 @@
*
* <p>This program implements the following SQL equivalent:
*
* <p><pre>{@code
* <pre>{@code
* SELECT
* l_orderkey,
* SUM(l_extendedprice*(1-l_discount)) AS revenue,
@@ -33,10 +33,12 @@
* <p>This program connects to a server socket and reads strings from the socket.
* The easiest way to try this out is to open a text server (at port 12345)
* using the <i>netcat</i> tool via
*
* <pre>
* nc -l 12345 on Linux or nc -l -p 12345 on Windows
* </pre>
* and run this example with the hostname and the port as arguments.
*
* <p>and run this example with the hostname and the port as arguments.
*/
@SuppressWarnings("serial")
public class SocketWindowWordCount {
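A condensed sketch of the setup this Javadoc describes: start the text server with `nc -l 12345` (Linux) or `nc -l -p 12345` (Windows), then run a job that reads lines from that socket. The real example additionally windows and counts the words; the default hostname and port below are assumptions.

```java
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class SocketReadSketch {
    public static void main(String[] args) throws Exception {
        final String hostname = args.length > 0 ? args[0] : "localhost";
        final int port = args.length > 1 ? Integer.parseInt(args[1]) : 12345;

        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Read newline-delimited text from the socket opened by netcat.
        DataStream<String> lines = env.socketTextStream(hostname, port, "\n");
        lines.print();

        env.execute("socket read sketch");
    }
}
```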
@@ -53,7 +53,8 @@
* <li>serialization via reflection (ReflectDatumReader / -Writer)</li>
* <li>serialization of generic records via GenericDatumReader / -Writer</li>
* </ul>
* The serializer instantiates them depending on the class of the type it should serialize.
*
* <p>The serializer instantiates them depending on the class of the type it should serialize.
*
* <p><b>Important:</b> This serializer is NOT THREAD SAFE, because it reuses the data encoders
* and decoders which have buffers that would be shared between the threads if used concurrently
@@ -100,6 +100,7 @@ public DataSet<ST> closeWith(DataSet<ST> solutionSetDelta, DataSet<WT> newWorkse
* Gets the initial solution set. This is the data set on which the delta iteration was started.
*
* <p>Consider the following example:
*
* <pre>
* {@code
* DataSet<MyType> solutionSetData = ...;
@@ -108,7 +109,8 @@ public DataSet<ST> closeWith(DataSet<ST> solutionSetDelta, DataSet<WT> newWorkse
* DeltaIteration<MyType, AnotherType> iteration = solutionSetData.iteratorDelta(worksetData, 10, ...);
* }
* </pre>
* The <tt>solutionSetData</tt> would be the data set returned by {@code iteration.getInitialSolutionSet();}.
*
* <p>The <tt>solutionSetData</tt> would be the data set returned by {@code iteration.getInitialSolutionSet();}.
*
* @return The data set that forms the initial solution set.
*/
@@ -121,6 +123,7 @@ public DataSet<ST> getInitialSolutionSet() {
* iteration.
*
* <p>Consider the following example:
*
* <pre>
* {@code
* DataSet<MyType> solutionSetData = ...;
@@ -129,7 +132,8 @@ public DataSet<ST> getInitialSolutionSet() {
* DeltaIteration<MyType, AnotherType> iteration = solutionSetData.iteratorDelta(worksetData, 10, ...);
* }
* </pre>
* The <tt>worksetData</tt> would be the data set returned by {@code iteration.getInitialWorkset();}.
*
* <p>The <tt>worksetData</tt> would be the data set returned by {@code iteration.getInitialWorkset();}.
*
* @return The data set that forms the initial workset.
*/
@@ -65,11 +65,13 @@ public Grouping(DataSet<T> set, Keys<T> keys) {
* Returns the input DataSet of a grouping operation, that is the one before the grouping. This means that
* if it is applied directly to the result of a grouping operation, it will cancel its effect. As an example, in the
* following snippet:
*
* <pre>{@code
* DataSet<X> notGrouped = input.groupBy().getDataSet();
* DataSet<Y> allReduced = notGrouped.reduce()
* }</pre>
* the {@code groupBy()} is as if it never happened, as the {@code notGrouped} DataSet corresponds
*
* <p>the {@code groupBy()} is as if it never happened, as the {@code notGrouped} DataSet corresponds
* to the input of the {@code groupBy()} (because of the {@code getDataset()}).
* */
@Internal
@@ -37,7 +37,7 @@
*
* <p>A pattern definition is used by {@link org.apache.flink.cep.nfa.compiler.NFACompiler} to create a {@link NFA}.
*
* <p><pre>{@code
* <pre>{@code
* Pattern<T, F> pattern = Pattern.<T>begin("start")
* .next("middle").subtype(F.class)
* .followedBy("end").where(new MyCondition());
@@ -26,7 +26,7 @@
/**
* A quantifier describing the Pattern. There are three main groups of {@link Quantifier}.
*
* <p><ol>
* <ol>
* <li>Single</li>
* <li>Looping</li>
* <li>Times</li>
@@ -285,7 +285,7 @@ private <T> CepOperator<Event, Integer, T> createCepOperator(

/**
* Creates a {@link PatternProcessFunction} that as a result will produce Strings as follows:
* <pre>[timestamp]:[Event.getName]...</pre> The Event.getName will occur stateNumber times. If the match does not
* {@code [timestamp]:[Event.getName]...}. The Event.getName will occur stateNumber times. If the match does not
* contain n-th pattern it will replace this position with "null".
*
* @param stateNumber number of states in the pattern
@@ -298,7 +298,7 @@ private static PatternProcessFunction<Event, String> extractTimestampAndNames(in

/**
* Creates a {@link PatternProcessFunction} that as a result will produce Strings as follows:
* <pre>[timestamp]:[Event.getName]...</pre> The Event.getName will occur stateNumber times. If the match does not
* {@code [timestamp]:[Event.getName]...} The Event.getName will occur stateNumber times. If the match does not
* contain n-th pattern it will replace this position with "null".
*
* <p>This function will also apply the same logic for timed out partial matches and emit those results into
@@ -318,7 +318,7 @@ private static PatternProcessFunction<Event, String> extractTimestampAndNames(

/**
* Creates a {@link PatternProcessFunction} that as a result will produce Strings as follows:
* <pre>[currentProcessingTime]:[Event.getName]...</pre> The Event.getName will occur stateNumber times.
* {@code [currentProcessingTime]:[Event.getName]...}. The Event.getName will occur stateNumber times.
* If the match does not contain n-th pattern it will replace this position with "null".
*
* @param stateNumber number of states in the pattern
@@ -330,7 +330,7 @@ private static PatternProcessFunction<Event, String> extractCurrentProcessingTim

/**
* Creates a {@link PatternProcessFunction} that as a result will produce Strings as follows:
* <pre>[currentProcessingTime]:[Event.getName]...</pre> The Event.getName will occur stateNumber times.
* {@code [currentProcessingTime]:[Event.getName]...}. The Event.getName will occur stateNumber times.
* If the match does not contain n-th pattern it will replace this position with "null".
*
* <p>This function will also apply the same logic for timed out partial matches and emit those results into
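A sketch in the spirit of the functions described here: emit the match timestamp followed by the name of the event matched at each pattern position, using "null" for absent positions. The pattern-state names ("state0", "state1", ...) and the Event import are assumptions taken from the test context, not the actual test code.

```java
import java.util.List;
import java.util.Map;

import org.apache.flink.cep.Event;
import org.apache.flink.cep.functions.PatternProcessFunction;
import org.apache.flink.util.Collector;

/** Emits "[timestamp]:[name0]:[name1]:..." for each match. */
public class TimestampAndNamesSketch extends PatternProcessFunction<Event, String> {

    private final int stateCount;

    public TimestampAndNamesSketch(int stateCount) {
        this.stateCount = stateCount;
    }

    @Override
    public void processMatch(Map<String, List<Event>> match, Context ctx, Collector<String> out) {
        StringBuilder result = new StringBuilder(String.valueOf(ctx.timestamp()));
        for (int i = 0; i < stateCount; i++) {
            List<Event> events = match.get("state" + i); // assumed pattern-state naming
            result.append(":").append(events == null ? "null" : events.get(0).getName());
        }
        out.collect(result.toString());
    }
}
```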