[FLINK-1172] [docs] Fix broken links in documentation
This closes apache#206
supermegaciaccount authored and StephanEwen committed Nov 17, 2014
1 parent a296f40 commit 9c1585e
Showing 4 changed files with 21 additions and 21 deletions.
2 changes: 1 addition & 1 deletion docs/examples.md
@@ -79,7 +79,7 @@ The {% gh_link /flink-examples/flink-scala-examples/src/main/scala/org/apache/fl

The PageRank algorithm computes the "importance" of pages in a graph defined by links, which point from one page to another. It is an iterative graph algorithm, meaning it repeatedly applies the same computation. In each iteration, every page distributes its current rank over all of its neighbors and computes its new rank as a taxed sum of the ranks it received from its neighbors. The PageRank algorithm was popularized by the Google search engine, which uses the importance of web pages to rank the results of search queries.

-In this simple example, PageRank is implemented with a [bulk iteration](java_api_guide.html#iterations) and a fixed number of iterations.
+In this simple example, PageRank is implemented with a [bulk iteration](iterations.html) and a fixed number of iterations.
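The taxed-sum update described in the paragraph above can be sketched outside Flink in a few lines of plain Java. Everything here is an illustrative assumption rather than part of the Flink example: the class name `PageRankSketch`, the damping factor 0.85, and the tiny three-page graph.

```java
import java.util.Arrays;

// Illustrative sketch (not the Flink example): one bulk-iteration step of
// PageRank. Each page spreads its rank evenly over its out-links; the new
// rank is the "taxed" sum (1 - d) / n + d * (rank received from neighbors).
public class PageRankSketch {

    // adjacency[i] lists the pages that page i links to
    public static double[] step(int[][] adjacency, double[] rank, double damping) {
        int n = rank.length;
        double[] next = new double[n];
        Arrays.fill(next, (1.0 - damping) / n);       // the "tax" every page keeps
        for (int page = 0; page < n; page++) {
            double share = rank[page] / adjacency[page].length;
            for (int neighbor : adjacency[page]) {
                next[neighbor] += damping * share;    // distribute rank to neighbors
            }
        }
        return next;
    }

    public static void main(String[] args) {
        int[][] links = { {1, 2}, {2}, {0} };         // hypothetical 3-page graph
        double[] rank = { 1.0 / 3, 1.0 / 3, 1.0 / 3 };
        for (int i = 0; i < 20; i++) {                // fixed number of iterations
            rank = step(links, rank, 0.85);
        }
        System.out.println(Arrays.toString(rank));
    }
}
```

Because every page in this toy graph has at least one out-link, the total rank stays at 1.0 across iterations, which makes a handy sanity check.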

<div class="codetabs" markdown="1">
<div data-lang="java" markdown="1">
6 changes: 3 additions & 3 deletions docs/faq.md
@@ -62,7 +62,7 @@ of the master and the worker where the exception occurred
- When you start a program locally with the [LocalExecutor](local_execution.html),
you can place breakpoints in your functions and debug them like normal
Java/Scala programs.
-- The [Accumulators](java_api_guide.html#accumulators) are very helpful in
+- The [Accumulators](programming_guide.html#accumulators--counters) are very helpful in
tracking the behavior of the parallel execution. They allow you to gather
information inside the program's operations and show them after the program
execution.
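As a loose, Flink-free sketch of what this bullet means by gathering information inside the program's operations (the `AccumulatorSketch` name and the line-length threshold below are hypothetical, and `LongAdder` merely stands in for Flink's Accumulator API):

```java
import java.util.List;
import java.util.concurrent.atomic.LongAdder;

// Illustrative sketch: a counter that parallel tasks update from inside the
// user function, with the merged total read out after execution finishes.
// Flink's real mechanism is RuntimeContext#addAccumulator with an IntCounter.
public class AccumulatorSketch {

    public static long countLongLines(List<String> lines) {
        LongAdder longLines = new LongAdder();        // stands in for an IntCounter
        lines.parallelStream().forEach(line -> {      // the "operation" we instrument
            if (line.length() > 10) {
                longLines.increment();                // accumulator.add(1), roughly
            }
        });
        return longLines.sum();                       // merged result after the "job"
    }

    public static void main(String[] args) {
        System.out.println(countLongLines(
                List.of("short", "a considerably longer line")));  // prints 1
    }
}
```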
@@ -303,10 +303,10 @@ open source project in the next versions.
### Are Hadoop-like utilities, such as Counters and the DistributedCache supported?
-[Flink's Accumulators](java_api_guide.html#accumulators-&-counters) work very similarly to
+[Flink's Accumulators](programming_guide.html#accumulators--counters) work very similarly to
Hadoop's counters, but are more powerful.
Flink has a {% gh_link /flink-core/src/main/java/org/apache/flink/api/common/cache/DistributedCache.java "Distributed Cache" %} that is deeply integrated with the APIs. Please refer to the {% gh_link /flink-java/src/main/java/org/apache/flink/api/java/ExecutionEnvironment.java#L561 "JavaDocs" %} for details on how to use it.
-In order to make data sets available on all tasks, we encourage you to use [Broadcast Variables](java_api_guide.html#broadcast_variables) instead. They are more efficient and easier to use than the distributed cache.
+In order to make data sets available on all tasks, we encourage you to use [Broadcast Variables](programming_guide.html#broadcast-variables) instead. They are more efficient and easier to use than the distributed cache.
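As a rough analogy for the broadcast-variable idea recommended above (plain Java, no Flink; `BroadcastSketch` and the lookup-table scenario are made up for illustration):

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch: a small data set (a lookup table) handed once to every
// parallel worker as shared read-only state, instead of being joined with or
// re-read for each record. In Flink this is withBroadcastSet(...) on the
// operator plus getRuntimeContext().getBroadcastVariable(...) in the function.
public class BroadcastSketch {

    public static Map<String, Integer> enrich(List<String> records,
                                              Map<String, Integer> broadcastLookup) {
        Map<String, Integer> out = new ConcurrentHashMap<>();
        records.parallelStream().forEach(record ->    // every "task" reads the same table
                out.put(record, broadcastLookup.getOrDefault(record, -1)));
        return out;
    }

    public static void main(String[] args) {
        Map<String, Integer> lookup = Map.of("a", 1, "b", 2);
        System.out.println(enrich(List.of("a", "b", "c"), lookup));
    }
}
```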
2 changes: 1 addition & 1 deletion docs/iterations.md
Expand Up @@ -157,7 +157,7 @@ setFinalState(solution);

<div class="panel panel-default">
<div class="panel-body">
-See the <strong><a href="scala_api_guide.html">Scala</a> and <a href="java_api_guide.html#iterations">Java</a> programming guides</strong> for details and code examples.</div>
+See the <strong><a href="programming_guide.html">programming guide</a></strong> for details and code examples.</div>
</div>

### Example: Propagate Minimum in Graph
32 changes: 16 additions & 16 deletions docs/programming_guide.md
@@ -229,7 +229,7 @@ DataSet<String> text = env.readTextFile("file:///path/to/file");

This will give you a DataSet on which you can then apply transformations. For
more information on data sources and input formats, please refer to
-[Data Sources](#data_sources).
+[Data Sources](#data-sources).

Once you have a DataSet you can apply transformations to create a new
DataSet which you can then write to a file, transform again, or
@@ -269,7 +269,7 @@ a cluster, the result goes to the standard out stream of the cluster nodes and ends
up in the *.out* files of the workers).
The first two do as the name suggests, the third one can be used to specify a
custom data output format. Please refer
-to [Data Sinks](#data_sinks) for more information on writing to files and also
+to [Data Sinks](#data-sinks) for more information on writing to files and also
about custom data output formats.

Once you specified the complete program you need to call `execute` on
@@ -329,7 +329,7 @@ val text = env.readTextFile("file:///path/to/file")

This will give you a DataSet on which you can then apply transformations. For
more information on data sources and input formats, please refer to
-[Data Sources](#data_sources).
+[Data Sources](#data-sources).

Once you have a DataSet you can apply transformations to create a new
DataSet which you can then write to a file, transform again, or
@@ -370,7 +370,7 @@ a cluster, the result goes to the standard out stream of the cluster nodes and ends
up in the *.out* files of the workers).
The first two do as the name suggests, the third one can be used to specify a
custom data output format. Please refer
-to [Data Sinks](#data_sinks) for more information on writing to files and also
+to [Data Sinks](#data-sinks) for more information on writing to files and also
about custom data output formats.

Once you specified the complete program you need to call `execute` on
@@ -840,9 +840,9 @@ val result3 = in.groupBy(0).sortGroup(1, Order.ASCENDING).first(3)
</div>
</div>

-The [parallelism](#parallelism) of a transformation can be defined by `setParallelism(int)` while
+The [parallelism](#parallel-execution) of a transformation can be defined by `setParallelism(int)` while
`name(String)` assigns a custom name to a transformation which is helpful for debugging. The same is
-possible for [Data Sources](#data_sources) and [Data Sinks](#data_sinks).
+possible for [Data Sources](#data-sources) and [Data Sinks](#data-sinks).

[Back to Top](#top)

@@ -1297,10 +1297,10 @@ Rich functions provide, in addition to the user-defined function (map,
reduce, etc), four methods: `open`, `close`, `getRuntimeContext`, and
`setRuntimeContext`. These are useful for creating and finalizing
local state, accessing broadcast variables (see
-[Broadcast Variables](#broadcast_variables), and for accessing runtime
+[Broadcast Variables](#broadcast-variables), and for accessing runtime
information such as accumulators and counters (see
-[Accumulators and Counters](#accumulators_counters), and information
-on iterations (see [Iterations](#iterations)).
+[Accumulators and Counters](#accumulators--counters), and information
+on iterations (see [Iterations](iterations.html)).
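The open/close lifecycle described above can be mimicked in a self-contained sketch. The `RichMapper` interface and the `run` driver below are hypothetical stand-ins for Flink's RichFunction base classes and runtime, shown only to make the lifecycle concrete:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the rich-function lifecycle: open() sets up local
// state once per task, map() is invoked per record, close() finalizes state.
public class RichFunctionSketch {

    public interface RichMapper<I, O> {
        default void open() {}                        // create local state
        default void close() {}                       // finalize local state
        O map(I value);                               // the user-defined function
    }

    public static <I, O> List<O> run(List<I> input, RichMapper<I, O> fn) {
        fn.open();
        List<O> out = new ArrayList<>();
        for (I value : input) {
            out.add(fn.map(value));
        }
        fn.close();
        return out;
    }

    public static void main(String[] args) {
        RichMapper<String, Integer> lengths = new RichMapper<String, Integer>() {
            StringBuilder seen;
            @Override public void open()  { seen = new StringBuilder(); }
            @Override public void close() { System.out.println("saw: " + seen); }
            @Override public Integer map(String s) { seen.append(s.charAt(0)); return s.length(); }
        };
        System.out.println(run(List.of("ab", "cde"), lengths));  // prints [2, 3]
    }
}
```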

In particular for the `reduceGroup` transformation, using a rich
function is the only way to define an optional `combine` function. See
@@ -2015,7 +2015,7 @@ env.execute("Iterative Pi Example");
{% endhighlight %}

You can also check out the
-{% gh_link /flink-examples/flink-java-examples/src/main/java/org/apache/flink/example/java/clustering/KMeans.java "K-Means example" %},
+{% gh_link /flink-examples/flink-java-examples/src/main/java/org/apache/flink/examples/java/clustering/KMeans.java "K-Means example" %},
which uses a BulkIteration to cluster a set of unlabeled points.

#### Delta Iterations
@@ -2272,7 +2272,7 @@ data.map(new MapFunction<String, String>() {

Make sure that the names (`broadcastSetName` in the previous example) match when registering and
accessing broadcasted data sets. For a complete example program, have a look at
-{% gh_link /flink-examples/flink-java-examples/src/main/java/org/apache/flink/example/java/clustering/KMeans.java#L96 "KMeans Algorithm" %}.
+{% gh_link /flink-examples/flink-java-examples/src/main/java/org/apache/flink/examples/java/clustering/KMeans.java#L96 "K-Means Algorithm" %}.
</div>
<div data-lang="scala" markdown="1">

@@ -2312,7 +2312,7 @@ of a function, or use the `withParameters(...)` method to pass in a configuratio
Program Packaging & Distributed Execution
-----------------------------------------

-As described in the [program skeleton](#skeleton) section, Flink programs can be executed on
+As described in the [program skeleton](#program-skeleton) section, Flink programs can be executed on
clusters by using the `RemoteEnvironment`. Alternatively, programs can be packaged into JAR Files
(Java Archives) for execution. Packaging the program is a prerequisite to executing them through the
[command line interface](cli.html) or the [web interface](web_client.html).
@@ -2429,7 +2429,7 @@ name.
A note on accumulators and iterations: Currently the result of accumulators is only available after
the overall job ended. We plan to also make the result of the previous iteration available in the
next iteration. You can use
-{% gh_link /flink-java/src/main/java/org/apache/flink/api/java/IterativeDataSet.java#L98 "Aggregators" %}
+{% gh_link /flink-java/src/main/java/org/apache/flink/api/java/operators/IterativeDataSet.java#L98 "Aggregators" %}
to compute per-iteration statistics and base the termination of iterations on such statistics.
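The per-iteration-statistic idea can be sketched without Flink as a loop whose termination depends on an aggregate computed each round. The `AggregatorSketch` class, the damped-averaging step, and the epsilon threshold below are all illustrative assumptions:

```java
// Illustrative sketch: each iteration computes a statistic (the maximum change)
// and the loop terminates when that statistic falls below a threshold, instead
// of running a fixed number of rounds. In Flink this role is played by an
// Aggregator registered on the IterativeDataSet plus a convergence criterion.
public class AggregatorSketch {

    // One damped-averaging step: pull every value halfway toward the mean.
    public static double[] step(double[] values) {
        double mean = 0;
        for (double v : values) mean += v / values.length;
        double[] next = new double[values.length];
        for (int i = 0; i < values.length; i++) {
            next[i] = (values[i] + mean) / 2;
        }
        return next;
    }

    // Returns the number of iterations actually executed.
    public static int iterateUntilConverged(double[] values, double epsilon, int maxIterations) {
        int iterations = 0;
        while (iterations < maxIterations) {
            double[] next = step(values);
            double maxChange = 0;                     // the per-iteration statistic
            for (int i = 0; i < values.length; i++) {
                maxChange = Math.max(maxChange, Math.abs(next[i] - values[i]));
            }
            values = next;
            iterations++;
            if (maxChange < epsilon) break;           // terminate on the statistic
        }
        return iterations;
    }

    public static void main(String[] args) {
        System.out.println(iterateUntilConverged(new double[] {0.0, 1.0}, 1e-6, 1000));
    }
}
```

The maximum change halves every round here, so the loop stops long before the iteration cap, which is exactly the behavior a convergence criterion buys over a fixed iteration count.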

__Custom accumulators:__
@@ -2463,7 +2463,7 @@ The degree of parallelism of a task can be specified in Flink on different level

The parallelism of an individual operator, data source, or data sink can be defined by calling its
`setParallelism()` method. For example, the degree of parallelism of the `Sum` operator in the
-[WordCount](#example) example program can be set to `5` as follows:
+[WordCount](#example-program) example program can be set to `5` as follows:


<div class="codetabs" markdown="1">
@@ -2506,7 +2506,7 @@ parallelism of an operator.

The default parallelism of an execution environment can be specified by calling the
`setDegreeOfParallelism()` method. To execute all operators, data sources, and data sinks of the
-[WordCount](#example) example program with a parallelism of `3`, set the default parallelism of the
+[WordCount](#example-program) example program with a parallelism of `3`, set the default parallelism of the
execution environment as follows:

<div class="codetabs" markdown="1">
@@ -2543,7 +2543,7 @@ env.execute("Word Count Example")

A system-wide default parallelism for all execution environments can be defined by setting the
`parallelization.degree.default` property in `./conf/flink-conf.yaml`. See the
-[Configuration]({{site.baseurl}}/config.html) documentation for details.
+[Configuration](config.html) documentation for details.

[Back to top](#top)

