TensorFlow: Upstream changes to git.
Changes:
- Documentation changes.
- Update URL for protobuf submodule.

Base CL: 107345722
keveman committed Nov 8, 2015
1 parent 7312671 commit 5c6acf7
Showing 23 changed files with 430 additions and 405 deletions.
2 changes: 1 addition & 1 deletion .gitmodules
@@ -1,3 +1,3 @@
[submodule "google/protobuf"]
path = google/protobuf
url = https://github.googlesource.com/google/protobuf.git
url = https://github.com/google/protobuf.git
32 changes: 16 additions & 16 deletions tensorflow/g3doc/api_docs/cc/index.md
@@ -26,25 +26,25 @@ write the graph to a file.
##Classes <a class="md-anchor" id="AUTOGENERATED-classes"></a>
* [tensorflow::Env](ClassEnv.md)
* [tensorflow::EnvWrapper](ClassEnvWrapper.md)
* [tensorflow::RandomAccessFile](ClassRandomAccessFile.md)
* [tensorflow::Session](ClassSession.md)
* [tensorflow::Status](ClassStatus.md)
* [tensorflow::Tensor](ClassTensor.md)
* [tensorflow::TensorBuffer](ClassTensorBuffer.md)
* [tensorflow::TensorShape](ClassTensorShape.md)
* [tensorflow::TensorShapeIter](ClassTensorShapeIter.md)
* [tensorflow::TensorShapeUtils](ClassTensorShapeUtils.md)
* [tensorflow::Thread](ClassThread.md)
* [tensorflow::WritableFile](ClassWritableFile.md)
* [tensorflow::Env](../../api_docs/cc/ClassEnv.md)
* [tensorflow::EnvWrapper](../../api_docs/cc/ClassEnvWrapper.md)
* [tensorflow::RandomAccessFile](../../api_docs/cc/ClassRandomAccessFile.md)
* [tensorflow::Session](../../api_docs/cc/ClassSession.md)
* [tensorflow::Status](../../api_docs/cc/ClassStatus.md)
* [tensorflow::Tensor](../../api_docs/cc/ClassTensor.md)
* [tensorflow::TensorBuffer](../../api_docs/cc/ClassTensorBuffer.md)
* [tensorflow::TensorShape](../../api_docs/cc/ClassTensorShape.md)
* [tensorflow::TensorShapeIter](../../api_docs/cc/ClassTensorShapeIter.md)
* [tensorflow::TensorShapeUtils](../../api_docs/cc/ClassTensorShapeUtils.md)
* [tensorflow::Thread](../../api_docs/cc/ClassThread.md)
* [tensorflow::WritableFile](../../api_docs/cc/ClassWritableFile.md)
##Structs <a class="md-anchor" id="AUTOGENERATED-structs"></a>
* [tensorflow::SessionOptions](StructSessionOptions.md)
* [tensorflow::Status::State](StructState.md)
* [tensorflow::TensorShapeDim](StructTensorShapeDim.md)
* [tensorflow::ThreadOptions](StructThreadOptions.md)
* [tensorflow::SessionOptions](../../api_docs/cc/StructSessionOptions.md)
* [tensorflow::Status::State](../../api_docs/cc/StructState.md)
* [tensorflow::TensorShapeDim](../../api_docs/cc/StructTensorShapeDim.md)
* [tensorflow::ThreadOptions](../../api_docs/cc/StructThreadOptions.md)
<div class='sections-order' style="display: none;">
662 changes: 331 additions & 331 deletions tensorflow/g3doc/api_docs/python/index.md

Large diffs are not rendered by default.

Binary file modified tensorflow/g3doc/extras/tensorflow-whitepaper2015.pdf
Binary file not shown.
4 changes: 2 additions & 2 deletions tensorflow/g3doc/get_started/index.md
@@ -68,8 +68,8 @@ also use MNIST as an example in our technical tutorial where we elaborate on
TensorFlow features.

## Recommended Next Steps: <a class="md-anchor" id="AUTOGENERATED-recommended-next-steps-"></a>
* [Download and Setup](os_setup.md)
* [Basic Usage](basic_usage.md)
* [Download and Setup](../get_started/os_setup.md)
* [Basic Usage](../get_started/basic_usage.md)
* [TensorFlow Mechanics 101](../tutorials/mnist/tf/index.md)


8 changes: 4 additions & 4 deletions tensorflow/g3doc/how_tos/adding_an_op/index.md
@@ -35,7 +35,7 @@ to:
* [Verify it works](#AUTOGENERATED-verify-it-works)
* [Validation](#Validation)
* [Op registration](#AUTOGENERATED-op-registration)
* [Attrs](#AUTOGENERATED-attrs)
* [Attrs](#Attrs)
* [Attr types](#AUTOGENERATED-attr-types)
* [Polymorphism](#Polymorphism)
* [Inputs and Outputs](#AUTOGENERATED-inputs-and-outputs)
@@ -51,8 +51,8 @@ to:

You define the interface of an Op by registering it with the TensorFlow system.
In the registration, you specify the name of your Op, its inputs (types and
names) and outputs (types and names), as well as [docstrings](#docstrings) and
any [attrs](#attrs) the Op might require.
names) and outputs (types and names), as well as docstrings and
any [attrs](#Attrs) the Op might require.

To see how this works, suppose you'd like to create an Op that takes a tensor of
`int32`s and outputs a copy of the tensor, with all but the first element set to
@@ -249,7 +249,7 @@ function on error.
## Op registration <a class="md-anchor" id="AUTOGENERATED-op-registration"></a>
### Attrs <a class="md-anchor" id="AUTOGENERATED-attrs"></a>
### Attrs <a class="md-anchor" id="Attrs"></a>
Ops can have attrs, whose values are set when the Op is added to a graph. These
are used to configure the Op, and their values can be accessed both within the
2 changes: 1 addition & 1 deletion tensorflow/g3doc/how_tos/graph_viz/index.md
@@ -5,7 +5,7 @@ TensorFlow computation graphs are powerful but complicated. The graph visualizat
![Visualization of a TensorFlow graph](./graph_vis_animation.gif "Visualization of a TensorFlow graph")
*Visualization of a TensorFlow graph.*

To see your own graph, run TensorBoard pointing it to the log directory of the job, click on the graph tab on the top pane and select the appropriate run using the menu at the upper left corner. For in depth information on how to run TensorBoard and make sure you are logging all the necessary information, see [Summaries and TensorBoard](../summaries_and_tensorboard/index.md).
To see your own graph, run TensorBoard pointing it to the log directory of the job, click on the graph tab on the top pane and select the appropriate run using the menu at the upper left corner. For in depth information on how to run TensorBoard and make sure you are logging all the necessary information, see [Summaries and TensorBoard](../../how_tos/summaries_and_tensorboard/index.md).

## Name scoping and nodes <a class="md-anchor" id="AUTOGENERATED-name-scoping-and-nodes"></a>

18 changes: 9 additions & 9 deletions tensorflow/g3doc/how_tos/index.md
@@ -6,7 +6,7 @@
TensorFlow Variables are in-memory buffers containing tensors. Learn how to
use them to hold and update model parameters during training.

[View Tutorial](variables/index.md)
[View Tutorial](../how_tos/variables/index.md)
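For a flavor of that, a minimal sketch using the Python API of this release (`tf.Variable`, `tf.initialize_all_variables`):

```python
import tensorflow as tf

# A variable is an in-memory buffer holding a tensor; here it stands in for a model parameter.
weights = tf.Variable(tf.zeros([2, 3]), name="weights")
update = weights.assign_add(tf.ones([2, 3]))  # an op that updates the buffer in place

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())  # variables must be initialized before use
    sess.run(update)
    print(sess.run(weights))                 # the buffer now holds the updated values
```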


## TensorFlow Mechanics 101 <a class="md-anchor" id="AUTOGENERATED-tensorflow-mechanics-101"></a>
@@ -25,54 +25,54 @@ your model(s). This tutorial describes how to build and run TensorBoard as well
as how to add Summary ops to automatically output data to the Events files that
TensorBoard uses for display.

[View Tutorial](summaries_and_tensorboard/index.md)
[View Tutorial](../how_tos/summaries_and_tensorboard/index.md)
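A rough sketch of adding a Summary op, assuming the API names of this release (`tf.scalar_summary`, `tf.merge_all_summaries`, `tf.train.SummaryWriter`):

```python
import tensorflow as tf

loss = tf.constant(0.5, name="loss")   # stand-in for a real training loss
tf.scalar_summary("loss", loss)        # attach a scalar summary to it
merged = tf.merge_all_summaries()      # single op that evaluates every registered summary

with tf.Session() as sess:
    writer = tf.train.SummaryWriter("/tmp/train_logs", sess.graph_def)
    summary_str = sess.run(merged)
    writer.add_summary(summary_str, 0)  # appended to the Events file TensorBoard reads
```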


## TensorBoard: Graph Visualization <a class="md-anchor" id="AUTOGENERATED-tensorboard--graph-visualization"></a>

This tutorial describes how to use the graph visualizer in TensorBoard to help
you understand the dataflow graph and debug it.

[View Tutorial](graph_viz/index.md)
[View Tutorial](../how_tos/graph_viz/index.md)


## Reading Data <a class="md-anchor" id="AUTOGENERATED-reading-data"></a>

This tutorial describes the three main methods of getting data into your
TensorFlow program: Feeding, Reading and Preloading.

[View Tutorial](reading_data/index.md)
[View Tutorial](../how_tos/reading_data/index.md)
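As a minimal illustration of the first method, feeding, assuming the standard `tf.placeholder`/`feed_dict` pattern:

```python
import tensorflow as tf

# Feeding: graph inputs are placeholders that are filled in at run time.
x = tf.placeholder(tf.float32, shape=[None, 2])
row_sums = tf.reduce_sum(x, reduction_indices=1)

with tf.Session() as sess:
    print(sess.run(row_sums, feed_dict={x: [[1.0, 2.0], [3.0, 4.0]]}))  # -> [3. 7.]
```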


## Threading and Queues <a class="md-anchor" id="AUTOGENERATED-threading-and-queues"></a>

This tutorial describes the various constructs implemented by TensorFlow
to facilitate asynchronous and concurrent training.

[View Tutorial](threading_and_queues/index.md)
[View Tutorial](../how_tos/threading_and_queues/index.md)
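For a taste of the underlying constructs, a tiny sketch with a `tf.FIFOQueue` (the `QueueRunner` and `Coordinator` helpers are left out here):

```python
import tensorflow as tf

queue = tf.FIFOQueue(capacity=3, dtypes=[tf.float32])
enqueue = queue.enqueue([10.0])   # producer side, normally run by worker threads
dequeue = queue.dequeue()         # consumer side, e.g. the training loop

with tf.Session() as sess:
    sess.run(enqueue)
    print(sess.run(dequeue))      # -> 10.0
```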


## Adding a New Op <a class="md-anchor" id="AUTOGENERATED-adding-a-new-op"></a>

TensorFlow already has a large suite of node operations from which you can
compose in your graph, but here are the details of how to add your own custom Op.

[View Tutorial](adding_an_op/index.md)
[View Tutorial](../how_tos/adding_an_op/index.md)


## Custom Data Readers <a class="md-anchor" id="AUTOGENERATED-custom-data-readers"></a>

If you have a sizable custom data set, you may want to consider extending
TensorFlow to read your data directly in its native format. Here's how.

[View Tutorial](new_data_formats/index.md)
[View Tutorial](../how_tos/new_data_formats/index.md)


## Using GPUs <a class="md-anchor" id="AUTOGENERATED-using-gpus"></a>

This tutorial describes how to construct and execute models on GPU(s).

[View Tutorial](using_gpu/index.md)
[View Tutorial](../how_tos/using_gpu/index.md)
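A minimal sketch of pinning an op to a GPU with `tf.device`, assuming a CUDA-enabled build:

```python
import tensorflow as tf

with tf.device("/gpu:0"):          # place these ops on the first GPU
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    b = tf.constant([[1.0, 0.0], [0.0, 1.0]])
    c = tf.matmul(a, b)

# log_device_placement prints where each op actually ran.
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    print(sess.run(c))
```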


## Sharing Variables <a class="md-anchor" id="AUTOGENERATED-sharing-variables"></a>
@@ -83,7 +83,7 @@ different locations in the model construction code.

The "Variable Scope" mechanism is designed to facilitate that.

[View Tutorial](variable_scope/index.md)
[View Tutorial](../how_tos/variable_scope/index.md)
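A small sketch of that mechanism, assuming the `tf.variable_scope`/`tf.get_variable` API the tutorial describes:

```python
import tensorflow as tf

def linear(x):
    # get_variable creates "w" on the first call and returns the same variable when reused.
    w = tf.get_variable("w", [2, 2], initializer=tf.random_normal_initializer())
    return tf.matmul(x, w)

x1 = tf.placeholder(tf.float32, [1, 2])
x2 = tf.placeholder(tf.float32, [1, 2])

with tf.variable_scope("model") as scope:
    y1 = linear(x1)
    scope.reuse_variables()   # the second call now shares "model/w"
    y2 = linear(x2)
```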

<div class='sections-order' style="display: none;">
<!--
6 changes: 3 additions & 3 deletions tensorflow/g3doc/how_tos/new_data_formats/index.md
@@ -94,7 +94,7 @@ helper functions from
without modifying any arguments.
Next you will create the actual Reader op. It will help if you are familiar
with [the adding an op how-to](../adding_an_op/index.md). The main steps
with [the adding an op how-to](../../how_tos/adding_an_op/index.md). The main steps
are:
* Registering the op.
@@ -122,7 +122,7 @@ REGISTER_OP("TextLineReader")
A Reader that outputs the lines of a file delimited by '\n'.
)doc");
```

To define an `OpKernel`, Readers can use the shortcut of descending from
`ReaderOpKernel`, defined in
[tensorflow/core/framework/reader_op_kernel.h](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/core/framework/reader_op_kernel.h),
@@ -199,7 +199,7 @@ You can see some examples in
## Writing an Op for a record format <a class="md-anchor" id="AUTOGENERATED-writing-an-op-for-a-record-format"></a>
Generally this is an ordinary op that takes a scalar string record as input, and
so follow [the instructions to add an Op](../adding_an_op/index.md). You may
so follow [the instructions to add an Op](../../how_tos/adding_an_op/index.md). You may
optionally take a scalar string key as input, and include that in error messages
reporting improperly formatted data. That way users can more easily track down
where the bad data came from.
6 changes: 3 additions & 3 deletions tensorflow/g3doc/how_tos/reading_data/index.md
@@ -266,7 +266,7 @@ This can be important:
How many threads do you need? The `tf.train.shuffle_batch*` functions add a
summary to the graph that indicates how full the example queue is. If you have
enough reading threads, that summary will stay above zero. You can
[view your summaries as training progresses using TensorBoard](../summaries_and_tensorboard/index.md).
[view your summaries as training progresses using TensorBoard](../../how_tos/summaries_and_tensorboard/index.md).
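A rough sketch of wiring up several reading threads with `tf.train.shuffle_batch`; the file names and queue sizes below are placeholders:

```python
import tensorflow as tf

filename_queue = tf.train.string_input_producer(["data_0.csv", "data_1.csv"])
reader = tf.TextLineReader()
key, line = reader.read(filename_queue)
feature, label = tf.decode_csv(line, record_defaults=[[0.0], [0]])

# num_threads reading threads fill the example queue; the summary mentioned above
# reports how full that queue stays during training.
example_batch, label_batch = tf.train.shuffle_batch(
    [feature, label], batch_size=32, num_threads=4,
    capacity=2000, min_after_dequeue=1000)
```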

### Creating threads to prefetch using `QueueRunner` objects <a class="md-anchor" id="QueueRunner"></a>

@@ -355,7 +355,7 @@ threads got an error when running some operation (or an ordinary Python
exception).

For more about threading, queues, QueueRunners, and Coordinators
[see here](../threading_and_queues/index.md).
[see here](../../how_tos/threading_and_queues/index.md).
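A condensed sketch of the usual driving loop with a `Coordinator`; the `train_op` here is a stand-in for a real training step fed by an input pipeline:

```python
import tensorflow as tf

train_op = tf.no_op(name="train_step")  # stand-in; a real pipeline raises OutOfRangeError at the epoch limit

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    try:
        while not coord.should_stop():
            sess.run(train_op)
    except tf.errors.OutOfRangeError:
        print("Done training -- epoch limit reached")
    finally:
        coord.request_stop()   # ask the reader threads to stop ...
    coord.join(threads)        # ... and wait until they have
```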

#### Aside: How clean shut-down when limiting epochs works <a class="md-anchor" id="AUTOGENERATED-aside--how-clean-shut-down-when-limiting-epochs-works"></a>

@@ -493,4 +493,4 @@ This is what is done in

You can have the train and eval in the same graph in the same process, and share
their trained variables. See
[the shared variables tutorial](../variable_scope/index.md).
[the shared variables tutorial](../../how_tos/variable_scope/index.md).
@@ -105,4 +105,4 @@ not contain any data relevant to that tab, a message will be displayed
indicating how to serialize data that is applicable to that tab.

For in depth information on how to use the *graph* tab to visualize your graph,
see [TensorBoard: Visualizing your graph](../graph_viz/index.md).
see [TensorBoard: Visualizing your graph](../../how_tos/graph_viz/index.md).
4 changes: 2 additions & 2 deletions tensorflow/g3doc/how_tos/variable_scope/index.md
@@ -1,7 +1,7 @@
# Sharing Variables <a class="md-anchor" id="AUTOGENERATED-sharing-variables"></a>

You can create, initialize, save and load single variables
in the way described in the [Variables HowTo](../variables/index.md).
in the way described in the [Variables HowTo](../../how_tos/variables/index.md).
But when building complex models you often need to share large sets of
variables and you might want to initialize all of them in one place.
This tutorial shows how this can be done using `tf.variable_scope()` and
@@ -12,7 +12,7 @@ the `tf.get_variable()`.
Imagine you create a simple model for image filters, similar to our
[Convolutional Neural Networks Tutorial](../../tutorials/deep_cnn/index.md)
model but with only 2 convolutions (for simplicity of this example). If you use
just `tf.Variable`, as explained in [Variables HowTo](../variables/index.md),
just `tf.Variable`, as explained in [Variables HowTo](../../how_tos/variables/index.md),
your model might look like this.

```python
# (rest of the example collapsed in this diff view)
```
28 changes: 25 additions & 3 deletions tensorflow/g3doc/resources/bib.md
@@ -1,4 +1,7 @@
# BibTex Citation <a class="md-anchor" id="AUTOGENERATED-bibtex-citation"></a>
If you use TensorFlow in your research and would like to cite the TensorFlow
system, we suggest you cite the following whitepaper:

```
@misc{tensorflow2015-whitepaper,
title={{TensorFlow}: Large-Scale Machine Learning on Heterogeneous Systems},
@@ -11,7 +14,7 @@ author={
Eugene~Brevdo and
Zhifeng~Chen and
Craig~Citro and
Greg~S~Corrado and
Greg~S.~Corrado and
Andy~Davis and
Jeffrey~Dean and
Matthieu~Devin and
@@ -21,6 +24,7 @@ author={
Geoffrey~Irving and
Michael~Isard and
Yangqing Jia and
Rafal~Jozefowicz and
Lukasz~Kaiser and
Manjunath~Kudlur and
Josh~Levenberg and
@@ -29,20 +33,38 @@ author={
Sherry~Moore and
Derek~Murray and
Chris~Olah and
Mike~Schuster and
Jonathon~Shlens and
Benoit~Steiner and
Ilya~Sutskever and
Kunal~Talwar and
Paul~Tucker and
Vincent~Vanhoucke and
Vijay~Vasudevan and
Fernanda~Vi\'{e}gas,
Fernanda~Vi\'{e}gas and
Oriol~Vinyals and
Pete~Warden and
Martin~Wattenberg,
Martin~Wattenberg and
Martin~Wicke and
Yuan~Yu and
Xiaoqiang~Zheng},
year={2015},
}
```

In textual form:

```
Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo,
Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis,
Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow,
Andrew Harp, Geoffrey Irving, Michael Isard, Rafal Jozefowicz, Yangqing Jia,
Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Mike Schuster,
Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Jonathon Shlens,
Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker,
Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas,
Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke,
Yuan Yu, and Xiaoqiang Zheng.
TensorFlow: Large-scale machine learning on heterogeneous systems,
2015. Software available from tensorflow.org.
```
4 changes: 2 additions & 2 deletions tensorflow/g3doc/resources/faq.md
@@ -2,7 +2,7 @@

This document provides answers to some of the frequently asked questions about
TensorFlow. If you have a question that is not covered here, you might find an
answer on one of the TensorFlow [community resources](index.md).
answer on one of the TensorFlow [community resources](../resources/index.md).

<!-- TOC-BEGIN This section is generated by neural network: DO NOT EDIT! -->
## Contents
@@ -54,7 +54,7 @@ uses multiple GPUs.
#### What are the different types of tensors that are available? <a class="md-anchor" id="AUTOGENERATED-what-are-the-different-types-of-tensors-that-are-available-"></a>

TensorFlow supports a variety of different data types and tensor shapes. See the
[ranks, shapes, and types reference](dims_types.md) for more details.
[ranks, shapes, and types reference](../resources/dims_types.md) for more details.
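For example, a few constant tensors of different types and shapes (a minimal sketch):

```python
import tensorflow as tf

a = tf.constant([1, 2, 3], dtype=tf.int32)        # rank 1, shape [3]
b = tf.constant([[1.0, 2.0], [3.0, 4.0]])         # rank 2, shape [2, 2], float32
c = tf.constant(["hello"], dtype=tf.string)       # string tensors are supported as well

for t in (a, b, c):
    print(t.dtype, t.get_shape())
```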

## Running a TensorFlow computation <a class="md-anchor" id="AUTOGENERATED-running-a-tensorflow-computation"></a>

4 changes: 2 additions & 2 deletions tensorflow/g3doc/resources/index.md
@@ -6,13 +6,13 @@
Additional details about the TensorFlow programming model and the underlying
implementation can be found in our white paper:

* [TensorFlow: Large-scale machine learning on heterogeneous systems](../extras/tensorflow-whitepaper2015.pdf)
* [TensorFlow: Large-scale machine learning on heterogeneous systems](http://tensorflow.org/tensorflow-whitepaper2015.pdf)

### Citation <a class="md-anchor" id="AUTOGENERATED-citation"></a>

If you use TensorFlow in your research and would like to cite the TensorFlow
system, we suggest you cite the paper above.
You can use this [BibTeX entry](bib.md). As the project progresses, we
You can use this [BibTeX entry](../resources/bib.md). As the project progresses, we
may update the suggested citation with new papers.


8 changes: 4 additions & 4 deletions tensorflow/g3doc/tutorials/deep_cnn/index.md
@@ -105,7 +105,7 @@ adds operations that perform inference, i.e. classification, on supplied images.
add operations that compute the loss,
gradients, variable updates and visualization summaries.

### Model Inputs <a class="md-anchor" id="AUTOGENERATED-model-inputs"></a>
### Model Inputs <a class="md-anchor" id="model-inputs"></a>

The input part of the model is built by the functions `inputs()` and
`distorted_inputs()` which read images from the CIFAR-10 binary data files.
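Assuming the tutorial's `cifar10` module is importable (the import path below is an assumption), the input functions are used roughly like this:

```python
from tensorflow.models.image.cifar10 import cifar10  # assumed location of the tutorial code

# 4-D image batches and 1-D label batches, with training-time distortions applied.
images, labels = cifar10.distorted_inputs()

# The evaluation pipeline reads undistorted images instead.
eval_images, eval_labels = cifar10.inputs(eval_data=True)
```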
@@ -143,7 +143,7 @@ processing time. To prevent these operations from slowing down training, we run
them inside 16 separate threads which continuously fill a TensorFlow
[queue](../../api_docs/python/io_ops.md#shuffle_batch).

### Model Prediction <a class="md-anchor" id="AUTOGENERATED-model-prediction"></a>
### Model Prediction <a class="md-anchor" id="model-prediction"></a>

The prediction part of the model is constructed by the `inference()` function
which adds operations to compute the *logits* of the predictions. That part of
@@ -181,7 +181,7 @@ the CIFAR-10 model specified in
layers are locally connected and not fully connected. Try editing the
architecture to exactly replicate that fully connected model.

### Model Training <a class="md-anchor" id="AUTOGENERATED-model-training"></a>
### Model Training <a class="md-anchor" id="model-training"></a>

The usual method for training a network to perform N-way classification is
[multinomial logistic regression](https://en.wikipedia.org/wiki/Multinomial_logistic_regression),
@@ -302,7 +302,7 @@ values. See how the scripts use
[ExponentialMovingAverage](../../api_docs/python/train.md#ExponentialMovingAverage)
for this purpose.

## Evaluating a Model <a class="md-anchor" id="AUTOGENERATED-evaluating-a-model"></a>
## Evaluating a Model <a class="md-anchor" id="evaluating-a-model"></a>

Let us now evaluate how well the trained model performs on a hold-out data set.
The model is evaluated by the script `cifar10_eval.py`. It constructs the model