TensorFlow: Upstream latest changes to git.
Changes:
- Documentation updates.
- Specify numpy version in required packages.

Base CL: 107344010
keveman committed Nov 8, 2015
1 parent a5ac11d commit 7312671
Showing 14 changed files with 110 additions and 110 deletions.
11 changes: 0 additions & 11 deletions tensorflow/g3doc/api_docs/python/ops.md

This file was deleted.

6 changes: 3 additions & 3 deletions tensorflow/g3doc/api_docs/python/state_ops.md
@@ -540,9 +540,9 @@ You number checkpoint filenames by passing a value to the optional
`global_step` argument to `save()`:

```python
saver.save('my-model', global_step=0) ==> filename: 'my-model-0'
saver.save(sess, 'my-model', global_step=0) ==> filename: 'my-model-0'
...
saver.save('my-model', global_step=1000) ==> filename: 'my-model-1000'
saver.save(sess, 'my-model', global_step=1000) ==> filename: 'my-model-1000'
```

Additionally, optional arguments to the `Saver()` constructor let you control
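
For example, checkpoint retention can be controlled with the `max_to_keep` and
`keep_checkpoint_every_n_hours` arguments. A minimal sketch (argument values
here are illustrative):

```python
import tensorflow as tf

v = tf.Variable(tf.zeros([10]), name="v")

# Keep at most 5 recent checkpoint files, plus one checkpoint for every
# 2 hours of training.
saver = tf.train.Saver(max_to_keep=5, keep_checkpoint_every_n_hours=2)
```
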
@@ -676,7 +676,7 @@ path can be passed directly to a call to `restore()`.
##### Args: <a class="md-anchor" id="AUTOGENERATED-args-"></a>


* <b>sess</b>: A Session to use to save the variables..
* <b>sess</b>: A Session to use to save the variables.
* <b>save_path</b>: string. Path to the checkpoint filename. If the saver is
`sharded`, this is the prefix of the sharded checkpoint filename.
* <b>global_step</b>: If provided the global step number is appended to
7 changes: 4 additions & 3 deletions tensorflow/g3doc/get_started/basic_usage.md
@@ -286,7 +286,8 @@ with tf.Session() as sess:
```

A `placeholder()` operation generates an error if you do not supply a feed for
it. See the [MNIST fully-connected feed
tutorial](../tutorials/mnist/fully_connected_feed.py) for a larger-scale
example of feeds.
it. See the
[MNIST fully-connected feed tutorial](../tutorials/mnist/tf/index.md)
([source code](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/g3doc/tutorials/mnist/fully_connected_feed.py))
for a larger-scale example of feeds.
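
A minimal sketch of supplying a feed for a placeholder (shapes and values here
are illustrative):

```python
import tensorflow as tf

# Define a placeholder and an op that depends on it.
x = tf.placeholder(tf.float32, shape=[2])
y = x * 2.0

with tf.Session() as sess:
  # Supplying a feed for 'x' works; running 'y' without the feed would
  # raise an error.
  print sess.run(y, feed_dict={x: [1.0, 2.0]})
```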

21 changes: 13 additions & 8 deletions tensorflow/g3doc/get_started/os_setup.md
@@ -2,19 +2,21 @@

## Binary Installation <a class="md-anchor" id="AUTOGENERATED-binary-installation"></a>

The TensorFlow Python API requires Python 2.7.

### Ubuntu/Linux <a class="md-anchor" id="AUTOGENERATED-ubuntu-linux"></a>

**Note**: All the virtualenv-related instructions are optional, but we recommend
using the virtualenv on any multi-user system.

Make sure you have [pip](https://pypi.python.org/pypi/pip), the python headers,
and (optionally) [virtualenv](https://pypi.python.org/pypi/virtualenv) installed:

```bash
$ sudo apt-get install python-pip python-dev python-virtualenv
```

**Note**: All the virtualenv-related instructions are optional, but we recommend
using the virtualenv on any multi-user system.

Set up a new virtualenv environment. Assuming you want to set it up in the
Set up a new virtualenv environment. To set it up in the
directory `~/tensorflow`, run:

```bash
@@ -39,18 +41,19 @@ Inside the virtualenv, install TensorFlow:
# For GPU-enabled version (only install this version if you have the CUDA sdk installed)
(tensorflow)$ pip install --upgrade https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.5.0-cp27-none-linux_x86_64.whl

# When you are done using TensorFlow:
(tensorflow)$ deactivate # Deactivate the virtualenv
$ # Your prompt should change back
```
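
A quick way to confirm the installation from inside the virtualenv is to run a
tiny graph (a minimal sketch):

```python
import tensorflow as tf

# Build and run a trivial graph to check that the package imports and
# a session can execute an op.
hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()
print sess.run(hello)
```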

### Mac OS X <a class="md-anchor" id="AUTOGENERATED-mac-os-x"></a>

Make sure you have [pip](https://pypi.python.org/pypi/pip) and
(optionally) [virtualenv](https://pypi.python.org/pypi/virtualenv) installed:

**Note**: All the virtualenv-related instructions are optional, but we recommend
using the virtualenv on any multi-user system.

Make sure you have [pip](https://pypi.python.org/pypi/pip) and
(optionally) [virtualenv](https://pypi.python.org/pypi/virtualenv) installed:

If using `easy_install`:

```bash
@@ -78,6 +81,8 @@ Install TensorFlow (only CPU binary version is currently available).

```bash
(tensorflow)$ pip install --upgrade https://storage.googleapis.com/tensorflow/mac/tensorflow-0.5.0-py2-none-any.whl

# When you are done using TensorFlow:
(tensorflow)$ deactivate # Deactivate the virtualenv
$ # Your prompt should change back
```
@@ -184,7 +189,7 @@ Add the executable `output/bazel` to your `$PATH` environment variable.
$ sudo apt-get install python-numpy swig python-dev
```

#### <a name="install_cuda"></a>Optional: Install CUDA (GPUs on Linux) <a class="md-anchor" id="AUTOGENERATED--a-name--install_cuda----a-optional--install-cuda--gpus-on-linux-"></a>
#### Optional: Install CUDA (GPUs on Linux) <a class="md-anchor" id="install_cuda"></a>

In order to build or run TensorFlow with GPU support, both Cuda Toolkit 7.0 and
CUDNN 6.5 V2 from NVIDIA need to be installed.
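
Once the toolkit and CUDNN are in place, one quick smoke test for a GPU-enabled
build is to pin a small computation to a GPU device. A sketch, assuming a
single CUDA device is visible as `/gpu:0`:

```python
import tensorflow as tf

# Pin a small computation to the first GPU; this fails if no GPU device
# or GPU kernel is available, making it a quick smoke test.
with tf.device('/gpu:0'):
  a = tf.constant([1.0, 2.0, 3.0])
  b = tf.constant([4.0, 5.0, 6.0])
  c = a * b

with tf.Session() as sess:
  print sess.run(c)
```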
84 changes: 43 additions & 41 deletions tensorflow/g3doc/how_tos/variables/index.md
@@ -1,26 +1,26 @@
# Variables: Creation, Initialization, Saving, and Loading <a class="md-anchor" id="AUTOGENERATED-variables--creation--initialization--saving--and-loading"></a>

When you train a model, you use [Variables](../../api_docs/python/state_ops.md)
When you train a model, you use [variables](../../api_docs/python/state_ops.md)
to hold and update parameters. Variables are in-memory buffers containing
tensors. They need to be explicitly initialized and can be saved to disk during
tensors. They must be explicitly initialized and can be saved to disk during
and after training. You can later restore saved values to exercise or analyse
the model.

This document references the following TensorFlow classes. Follow the links to
their reference manual for a complete description of their API:

* The `Variable` class [tf.Variable](../../api_docs/python/state_ops.md#Variable).
* The `Saver` class [tf.train.Saver](../../api_docs/python/state_ops.md#Saver).
* The [`tf.Variable`](../../api_docs/python/state_ops.md#Variable) class.
* The [`tf.train.Saver`](../../api_docs/python/state_ops.md#Saver) class.


## Creation <a class="md-anchor" id="AUTOGENERATED-creation"></a>

When you create a [Variable](../../api_docs/python/state_ops.md) you pass a
`Tensor` as its initial value to the `Variable()` constructor. TensorFlow
provides a collection of Ops that produce tensors often used for initialization
provides a collection of ops that produce tensors often used for initialization
from [constants or random values](../../api_docs/python/constant_op.md).

Note that all these Ops require you to specify the shape of the tensors. That
Note that all these ops require you to specify the shape of the tensors. That
shape automatically becomes the shape of the variable. Variables generally
have a fixed shape, but TensorFlow provides advanced mechanisms to reshape
variables.
@@ -32,28 +32,28 @@ weights = tf.Variable(tf.random_normal([784, 200], stddev=0.35),
biases = tf.Variable(tf.zeros([200]), name="biases")
```
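
Continuing the example above, the shapes inherited from the initializers can be
inspected with `get_shape()` (a small sketch):

```python
print weights.get_shape()  # (784, 200), taken from tf.random_normal
print biases.get_shape()   # (200,), taken from tf.zeros
```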

Calling `tf.Variable()` adds a few Ops to the graph:
Calling `tf.Variable()` adds several ops to the graph:

* A `variable` Op that holds the variable value.
* An initializer Op that sets the variable to its initial value. This is
actually a `tf.assign` Op.
* The Ops for the initial value, such as the `zeros` Op for the `biases`
* A `variable` op that holds the variable value.
* An initializer op that sets the variable to its initial value. This is
actually a `tf.assign` op.
* The ops for the initial value, such as the `zeros` op for the `biases`
variable in the example are also added to the graph.

The value returned by `tf.Variable()` is an instance of the Python class
`tf.Variable`.
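
A small sketch of inspecting these pieces on the returned object (attribute
names as in the `Variable` reference):

```python
import tensorflow as tf

biases = tf.Variable(tf.zeros([200]), name="biases")
print biases.name         # name of the variable op in the graph
print biases.initializer  # the initializer op (the tf.assign-based op)
```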

## Initialization <a class="md-anchor" id="AUTOGENERATED-initialization"></a>

Variable initializers must be run explicitly before other Ops in your model can
be run. The easiest way to do that is to add an Op that runs all the variable
initializers, and run that Op before using the model.
Variable initializers must be run explicitly before other ops in your model can
be run. The easiest way to do that is to add an op that runs all the variable
initializers, and run that op before using the model.

You can alternatively restore variable values from a checkpoint file, see
below.

Use `tf.initialize_all_variables()` to add an Op to run variable initializers.
Only run that Op after you have fully constructed your model and launched it in
Use `tf.initialize_all_variables()` to add an op to run variable initializers.
Only run that op after you have fully constructed your model and launched it in
a session.

```python
@@ -62,13 +62,13 @@ weights = tf.Variable(tf.random_normal([784, 200], stddev=0.35),
name="weights")
biases = tf.Variable(tf.zeros([200]), name="biases")
...
# Add an Op to initialize the variables.
# Add an op to initialize the variables.
init_op = tf.initialize_all_variables()

# Later, when launching the model
with tf.Session() as sess:
# Run the init operation.
sess.Run(init_op)
sess.run(init_op)
...
# Use the model
...
@@ -77,7 +77,7 @@ with tf.Session() as sess:
### Initialization from another Variable <a class="md-anchor" id="AUTOGENERATED-initialization-from-another-variable"></a>

You sometimes need to initialize a variable from the initial value of another
variable. As the Op added by `tf.initialize_all_variables()` initializes all
variable. As the op added by `tf.initialize_all_variables()` initializes all
variables in parallel you have to be careful when this is needed.

To initialize a new variable from the value of another variable use the other
@@ -98,27 +98,29 @@ w_twice = tf.Variable(weights.initialized_value() * 0.2, name="w_twice")
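
A minimal sketch of that pattern, using the `initialized_value()` property of
the first variable (variable names here are illustrative):

```python
import tensorflow as tf

weights = tf.Variable(tf.random_normal([784, 200], stddev=0.35), name="weights")
# Initialize 'w2' from the initial value of 'weights'.
w2 = tf.Variable(weights.initialized_value(), name="w2")
# Or use the initialized value like any other tensor to compute a new one.
w_twice = tf.Variable(weights.initialized_value() * 2.0, name="w_twice")
```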

### Custom Initialization <a class="md-anchor" id="AUTOGENERATED-custom-initialization"></a>

The convenience function `tf.initialize_all_variables()` adds an Op to
The convenience function `tf.initialize_all_variables()` adds an op to
initialize *all variables* in the model. You can also pass it an explicit list
of variables to initialize. See the
[Variables Documentation](../../api_docs/python/state_ops.md) for more options,
including checking if variables are initialized.
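
For instance, a sketch of initializing only an explicit subset of variables
(variable names here are illustrative):

```python
import tensorflow as tf

v1 = tf.Variable(tf.zeros([10]), name="v1")
v2 = tf.Variable(tf.ones([10]), name="v2")
v3 = tf.Variable(tf.zeros([10]), name="v3")

# Initialize only v1 and v2; v3 is left uninitialized.
init_subset_op = tf.initialize_variables([v1, v2])

with tf.Session() as sess:
  sess.run(init_subset_op)
```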

## Saving and Restoring <a class="md-anchor" id="AUTOGENERATED-saving-and-restoring"></a>

The easiest way to save and restore a model is to use a `tf.train.Saver`
object. The constructor adds `save` and `restore` Ops to the graph for all, or
a specified list, of variables. The saver object provides methods to run these
Ops, specifying paths for the checkpoint files to write to or read from.
The easiest way to save and restore a model is to use a `tf.train.Saver` object.
The constructor adds `save` and `restore` ops to the graph for all, or a
specified list, of the variables in the graph. The saver object provides
methods to run these ops, specifying paths for the checkpoint files to write to
or read from.

### Checkpoint Files <a class="md-anchor" id="AUTOGENERATED-checkpoint-files"></a>

Variables are saved in binary files that, roughly, contains a map from variable
names to tensors.
Variables are saved in binary files that, roughly, contain a map from variable
names to tensor values.

When you create a `Saver` object, you can optionally chose names for the
variables in the checkpoint files. By default, it uses the names passed to the
`tf.Variable()` call.
When you create a `Saver` object, you can optionally choose names for the
variables in the checkpoint files. By default, it uses the value of the
[`Variable.name`](../../api_docs/python/state_ops.md#Variable.name) property for
each variable.

### Saving Variables <a class="md-anchor" id="AUTOGENERATED-saving-variables"></a>

@@ -130,20 +132,20 @@ the model.
v1 = tf.Variable(..., name="v1")
v2 = tf.Variable(..., name="v2")
...
# Add an Op to initialize the variables.
# Add an op to initialize the variables.
init_op = tf.initialize_all_variables()

# Add Ops to save and restore all the variables.
# Add ops to save and restore all the variables.
saver = tf.train.Saver()

# Later, launch the model, initialize the variables, do some work, save the
# variables to disk.
with tf.Session() as sess:
sess.Run(init_op)
sess.run(init_op)
# Do some work with the model.
..
# Save the variables to disk.
save_path = saver.Save(sess, "/tmp/model.ckpt")
save_path = saver.save(sess, "/tmp/model.ckpt")
print "Model saved in file: ", save_path
```

@@ -157,23 +159,23 @@ restore variables from a file you do not have to initialize them beforehand.
v1 = tf.Variable(..., name="v1")
v2 = tf.Variable(..., name="v2")
...
# Add Ops to save and restore all the variables.
# Add ops to save and restore all the variables.
saver = tf.train.Saver()

# Later, launch the model, use the saver to restore variables from disk, and
# do some work with the model.
with tf.Session() as sess:
# Restore variables from disk.
saver.Restore(sess, "/tmp/model.ckpt")
saver.restore(sess, "/tmp/model.ckpt")
print "Model restored."
# Do some work with the model
...
```

### Chosing which Variables to Save and Restore <a class="md-anchor" id="AUTOGENERATED-chosing-which-variables-to-save-and-restore"></a>
### Choosing which Variables to Save and Restore <a class="md-anchor" id="AUTOGENERATED-choosing-which-variables-to-save-and-restore"></a>

If you do not pass any argument to `tf.train.Saver()` the saver
handles all variables. Each one of them is saved under the name that was
If you do not pass any argument to `tf.train.Saver()` the saver handles all
variables in the graph. Each one of them is saved under the name that was
passed when the variable was created.

It is sometimes useful to explicitly specify names for variables in the
@@ -196,10 +198,10 @@ Notes:
* You can create as many saver objects as you want if you need to save and
restore different subsets of the model variables. The same variable can be
listed in multiple saver objects, its value is only changed when the saver
`Restore()` method is run.
`restore()` method is run.

* If you only restore a subset of the model variables at the start
of a session, you have to run an initialize Op for the other variables. See
of a session, you have to run an initialize op for the other variables. See
[`tf.initialize_variables()`](../../api_docs/python/state_ops.md#initialize_variables)
for more information.

@@ -208,7 +210,7 @@ Notes:
v1 = tf.Variable(..., name="v1")
v2 = tf.Variable(..., name="v2")
...
# Add Ops to save and restore only 'v2' using the name "my_v2"
# Add ops to save and restore only 'v2' using the name "my_v2"
saver = tf.train.Saver({"my_v2": v2})
# Use the saver object normally after that.
...
12 changes: 6 additions & 6 deletions tensorflow/g3doc/tutorials/mandelbrot/index.md
@@ -6,18 +6,18 @@ general mathematics. This is actually a pretty naive implementation of the
visualization, but it makes the point. (We may end up providing a more
elaborate implementation down the line to produce more truly beautiful images.)

Note: This tutorial was originally prepared as an iPython notebook.
Note: This tutorial was originally prepared as an IPython notebook.

## Basic Setup <a class="md-anchor" id="AUTOGENERATED-basic-setup"></a>

We'll need a few imports to get started.

```python
#Import libraries for simulation
# Import libraries for simulation
import tensorflow as tf
import numpy as np

#Imports for visualization
# Imports for visualization
import PIL.Image
from cStringIO import StringIO
from IPython.display import clear_output, Image, display
@@ -45,7 +45,7 @@ def DisplayFractal(a, fmt='jpeg'):

## Session and Variable Initialization <a class="md-anchor" id="AUTOGENERATED-session-and-variable-initialization"></a>

For playing around like this, we often us an interactive session, but a regular
For playing around like this, we often use an interactive session, but a regular
session would work as well.

```python
@@ -61,7 +61,7 @@ Y, X = np.mgrid[-1.3:1.3:0.005, -2:1:0.005]
Z = X+1j*Y
```

Now we define and initialize.
Now we define and initialize TensorFlow tensors.

```python
xs = tf.constant(Z.astype("complex64"))
@@ -72,7 +72,7 @@ ns = tf.Variable(tf.zeros_like(xs, "float32"))
TensorFlow requires that you explicitly initialize variables before using them.

```python
tf.InitializeAllVariables().run()
tf.initialize_all_variables().run()
```

## Defining and Running the Computation <a class="md-anchor" id="AUTOGENERATED-defining-and-running-the-computation"></a>
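
A rough sketch of the iteration this section builds, continuing from the `xs`,
`zs`, `ns` tensors and the `DisplayFractal` helper defined above (the
divergence threshold and step count here are illustrative, and an interactive
session is assumed):

```python
# One step of z <- z*z + c, counting how long each point stays bounded.
zs_ = zs * zs + xs
not_diverged = tf.complex_abs(zs_) < 4

# Group the two state updates into a single op.
step = tf.group(
  zs.assign(zs_),
  ns.assign_add(tf.cast(not_diverged, "float32")))

for i in range(200):
  step.run()

DisplayFractal(ns.eval())
```
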
9 changes: 5 additions & 4 deletions tensorflow/g3doc/tutorials/mnist/beginners/index.md
@@ -34,10 +34,11 @@ work through the code.
## The MNIST Data <a class="md-anchor" id="AUTOGENERATED-the-mnist-data"></a>

The MNIST data is hosted on
[Yann LeCun's website](http://yann.lecun.com/exdb/mnist/).
For your convenience, we've included some python code to download and install
the data automatically. You can either download [the code](../input_data.py) and
import it as below, or simply copy and paste it in.
[Yann LeCun's website](http://yann.lecun.com/exdb/mnist/). For your
convenience, we've included some python code to download and install the data
automatically. You can either download
[the code](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/g3doc/tutorials/mnist/input_data.py)
and import it as below, or simply copy and paste it in.

```python
import input_data
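# Illustrative continuation (the directory name here is an assumption):
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
```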
