Automated g4 rollback of changelist 190835392
PiperOrigin-RevId: 190858242
annarev authored and tensorflower-gardener committed Mar 28, 2018
1 parent 390e19a commit 108178d
Showing 116 changed files with 556 additions and 1,703 deletions.
60 changes: 0 additions & 60 deletions RELEASE.md
@@ -1,63 +1,3 @@
# Release 1.7.0

## Major Features And Improvements
* Eager mode is moving out of contrib, try `tf.enable_eager_execution()`.
* Graph rewrites emulating fixed-point quantization compatible with TensorFlow Lite, supported by new `tf.contrib.quantize` package.
* Easily customize gradient computation with `tf.custom_gradient`.
* [TensorBoard Debugger Plugin](https://github.com/tensorflow/tensorboard/blob/master/tensorboard/plugins/debugger/README.md), the graphical user interface (GUI) of TensorFlow Debugger (tfdbg), is now in alpha.
* Experimental support for reading a sqlite database as a `Dataset` with new `tf.contrib.data.SqlDataset`.
* Distributed Mutex / CriticalSection added to `tf.contrib.framework.CriticalSection`.
* Better text processing with `tf.regex_replace`.
* Easy, efficient sequence input with `tf.contrib.data.bucket_by_sequence_length`

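The eager-execution and `tf.custom_gradient` entries above lend themselves to a short illustration. The sketch below is editorial, not part of this commit; it assumes a TensorFlow 1.7-era install, and `log1pexp` is only an example function.

```python
import tensorflow as tf

tf.enable_eager_execution()  # eager mode now lives outside contrib

@tf.custom_gradient
def log1pexp(x):
  """Numerically stable log(1 + exp(x)) with a hand-written gradient."""
  e = tf.exp(x)

  def grad(dy):
    return dy * (1 - 1 / (1 + e))

  return tf.log(1 + e), grad

# Under eager execution the call runs immediately and returns a concrete value.
print(log1pexp(tf.constant(2.0)))
```
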
## Bug Fixes and Other Changes
* Accelerated Linear Algebra (XLA):
* Add `MaxPoolGradGrad` support for XLA
* CSE pass from TensorFlow is now disabled in XLA.
* `tf.data`:
* `tf.data.Dataset`
* Add support for building C++ Dataset op kernels as external libraries, using the `tf.load_op_library()` mechanism.
* `Dataset.list_files()` now shuffles its output by default.
* `Dataset.shuffle(..., seed=tf.constant(0, dtype=tf.int64))` now yields the same sequence of elements as `Dataset.shuffle(..., seed=0)`.
* Add `num_parallel_reads` argument to `tf.data.TFRecordDataset`.
* `tf.contrib`:
* `tf.contrib.bayesflow.halton_sequence` now supports randomization.
* Add support for scalars in `tf.contrib.all_reduce`.
* Add `effective_sample_size` to `tf.contrib.bayesflow.mcmc_diagnostics`.
* Add `potential_scale_reduction` to `tf.contrib.bayesflow.mcmc_diagnostics`.
* Add `BatchNormalization`, `Kumaraswamy` bijectors.
* Deprecate `tf.contrib.learn`. Please check contrib/learn/README.md for instructions on how to convert existing code.
* `tf.contrib.data`
* Remove deprecated `tf.contrib.data.Dataset`, `tf.contrib.data.Iterator`, `tf.contrib.data.FixedLengthRecordDataset`, `tf.contrib.data.TextLineDataset`, and `tf.contrib.data.TFRecordDataset` classes.
* Added `bucket_by_sequence_length`, `sliding_window_batch`, and `make_batched_features_dataset`
* Remove unmaintained `tf.contrib.ndlstm`. You can find it externally at https://github.com/tmbarchive/tfndlstm.
* Moved most of `tf.contrib.bayesflow` to its own repo: `tfp`
* Other:
* `tf.py_func` now reports the full stack trace if an exception occurs.
* Integrate `TPUClusterResolver` with GKE's integration for Cloud TPUs.
* Add a library for statistical testing of samplers.
* Add Helpers to stream data from the GCE VM to a Cloud TPU.
* Integrate ClusterResolvers with TPUEstimator.
* Unify metropolis_hastings interface with HMC kernel.
* Move LIBXSMM convolutions to a separate --define flag so that they are disabled by default.
* Fix `MomentumOptimizer` lambda.
* Reduce `tfp.layers` boilerplate via programmable docstrings.
* Add `auc_with_confidence_intervals`, a method for computing the AUC and confidence interval with linearithmic time complexity.
* `regression_head` now accepts a customized link function, so that users can define their own link function when `array_ops.identity` does not meet their needs.
* Fix `initialized_value` and `initial_value` behaviors for `ResourceVariables` created from `VariableDef` protos.
* Add TensorSpec to represent the specification of Tensors.
* Constant folding pass is now deterministic.
* Support `float16` `dtype` in `tf.linalg.*`.
* Add `tf.estimator.export.TensorServingInputReceiver` that allows `tf.estimator.Estimator.export_savedmodel` to pass raw tensors to model functions.

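Several of the `tf.data` entries above are API changes; a hedged sketch of how they combine under the 1.7-era graph API follows (editorial, not part of the commit; the shard file names are placeholders).

```python
import tensorflow as tf

# Placeholder shard names; any existing TFRecord files would do.
files = ["/tmp/data-00000.tfrecord", "/tmp/data-00001.tfrecord"]

# `num_parallel_reads` interleaves records from several files at once.
dataset = tf.data.TFRecordDataset(files, num_parallel_reads=2)

# Per the note above, a Python int seed and an int64 tensor seed now
# produce the same shuffle order.
shuffled_a = dataset.shuffle(buffer_size=100, seed=0)
shuffled_b = dataset.shuffle(buffer_size=100,
                             seed=tf.constant(0, dtype=tf.int64))

next_record = shuffled_a.make_one_shot_iterator().get_next()
with tf.Session() as sess:
  print(sess.run(next_record))
```
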
## Thanks to our Contributors

This release contains contributions from many people at Google, as well as:

4d55397500, Abe, Alistair Low, Andy Kernahan, Appledore, Ben, Ben Barsdell, Boris Pfahringer, Brad Wannow, Brett Koonce, Carl Thomé, cclauss, Chengzhi Chen, Chris Drake, Christopher Yeh, Clayne Robison, Codrut Grosu, Daniel Trebbien, Danny Goodman, David Goodwin, David Norman, Deron Eriksson, Donggeon Lim, Donny Viszneki, DosLin, DylanDmitri, Francisco Guerrero, Fred Reiss, gdh1995, Giuseppe, Glenn Weidner, gracehoney, Guozhong Zhuang, Haichen "Hc" Li, Harald Husum, harumitsu.nobuta, Henry Spivey, hsm207, Jekyll Song, Jerome, Jiongyan Zhang, jjsjann123, John Sungjin Park, Johnson145, JoshVarty, Julian Wolff, Jun Wang, June-One, Kamil Sindi, Kb Sriram, Kdavis-Mozilla, Kenji, lazypanda1, Liang-Chi Hsieh, Loo Rong Jie, Mahesh Bhosale, MandarJKulkarni, ManHyuk, Marcus Ong, Marshal Hayes, Martin Pool, matthieudelaro, mdfaijul, mholzel, Michael Zhou, Ming Li, Minmin Sun, Myungjoo Ham, MyungsungKwak, Naman Kamra, Peng Yu, Penghao Cen, Phil, Raghuraman-K, resec, Rohin Mohanadas, Sandeep N Gupta, Scott Tseng, seaotterman, Seo Sanghyeon, Sergei Lebedev, Ted Chang, terrytangyuan, Tim H, tkunic, Tod, vihanjain, Yan Facai (颜发才), Yin Li, Yong Tang, Yukun Chen, Yusuke Yamada



# Release 1.6.0

## Breaking Changes
2 changes: 1 addition & 1 deletion configure.py
@@ -1414,7 +1414,7 @@ def main():
set_build_var(environ_cp, 'TF_NEED_S3', 'Amazon S3 File System',
'with_s3_support', True, 's3')
set_build_var(environ_cp, 'TF_NEED_KAFKA', 'Apache Kafka Platform',
'with_kafka_support', True, 'kafka')
'with_kafka_support', False, 'kafka')
set_build_var(environ_cp, 'TF_ENABLE_XLA', 'XLA JIT', 'with_xla_support',
False, 'xla')
set_build_var(environ_cp, 'TF_NEED_GDR', 'GDR', 'with_gdr_support',
7 changes: 0 additions & 7 deletions tensorflow/BUILD
@@ -240,13 +240,6 @@ config_setting(
visibility = ["//visibility:public"],
)

config_setting(
name = "with_kafka_support_windows_override",
define_values = {"with_kafka_support": "true"},
values = {"cpu": "x64_windows"},
visibility = ["//visibility:public"],
)

config_setting(
name = "with_gcp_support_android_override",
define_values = {"with_gcp_support": "true"},
27 changes: 6 additions & 21 deletions tensorflow/contrib/BUILD
@@ -51,6 +51,7 @@ py_library(
"//tensorflow/contrib/image:single_image_random_dot_stereograms_py",
"//tensorflow/contrib/input_pipeline:input_pipeline_py",
"//tensorflow/contrib/integrate:integrate_py",
"//tensorflow/contrib/kafka",
"//tensorflow/contrib/keras",
"//tensorflow/contrib/kernel_methods",
"//tensorflow/contrib/kfac",
@@ -109,13 +110,7 @@ py_library(
"//tensorflow/python:util",
] + if_mpi(["//tensorflow/contrib/mpi_collectives:mpi_collectives_py"]) + if_tensorrt([
"//tensorflow/contrib/tensorrt:init_py",
]) + select({
"//tensorflow:with_kafka_support_windows_override": [],
"//tensorflow:with_kafka_support": [
"//tensorflow/contrib/kafka",
],
"//conditions:default": [],
}),
]),
)

cc_library(
@@ -125,6 +120,7 @@ cc_library(
"//tensorflow/contrib/boosted_trees:boosted_trees_kernels",
"//tensorflow/contrib/coder:all_kernels",
"//tensorflow/contrib/data/kernels:dataset_kernels",
"//tensorflow/contrib/kafka:dataset_kernels",
"//tensorflow/contrib/factorization/kernels:all_kernels",
"//tensorflow/contrib/input_pipeline:input_pipeline_ops_kernels",
"//tensorflow/contrib/layers:sparse_feature_cross_op_kernel",
@@ -137,13 +133,7 @@ cc_library(
"//tensorflow/contrib/text:all_kernels",
] + if_mpi(["//tensorflow/contrib/mpi_collectives:mpi_collectives_py"]) + if_cuda([
"//tensorflow/contrib/nccl:nccl_kernels",
]) + select({
"//tensorflow:with_kafka_support_windows_override": [],
"//tensorflow:with_kafka_support": [
"//tensorflow/contrib/kafka:dataset_kernels",
],
"//conditions:default": [],
}),
]),
)

cc_library(
@@ -156,6 +146,7 @@ cc_library(
"//tensorflow/contrib/factorization:all_ops",
"//tensorflow/contrib/framework:all_ops",
"//tensorflow/contrib/input_pipeline:input_pipeline_ops_op_lib",
"//tensorflow/contrib/kafka:dataset_ops_op_lib",
"//tensorflow/contrib/layers:sparse_feature_cross_op_op_lib",
"//tensorflow/contrib/nccl:nccl_ops_op_lib",
"//tensorflow/contrib/nearest_neighbor:nearest_neighbor_ops_op_lib",
@@ -166,13 +157,7 @@ cc_library(
"//tensorflow/contrib/tensor_forest:tensor_forest_ops_op_lib",
"//tensorflow/contrib/text:all_ops",
"//tensorflow/contrib/tpu:all_ops",
] + select({
"//tensorflow:with_kafka_support_windows_override": [],
"//tensorflow:with_kafka_support": [
"//tensorflow/contrib/kafka:dataset_ops_op_lib",
],
"//conditions:default": [],
}),
],
)

filegroup(
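The `tensorflow/contrib/BUILD` hunks above drop the `with_kafka_support` `select()` branches, making `//tensorflow/contrib/kafka` an unconditional dependency of contrib again. For orientation only, here is a rough sketch of how that contrib dataset is typically consumed from Python; the `KafkaDataset` class name, its arguments, and the `topic:partition:start:end` string format are assumptions about the `tf.contrib.kafka` package rather than anything shown in this diff.

```python
import tensorflow as tf

# Assumed tf.contrib.kafka API; the broker address and topic spec are hypothetical.
dataset = tf.contrib.kafka.KafkaDataset(
    topics=["my_topic:0:0:-1"],  # assumed "topic:partition:start_offset:end_offset"
    servers="localhost:9092",
    group="example-consumer-group",
    eof=True)                    # assumed flag: stop at end of stream instead of blocking

next_message = dataset.make_one_shot_iterator().get_next()
with tf.Session() as sess:
  print(sess.run(next_message))
```
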
2 changes: 1 addition & 1 deletion tensorflow/contrib/boosted_trees/kernels/quantile_ops.cc
@@ -253,7 +253,7 @@ class CreateQuantileAccumulatorOp : public OpKernel {
private:
float epsilon_;
int32 num_quantiles_;
// An upper bound on the number of entries that the summaries might have
// An upperbound on the number of enteries that the summaries might have
// for a feature.
int64 max_elements_;
bool generate_quantiles_;
@@ -54,7 +54,7 @@ Status BatchFeatures::Initialize(
TF_CHECK_AND_RETURN_IF_ERROR(
dense_float_feature.dim_size(1) == 1,
errors::InvalidArgument(
"Dense float features may not be multivalent: dim_size(1) = ",
"Dense float features may not be multi-valent: dim_size(1) = ",
dense_float_feature.dim_size(1)));
dense_float_feature_columns_.emplace_back(dense_float_feature);
}
@@ -59,7 +59,7 @@ TEST_F(BatchFeaturesTest, DenseFloatFeatures_Multivalent) {
BatchFeatures batch_features(1);
auto dense_vec = AsTensor<float>({3.0f, 7.0f}, {1, 2});
auto expected_error = InvalidArgument(
"Dense float features may not be multivalent: dim_size(1) = 2");
"Dense float features may not be multi-valent: dim_size(1) = 2");
EXPECT_EQ(expected_error,
batch_features.Initialize({dense_vec}, {}, {}, {}, {}, {}, {}));
}
@@ -54,7 +54,7 @@ Status DropoutUtils::DropOutTrees(
if (probability_of_skipping_dropout < 0 ||
probability_of_skipping_dropout > 1) {
return errors::InvalidArgument(
"Probability of skipping dropout must be in [0,1] range");
"Probability of skiping dropout must be in [0,1] range");
}
const auto num_trees = weights.size();

2 changes: 1 addition & 1 deletion tensorflow/contrib/boosted_trees/lib/utils/dropout_utils.h
@@ -66,7 +66,7 @@ class DropoutUtils {
// Current weights and num_updates will be updated as a result of this
// func
std::vector<float>* current_weights,
// How many weight assignments have been done for each tree already.
// How many weight assignements have been done for each tree already.
std::vector<int32>* num_updates);
};

@@ -34,7 +34,7 @@ TEST_F(SparseColumnIterableTest, Empty) {
}

TEST_F(SparseColumnIterableTest, Iterate) {
// 8 examples having 7 sparse features with the 3rd and 7th multivalent.
// 8 examples having 7 sparse features with the 3rd and 7th multi-valent.
// This can be visualized like the following:
// Instance | Sparse |
// 0 | x |
2 changes: 1 addition & 1 deletion tensorflow/contrib/boosted_trees/proto/tree_config.proto
@@ -53,7 +53,7 @@ message DenseFloatBinarySplit {
// Float feature column and split threshold describing
// the rule feature <= threshold.
int32 feature_column = 1;
// If feature column is multivalent, this holds the index of the dimension
// If feature column is multivalent, this holds the index of the dimensiong
// for the split. Defaults to 0.
int32 dimension_id = 5;
float threshold = 2;
@@ -120,8 +120,8 @@ def setUp(self):
"""Sets up the prediction tests.
Create a batch of two examples having one dense float, two sparse float
single valued, one sparse float multidimensional and one sparse int
features. The data looks like the following:
single valued, one sparse float multidimensionl and one sparse int features.
The data looks like the following:
| Instance | Dense0 | SparseF0 | SparseF1 | SparseI0 | SparseM
| 0 | 7 | -3 | | 9,1 | __, 5.0
| 1 | -2 | | 4 | | 3, ___
@@ -810,7 +810,7 @@ def testDropoutCenterBiasWithGrowingMeta(self):
# building. This tree should never be dropped.
num_trees = 10
with self.test_session():
# Empty tree ensemble.
# Empty tree ensenble.
tree_ensemble_config = tree_config_pb2.DecisionTreeEnsembleConfig()
# Add 10 trees with some weights.
for i in range(0, num_trees):
@@ -951,7 +951,7 @@ def testDropoutSeed(self):

def testDropOutZeroProb(self):
with self.test_session():
# Empty tree ensemble.
# Empty tree ensenble.
tree_ensemble_config = tree_config_pb2.DecisionTreeEnsembleConfig()
# Add 1000 trees with some weights.
for i in range(0, 999):
@@ -994,7 +994,7 @@ def testDropOutZeroProb(self):

def testAveragingAllTrees(self):
with self.test_session():
# Empty tree ensemble.
# Empty tree ensenble.
tree_ensemble_config = tree_config_pb2.DecisionTreeEnsembleConfig()
adjusted_tree_ensemble_config = (
tree_config_pb2.DecisionTreeEnsembleConfig())
@@ -482,7 +482,7 @@ def setUp(self):
"""Sets up the quantile op tests.
Create a batch of 4 examples having 2 dense and 4 sparse features.
Fourth sparse feature is multivalent (3 dimensional)
Forth sparse feature is multivalent (3 dimensional)
The data looks like this
| Instance | Dense 0 | Dense 1 | Sparse 0 | Sparse 1 |Sparse 2| SparseM
| 0 | -0.1 | -1 | -2 | 0.1 | |_ ,1,_
@@ -184,7 +184,7 @@ def flush(self, stamp_token, next_stamp_token):
"""Finalizes quantile summary stream and resets it for next iteration.
Args:
stamp_token: Expected current token.
stamp_token: Exepcted current token.
next_stamp_token: Next value for the token.
Returns:
A list of quantiles or approximate boundaries.
3 changes: 0 additions & 3 deletions tensorflow/contrib/cmake/tf_tests.cmake
@@ -210,9 +210,6 @@ if (tensorflow_BUILD_PYTHON_TESTS)
"${tensorflow_source_dir}/tensorflow/contrib/learn/python/learn/learn_io/graph_io_test.py"
# Test is flaky on Windows GPU builds (b/38283730).
"${tensorflow_source_dir}/tensorflow/contrib/factorization/python/ops/gmm_test.py"
# Disable following manual tag in BUILD.
"${tensorflow_source_dir}/tensorflow/python/keras/_impl/keras/layers/convolutional_test.py"

)
if (WIN32)
set(tf_test_src_py_exclude
@@ -413,20 +413,6 @@ def testMapAndBatchPartialBatch(self):
def testMapAndBatchPartialBatchDropRemainder(self):
return self._testMapAndBatchPartialBatchHelper(drop_remainder=True)

def testMapAndBatchYieldsPartialBatch(self):
iterator = (dataset_ops.Dataset.range(10)
.apply(batching.map_and_batch(
lambda x: array_ops.reshape(x * x, [1]), 4))
.make_one_shot_iterator())
self.assertEqual([None, 1], iterator.output_shapes.as_list())
next_element = iterator.get_next()
with self.test_session() as sess:
self.assertAllEqual([[0], [1], [4], [9]], sess.run(next_element))
self.assertAllEqual([[16], [25], [36], [49]], sess.run(next_element))
self.assertAllEqual([[64], [81]], sess.run(next_element))
with self.assertRaises(errors.OutOfRangeError):
sess.run(next_element)

def testMapAndBatchSparse(self):

def _sparse(i):
6 changes: 1 addition & 5 deletions tensorflow/contrib/eager/python/BUILD
@@ -270,11 +270,7 @@ cuda_py_test(
"//tensorflow/python/eager:test",
"//tensorflow/python/keras",
],
tags = [
"no_oss", # b/74395663
"no_windows", # TODO: needs investigation on Windows
"notsan",
],
tags = ["notsan"],
)

filegroup(
@@ -418,6 +418,7 @@ def testTrainSpinn(self):
if event.summary.value
and event.summary.value[0].tag == "train/loss"]
self.assertEqual(config.epochs, len(train_losses))
self.assertLess(train_losses[-1], train_losses[0])

# 5. Verify that checkpoints exist and contains all the expected variables.
self.assertTrue(glob.glob(os.path.join(config.logdir, "ckpt*")))
@@ -136,7 +136,7 @@ def model_fn(...): # See `model_fn` in `Estimator`.
the train_op argument of `EstimatorSpec`.
loss_reduction: controls whether losses are summed or averaged.
devices: Optional list of devices to replicate the model across. This
argument can be used to replicate only on the subset of available GPUs.
argument can be used to replice only on the subset of available GPUs.
If `None`, then all available GPUs are going to be used for replication.
If no GPUs are available, then the model is going to be placed on the CPU.
2 changes: 1 addition & 1 deletion tensorflow/contrib/factorization/kernels/clustering_ops.cc
@@ -353,7 +353,7 @@ class NearestNeighborsOp : public OpKernel {
auto worker_threads = *(context->device()->tensorflow_cpu_worker_threads());
const int64 num_threads = worker_threads.num_threads;
// This kernel might be configured to use fewer than the total number of
// available CPUs on the host machine. To avoid destructive interference
// available CPUs on the host machine. To avoid descructive interference
// with other jobs running on the host machine, we must only use a fraction
// of total available L3 cache. Unfortunately, we cannot query the host
// machine to get the number of physical CPUs. So, we use a fixed per-CPU
14 changes: 7 additions & 7 deletions tensorflow/contrib/factorization/python/ops/factorization_ops.py
@@ -106,7 +106,7 @@ class WALSModel(object):
# the prep_gramian_op for row(column) can be run.
worker_init_op = model.worker_init
# To be run once per integration sweep before the row(column) update
# To be run once per interation sweep before the row(column) update
# initialize ops can be run. Note that in the distributed training
# situations, this should only be run by the chief trainer. All other
# trainers need to block until this is done.
@@ -118,9 +118,9 @@ class WALSModel(object):
init_row_update_op = model.initialize_row_update_op
init_col_update_op = model.initialize_col_update_op
# Ops to update row(column). This can either take the entire sparse
# tensor or slices of sparse tensor. For distributed trainer, each
# trainer handles just part of the matrix.
# Ops to upate row(column). This can either take the entire sparse tensor
# or slices of sparse tensor. For distributed trainer, each trainer
# handles just part of the matrix.
_, row_update_op, unreg_row_loss, row_reg, _ = model.update_row_factors(
sp_input=matrix_slices_from_queue_for_worker_shard)
row_loss = unreg_row_loss + row_reg
@@ -220,7 +220,7 @@ def __init__(self,
in the form of [[w_0, w_1, ...], [w_k, ... ], [...]], with the number of
inner lists matching the number of row factor shards and the elements in
each inner list are the weights for the rows of the corresponding row
factor shard. In this case, w_ij = unobserved_weight +
factor shard. In this case, w_ij = unonbserved_weight +
row_weights[i] * col_weights[j].
- If this is a single non-negative real number, this value is used for
all row weights and w_ij = unobserved_weight + row_weights *
@@ -435,7 +435,7 @@ def _prepare_gramian(self, factors, gramian):
gramian: Variable storing the gramian calculated from the factors.
Returns:
A op that updates the gramian with the calculated value from the factors.
A op that updates the gramian with the calcuated value from the factors.
"""
partial_gramians = []
for f in factors:
@@ -564,7 +564,7 @@ def worker_init(self):
Note that specifically this initializes the cache of the row and column
weights on workers when `use_factors_weights_cache` is True. In this case,
if these weights are being calculated and reset after the object is created,
if these weights are being calcualted and reset after the object is created,
it is important to ensure this ops is run afterwards so the cache reflects
the correct values.
"""