[BE] Remove dependency on six and future (pytorch#94709)
Remove the Python 2/3 compatibility libraries [six](https://pypi.org/project/six) and [future](https://pypi.org/project/future), along with the `torch._six` shim module. We only support Python 3.8+ now, so it is time to retire them.

Pull Request resolved: pytorch#94709
Approved by: https://github.com/malfet, https://github.com/Skylion007
XuehaiPan authored and pytorchmergebot committed Feb 14, 2023
1 parent 3951169 commit b005ec6
Showing 73 changed files with 108 additions and 195 deletions.
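
Most of the diff is mechanical import and call-site swaps. A minimal, self-contained sketch of the recurring patterns (the class and metaclass names below are illustrative, not taken from the PR; it assumes a torch build that already exposes `torch.inf` and `torch.nan`, as the tree this PR targets does):

```python
import math
from urllib.request import urlretrieve  # was: from six.moves.urllib.request import urlretrieve

import torch
from torch import inf, nan              # was: from torch._six import inf, nan


class Meta(type):
    """Illustrative metaclass standing in for project-specific ones."""


class Example(metaclass=Meta):           # was: @six.add_metaclass(Meta) on a plain class
    pass


# torch.inf and torch.nan are ordinary Python floats.
assert inf == math.inf
assert math.isnan(nan)

# torch._six.string_classes checks become plain str checks.
assert isinstance("key", str)
```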
5 changes: 0 additions & 5 deletions .ci/docker/requirements-ci.txt
@@ -36,11 +36,6 @@ flatbuffers==2.0
 #Pinned versions: 2.0
 #test that import:

-#future #this breaks linux-bionic-rocm4.5-py3.7
-#Description: compatibility layer between python 2 and python 3
-#Pinned versions:
-#test that import:
-
 hypothesis==5.35.1
 # Pin hypothesis to avoid flakiness: https://github.com/pytorch/pytorch/issues/31136
 #Description: advanced library for generating parametrized tests
2 changes: 1 addition & 1 deletion .circleci/config.yml

Some generated files are not rendered by default.

5 changes: 2 additions & 3 deletions .circleci/scripts/binary_linux_test.sh
@@ -82,8 +82,7 @@ if [[ "$PACKAGE_TYPE" == conda ]]; then
 mkl>=2018 \
 ninja \
 typing-extensions \
-${PROTOBUF_PACKAGE} \
-six
+${PROTOBUF_PACKAGE}
 if [[ "$DESIRED_CUDA" == 'cpu' ]]; then
 retry conda install -c pytorch -y cpuonly
 else
@@ -100,7 +99,7 @@ if [[ "$PACKAGE_TYPE" == conda ]]; then
 )
 elif [[ "$PACKAGE_TYPE" != libtorch ]]; then
 pip install "\$pkg" --extra-index-url "https://download.pytorch.org/whl/nightly/${DESIRED_CUDA}"
-retry pip install -q future numpy protobuf typing-extensions six
+retry pip install -q numpy protobuf typing-extensions
 fi
 if [[ "$PACKAGE_TYPE" == libtorch ]]; then
 pkg="\$(ls /final_pkgs/*-latest.zip)"
2 changes: 1 addition & 1 deletion .circleci/verbatim-sources/job-specs/job-specs-custom.yml
@@ -626,7 +626,7 @@
 cd ${PROJ_ROOT}/ios/TestApp/benchmark
 mkdir -p ../models
 if [ ${USE_COREML_DELEGATE} == 1 ]; then
-pip install coremltools==5.0b5 protobuf==3.20.1 six==1.16.0
+pip install coremltools==5.0b5 protobuf==3.20.1
 python coreml_backend.py
 else
 cd "${PROJ_ROOT}"
2 changes: 1 addition & 1 deletion .github/ci_commit_pins/xla.txt
@@ -1 +1 @@
-9cbcdb4008c14ad8251c5d4d7723aa616f659edb
+d29eb67c27af0f18d4f487d76b86f43b0a69aade
1 change: 0 additions & 1 deletion .github/requirements/conda-env-macOS-ARM64
@@ -5,7 +5,6 @@ cmake=3.22.*
 typing-extensions=4.3.0
 dataclasses=0.8
 pip=22.2.2
-six=1.16.0
 pillow=9.2.0
 pkg-config=0.29.2
 wheel=0.37.1
1 change: 0 additions & 1 deletion .github/requirements/conda-env-macOS-X64
@@ -7,7+7,6 @@ cmake=3.22.*
 typing-extensions=4.3.0
 dataclasses=0.8
 pip=22.2.2
-six=1.16.0
 pillow=9.2.0
 libuv=1.40.0
 pkg-config=0.29.2
1 change: 0 additions & 1 deletion .github/requirements/pip-requirements-iOS.txt
@@ -1,4 +1,3 @@
 # iOS simulator requirements
 coremltools==5.0b5
 protobuf==3.20.2
-six==1.16.0
2 changes: 1 addition & 1 deletion .github/workflows/run_torchbench.yml
@@ -41,7 +41,7 @@ jobs:
 conda activate pr-ci
 conda install -y numpy="${NUMPY_VERSION}" requests ninja pyyaml mkl mkl-include \
 setuptools cmake=3.22.* typing-extensions boto3 \
-six pillow pytest tabulate gitpython git-lfs tqdm psutil
+pillow pytest tabulate gitpython git-lfs tqdm psutil
 pip install --pre torch torchvision torchtext -f https://download.pytorch.org/whl/nightly/cu116/torch_nightly.html
 - name: Setup TorchBench branch
 run: |
1 change: 0 additions & 1 deletion .lintrunner.toml
@@ -145,7 +145,6 @@ init_command = [
 'expecttest==0.1.3',
 'mypy==0.960',
 'types-requests==2.27.25',
-'types-six==1.16.15',
 'types-PyYAML==6.0.7',
 'types-tabulate==0.8.8',
 'types-protobuf==3.19.18',
2 changes: 1 addition & 1 deletion benchmarks/dynamo/Makefile
@@ -28,7 +28,7 @@ build-deps: clone-deps
 # conda create --name torchdynamo -y python=3.8
 # conda activate torchdynamo
 conda install -y astunparse numpy scipy ninja pyyaml mkl mkl-include setuptools cmake \
-typing-extensions six requests protobuf numba cython scikit-learn
+typing-extensions requests protobuf numba cython scikit-learn
 conda install -y -c pytorch magma-cuda116
 conda install -y -c conda-forge librosa
 (cd ../../../torchvision && python setup.py clean && python setup.py develop)
4 changes: 1 addition & 3 deletions caffe2/experiments/python/device_reduce_sum_bench.py
@@ -25,7 +25,6 @@
 import logging
 import os

-from six import add_metaclass
 import numpy as np

 from caffe2.python import workspace, core
@@ -46,8 +45,7 @@ def __new__(metacls, name, bases, class_dict):
 return cls


-@add_metaclass(BenchmarkMeta)
-class Benchmark:
+class Benchmark(metaclass=BenchmarkMeta):

 def __init__(self):
 self.results = []
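
For reference, decorating a class with `six.add_metaclass(Meta)` and declaring `class ...(metaclass=Meta)` produce the same class object; only the spelling differs. A standalone sketch with an illustrative registry-style metaclass (the real `BenchmarkMeta` body is not shown in this hunk):

```python
class RegistryMeta(type):
    """Illustrative metaclass that records every concrete subclass."""
    registry = []

    def __new__(metacls, name, bases, class_dict):
        cls = super().__new__(metacls, name, bases, class_dict)
        if bases:  # skip the base class itself
            RegistryMeta.registry.append(cls)
        return cls


# Python 2/3 spelling via six:
#     @six.add_metaclass(RegistryMeta)
#     class Benchmark(object): ...
#
# Python 3-only spelling, as in this commit:
class Benchmark(metaclass=RegistryMeta):
    pass


class SumBenchmark(Benchmark):
    pass


print(RegistryMeta.registry)  # [<class '__main__.SumBenchmark'>]
```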
4 changes: 0 additions & 4 deletions docs/caffe2/installation.md
@@ -58,10 +58,6 @@ Note that you might need to uninstall existing Eigen and pybind11 packages due t

 ## Python support

-To use Caffe2 in Python, you need two libraries, future and six.
-
-pip install future six
-
 To run the tutorials, download additional source from GitHub.

 git clone --recursive https://github.com/caffe2/tutorials.git caffe2_tutorials
1 change: 0 additions & 1 deletion docs/cpp/requirements.txt
@@ -6,4 +6,3 @@ docutils==0.16
 -e git+https://github.com/pytorch/pytorch_sphinx_theme.git#egg=pytorch_sphinx_theme
 bs4
 lxml
-six
1 change: 0 additions & 1 deletion pyproject.toml
@@ -9,7 +9,6 @@ requires = [
 "setuptools",
 "cmake",
 "typing-extensions",
-"six",
 "requests",
 ]
 # Use legacy backend to import local packages in setup.py
4 changes: 0 additions & 4 deletions scripts/build_tegra_x1.sh
@@ -41,10 +41,6 @@ sudo apt-get install \
 # the one provided by apt-get is quite old so we install it via pip
 sudo pip install hypothesis

-# Install the six module, which includes Python 2 and 3 compatibility utilities,
-# and is required for Caffe2
-sudo pip install six
-
 # Now, actually build the android target.
 echo "Building caffe2"
 cd $BUILD_ROOT
4 changes: 0 additions & 4 deletions scripts/build_tizen.sh
@@ -95,10 +95,6 @@ sudo zypper install \
 # Obtain python hypothesis, which Caffe2 uses for unit testing. Note that
 # the one provided by zypper is quite old so we install it via pip
 sudo pip install hypothesis
-
-# Install the six module, which includes Python 2 and 3 compatibility utilities,
-# and is required for Caffe2
-sudo pip install six
 }

 caffe2_full_build(){
2 changes: 1 addition & 1 deletion scripts/model_zoo/update-caffe2-models.py
@@ -6,7 +6,7 @@
 import tarfile
 import tempfile

-from six.moves.urllib.request import urlretrieve
+from urllib.request import urlretrieve

 from caffe2.python.models.download import downloadFromURLToFile, getURLFromName, deleteDirectory

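
`urlretrieve` has lived in `urllib.request` since Python 3.0, so the `six.moves` shim adds nothing here. A small standalone usage sketch (the URL is a placeholder, not one used by the script):

```python
import tempfile
from urllib.request import urlretrieve

# urlretrieve downloads to the given filename and returns the local path
# plus the response headers.
url = "https://example.com/model.tar.gz"  # placeholder URL
with tempfile.TemporaryDirectory() as tmpdir:
    local_path, headers = urlretrieve(url, filename=f"{tmpdir}/model.tar.gz")
    print(local_path, headers.get("Content-Type"))
```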
2 changes: 1 addition & 1 deletion scripts/model_zoo/update-models-from-caffe2.py
@@ -17,7 +17,7 @@

 import boto3

-from six.moves.urllib.request import urlretrieve
+from urllib.request import urlretrieve

 from caffe2.python.models.download import downloadFromURLToFile, getURLFromName, deleteDirectory
 from caffe2.proto import caffe2_pb2
3 changes: 1 addition & 2 deletions test/distributed/test_store.py
@@ -16,7 +16,6 @@
 sys.exit(0)

 import torch.testing._internal.common_utils as common
-from torch._six import string_classes
 from torch.testing._internal.common_distributed import (
 skip_if_win32,
 create_tcp_store
@@ -336,7 +335,7 @@ def __init__(self):
 self.store = {}

 def set(self, key, value):
-if not isinstance(key, string_classes):
+if not isinstance(key, str):
 raise AssertionError("Expected set to be called with string key")
 if type(value) is not bytes:
 raise AssertionError("Expected set to be called with bytes value")
2 changes: 1 addition & 1 deletion test/distributions/test_distributions.py
@@ -42,7 +42,7 @@
 # Distributions tests use double as the default dtype
 torch.set_default_dtype(torch.double)

-from torch._six import inf, nan
+from torch import inf, nan
 from torch.testing._internal.common_utils import \
 (TestCase, run_tests, set_rng_seed, TEST_WITH_UBSAN, load_tests,
 gradcheck, skipIfTorchDynamo)
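
Because `inf` and `nan` now come straight from the top-level `torch` namespace, call sites stay unchanged apart from the import. A small usage sketch (not taken from the test file):

```python
import torch
from torch import inf, nan

t = torch.tensor([1.0, inf, -inf, nan])

# The standard predicates recognize the plain-float constants.
print(torch.isinf(t))  # tensor([False,  True,  True, False])
print(torch.isnan(t))  # tensor([False, False, False,  True])

# nan compares unequal to everything, including itself.
print(t.eq(t))         # tensor([ True,  True,  True, False])
```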
2 changes: 1 addition & 1 deletion test/nn/test_pooling.py
@@ -10,7 +10,7 @@
 import itertools
 import math

-from torch._six import inf, nan
+from torch import inf, nan
 import torch
 from torch.testing import make_tensor
 from torch.testing._internal.common_utils import TestCase, run_tests, TEST_WITH_UBSAN, set_default_dtype, \
2 changes: 1 addition & 1 deletion test/test_autograd.py
@@ -23,7 +23,7 @@
 import torch

 from torch import nn
-from torch._six import inf, nan
+from torch import inf, nan
 from torch.autograd.function import once_differentiable
 from torch.autograd.profiler import (profile, record_function, emit_nvtx, emit_itt)
 from torch.autograd.profiler_util import (_format_time, EventList, FunctionEvent, FunctionEventAvg)
2 changes: 1 addition & 1 deletion test/test_binary_ufuncs.py
@@ -14,7 +14,7 @@
 from functools import partial

 import torch.autograd.forward_ad as fwAD
-from torch._six import inf, nan
+from torch import inf, nan
 from torch.testing._internal.common_utils import (
 TestCase,
 slowTest,
4 changes: 2 additions & 2 deletions test/test_cuda.py
@@ -22,9 +22,9 @@
 import torch
 import torch.cuda
 import torch.cuda.comm as comm
+from torch import inf, nan
 from torch.nn.parallel import scatter_gather
 from torch.utils.checkpoint import checkpoint_sequential
-from torch._six import inf, nan
 from torch.testing._internal.common_utils import TestCase, freeze_rng_state, run_tests, \
 NO_MULTIPROCESSING_SPAWN, skipIfRocm, load_tests, IS_REMOTE_GPU, IS_SANDCASTLE, IS_WINDOWS, \
 slowTest, skipCUDANonDefaultStreamIf, skipCUDAMemoryLeakCheckIf, TEST_WITH_ROCM, TEST_NUMPY, \
@@ -1595,7 +1595,7 @@ def _spawn_test_multinomial_invalid_probs_cuda(self, probs):
 p = subprocess.Popen([sys.executable, '-c', f"""\
 import sys
 import torch
-from torch._six import inf, nan
+from torch import inf, nan
 try:
 with torch.random.fork_rng(devices=[0]):
 torch.multinomial(torch.tensor({probs}).to('cuda'), 2, replacement=True)
2 changes: 1 addition & 1 deletion test/test_mps.py
@@ -17,7 +17,7 @@
 import torch.nn.functional as F
 import itertools
 from collections import defaultdict
-from torch._six import inf
+from torch import inf
 from torch.nn import Parameter
 from torch.testing._internal import opinfo
 from torch.testing._internal.common_utils import \
2 changes: 1 addition & 1 deletion test/test_nn.py
@@ -21,7 +21,7 @@
 # NN tests use double as the default dtype
 torch.set_default_dtype(torch.double)

-from torch._six import inf, nan
+from torch import inf, nan
 import torch.autograd.forward_ad as fwAD
 import torch.backends.cudnn as cudnn
 import torch.nn as nn
2 changes: 1 addition & 1 deletion test/test_reductions.py
@@ -11,7 +11,7 @@
 from itertools import product, combinations, permutations
 import warnings

-from torch._six import inf, nan
+from torch import inf, nan
 from torch.testing import make_tensor
 from torch.testing._internal.common_dtype import (
 all_types_and_complex_and, get_all_math_dtypes, integral_types, complex_types, floating_types_and,
2 changes: 1 addition & 1 deletion test/test_shape_ops.py
@@ -8,7 +8,7 @@
 import random
 import warnings

-from torch._six import nan
+from torch import nan
 from torch.testing import make_tensor
 from torch.testing._internal.common_utils import (
 TestCase, run_tests, skipIfTorchDynamo, torch_to_numpy_dtype_dict)
2 changes: 1 addition & 1 deletion test/test_sort_and_select.py
@@ -4,7 +4,7 @@
 import numpy as np

 import random
-from torch._six import nan
+from torch import nan
 from itertools import permutations, product

 from torch.testing import make_tensor
4 changes: 2 additions & 2 deletions test/test_torch.py
@@ -24,7 +24,7 @@
 import subprocess
 import weakref
 import sys
-from torch._six import inf, nan, string_classes
+from torch import inf, nan
 from itertools import product, combinations, permutations
 from functools import partial
 from torch import multiprocessing as mp
@@ -8288,7 +8288,7 @@ def _test_namespace(ns, *skips):
 ns_name = ns.__name__
 skip_regexes = []
 for r in skips:
-if isinstance(r, string_classes):
+if isinstance(r, str):
 skip_regexes.append(re.compile('^{}$'.format(re.escape(r))))
 else:
 skip_regexes.append(r)
2 changes: 1 addition & 1 deletion test/test_unary_ufuncs.py
@@ -8,7 +8,7 @@
 import random
 import unittest

-from torch._six import inf, nan
+from torch import inf, nan
 from torch.testing._internal.common_utils import (
 TestCase,
 run_tests,
3 changes: 1 addition & 2 deletions torch/_C/_VariableFunctions.pyi.in
@@ -1,8 +1,7 @@
 # ${generated_comment}

-from torch import Tensor, Generator, strided, memory_format, contiguous_format, strided
+from torch import Tensor, Generator, strided, memory_format, contiguous_format, strided, inf
 from typing import List, Tuple, Optional, Union, Any, ContextManager, Callable, overload, Iterator, NamedTuple, Sequence, Literal, TypeVar
-from torch._six import inf

 from torch.types import _int, _float, _bool, Number, _dtype, _device, _qscheme, _size, _layout, SymInt, Device
 import torch
(The remaining changed files are not rendered here.)