Replace follow_imports = silent with normal (pytorch#118414)
This is a lot of files changed! Don't panic! Here's how it works:

* Previously, we set `follow_imports = silent` in our mypy.ini configuration. Per https://mypy.readthedocs.io/en/stable/running_mypy.html#follow-imports, this means that whenever a typechecked file imports a module that is not in mypy's list of files to typecheck, mypy typechecks that module as normal but suppresses all errors reported in it.
* When mypy is run inside lintrunner, the list of files is precisely the files matched by the include glob in .lintrunner.toml, minus the files matched by its exclude patterns.
* The top-level directive `# mypy: ignore-errors` instructs mypy to typecheck the file as normal, but ignore all errors.
* Therefore, setting `follow_imports = normal` should be equivalent, provided we put `# mypy: ignore-errors` at the top of every file that was previously excluded from the file list (see the sketch after this list).
* Having done this, we can remove the exclude list from .lintrunner.toml, since excluding a file from typechecking is baked into the files themselves.
* torch/_dynamo and torch/_inductor were previously in the exclude list because they were covered by the separate MYPYINDUCTOR configuration. It is not OK to mark these as `# mypy: ignore-errors`, as that would impede typechecking under the alternate configuration. So they are temporarily being checked twice, with their errors suppressed via per-module `ignore_errors` sections in mypy.ini, since the two configurations are not quite the same. I plan to unify the configurations, so this is only a temporary state.
* There were some straggler type errors after these changes somehow, so I fixed them as needed; there weren't many.
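
To make the equivalence concrete, here is a minimal sketch of the new setup. Nothing below is from this PR; the file and function names are hypothetical:

```
# mypy.ini (excerpt):
#   [mypy]
#   follow_imports = normal

# helper.py -- not ready to be typechecked yet, so it opts out at the top
# mypy: ignore-errors

def word_count(text: str) -> int:
    return text.split()  # a type error; mypy sees it but suppresses it


# main.py -- a typechecked file that imports the module above
import helper

n: int = helper.word_count("a b c")  # still checked against the annotated
                                     # signature, just as under `silent`
```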

In the future, to start type checking a file, just remove the ignore-errors directive from the top of the file.
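
If it helps, here is a small sketch (not part of this PR, and assuming it is run from the repository root) for finding the files that are still opted out:

```
import pathlib

# Hypothetical helper: list the .py files whose first few lines still carry
# the opt-out directive added by the codemod below.
for path in pathlib.Path("torch").rglob("*.py"):
    head = path.read_text(errors="ignore").splitlines()[:5]
    if "# mypy: ignore-errors" in head:
        print(path)
```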

The codemod was done with this script authored by GPT-4:

```
import glob

# The same globs that were removed from `exclude_patterns` in .lintrunner.toml
exclude_patterns = [
    ...
]

for pattern in exclude_patterns:
    for filepath in glob.glob(pattern, recursive=True):
        if filepath.endswith('.py'):
            with open(filepath, 'r+') as f:
                content = f.read()
                # Rewind and rewrite the file with the directive prepended
                f.seek(0, 0)
                f.write('# mypy: ignore-errors\n\n' + content)
```
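
A note on the script's mechanics: opening with `'r+'` and calling `f.read()` leaves the file cursor at the end, `f.seek(0, 0)` rewinds it to the start, and the subsequent `f.write` then overwrites from the beginning. Because the new contents (the directive plus the original text) are strictly longer than the original, no truncation is needed.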

Signed-off-by: Edward Z. Yang <[email protected]>

Pull Request resolved: pytorch#118414
Approved by: https://github.com/thiagocrepaldi, https://github.com/albanD
ezyang authored and pytorchmergebot committed Jan 27, 2024
1 parent af1338b commit 9bce208
Showing 138 changed files with 270 additions and 45 deletions.
33 changes: 0 additions & 33 deletions .lintrunner.toml
@@ -122,39 +122,6 @@ include_patterns = [
]
exclude_patterns = [
'**/fb/**',
'torch/include/**',
'torch/csrc/**',
'torch/_dynamo/**/*.py',
'torch/_inductor/**/*.py',
'torch/_numpy/**/*.py',
'torch/_functorch/aot_autograd.py',
'torch/_functorch/benchmark_utils.py',
'torch/_functorch/compile_utils.py',
'torch/_functorch/compilers.py',
'torch/_functorch/eager_transforms.py',
'torch/_functorch/fx_minifier.py',
'torch/_functorch/partitioners.py',
'torch/_functorch/top_operators_github_usage.py',
'torch/_functorch/vmap.py',
'torch/_subclasses/schema_check_mode.py',
'torch/distributed/elastic/agent/server/api.py',
'torch/testing/_internal/**',
'torch/distributed/fsdp/fully_sharded_data_parallel.py',
# TODO(suo): these exclusions were added just to get lint clean on master.
# Follow up to do more target suppressions and remove them.
'torch/ao/quantization/fx/convert.py',
'torch/ao/quantization/_dbr/function_fusion.py',
'test/test_datapipe.py',
'caffe2/contrib/fakelowp/test/test_batchmatmul_nnpi_fp16.py',
'test/test_numpy_interop.py',
'torch/torch_version.py',
'torch/fx/proxy.py',
'torch/fx/passes/shape_prop.py',
'torch/fx/node.py',
'torch/fx/experimental/symbolic_shapes.py',
'torch/fx/experimental/proxy_tensor.py',
'torch/_subclasses/fake_utils.py',
'torch/_subclasses/fake_tensor.py',
]
command = [
'python3',
2 changes: 2 additions & 0 deletions caffe2/contrib/fakelowp/test/test_batchmatmul_nnpi_fp16.py
@@ -1,3 +1,5 @@
# mypy: ignore-errors

import numpy as np
import unittest
import caffe2.python.fakelowp.init_shared_libs # noqa
8 changes: 7 additions & 1 deletion mypy.ini
@@ -11,7 +11,7 @@ warn_redundant_casts = True
show_error_codes = True
show_column_numbers = True
check_untyped_defs = True
follow_imports = silent
follow_imports = normal

# do not reenable this:
# https://github.com/pytorch/pytorch/pull/60006#issuecomment-866130657
@@ -270,3 +270,9 @@ ignore_missing_imports = True

[mypy-usort.*]
ignore_missing_imports = True

[mypy-torch._inductor.*]
ignore_errors = True

[mypy-torch._dynamo.*]
ignore_errors = True
2 changes: 1 addition & 1 deletion test/dynamo/test_higher_order_ops.py
@@ -2423,7 +2423,7 @@ def fn(x):
self.assertIn(
"""\
triggered by the following guard failure(s):
- torch._C._functorch.maybe_current_level() is None # with vmap_increment_nesting(batch_size, randomness) as vmap_level: # _functorch/vmap.py:399 in _flat_vmap""",
- torch._C._functorch.maybe_current_level() is None # with vmap_increment_nesting(batch_size, randomness) as vmap_level:""",
record.getMessage(),
)

2 changes: 2 additions & 0 deletions test/test_datapipe.py
@@ -1,3 +1,5 @@
# mypy: ignore-errors

# Owner(s): ["module: dataloader"]

import copy
2 changes: 2 additions & 0 deletions test/test_numpy_interop.py
@@ -1,3 +1,5 @@
# mypy: ignore-errors

# Owner(s): ["module: numpy"]

import torch
2 changes: 2 additions & 0 deletions torch/_functorch/aot_autograd.py
@@ -1,3 +1,5 @@
# mypy: ignore-errors

import itertools
from contextlib import nullcontext
from functools import partial, wraps
2 changes: 2 additions & 0 deletions torch/_functorch/benchmark_utils.py
@@ -1,3 +1,5 @@
# mypy: ignore-errors

import contextlib
import time
import os
2 changes: 2 additions & 0 deletions torch/_functorch/compile_utils.py
@@ -1,3 +1,5 @@
# mypy: ignore-errors


import torch
import torch.fx as fx
2 changes: 2 additions & 0 deletions torch/_functorch/compilers.py
@@ -1,3 +1,5 @@
# mypy: ignore-errors

import copy
import logging
import os
2 changes: 2 additions & 0 deletions torch/_functorch/eager_transforms.py
@@ -1,3 +1,5 @@
# mypy: ignore-errors

# Copyright (c) Facebook, Inc. and its affiliates.
# All rights reserved.
#
2 changes: 2 additions & 0 deletions torch/_functorch/fx_minifier.py
@@ -1,3 +1,5 @@
# mypy: ignore-errors

import torch.fx as fx
import copy
import torch
2 changes: 2 additions & 0 deletions torch/_functorch/partitioners.py
@@ -1,3 +1,5 @@
# mypy: ignore-errors

from torch.fx.experimental.proxy_tensor import is_sym_node, py_sym_types
from torch.fx.experimental.sym_node import magic_methods, method_to_operator
from torch.fx.experimental.symbolic_shapes import (
2 changes: 2 additions & 0 deletions torch/_functorch/top_operators_github_usage.py
@@ -1,3 +1,5 @@
# mypy: ignore-errors

"""
From https://docs.google.com/spreadsheets/d/12R3nCOLskxPYjjiNkdqy4OdQ65eQp_htebXGODsjSeA/edit#gid=0
Try to keep this list in sync with that.
2 changes: 2 additions & 0 deletions torch/_functorch/vmap.py
@@ -1,3 +1,5 @@
# mypy: ignore-errors

# Copyright (c) Facebook, Inc. and its affiliates.
# All rights reserved.
#
2 changes: 2 additions & 0 deletions torch/_numpy/__init__.py
@@ -1,3 +1,5 @@
# mypy: ignore-errors

from . import fft, linalg, random
from ._dtypes import * # noqa: F403
from ._funcs import * # noqa: F403
2 changes: 2 additions & 0 deletions torch/_numpy/_binary_ufuncs_impl.py
@@ -1,3 +1,5 @@
# mypy: ignore-errors

"""Export torch work functions for binary ufuncs, rename/tweak to match numpy.
This listing is further exported to public symbols in the `torch._numpy/_ufuncs.py` module.
"""
2 changes: 2 additions & 0 deletions torch/_numpy/_casting_dicts.py
@@ -1,3 +1,5 @@
# mypy: ignore-errors

import torch

# These two dicts are autogenerated with autogen/gen_dtypes.py,
2 changes: 2 additions & 0 deletions torch/_numpy/_dtypes.py
@@ -1,3 +1,5 @@
# mypy: ignore-errors

""" Define analogs of numpy dtypes supported by pytorch.
Define the scalar types and supported dtypes and numpy <--> torch dtype mappings.
"""
2 changes: 2 additions & 0 deletions torch/_numpy/_dtypes_impl.py
@@ -1,3 +1,5 @@
# mypy: ignore-errors

"""Dtypes/scalar type implementaions with torch dtypes.
Here `dtype` is always a torch.dtype, this module knows nothing about
2 changes: 2 additions & 0 deletions torch/_numpy/_funcs.py
@@ -1,3 +1,5 @@
# mypy: ignore-errors

import inspect
import itertools

2 changes: 2 additions & 0 deletions torch/_numpy/_funcs_impl.py
@@ -1,3 +1,5 @@
# mypy: ignore-errors

"""A thin pytorch / numpy compat layer.
Things imported from here have numpy-compatible signatures but operate on
2 changes: 2 additions & 0 deletions torch/_numpy/_getlimits.py
@@ -1,3 +1,5 @@
# mypy: ignore-errors

import torch

from . import _dtypes
2 changes: 2 additions & 0 deletions torch/_numpy/_ndarray.py
@@ -1,3 +1,5 @@
# mypy: ignore-errors

from __future__ import annotations

import builtins
2 changes: 2 additions & 0 deletions torch/_numpy/_normalizations.py
@@ -1,3 +1,5 @@
# mypy: ignore-errors

""" "Normalize" arguments: convert array_likes to tensors, dtypes to torch dtypes and so on.
"""
from __future__ import annotations
2 changes: 2 additions & 0 deletions torch/_numpy/_reductions_impl.py
@@ -1,3 +1,5 @@
# mypy: ignore-errors

""" Implementation of reduction operations, to be wrapped into arrays, dtypes etc
in the 'public' layer.
2 changes: 2 additions & 0 deletions torch/_numpy/_ufuncs.py
@@ -1,3 +1,5 @@
# mypy: ignore-errors

from __future__ import annotations

from typing import Optional
2 changes: 2 additions & 0 deletions torch/_numpy/_unary_ufuncs_impl.py
@@ -1,3 +1,5 @@
# mypy: ignore-errors

"""Export torch work functions for unary ufuncs, rename/tweak to match numpy.
This listing is further exported to public symbols in the `_numpy/_ufuncs.py` module.
"""
2 changes: 2 additions & 0 deletions torch/_numpy/_util.py
@@ -1,3 +1,5 @@
# mypy: ignore-errors

"""Assorted utilities, which do not need anything other then torch and stdlib.
"""

2 changes: 2 additions & 0 deletions torch/_numpy/fft.py
@@ -1,3 +1,5 @@
# mypy: ignore-errors

from __future__ import annotations

import functools
2 changes: 2 additions & 0 deletions torch/_numpy/linalg.py
@@ -1,3 +1,5 @@
# mypy: ignore-errors

from __future__ import annotations

import functools
2 changes: 2 additions & 0 deletions torch/_numpy/random.py
@@ -1,3 +1,5 @@
# mypy: ignore-errors

"""Wrapper to mimic (parts of) np.random API surface.
NumPy has strict guarantees on reproducibility etc; here we don't give any.
2 changes: 2 additions & 0 deletions torch/_numpy/testing/__init__.py
@@ -1,3 +1,5 @@
# mypy: ignore-errors

from .utils import (
_gen_alignment_data,
assert_,
2 changes: 2 additions & 0 deletions torch/_numpy/testing/utils.py
@@ -1,3 +1,5 @@
# mypy: ignore-errors

"""
Utility function to facilitate testing.
2 changes: 2 additions & 0 deletions torch/_subclasses/fake_tensor.py
@@ -1,3 +1,5 @@
# mypy: ignore-errors

import contextlib
import functools
import itertools
2 changes: 2 additions & 0 deletions torch/_subclasses/fake_utils.py
@@ -1,3 +1,5 @@
# mypy: ignore-errors

import functools
import warnings
from typing import Callable, Union
2 changes: 2 additions & 0 deletions torch/_subclasses/schema_check_mode.py
@@ -1,3 +1,5 @@
# mypy: ignore-errors

from collections import namedtuple
from copy import deepcopy
from itertools import combinations
2 changes: 2 additions & 0 deletions torch/ao/quantization/fx/convert.py
@@ -1,3 +1,5 @@
# mypy: ignore-errors

from typing import Any, Dict, List, Optional, Set, Tuple, Union, Type, Callable
from torch.ao.quantization.quant_type import QuantType
import torch
3 changes: 2 additions & 1 deletion torch/csrc/jit/tensorexpr/codegen_external.py
@@ -1,4 +1,5 @@
#!/usr/bin/env python3
# mypy: ignore-errors

import argparse

import torchgen.model as model
2 changes: 2 additions & 0 deletions torch/csrc/jit/tensorexpr/scripts/bisect.py
@@ -1,3 +1,5 @@
# mypy: ignore-errors

import subprocess

import click
2 changes: 2 additions & 0 deletions torch/csrc/lazy/test_mnist.py
@@ -1,3 +1,5 @@
# mypy: ignore-errors

import os

import torch
2 changes: 1 addition & 1 deletion torch/distributed/elastic/agent/server/api.py
@@ -1,4 +1,4 @@
#!/usr/bin/env python3
# mypy: ignore-errors

# Copyright (c) Facebook, Inc. and its affiliates.
# All rights reserved.
2 changes: 2 additions & 0 deletions torch/distributed/fsdp/fully_sharded_data_parallel.py
@@ -1,3 +1,5 @@
# mypy: ignore-errors

import contextlib
import copy
import functools
2 changes: 2 additions & 0 deletions torch/fx/experimental/proxy_tensor.py
@@ -1,3 +1,5 @@
# mypy: ignore-errors

# Copyright (c) Facebook, Inc. and its affiliates.
# All rights reserved.
#
2 changes: 2 additions & 0 deletions torch/fx/experimental/symbolic_shapes.py
@@ -1,3 +1,5 @@
# mypy: ignore-errors

import builtins
import collections
import functools
2 changes: 2 additions & 0 deletions torch/fx/node.py
@@ -1,3 +1,5 @@
# mypy: ignore-errors

# Nodes represent a definition of a value in our graph of operators.
from typing import TYPE_CHECKING, Union, Callable, Any, Tuple, List, Optional, Dict, Set
from ._compatibility import compatibility
2 changes: 2 additions & 0 deletions torch/fx/passes/shape_prop.py
@@ -1,3 +1,5 @@
# mypy: ignore-errors

import torch
import torch.fx
import traceback
2 changes: 2 additions & 0 deletions torch/fx/proxy.py
@@ -1,3 +1,5 @@
# mypy: ignore-errors

import enum
import dis
import copy
2 changes: 1 addition & 1 deletion torch/jit/_monkeytype_config.py
@@ -168,7 +168,7 @@ class JitTypeTraceConfig: # type: ignore[no-redef]
def __init__(self):
pass

monkeytype_trace = None # noqa: F811
monkeytype_trace = None # type: ignore[assignment] # noqa: F811


def jit_code_filter(code: CodeType) -> bool:
2 changes: 1 addition & 1 deletion torch/jit/frontend.py
@@ -365,7 +365,7 @@ def _forward(self):
# for the arguments from type_trace_db
type_trace_db = torch.jit._script._get_type_trace_db()
pdt_arg_types = None
if monkeytype_trace and not isinstance(fn, _ParsedDef):
if monkeytype_trace and not isinstance(fn, _ParsedDef): # type: ignore[truthy-function]
qualname = get_qualified_name(fn)
pdt_arg_types = type_trace_db.get_args_types(qualname)

2 changes: 1 addition & 1 deletion torch/onnx/_internal/fx/diagnostics.py
@@ -179,7 +179,7 @@ def _torch_nn_parameter(obj: torch.nn.Parameter) -> str:

@_format_argument.register
def _onnxscript_torch_script_tensor(obj: graph_building.TorchScriptTensor) -> str:
return f"`TorchScriptTensor({fx_type_utils.from_torch_dtype_to_abbr(obj.dtype)}{_stringify_shape(obj.shape)})`"
return f"`TorchScriptTensor({fx_type_utils.from_torch_dtype_to_abbr(obj.dtype)}{_stringify_shape(obj.shape)})`" # type: ignore[arg-type] # noqa: B950


@_format_argument.register
2 changes: 2 additions & 0 deletions torch/testing/_internal/autocast_test_lists.py
@@ -1,3 +1,5 @@
# mypy: ignore-errors

import torch
from torch.testing._internal.common_utils import TEST_WITH_ROCM

2 changes: 2 additions & 0 deletions torch/testing/_internal/autograd_function_db.py
@@ -1,3 +1,5 @@
# mypy: ignore-errors

import torch
from functools import partial
from torch.testing import make_tensor
2 changes: 2 additions & 0 deletions torch/testing/_internal/check_kernel_launches.py
@@ -1,3 +1,5 @@
# mypy: ignore-errors

import os
import re
import sys
1 change: 1 addition & 0 deletions torch/testing/_internal/codegen/__init__.py
@@ -0,0 +1 @@
# mypy: ignore-errors
2 changes: 2 additions & 0 deletions torch/testing/_internal/common_cuda.py
@@ -1,3 +1,5 @@
# mypy: ignore-errors

r"""This file is allowed to initialize CUDA context when imported."""

import functools
2 changes: 2 additions & 0 deletions torch/testing/_internal/common_device_type.py
@@ -1,3 +1,5 @@
# mypy: ignore-errors

import copy
import gc
import inspect