[SPARK-49771][PYTHON] Improve Pandas Scalar Iter UDF error when output rows exceed input rows

### What changes were proposed in this pull request?

This PR replaces the `assert` failure with a user-facing PySpark error when a Pandas SCALAR_ITER UDF produces more output rows than input rows.

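For context, a minimal sketch of a SCALAR_ITER pandas UDF that would trip this check (the function and column names are illustrative, not from this PR):

```python
from typing import Iterator

import pandas as pd
from pyspark.sql.functions import pandas_udf

@pandas_udf("long")
def add_extra_row(batches: Iterator[pd.Series]) -> Iterator[pd.Series]:
    # Yield one more row than each input batch contains, so the cumulative
    # output row count exceeds the input row count and the check fires.
    for s in batches:
        yield pd.concat([s, pd.Series([0])], ignore_index=True)

# spark.range(10).select(add_extra_row("id")).collect() should now fail with
# PANDAS_UDF_OUTPUT_EXCEEDS_INPUT_ROWS instead of an opaque AssertionError.
```
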
### Why are the changes needed?

To make the error message more user-friendly. After the PR, the error will be
`pyspark.errors.exceptions.base.PySparkRuntimeError: [PANDAS_UDF_OUTPUT_EXCEEDS_INPUT_ROWS] The Pandas SCALAR_ITER UDF outputs more rows than input rows.`

### Does this PR introduce _any_ user-facing change?

No

### How was this patch tested?

Existing tests

### Was this patch authored or co-authored using generative AI tooling?

No

Closes apache#48231 from allisonwang-db/spark-49771-pd-iter-err.

Authored-by: allisonwang-db <[email protected]>
Signed-off-by: Hyukjin Kwon <[email protected]>
allisonwang-db authored and HyukjinKwon committed Sep 24, 2024
1 parent 55d0233 commit afe8bf9
Showing 2 changed files with 10 additions and 4 deletions.
5 changes: 5 additions & 0 deletions python/pyspark/errors/error-conditions.json
@@ -802,6 +802,11 @@
       "<package_name> >= <minimum_version> must be installed; however, it was not found."
     ]
   },
+  "PANDAS_UDF_OUTPUT_EXCEEDS_INPUT_ROWS" : {
+    "message": [
+      "The Pandas SCALAR_ITER UDF outputs more rows than input rows."
+    ]
+  },
   "PIPE_FUNCTION_EXITED": {
     "message": [
       "Pipe function `<func_name>` exited with error code <error_code>."
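A quick sketch of how the new entry surfaces through PySpark's error framework: constructing the error with this errorClass picks up the message template from error-conditions.json (the standalone construction here is only for illustration; in practice the worker raises it, as shown in the next file):

```python
from pyspark.errors import PySparkRuntimeError

# Same constructor call the worker now makes; the message text comes from
# the PANDAS_UDF_OUTPUT_EXCEEDS_INPUT_ROWS entry added above.
err = PySparkRuntimeError(
    errorClass="PANDAS_UDF_OUTPUT_EXCEEDS_INPUT_ROWS",
    messageParameters={},
)
print(err)
# [PANDAS_UDF_OUTPUT_EXCEEDS_INPUT_ROWS] The Pandas SCALAR_ITER UDF outputs
# more rows than input rows.
```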
9 changes: 5 additions & 4 deletions python/pyspark/worker.py
@@ -1565,14 +1565,15 @@ def map_batch(batch):
         num_output_rows = 0
         for result_batch, result_type in result_iter:
             num_output_rows += len(result_batch)
-            # This assert is for Scalar Iterator UDF to fail fast.
+            # This check is for Scalar Iterator UDF to fail fast.
             # The length of the entire input can only be explicitly known
             # by consuming the input iterator in user side. Therefore,
             # it's very unlikely the output length is higher than
             # input length.
-            assert (
-                is_map_pandas_iter or is_map_arrow_iter or num_output_rows <= num_input_rows
-            ), "Pandas SCALAR_ITER UDF outputted more rows than input rows."
+            if is_scalar_iter and num_output_rows > num_input_rows:
+                raise PySparkRuntimeError(
+                    errorClass="PANDAS_UDF_OUTPUT_EXCEEDS_INPUT_ROWS", messageParameters={}
+                )
             yield (result_batch, result_type)

         if is_scalar_iter:
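The fail-fast intent of the check can be seen in isolation with a small standalone sketch (plain Python, not the actual worker.py code): output rows are tallied per result batch and compared against the input rows consumed so far, so the mismatch is reported without draining the rest of the input.

```python
def check_scalar_iter_output(input_batches, udf):
    """Yield UDF output batches, failing fast if output rows exceed input rows."""
    num_input_rows = 0

    def counted_input():
        # Count input rows as the UDF consumes them from the iterator.
        nonlocal num_input_rows
        for batch in input_batches:
            num_input_rows += len(batch)
            yield batch

    num_output_rows = 0
    for result_batch in udf(counted_input()):
        num_output_rows += len(result_batch)
        if num_output_rows > num_input_rows:
            raise RuntimeError(
                "[PANDAS_UDF_OUTPUT_EXCEEDS_INPUT_ROWS] The Pandas SCALAR_ITER "
                "UDF outputs more rows than input rows."
            )
        yield result_batch
```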
