| id (int64) | number (int64) | title (string) | state (string) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s]) | html_url (string) | is_pull_request (bool) | pull_request_url (string) | pull_request_html_url (string) | user_login (string) | comments_count (int64) | body (string) | labels (list) | reactions_plus1 (int64) | reactions_minus1 (int64) | reactions_laugh (int64) | reactions_hooray (int64) | reactions_confused (int64) | reactions_heart (int64) | reactions_rocket (int64) | reactions_eyes (int64) | comments (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
3,269,017,396
| 61,979
|
DOC: Update documentation for using natural sort with `sort_values`
|
closed
| 2025-07-28T09:32:19
| 2025-07-28T16:11:30
| 2025-07-28T16:11:24
|
https://github.com/pandas-dev/pandas/pull/61979
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61979
|
https://github.com/pandas-dev/pandas/pull/61979
|
marc-jones
| 1
|
The previous documentation recommended using the lambda function `lambda x: np.argsort(index_natsorted(x))` as the key argument to `sort_values`. While this works when sorting on a single column, it produces incorrect results when sorting on multiple columns that contain duplicated values. For example:
```
>>> import numpy as np
>>> import pandas as pd
>>> df = pd.DataFrame(
... {
... "hours": ["0hr", "128hr", "0hr", "64hr", "64hr", "128hr"],
... "mins": ["10mins", "40mins", "40mins", "40mins", "10mins", "10mins"],
... "value": [10, 20, 30, 40, 50, 60],
... }
... )
>>> df
hours mins value
0 0hr 10mins 10
1 128hr 40mins 20
2 0hr 40mins 30
3 64hr 40mins 40
4 64hr 10mins 50
5 128hr 10mins 60
>>> from natsort import index_natsorted
>>> df.sort_values(
... by=["hours", "mins"],
... key=lambda x: np.argsort(index_natsorted(x)),
... )
hours mins value
0 0hr 10mins 10
2 0hr 40mins 30
3 64hr 40mins 40
4 64hr 10mins 50
1 128hr 40mins 20
5 128hr 10mins 60
```
Note how the `hours` column is sorted correctly, but the `mins` column isn't.
This PR updates the documentation to use `natsort_keygen`, which is robust to sorting on multiple columns.
Commit 2: removes the calls to `natsort_keygen()` from the example code, as the generated output was too long and doctest did not handle the tuple formatting well.
|
[
"Docs"
] | 0
| 0
| 0
| 0
| 0
| 3
| 0
| 0
|
[
"Thanks @marc-jones "
] |
3,267,529,078
| 61,978
|
ENH: Function‑scoped bad‑line callback for read_csv that provides line_number (immediate per‑row handling without global side‑effects)
|
open
| 2025-07-27T23:40:04
| 2025-08-06T02:48:49
| null |
https://github.com/pandas-dev/pandas/issues/61978
| true
| null | null |
laelhalawani
| 1
|
### Feature Type
- [x] Adding new functionality to pandas
- [x] Changing existing functionality in pandas
- [ ] Removing existing functionality in pandas
### Problem Description
When parsing CSVs with `pd.read_csv`, there is no function‑scoped way to (a) react to malformed rows as they happen and (b) capture the exact CSV line numbers for those rows, without introducing process‑global side effects or using a subprocess.
- `on_bad_lines='warn'` emits a Python warning like `“Skipping line N: …”` (includes the line number). But to programmatically capture those line numbers during parsing, one must intercept warnings, which either delays them (`warnings.catch_warnings`) or redirects all process warnings (`logging.captureWarnings`), both undesirable in large applications.
- `on_bad_lines=<callable>` allows immediate, local handling (ideal), but the callable only receives the parsed fields (`list[str]`) and does NOT receive the source line number. This prevents building a precise per‑line record of malformed rows in real time.
A local, non‑global mechanism that surfaces the line number for each bad line during parsing, enabling immediate logging and exact recording of which lines were affected, without impacting the rest of the application would solve this.
### Feature Description
Add a function‑scoped callback that is invoked for every malformed row and provides structured context including the CSV line number. Either of the following designs would solve the problem:
Option A (new parameter):
- Introduce a new parameter to `read_csv`, e.g. `bad_line_callback`, called synchronously for each malformed row:
```
def bad_line_callback(
fields: list[str], *,
line_number: int,
raw_line: str | None = None,
message: str | None = None
) -> list[str] | None:
"""
Return None to skip the row (default), or return a corrected list[str] to keep it.
Called per malformed record; function-scoped with no global side-effects.
"""
```
Usage (capturing exact line numbers):
```
bad_line_numbers: list[int] = []
def capture_bad_line(fields, *, line_number, raw_line=None, message=None):
bad_line_numbers.append(line_number)
# Optional: log or store message/raw_line if needed
return None # keep default skip behavior
df = pd.read_csv(
path,
engine="python",
sep=None,
on_bad_lines="skip", # existing semantics preserved
bad_line_callback=capture_bad_line
)
# bad_line_numbers now contains the exact CSV line numbers seen as bad.
```
Option B (extend existing callable):
- Enhance `on_bad_lines=<callable>` to accept optional keyword-only context parameters if supported by the user’s callable:
```
def on_bad_lines_callable(fields, *, line_number=None, raw_line=None, message=None):
...
```
- Backward compatible: if the user’s callable only accepts positional fields, pandas behaves exactly as today; if it accepts the kwargs, pandas supplies the line number and optional context (a dispatch sketch follows below).
Common semantics (both options):
- `line_number` is 1-based and matches current warning text (`“Skipping line N: …”`).
- Callback is function-scoped, synchronous, and has no process-global effects.
- Works with `engine="python"` (which supports `on_bad_lines`) and with `sep=None` (sniffer). Behavior with chunksize should be documented and consistent.
- The callback can be used purely for observability (logging/capture of exact line numbers) or to fix/replace malformed rows by returning a corrected `list[str]`.
### Alternative Solutions
- `on_bad_lines='warn'` + `warnings.catch_warnings(record=True)`: allows extracting line numbers post-parse by parsing warning text, but warnings are not emitted live and this approach is brittle.
- `logging.captureWarnings(True)`: routes all Python warnings process-wide; enables live capture but introduces global side-effects and potential interference in large apps.
- Overriding `warnings.showwarning`: process-global, not thread-safe, and risky even if restored carefully.
- Running parsing in a subprocess/worker: safe isolation but adds orchestration/ops overhead.
None provide a simple, function-scoped hook that delivers line numbers for immediate, per-row handling without global effects.
### Additional Context
- Typical current warning format: `“Skipping line N: …”`. Users might need to log these events as they occur and record the exact line numbers (for audit, remediation, or user-facing summaries) without altering application-wide logging/warnings behavior.
- This enhancement would significantly improve operational robustness for ETL/ingestion pipelines and large applications that need precise, real-time observability of malformed input rows.
|
[
"Enhancement",
"IO CSV",
"Needs Triage"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Hi @laelhalawani ,\nI think this is a duplicate of #61838 ."
] |
3,267,416,510
| 61,977
|
BUG: Fix infer_dtype result for complex with pd.NA
|
closed
| 2025-07-27T20:28:09
| 2025-07-28T16:31:34
| 2025-07-28T16:13:41
|
https://github.com/pandas-dev/pandas/pull/61977
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61977
|
https://github.com/pandas-dev/pandas/pull/61977
|
yuanx749
| 1
|
- [x] closes #61976
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Fixes a bug where `api.types.infer_dtype` returned "mixed" for a mix of complex values and `pd.NA`.
|
[
"Dtype Conversions"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks @yuanx749 "
] |
3,267,408,457
| 61,976
|
BUG: infer_dtype returns "mixed" for complex and pd.NA mix
|
closed
| 2025-07-27T20:14:57
| 2025-07-28T16:13:42
| 2025-07-28T16:13:42
|
https://github.com/pandas-dev/pandas/issues/61976
| true
| null | null |
yuanx749
| 1
|
### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
import numpy as np
from pandas.api.types import infer_dtype
print(infer_dtype([1 + 1j, np.nan]))
# complex
print(infer_dtype([1 + 1j, pd.NA]))
# mixed
```
### Issue Description
`infer_dtype` on complex arrays with NA does not produce consistent results.
This is similar to #61621, which was fixed for the float case. I will submit a PR.
### Expected Behavior
Should return `complex`.
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 49ca01ba9023b677f2b2d1c42e99f45595258b74
python : 3.10.16
python-bits : 64
OS : Linux
OS-release : 5.10.16.3-microsoft-standard-WSL2
Version : #1 SMP Fri Apr 2 22:23:49 UTC 2021
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : C.UTF-8
LOCALE : en_US.UTF-8
pandas : 3.0.0.dev0+1580.g68d9dcab5b.dirty
numpy : 1.26.4
dateutil : 2.9.0.post0
pip : 24.3.1
Cython : 3.0.11
sphinx : 8.1.3
IPython : 8.31.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.12.3
bottleneck : 1.4.2
fastparquet : 2024.11.0
fsspec : 2024.12.0
html5lib : 1.1
hypothesis : 6.124.7
gcsfs : 2024.12.0
jinja2 : 3.1.5
lxml.etree : 5.3.0
matplotlib : 3.10.0
numba : 0.60.0
numexpr : 2.10.2
odfpy : None
openpyxl : 3.1.5
psycopg2 : 2.9.9
pymysql : 1.4.6
pyarrow : 19.0.0
pyiceberg : None
pyreadstat : 1.2.8
pytest : 8.3.4
python-calamine : None
pytz : 2024.2
pyxlsb : 1.0.10
s3fs : 2024.12.0
scipy : 1.15.1
sqlalchemy : 2.0.37
tables : 3.10.1
tabulate : 0.9.0
xarray : 2024.9.0
xlrd : 2.0.1
xlsxwriter : 3.2.0
zstandard : 0.23.0
qtpy : None
pyqt5 : None
</details>
|
[
"Bug",
"Missing-data",
"Dtype Conversions"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Confirmed on main. PRs are welcome!\n\nThanks for raising this!"
] |
3,267,311,390
| 61,975
|
BUG: 'Sphinx parallel build error' when building docs locally prevents index.html creation
|
closed
| 2025-07-27T18:12:58
| 2025-08-05T16:05:55
| 2025-08-05T16:05:55
|
https://github.com/pandas-dev/pandas/issues/61975
| true
| null | null |
jeffersbaxter
| 6
|
### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
python make.py html
```
### Issue Description
The following error prevents doc/build/html/index.html from getting created. Here are the steps to reproduce.
1. Pulled down from upstream/original Pandas
2. Created an environment using Option 2 (pip) from https://pandas.pydata.org/docs/dev/development/contributing_environment.html
3. Followed steps for building the documentation locally from https://pandas.pydata.org/docs/dev/development/contributing_documentation.html
4. After running `python make.py html`, received a 'Sphinx parallel build error' ('Runtime unexpected exception') in file `/doc/source/getting_started/comparison/comparison_with_sas.rst`, line 135
Screenshot of the error:
<img width="2100" height="1500" alt="Image" src="https://github.com/user-attachments/assets/6ad9a45c-b4c5-47b5-aef2-f64917f01f44" />
Machine:
2020 Macbook Pro 1.4 GHz Quad-Core Intel Core i5
### Expected Behavior
I expected the docs to be built and for doc/build/html/index.html to be created.
### Installed Versions
<details>
pandas 3.0.0.dev0+2267.ge4a03b6e47
INSTALLED VERSIONS
------------------
commit : e4a03b6e47a8ef9cd045902916289cbc976d3d33
python : 3.12.0
python-bits : 64
OS : Darwin
OS-release : 23.1.0
Version : Darwin Kernel Version 23.1.0: Mon Oct 9 21:27:27 PDT 2023; root:xnu-10002.41.9~6/RELEASE_X86_64
machine : x86_64
processor : i386
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 3.0.0.dev0+2267.ge4a03b6e47
numpy : 2.2.6
dateutil : 2.9.0.post0
pip : 23.2.1
Cython : 3.1.2
sphinx : 8.1.3
IPython : 9.4.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.13.4
bottleneck : 1.5.0
fastparquet : 2024.11.0
fsspec : 2025.7.0
html5lib : 1.1
hypothesis : 6.136.4
gcsfs : 2025.7.0
jinja2 : 3.1.6
lxml.etree : 6.0.0
matplotlib : 3.10.3
numba : 0.61.2
numexpr : 2.11.0
odfpy : None
openpyxl : 3.1.5
psycopg2 : 2.9.10
pymysql : 1.4.6
pyarrow : 21.0.0
pyiceberg : 0.9.1
pyreadstat : 1.3.0
pytest : 8.4.1
python-calamine : None
pytz : 2025.2
pyxlsb : 1.0.10
s3fs : 2025.7.0
scipy : 1.16.0
sqlalchemy : 2.0.41
tables : 3.10.2
tabulate : 0.9.0
xarray : 2025.7.1
xlrd : 2.0.2
xlsxwriter : 3.2.5
zstandard : 0.23.0
qtpy : None
pyqt5 : None
</details>
|
[
"Docs",
"Needs Info",
"Closing Candidate"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"I tried to troubleshoot with:\n\npython make.py clean\npython make.py html",
"I'm willing to resolve and open a PR if a team member decides that a PR is necessary. Let me know!",
"take",
"Thanks for the report. With the information provided, I do not believe there is anything more we can do. I would suggest trying to build the documentation with `--num-jobs=1` and find what code around line 135 causes errors when run in isolation.",
"Thank you for your reply! I'll investigate this further with your suggested command and report what I find.",
"Your recommended command with the --num-jobs=1 flag helped to identify that I was having an issue locally with SSL. A new contributor like me might be naive to the fact that their local SSL config could cause the build to fail, which makes me think that adding this error case in the documentation could help future developers.\n\nWIth this in mind, I have made a PR, for you to consider, if you think it would help. If not, no worries!"
] |
3,267,163,751
| 61,974
|
ENH: Include line number and number of fields when read_csv() callable with `engine="python"` raises ParserWarning
|
closed
| 2025-07-27T14:38:41
| 2025-07-29T01:20:30
| 2025-07-28T16:21:26
|
https://github.com/pandas-dev/pandas/pull/61974
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61974
|
https://github.com/pandas-dev/pandas/pull/61974
|
sanggon6107
| 2
|
- [X] closes #61838
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [X] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
## Description of the change
`read_csv()` currently provides a description of an invalid row (`expected_columns`, `actual_columns`, `number`, `text`) when a row has too many elements and `engine="pyarrow"`, but with `engine="python"` the callable can only receive the contents of the row.
(For more details on `pyarrow.csv.InvalidRow`, see the [pyarrow documentation](https://arrow.apache.org/docs/python/generated/pyarrow.csv.ParseOptions.html#pyarrow.csv.ParseOptions.invalid_row_handler))
This PR proposes to additionally pass `expected_columns`, `actual_columns` and `row` when `on_bad_lines` is a callable and `engine="python"`, so that users can describe the invalid row in more detail.
The order of the arguments has been aligned with `pyarrow`.
|
[] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks for the PR, but this enhancement needs more discussion before moving forward with a PR. Additionally this approach.\r\n\r\n1. Is an API breaking change for user pass the older form of the callable\r\n2. You callable description doesn't seem to match PyArrow from the example in https://arrow.apache.org/docs/python/generated/pyarrow.csv.ParseOptions.html#pyarrow.csv.ParseOptions\r\n\r\nso closing",
"Many thanks @mroeschke ,\n\n>1. Is an API breaking change for user pass the older form of the callable\n\nUnderstood. Maybe there could be some further discussions regarding this in the near future considering there are some suggestions at #61978 .\n\n\n>2. You callable description doesn't seem to match PyArrow from the example in https://arrow.apache.org/docs/python/generated/pyarrow.csv.ParseOptions.html#pyarrow.csv.ParseOptions\n\nI've meant the callable has been aligned with `pyarrow.csv.InvalidRow`, but as you mentioned, this also needs to be considered in terms of backwards compatibility."
] |
3,267,136,196
| 61,973
|
BUG: The lines skipped by pd.read_csv(path, chunksize=n, on_bad_lines='skip') depends on `n`.
|
open
| 2025-07-27T14:03:17
| 2025-07-30T20:35:09
| null |
https://github.com/pandas-dev/pandas/issues/61973
| true
| null | null |
alexbra1
| 5
|
### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
content = "a,b\n1,2\n3\n4,5,extra\n6,7"
with open("malformed.csv", "w", newline="") as f:
f.write(content)
path = "malformed.csv"
print(f"Processing file: {path} with chunksize=3")
for chunk in pd.read_csv(path, chunksize=3, on_bad_lines="skip"):
print(chunk)
# Output:
# Processing file: malformed.csv with chunksize=3
# a b
# 0 1 2.0
# 1 3 NaN
# 2 6 7.0
print(f"Processing file: {path} with chunksize=2")
for chunk in pd.read_csv(path, chunksize=2, on_bad_lines="skip"):
print(chunk)
# Output:
# Processing file: malformed.csv with chunksize=2
# a b
# 0 1 2.0
# 1 3 NaN
# a b
# 2 4 5
# 3 6 7
```
### Issue Description
Depending on the `chunksize` parameter, some rows are skipped or not.
### Expected Behavior
The rows that are skipped should be independent of chunksize.
### Installed Versions
INSTALLED VERSIONS
------------------
commit : c888af6d0bb674932007623c0867e1fbd4bdc2c6
python : 3.12.3
python-bits : 64
OS : Linux
OS-release : 6.6.87.2-microsoft-standard-WSL2
Version : #1 SMP PREEMPT_DYNAMIC Thu Jun 5 18:30:46 UTC 2025
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : None
LOCALE : C.UTF-8
pandas : 2.3.1
numpy : 2.3.2
pytz : 2025.2
dateutil : 2.9.0.post0
pip : None
Cython : None
sphinx : None
IPython : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : None
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : None
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None
None
|
[
"Bug",
"IO CSV"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks for the report! Confirmed on main, further investigations and PRs to fix are welcome!",
"I also noticed that this issue does not happen when setting `engine=\"python\"`.",
"take",
"The issue occurs when the bad line is the first line in chunk:\n\n```python\ncontent = \"a,b\\n1,2\\n3\\n4,5,extra\\n6,7\"\n======= Chunk Size: 2 =======\n a b\n0 1 2.0\n1 3 NaN\n a b\n2 4 5 # Bad Line\n3 6 7\n======= Chunk Size: 3 =======\n a b\n0 1 2.0\n1 3 NaN\n2 6 7.0\n\n\ncontent: \"a,b\\n1,2\\n3\\n8,9\\n4,5,extra\\n6,7\" # added one more row 8,9 before 5,4,extra\n======= Chunk Size: 2 =======\n a b\n0 1 2.0\n1 3 NaN\n a b\n2 8 9\n3 6 7\n======= Chunk Size: 3 =======\n a b\n0 1 2.0\n1 3 NaN\n2 8 9.0\n a b\n3 4 5 # Bad Line\n4 6 7\n```\n\n\nTest Script:\n```python\nimport pandas as pd\ncontents = [\"a,b\\n1,2\\n3\\n4,5,extra\\n6,7\",\n \"a,b\\n1,2\\n3\\n8,9\\n4,5,extra\\n6,7\"] \npath = \"malformed.csv\"\nfor i, content in enumerate(contents):\n with open(f\"malformed_{i}.csv\", \"w\", newline=\"\") as f:\n f.write(content)\n for chunksize in [2, 3]:\n print(\"======= Chunk Size: \", chunksize, \"=======\")\n for chunk in pd.read_csv(f\"malformed_{i}.csv\", chunksize=chunksize, on_bad_lines=\"skip\"):\n print(chunk)\n```",
"@rhshadrach The issue is with this code block:\nhttps://github.com/pandas-dev/pandas/blob/d4ae6494f2c4489334be963e1bdc371af7379cd5/pandas/_libs/src/parser/tokenizer.c#L416-L427\n\nwhen the bad line is the first line in the chunk the self->lines will be 1 and self->header_end will be 0, so `!(self->lines <= self->header_end + 1)` becomes False and line skip will not happen. This is the edge case and not sure if it's worth fixing."
] |
3,267,036,631
| 61,972
|
BUG: Series.replace with CoW when made from an Index
|
closed
| 2025-07-27T12:05:10
| 2025-07-28T18:47:32
| 2025-07-28T16:24:07
|
https://github.com/pandas-dev/pandas/pull/61972
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61972
|
https://github.com/pandas-dev/pandas/pull/61972
|
rhshadrach
| 1
|
- [x] closes #61622 (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
When we create a Series from an Index, it is zero-copy, which means that with CoW there are weak refs to the Index. Comparing these weak refs uses `Index.__eq__`, which operates on the array (unlike `Block.__eq__`, which is merely an `is` check). This leads to a failure in `Series.replace`.
Instead, we replace the equality checks with `is` checks, plus some additional logic for performance. I believe this is the only place where we use `__eq__` on these references.
|
[
"Bug",
"replace",
"Copy / view semantics"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks @rhshadrach "
] |
3,266,955,396
| 61,971
|
contributing codebase is revised
|
closed
| 2025-07-27T10:05:29
| 2025-07-27T17:40:53
| 2025-07-27T17:40:46
|
https://github.com/pandas-dev/pandas/pull/61971
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61971
|
https://github.com/pandas-dev/pandas/pull/61971
|
vishwajeetsinghrana8
| 3
|
- Issue: #61968
- DOC: code coverage app provided in documentation is invalid #61968
- [https://github.com/pandas-dev/pandas/blob/main/doc/source/development/contributing_codebase.rst](https://github.com/pandas-dev/pandas/blob/main/doc/source/development/contributing_codebase.rst)
|
[] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"@vishwajeetsinghrana8 - why are you removing these lines?",
"These lines doesn't make sense.",
"Thanks for the PR but the changes are not applicable to the linked issues so closing"
] |
3,266,897,434
| 61,970
|
DOC: rephrase CoW ChainedAssignmentError message now CoW is always enabled
|
closed
| 2025-07-27T08:47:36
| 2025-07-30T16:34:33
| 2025-07-30T16:34:33
|
https://github.com/pandas-dev/pandas/pull/61970
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61970
|
https://github.com/pandas-dev/pandas/pull/61970
|
jorisvandenbossche
| 1
|
The "When using the Copy-on-Write mode" can be updated now it is no longer a mode that is enabled opt-in, but the only behaviour.
|
[
"Docs"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"> Related, I noticed we use a `ChainedAssignmentError` with these messages to raise a warning, not an exception. Do you think we should change the name of this subclass to `ChainedAssignmentWarning` as a clearer name\r\n\r\nYeah, it's probably confusing .. \r\nSo it was originally an exception (and was then called that way), but then we changed it to a warning because of some false positives that occurred in cython code (https://github.com/pandas-dev/pandas/pull/51926). You asked the question at the time (https://github.com/pandas-dev/pandas/pull/51926#discussion_r1134827320), and so the idea is that it really is an error that you should fix (typically, except for those false positives if you are writing cython code), and that was the reasoning to prefer the \"stronger\" wording about an error. But of course a warning class being called `Error` is also confusing.."
] |
3,266,178,695
| 61,969
|
BUG: Fix Series.reindex losing values when reindexing to MultiIndex
|
closed
| 2025-07-26T22:07:49
| 2025-08-07T15:39:19
| 2025-08-07T15:39:02
|
https://github.com/pandas-dev/pandas/pull/61969
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61969
|
https://github.com/pandas-dev/pandas/pull/61969
|
Roline-Stapny
| 4
|
- [X] closes #60923
- [X ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [X] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [X] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [X] Added an entry in the latest `doc/source/whatsnew/v3.0.0.rst` file if fixing a bug or adding a new feature.
## Series.reindex()
### Before
```
# Create a Series with a named Index
series = pd.Series([26.73, 24.255], index=pd.Index([81, 82], name='a'))
# Create a MultiIndex with level names 'a', 'b', 'c'
target = pd.MultiIndex.from_product(
[[81, 82], [np.nan], ["2018-06-01", "2018-07-01"]],
names=["a", "b", "c"]
)
# This would incorrectly set all values to NaN
series.reindex(target)
# a b c
# 81 NaN 2018-06-01 NaN
# 2018-07-01 NaN
# 82 NaN 2018-06-01 NaN
# 2018-07-01 NaN
# But this works correctly
series.reindex(target, level="a")
# a b c
# 81 NaN 2018-06-01 26.73
# 2018-07-01 26.73
# 82 NaN 2018-06-01 24.255
# 2018-07-01 24.255
```
### After
```
# Same setup as before
series = pd.Series([26.73, 24.255], index=pd.Index([81, 82], name='a'))
target = pd.MultiIndex.from_product(
[[81, 82], [np.nan], ["2018-06-01", "2018-07-01"]],
names=["a", "b", "c"]
)
# Now both produce the same correct result
series.reindex(target) # Automatically detects level='a'
# a b c
# 81 NaN 2018-06-01 26.73
# 2018-07-01 26.73
# 82 NaN 2018-06-01 24.255
# 2018-07-01 24.255
```
## DataFrame.reindex()
```
df = pd.DataFrame({
'value': [26.73, 24.255],
'other': ['A', 'B']
}, index=pd.Index([81, 82], name='a'))
target = pd.MultiIndex.from_product(
[[81, 82], [np.nan], ["2018-06-01", "2018-07-01"]],
names=["a", "b", "c"]
)
```
### Before
```
df.reindex(index = target)
value other
a b c
81 NaN 2018-06-01 NaN NaN
2018-07-01 NaN NaN
82 NaN 2018-06-01 NaN NaN
2018-07-01 NaN NaN
```
### After
```
df.reindex(index = target)
value other
a b c
81 NaN 2018-06-01 26.730 A
2018-07-01 26.730 A
82 NaN 2018-06-01 24.255 B
2018-07-01 24.255 B
```
|
[
"MultiIndex",
"Index"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"> * reviewers\r\n\r\n\r\n\r\n> Does `DataFrame.reindex` also need the same handling?\r\n\r\nYes, Dataframe with single index is having the same issue\r\n```\r\ndf = pd.DataFrame({\r\n 'value': [26.73, 24.255],\r\n 'other': ['A', 'B']\r\n}, index=pd.Index([81, 82], name='a'))\r\n\r\n# Create a MultiIndex with level names 'a', 'b', 'c'\r\ntarget = pd.MultiIndex.from_product(\r\n [[81, 82], [np.nan], [\"2018-06-01\", \"2018-07-01\"]], \r\n names=[\"a\", \"b\", \"c\"]\r\n)\r\n\r\n\r\n\r\ndf.reindex(target)\r\n value other\r\na b c\r\n81 NaN 2018-06-01 NaN NaN\r\n 2018-07-01 NaN NaN\r\n82 NaN 2018-06-01 NaN NaN\r\n 2018-07-01 NaN NaN\r\n\r\ndf.reindex(target, level=\"a\")\r\n value other\r\na b c\r\n81 NaN 2018-06-01 26.730 A\r\n 2018-07-01 26.730 A\r\n82 NaN 2018-06-01 24.255 B\r\n 2018-07-01 24.255 B\r\n```\r\n\r\n\r\nHow its the same scenario for multiindex, reindex only works if all index are matching. Infact specifying level for multiIndex dataframe is raising TypeError\r\n\r\n```\r\nraise TypeError(\"Join on level between two MultiIndex objects is ambiguous\")\r\nTypeError: Join on level between two MultiIndex objects is ambiguous\r\n```\r\n\r\n```\r\n source_idx = pd.MultiIndex.from_product(\r\n [[81, 82], [\"2018-06-01\"]],\r\n names=[\"a\", \"c\"]\r\n )\r\n df = pd.DataFrame(\r\n {\"value\": [26.73, 24.255]},\r\n index=source_idx\r\n )\r\n\r\n # Create target with same level names but different structure\r\n target_idx = pd.MultiIndex.from_product(\r\n [[81, 82], [np.nan], [\"2018-06-01\", \"2018-07-01\"]],\r\n names=[\"a\", \"b\", \"c\"]\r\n )\r\n\r\n \r\n>>> df.reindex(target_idx) # Reindexing doesnt copy matching index values\r\n value\r\na b c\r\n81 NaN 2018-06-01 NaN\r\n 2018-07-01 NaN\r\n82 NaN 2018-06-01 NaN\r\n 2018-07-01 NaN\r\n```\r\n\r\nReindex MultiIndex dataframe works iff all indexes match.\r\n\r\nI will leave the multiIndex dataframe functionality as is and address the issue in single index dataframe like the example above. lmk what you think.",
"@mroeschke could you please review it when you get a chance",
"@mroeschke , sorry for tagging again. I have addressed all comments from what I can see, but still seeing \"Changes Requested\" \"[mroeschke](https://github.com/mroeschke) Requested changes\". Have I missed addressing any of your comments? Do you mind pointing which comment is not being addressed. \r\n",
"Thanks @Roline-Stapny "
] |
3,266,083,738
| 61,968
|
DOC: code coverage app provided in documentation is invalid
|
closed
| 2025-07-26T20:18:56
| 2025-07-27T15:45:01
| 2025-07-27T15:44:53
|
https://github.com/pandas-dev/pandas/issues/61968
| true
| null | null |
eicchen
| 1
|
### Pandas version checks
- [x] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
https://pandas.pydata.org/docs/development/contributing_codebase.html
### Documentation problem
The pandas-coverage-app link in the documentation points to an empty page on Heroku.
### Suggested fix for documentation
Either update the doc to not mention the documentation coverage tool, rework the tool to live within the code base, or fix the Heroku link so it points to the tool.
|
[
"Docs"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks for the report. When viewing the development documentation, it is best to use the dev docs:\n\nhttps://pandas.pydata.org/pandas-docs/dev/\n\nThere the link has already been updated. Closing."
] |
3,266,080,711
| 61,967
|
ENH: usecols takes input order for read_csv implementation review
|
open
| 2025-07-26T20:13:19
| 2025-08-05T05:29:20
| null |
https://github.com/pandas-dev/pandas/pull/61967
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61967
|
https://github.com/pandas-dev/pandas/pull/61967
|
eicchen
| 0
|
- [ ] closes #61386 (not really though)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
This is the implementation for having `read_csv` respect the `usecols` order, which I wanted people to look at before applying it to other places like `read_excel` and `read_clipboard`. If it all looks good, I'll go back and add all the necessary documentation about the future deprecation, along with a warning when using `usecols`. This is mainly to check that the implementation doesn't have any glaring issues.
I ran the entire test suite just to be safe and it all looks good. The only tests that errored were some datetime tests that, as far as I could tell, had nothing to do with these changes.
It is also worth noting that pyarrow already uses the `usecols` order by default, so that is probably worth adding to the documentation regardless.
|
[] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[] |
3,266,023,975
| 61,966
|
[backport 2.3.x] Output formatting: preserve quoting for string categories (#61891)
|
closed
| 2025-07-26T19:13:47
| 2025-07-26T20:25:10
| 2025-07-26T20:25:06
|
https://github.com/pandas-dev/pandas/pull/61966
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61966
|
https://github.com/pandas-dev/pandas/pull/61966
|
jorisvandenbossche
| 0
|
Backport of #61891
|
[] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[] |
3,265,723,848
| 61,965
|
[backport 2.3.x] BUG: fix to_json() with JSON Table Schema work correctly with string dtype (#61900)
|
closed
| 2025-07-26T14:15:09
| 2025-07-26T18:43:39
| 2025-07-26T17:14:47
|
https://github.com/pandas-dev/pandas/pull/61965
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61965
|
https://github.com/pandas-dev/pandas/pull/61965
|
jorisvandenbossche
| 0
|
Backport of https://github.com/pandas-dev/pandas/pull/61900
|
[] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[] |
3,265,665,525
| 61,964
|
BUG: fix Series.str.fullmatch() and Series.str.match() with a compiled regex failing with arrow strings
|
closed
| 2025-07-26T13:45:46
| 2025-08-15T07:08:13
| 2025-08-14T06:20:28
|
https://github.com/pandas-dev/pandas/pull/61964
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61964
|
https://github.com/pandas-dev/pandas/pull/61964
|
khemkaran10
| 5
|
Fixes: #61952
After Fix:
```python
DATA = ["applep", "bananap", "Cherryp", "DATEp", "eGGpLANTp", "123p", "23.45p"]
s=pd.Series(DATA)
s.str.fullmatch(re.compile(r"applep"))
Output:
0 True
1 False
2 False
3 False
4 False
5 False
6 False
dtype: bool
```
```python
DATA = ["applep", "bananap", "Cherryp", "DATEp", "eGGpLANTp", "123p", "23.45p"]
sa=pd.Series(DATA, dtype="string[pyarrow]")
sa.str.match(re.compile(r"applep"))
Output:
0 True
1 False
2 False
3 False
4 False
5 False
6 False
dtype: boolean
```
- [x] closes #61952
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
|
[
"Strings",
"Arrow"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"@jorisvandenbossche Moved tests to `pandas/tests/strings/test_find_replace.py` and made a minor change to the docstring. I’m not sure what changes need to be made in docs. could you please provide more details?",
"> I’m not sure what changes need to be made in docs. could you please provide more details?\r\n\r\nThe suggestions of @yuanx749 are in the good direction\r\n",
"Thanks @khemkaran10 ",
"Owee, I'm MrMeeseeks, Look at me.\n\nThere seem to be a conflict, please backport manually. Here are approximate instructions:\n\n1. Checkout backport branch and update it.\n\n```\ngit checkout 2.3.x\ngit pull\n```\n\n2. Cherry pick the first parent branch of the this PR on top of the older branch:\n```\ngit cherry-pick -x -m1 3cefa1ee6b30843a24065fa67392fbfa63d0769b\n```\n\n3. You will likely have some merge/cherry-pick conflict here, fix them and commit:\n\n```\ngit commit -am 'Backport PR #61964: BUG: fix Series.str.fullmatch() and Series.str.match() with a compiled regex failing with arrow strings '\n```\n\n4. Push to a named branch:\n\n```\ngit push YOURFORK 2.3.x:auto-backport-of-pr-61964-on-2.3.x\n```\n\n5. Create a PR against branch 2.3.x, I would have named this PR:\n\n> \"Backport PR #61964 on branch 2.3.x (BUG: fix Series.str.fullmatch() and Series.str.match() with a compiled regex failing with arrow strings )\"\n\nAnd apply the correct labels and milestones.\n\nCongratulations — you did some good work! Hopefully your backport PR will be tested by the continuous integration and merged soon!\n\nRemember to remove the `Still Needs Manual Backport` label once the PR gets merged.\n\nIf these instructions are inaccurate, feel free to [suggest an improvement](https://github.com/MeeseeksBox/MeeseeksDev).\n ",
"Manual backport -> https://github.com/pandas-dev/pandas/pull/62113"
] |
3,265,526,215
| 61,963
|
[backport 2.3.x] BUG: fix fill value for gouped sum in case of unobserved categories for string dtype (empty string instead of 0) (#61909)
|
open
| 2025-07-26T12:24:19
| 2025-08-24T12:56:47
| null |
https://github.com/pandas-dev/pandas/pull/61963
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61963
|
https://github.com/pandas-dev/pandas/pull/61963
|
jorisvandenbossche
| 0
|
Backport of #61909
|
[] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[] |
3,265,397,831
| 61,962
|
BUG: fix .str.isdigit to honor unicode superscript for older pyarrow
|
open
| 2025-07-26T09:55:59
| 2025-08-21T07:21:00
| null |
https://github.com/pandas-dev/pandas/pull/61962
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61962
|
https://github.com/pandas-dev/pandas/pull/61962
|
jorisvandenbossche
| 2
|
- [x] closes #61466
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
[
"Strings",
"Arrow"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"pandas/tests/strings/test_strings.py::test_isnumeric_unicode",
"> pandas/tests/strings/test_strings.py::test_isnumeric_unicode\r\n\r\nYeah, see https://github.com/pandas-dev/pandas/issues/61466#issuecomment-3121827923 (but I suppose the best option is just to accept that difference and update the test to reflect it. Alternatively we could still only use pyarrow for ascii, and always fall back to python for unicode, if we really want consistent behaviour)"
] |
3,265,376,571
| 61,961
|
Backport PR #61921 on branch 2.3.x (DOC: explicitly mention new str dtype is no longer a numpy dtype in migration guide)
|
closed
| 2025-07-26T09:19:28
| 2025-07-26T11:33:33
| 2025-07-26T11:33:33
|
https://github.com/pandas-dev/pandas/pull/61961
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61961
|
https://github.com/pandas-dev/pandas/pull/61961
|
meeseeksmachine
| 0
|
Backport PR #61921: DOC: explicitly mention new str dtype is no longer a numpy dtype in migration guide
|
[
"Docs"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[] |
3,265,364,121
| 61,960
|
DOC: update .str.contains/match/startswith docstring examples for default behaviour
|
closed
| 2025-07-26T09:04:32
| 2025-07-30T09:27:14
| 2025-07-28T16:50:49
|
https://github.com/pandas-dev/pandas/pull/61960
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61960
|
https://github.com/pandas-dev/pandas/pull/61960
|
jorisvandenbossche
| 1
|
Updating the docstrings of `.str.` predicate methods that have the `na` keyword.
For the examples, the current text is no longer correct (because the default behaviour with str dtype is now to already return False).
For now I just removed those examples. I could instead update the examples to create an object-dtype Series to still show the `na` behaviour, but personally I feel that would make the docstring examples more complex than needed, and that it is fine to let them focus on just the default dtype. But no strong opinion ;)
|
[
"Docs",
"Strings"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks @jorisvandenbossche "
] |
3,265,354,656
| 61,959
|
Backport PR #61958 on branch 2.3.x (DOC: Add release notes template for 2.3.2)
|
closed
| 2025-07-26T08:53:23
| 2025-07-26T09:27:37
| 2025-07-26T09:27:37
|
https://github.com/pandas-dev/pandas/pull/61959
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61959
|
https://github.com/pandas-dev/pandas/pull/61959
|
meeseeksmachine
| 0
|
Backport PR #61958: DOC: Add release notes template for 2.3.2
|
[
"Docs"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[] |
3,265,328,560
| 61,958
|
DOC: Add release notes template for 2.3.2
|
closed
| 2025-07-26T08:19:40
| 2025-07-26T08:52:55
| 2025-07-26T08:52:55
|
https://github.com/pandas-dev/pandas/pull/61958
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61958
|
https://github.com/pandas-dev/pandas/pull/61958
|
jorisvandenbossche
| 0
|
In case we do another 2.3.x release, this makes it easier to already backport things (and if we end up not doing another release, we can just move the items later).
Going to merge this quickly to unblock other PRs.
|
[
"Docs"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[] |
3,265,174,806
| 61,957
|
Flattened footer
|
closed
| 2025-07-26T06:06:18
| 2025-08-02T03:09:16
| 2025-08-02T03:09:15
|
https://github.com/pandas-dev/pandas/pull/61957
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61957
|
https://github.com/pandas-dev/pandas/pull/61957
|
maddiew95
| 6
|
Flattened the footer so that the pandas custom footer, sphinx-version, and theme-version sit on a single line, as shown:
<img width="1433" height="71" alt="Screenshot 2025-07-26 at 15 51 18" src="https://github.com/user-attachments/assets/f45acd94-dd78-44e9-b026-20191153a9e8" />
|
[
"Docs"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"#51536 ",
"Thanks could you remove the custom template as described in that issue",
"hi @mroeschke thanks for checking out! I read their doc about, but in order to have the pandas copyright, we will have to use their customized template approach as mentioned https://pydata-sphinx-theme.readthedocs.io/en/stable/user_guide/layout.html#add-your-own-html-templates-to-theme-sections.\r\n\r\nAlthough in their library, the \"copyright\" keyword is in the package and if you were to do without custom template, you can access into their library on server side to change the copyright.html as shown https://github.com/pydata/pydata-sphinx-theme/blob/main/src/pydata_sphinx_theme/theme/pydata_sphinx_theme/components/copyright.html\r\n\r\nHope this clear things up. ",
"No you can add their copyright component to add into the `html_theme_options` config: https://pydata-sphinx-theme.readthedocs.io/en/stable/user_guide/layout.html#built-in-components-to-insert-into-sections\r\n\r\nIt uses the `copyright` variable defined in `conf.py` ",
"Hi, you were right, it can be done without _template, here's the work i had. Hope this solves it!",
"Hi, the reason I used Markup() because without it the footer would read the code in ascii and show it as this\r\n \r\n<img width=\"2938\" height=\"200\" alt=\"image\" src=\"https://github.com/user-attachments/assets/56d8a45f-8ebf-4afe-b2c3-db7a506dc949\" />\r\n\r\nGenerated source code:\r\n<img width=\"2938\" height=\"326\" alt=\"image\" src=\"https://github.com/user-attachments/assets/7eba5b38-c92c-4e80-b4c7-8cad8e89b1b4\" />\r\n\r\n"
] |
3,265,051,457
| 61,956
|
DOC: added button to edit on GitHub
|
open
| 2025-07-26T04:01:58
| 2025-08-04T02:38:51
| null |
https://github.com/pandas-dev/pandas/pull/61956
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61956
|
https://github.com/pandas-dev/pandas/pull/61956
|
Sivayogeith
| 11
|
- [x] closes #39859 (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
[
"Docs"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"pre-commit.ci autofix",
"hey @afeld, could you take a look at this? TIA!",
"/preview",
"Website preview of this PR available at: https://pandas.pydata.org/preview/pandas-dev/pandas/61956/",
"> Thanks but the links do not direct to editable pages\r\n\r\nIt seems that’s the only page that isn’t working. Sorry, but could you explain how `index.rst` works? I have the file locally for some reason, but it’s not on GitHub. Is it supposed to redirect to `index.rst.template`?",
"Ah OK, yes I see this works for some straightforward `.rst` pages.\r\n\r\nYes ideally we would only only want this button on pages that are not templates or API pages. Is there a straightforward way in pydata-sphinx-theme to only add this button to select pages? ",
"Sadly, there is no straightforward way to exclude some pages. But, I am gonna try to make a extension for this (might take some time as I am new to sphinx lol).",
"Ok, so I added a new list called exclude_edit_page_button and it will exclude adding the button to those pages. But, I have some questions:\r\n\r\n1. Would you like the button to be in the This Page menu? (like how it was in #61997)\r\n2. What do you mean by API page? Should the button be excluded in every page in /reference?",
"1. I don't know what 'This Page' refers to, so no?\r\n2. The `/reference` pages can have the button a page for a particular pandas API e.g. https://pandas.pydata.org/preview/pandas-dev/pandas/61956/docs/reference/api/pandas.melt.html#pandas.melt should not have the button",
"<img width=\"152\" height=\"151\" alt=\"Screenshot 2025-08-01 at 9 52 48 AM\" src=\"https://github.com/user-attachments/assets/7272c6ca-bec6-40cc-8eb2-83364dd917d0\" /> \r\n\r\nI was referring to this - though I noticed it's not in production?\r\n\r\nI’ve updated the logic to exclude the index page and any pages that include api in their path.\r\nLet me know what you think!",
"pre-commit.ci autofix"
] |
3,264,517,043
| 61,955
|
DOC: suffix argument in `Series.shift()` is not used
|
closed
| 2025-07-25T22:14:34
| 2025-08-18T15:50:36
| 2025-08-18T15:50:36
|
https://github.com/pandas-dev/pandas/issues/61955
| true
| null | null |
Dr-Irv
| 2
|
### Pandas version checks
- [x] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
https://pandas.pydata.org/docs/reference/api/pandas.Series.shift.html#pandas.Series.shift
### Documentation problem
The documentation for `Series.shift()` documents the parameter `suffix`, but it has no effect in the code.
### Suggested fix for documentation
I think the `suffix` argument should be removed from the docs.
|
[
"Docs",
"Transformations"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"It's used by the DataFrame.shift implementation, and both methods use a shared docstring .. But we can mention that the keyword is ignored for Series (like there is also a mention for `axis`)",
"take"
] |
3,263,981,792
| 61,954
|
docs: Improve README with helpful contributor resources
|
closed
| 2025-07-25T18:00:22
| 2025-07-25T19:10:18
| 2025-07-25T19:10:18
|
https://github.com/pandas-dev/pandas/pull/61954
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61954
|
https://github.com/pandas-dev/pandas/pull/61954
|
abujabarmubarak
| 1
|
Added a small section to the end of the README that provides useful resources for new contributors, including:
- Official Pandas cheat sheet
- Beginner tutorials
- “Good first issues” link
- Slack community link
This addition aims to encourage and guide new contributors without altering any of the existing README content.
Let me know if this fits the community guidelines — happy to adjust!
|
[] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks for the PR.\r\n\r\nSince there's no issue discussing this inclusion, I don't think we necessarily need to add this at this time so closing. If interested in contributing feel free to tackle issues labeled `good first issue`"
] |
3,263,967,922
| 61,953
|
docs: Improve README with helpful contributor resources
|
closed
| 2025-07-25T17:54:00
| 2025-07-25T17:55:48
| 2025-07-25T17:55:48
|
https://github.com/pandas-dev/pandas/pull/61953
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61953
|
https://github.com/pandas-dev/pandas/pull/61953
|
abujabarmubarak
| 0
|
Added a small section to the end of the README that provides useful resources for new contributors, including:
- Official Pandas cheat sheet
- Beginner tutorials
- “Good first issues” link
- Slack community link
This addition aims to encourage and guide new contributors without altering any of the existing README content.
Let me know if this fits the community guidelines — happy to adjust!
|
[] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[] |
3,263,943,096
| 61,952
|
BUG: Using `Series.str.fullmatch()` and `Series.str.match()` with a compiled regex fails with arrow strings
|
closed
| 2025-07-25T17:43:38
| 2025-08-14T06:20:29
| 2025-08-14T06:20:28
|
https://github.com/pandas-dev/pandas/issues/61952
| true
| null | null |
Dr-Irv
| 2
|
### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
import re
DATA = ["applep", "bananap", "Cherryp", "DATEp", "eGGpLANTp", "123p", "23.45p"]
s=pd.Series(DATA)
s.str.fullmatch(re.compile(r"applep"))
s.str.match(re.compile(r"applep"))
sa=pd.Series(DATA, dtype="string[pyarrow]")
sa.str.fullmatch(re.compile(r"applep"))
sa.str.match(re.compile(r"applep"))
```
### Issue Description
with pyarrow strings, the last line fails with:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Condadirs\envs\pandasstubs311\Lib\site-packages\pandas\core\strings\accessor.py", line 140, in wrapper
return func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Condadirs\envs\pandasstubs311\Lib\site-packages\pandas\core\strings\accessor.py", line 1429, in fullmatch
result = self._data.array._str_fullmatch(pat, case=case, flags=flags, na=na)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Condadirs\envs\pandasstubs311\Lib\site-packages\pandas\core\arrays\_arrow_string_mixins.py", line 320, in _str_fullmatch
if not pat.endswith("$") or pat.endswith("\\$"):
^^^^^^^^^^^^
AttributeError: 're.Pattern' object has no attribute 'endswith'
>>> sa.str.match(re.compile(r"applep"))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Condadirs\envs\pandasstubs311\Lib\site-packages\pandas\core\strings\accessor.py", line 140, in wrapper
return func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Condadirs\envs\pandasstubs311\Lib\site-packages\pandas\core\strings\accessor.py", line 1388, in match
result = self._data.array._str_match(pat, case=case, flags=flags, na=na)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Condadirs\envs\pandasstubs311\Lib\site-packages\pandas\core\arrays\_arrow_string_mixins.py", line 309, in _str_match
if not pat.startswith("^"):
^^^^^^^^^^^^^^
AttributeError: 're.Pattern' object has no attribute 'startswith'
```
### Expected Behavior
No exception
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : c888af6d0bb674932007623c0867e1fbd4bdc2c6
python : 3.11.9
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.26100
machine : AMD64
processor : Intel64 Family 6 Model 183 Stepping 1, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : English_United States.1252
pandas : 2.3.1
numpy : 2.3.1
pytz : 2024.1
dateutil : 2.9.0.post0
pip : 24.2
Cython : None
sphinx : None
IPython : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.13.4
blosc : None
bottleneck : 1.4.2
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : 1.1
hypothesis : None
gcsfs : None
jinja2 : 3.1.6
lxml.etree : 6.0.0
matplotlib : 3.10.3
numba : None
numexpr : 2.10.1
odfpy : None
openpyxl : 3.1.5
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 20.0.0
pyreadstat : 1.3.0
pytest : 8.4.1
python-calamine : None
pyxlsb : 1.0.10
s3fs : None
scipy : 1.16.0
sqlalchemy : 2.0.41
tables : 3.10.1
tabulate : 0.9.0
xarray : 2025.6.1
xlrd : 2.0.2
xlsxwriter : 3.2.5
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
|
[
"Bug",
"Strings",
"Arrow"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"take",
"FWIW, we don't actually document (or test, I think) that this is supported. But because `pat` is passed to `re.compile(..)` in \n\nhttps://github.com/pandas-dev/pandas/blob/e4a03b6e47a8ef9cd045902916289cbc976d3d33/pandas/core/strings/object_array.py#L249-L259\n\nthis works (since `re.compile` accepts that).\n\nGiven this works currently, I think it is certainly a good idea to keep this working for the string dtype as well. But then probably also should update the typing and docs."
] |
3,263,847,483
| 61,951
|
BUG: Implicit upcast to float64 for large series (more than 1000000 rows)
|
open
| 2025-07-25T17:04:29
| 2025-08-02T02:56:46
| null |
https://github.com/pandas-dev/pandas/issues/61951
| true
| null | null |
stertingen
| 6
|
### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
import numpy as np
a = pd.Series(np.zeros(1000000), dtype="float32") + np.float32(1)
b = pd.Series(np.zeros(1000001), dtype="float32") + np.float32(1)
print(a.dtype, b.dtype)
```
### Issue Description
Performing binary operations on larger `Series` with `dtype == 'float32'` leads to unexpected upcasts to `float64`.
The above example prints `float32 float64`.
Calling `to_numpy()` on the series before the addition avoids the implicit upcast.
### Expected Behavior
I expect the above snippet to print `float32 float32`.
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : c888af6d0bb674932007623c0867e1fbd4bdc2c6
python : 3.12.10
python-bits : 64
OS : Windows
OS-release : 11
Version : 10.0.26100
machine : AMD64
processor : Intel64 Family 6 Model 165 Stepping 2, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : de_DE.cp1252
pandas : 2.3.1
numpy : 2.1.3
pytz : 2025.2
dateutil : 2.9.0.post0
pip : 25.1.1
Cython : 3.1.2
sphinx : None
IPython : 9.4.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.13.4
blosc : None
bottleneck : 1.5.0
dataframe-api-compat : None
fastparquet : None
fsspec : 2025.7.0
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.6
lxml.etree : None
matplotlib : 3.10.3
numba : 0.61.2
numexpr : 2.11.0
odfpy : None
openpyxl : 3.1.5
pandas_gbq : None
psycopg2 : 2.9.10
pymysql : None
pyarrow : 20.0.0
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.16.0
sqlalchemy : 2.0.41
tables : None
tabulate : 0.9.0
xarray : None
xlrd : None
xlsxwriter : None
zstandard : 0.23.0
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
|
[
"Bug",
"Dtype Conversions",
"Numeric Operations"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"After stepping through with a debugger, I have the following insights to share:\n\nWith series larger than 1000000 items, Pandas uses NumExpr.\nAlso, pandas converts the numpy float32 scalar to a Python floating point number in `ops.maybe_prepare_scalar_for_op`.\nThen, NumExpr behaves as described in https://numexpr.readthedocs.io/en/latest/user_guide.html#casting-rules, assuming a double precision floating point value.\n\n",
"Hi, I'd like to tackle this issue, I am new to the project, but I agree that the conversion to the Python float is causing this bug. A fix I could implement would be to remove the use of float() in order to preserve type [as it is being converted to a Python float].\n\nThis code inside[ [_array_ops.py](https://github.com/pandas-dev/pandas/blob/e4a03b6e47a8ef9cd045902916289cbc976d3d33/pandas/core/ops/array_ops.py#L575C2-L576C26:)]\n\n```\n elif isinstance(obj, np.floating):\n return float(obj)\n```\nShould return a float32 scaler instead, by removing the assertion:\n\n```\n elif isinstance(obj, np.floating):\n if obj.dtype == np.float32:\n return obj\n else:\n return float(obj)\n```\n\nI will test this within my own fork prior to making a PR.",
"take",
"> Hi, I'd like to tackle this issue, I am new to the project, but I agree that the conversion to the Python float is causing this bug. A fix I could implement would be to remove the use of float() in order to preserve type [as it is being converted to a Python float].\n> \n> This code inside[ [_array_ops.py](https://github.com/pandas-dev/pandas/blob/e4a03b6e47a8ef9cd045902916289cbc976d3d33/pandas/core/ops/array_ops.py#L575C2-L576C26:)]\n> \n> ```\n> elif isinstance(obj, np.floating):\n> return float(obj)\n> ```\n> \n> Should return a float32 scaler instead, by removing the assertion:\n> \n> ```\n> elif isinstance(obj, np.floating):\n> if obj.dtype == np.float32:\n> return obj\n> else:\n> return float(obj)\n> ```\n> \n> I will test this within my own fork prior to making a PR.\n\nA few thoughts from my side as a user, not a library maintainer:\n\nWell, you could just remove the conversion in case of all numpy objects in that case, not just float32.\n\nHowever, I think this piece of code exists for a reason.\nIt was introduced in https://github.com/pandas-dev/pandas/pull/55739, referring to https://numpy.org/neps/nep-0050-scalar-promotion.html.\nIt looks like scalars are converted to Python scalars in order to invoke Numpy's introspective casting behavior (inspecting the values and determining the best Numpy value for the result), as documented in NEP 50.\nHowever, Numexpr does not have this introspective casting behavior and casts the result to `float64`.\n\nSo IMHO the fix would be to only cast to Python scalar when using Numpy, not when using Numexpr.",
"Agreed this is a bug. I haven't looked into the details on the proposed fixes so can't give any feedback there, but PRs to fix would be welcome!",
"Will not-upcasting prevent us from using numexpr?"
] |
3,263,634,189
| 61,950
|
TST: run python-dev CI on 3.14-dev
|
closed
| 2025-07-25T15:37:55
| 2025-08-08T19:55:08
| 2025-08-08T16:05:36
|
https://github.com/pandas-dev/pandas/pull/61950
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61950
|
https://github.com/pandas-dev/pandas/pull/61950
|
ngoldbaum
| 10
|
I'd like to see how widespread the test breakage is due to https://github.com/pandas-dev/pandas/issues/61368.
Also, 3.14rc1 came out earlier this week, so pandas should probably start thinking about 3.14 support soonish.
|
[
"CI"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"After turning off the warning the tests results look much more reasonable. Here's the summary on Linux CI:\r\n\r\n<details>\r\n\r\n```\r\n=========================== short test summary info ============================\r\nFAILED pandas/tests/copy_view/test_chained_assignment_deprecation.py::test_series_setitem[0] - Failed: DID NOT WARN. No warnings of type (<class 'Warning'>,) were emitted.\r\n Emitted warnings: [].\r\nFAILED pandas/tests/copy_view/test_chained_assignment_deprecation.py::test_series_setitem[indexer1] - Failed: DID NOT WARN. No warnings of type (<class 'Warning'>,) were emitted.\r\n Emitted warnings: [].\r\nFAILED pandas/tests/copy_view/test_chained_assignment_deprecation.py::test_series_setitem[indexer2] - Failed: DID NOT WARN. No warnings of type (<class 'Warning'>,) were emitted.\r\n Emitted warnings: [].\r\nFAILED pandas/tests/copy_view/test_chained_assignment_deprecation.py::test_series_setitem[indexer3] - Failed: DID NOT WARN. No warnings of type (<class 'Warning'>,) were emitted.\r\n Emitted warnings: [].\r\nFAILED pandas/tests/copy_view/test_chained_assignment_deprecation.py::test_frame_setitem[a] - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/copy_view/test_chained_assignment_deprecation.py::test_frame_setitem[indexer1] - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/copy_view/test_chained_assignment_deprecation.py::test_frame_setitem[indexer2] - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/copy_view/test_chained_assignment_deprecation.py::test_frame_setitem[indexer3] - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/copy_view/test_clip.py::test_clip_chained_inplace - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/copy_view/test_interp_fillna.py::test_fillna_chained_assignment - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/copy_view/test_interp_fillna.py::test_interpolate_chained_assignment[interpolate] - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/copy_view/test_interp_fillna.py::test_interpolate_chained_assignment[ffill] - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/copy_view/test_interp_fillna.py::test_interpolate_chained_assignment[bfill] - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/copy_view/test_methods.py::test_chained_where_mask[mask] - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/copy_view/test_methods.py::test_chained_where_mask[where] - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/copy_view/test_methods.py::test_update_chained_assignment - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/copy_view/test_replace.py::test_replace_chained_assignment - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/reshape/merge/test_merge.py::test_merge_suffix_length_error[a-a-suffixes0-too many values to unpack \\\\(expected 2\\\\)] - AssertionError: Regex pattern did not match.\r\n Regex: 'too many values to unpack \\\\(expected 2\\\\)'\r\n Input: 'too 
many values to unpack (expected 2, got 3)'\r\nFAILED pandas/tests/scalar/period/test_period.py::TestPeriodConstruction::test_invalid_arguments - AssertionError: Regex pattern did not match.\r\n Regex: 'day is out of range for month'\r\n Input: 'day 0 must be in range 1..31 for month 1 in year 1: 0'\r\nFAILED pandas/tests/scalar/timestamp/test_constructors.py::TestTimestampConstructorPositionalAndKeywordSupport::test_constructor_positional - AssertionError: Regex pattern did not match.\r\n Regex: 'day is out of range for month'\r\n Input: 'day 0 must be in range 1..31 for month 1 in year 2000'\r\nFAILED pandas/tests/scalar/timestamp/test_constructors.py::TestTimestampConstructorPositionalAndKeywordSupport::test_constructor_keyword - AssertionError: Regex pattern did not match.\r\n Regex: 'day is out of range for month'\r\n Input: 'day 0 must be in range 1..31 for month 1 in year 2000'\r\nFAILED pandas/tests/series/accessors/test_dt_accessor.py::TestSeriesDatetimeValues::test_dt_accessor_not_writeable - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/series/indexing/test_indexing.py::test_underlying_data_conversion - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/series/methods/test_update.py::TestUpdate::test_update - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/indexes/test_indexing.py::TestContains::test_contains_requires_hashable_raises[interval] - AssertionError: Regex pattern did not match.\r\n Regex: \"unhashable type: 'dict'|must be real number, not dict|an integer is required|\\\\{\\\\}|pandas\\\\._libs\\\\.interval\\\\.IntervalTree' is not iterable\"\r\n Input: \"argument of type 'pandas._libs.interval.IntervalTree' is not a container or iterable\"\r\nFAILED pandas/tests/indexing/multiindex/test_chaining_and_caching.py::test_detect_chained_assignment - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/indexing/multiindex/test_chaining_and_caching.py::test_cache_updating - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/indexing/multiindex/test_partial.py::TestMultiIndexPartial::test_partial_set - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/indexing/multiindex/test_setitem.py::test_frame_setitem_copy_raises - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/indexing/multiindex/test_setitem.py::test_frame_setitem_copy_no_write - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/indexing/test_chaining_and_caching.py::TestCaching::test_setitem_cache_updating_slices - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/indexing/test_chaining_and_caching.py::TestChaining::test_setitem_chained_setfault - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/indexing/test_chaining_and_caching.py::TestChaining::test_detect_chained_assignment - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/indexing/test_chaining_and_caching.py::TestChaining::test_detect_chained_assignment_raises - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED 
pandas/tests/indexing/test_chaining_and_caching.py::TestChaining::test_detect_chained_assignment_fails - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/indexing/test_chaining_and_caching.py::TestChaining::test_detect_chained_assignment_doc_example - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/indexing/test_chaining_and_caching.py::TestChaining::test_detect_chained_assignment_object_dtype - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/indexing/test_chaining_and_caching.py::TestChaining::test_detect_chained_assignment_undefined_column - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/indexing/test_chaining_and_caching.py::TestChaining::test_detect_chained_assignment_changing_dtype - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/indexing/test_chaining_and_caching.py::TestChaining::test_setting_with_copy_bug - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/indexing/test_chaining_and_caching.py::TestChaining::test_detect_chained_assignment_warnings_errors - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/indexing/test_chaining_and_caching.py::TestChaining::test_iloc_setitem_chained_assignment - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/indexing/test_chaining_and_caching.py::TestChaining::test_getitem_loc_assignment_slice_state - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/tools/test_to_datetime.py::TestToDatetime::test_datetime_invalid_scalar[None-00:01:99] - AssertionError: Regex pattern did not match.\r\n Regex: '^time data \"a\" doesn\\\\\\'t match format \"%H:%M:%S\". You might want to try:\\\\n - passing `format` if your strings have a consistent format;\\\\n - passing `format=\\\\\\'ISO8601\\\\\\'` if your strings are all ISO8601 but not necessarily in exactly the same format;\\\\n - passing `format=\\\\\\'mixed\\\\\\'`, and the format will be inferred for each element individually. You might want to use `dayfirst` alongside this.$|^Given date string \"a\" not likely a datetime$|^unconverted data remains when parsing with format \"%H:%M:%S\": \"9\". You might want to try:\\\\n - passing `format` if your strings have a consistent format;\\\\n - passing `format=\\\\\\'ISO8601\\\\\\'` if your strings are all ISO8601 but not necessarily in exactly the same format;\\\\n - passing `format=\\\\\\'mixed\\\\\\'`, and the format will be inferred for each element individually. You might want to use `dayfirst` alongside this.$|^second must be in 0..59: 00:01:99$'\r\n Input: 'second must be in 0..59, not 99: 00:01:99'\r\nFAILED pandas/tests/tools/test_to_datetime.py::TestToDatetime::test_datetime_invalid_index[None-values1] - AssertionError: Regex pattern did not match.\r\n Regex: '^Given date string \"a\" not likely a datetime$|^time data \"a\" doesn\\\\\\'t match format \"%H:%M:%S\". You might want to try:\\\\n - passing `format` if your strings have a consistent format;\\\\n - passing `format=\\\\\\'ISO8601\\\\\\'` if your strings are all ISO8601 but not necessarily in exactly the same format;\\\\n - passing `format=\\\\\\'mixed\\\\\\'`, and the format will be inferred for each element individually. 
You might want to use `dayfirst` alongside this.$|^unconverted data remains when parsing with format \"%H:%M:%S\": \"9\". You might want to try:\\\\n - passing `format` if your strings have a consistent format;\\\\n - passing `format=\\\\\\'ISO8601\\\\\\'` if your strings are all ISO8601 but not necessarily in exactly the same format;\\\\n - passing `format=\\\\\\'mixed\\\\\\'`, and the format will be inferred for each element individually. You might want to use `dayfirst` alongside this.$|^second must be in 0..59: 00:01:99$'\r\n Input: 'second must be in 0..59, not 99: 00:01:99'\r\nFAILED pandas/tests/tools/test_to_datetime.py::TestDaysInMonth::test_day_not_in_month_raise[True] - AssertionError: Regex pattern did not match.\r\n Regex: 'day is out of range for month: 2015-02-29'\r\n Input: 'day 29 must be in range 1..28 for month 2 in year 2015: 2015-02-29'\r\nFAILED pandas/tests/tools/test_to_datetime.py::TestDaysInMonth::test_day_not_in_month_raise[False] - AssertionError: Regex pattern did not match.\r\n Regex: 'day is out of range for month: 2015-02-29'\r\n Input: 'day 29 must be in range 1..28 for month 2 in year 2015: 2015-02-29'\r\nFAILED pandas/tests/tools/test_to_datetime.py::TestDaysInMonth::test_day_not_in_month_raise_value[True-2015-02-29-%Y-%m-%d-^day is out of range for month. You might want to try:\\\\n - passing `format` if your strings have a consistent format;\\\\n - passing `format=\\\\'ISO8601\\\\'` if your strings are all ISO8601 but not necessarily in exactly the same format;\\\\n - passing `format=\\\\'mixed\\\\'`, and the format will be inferred for each element individually. You might want to use `dayfirst` alongside this.$] - AssertionError: Regex pattern did not match.\r\n Regex: \"^day is out of range for month. You might want to try:\\\\n - passing `format` if your strings have a consistent format;\\\\n - passing `format=\\\\'ISO8601\\\\'` if your strings are all ISO8601 but not necessarily in exactly the same format;\\\\n - passing `format=\\\\'mixed\\\\'`, and the format will be inferred for each element individually. You might want to use `dayfirst` alongside this.$\"\r\n Input: \"day 29 must be in range 1..28 for month 2 in year 2015. You might want to try:\\n - passing `format` if your strings have a consistent format;\\n - passing `format='ISO8601'` if your strings are all ISO8601 but not necessarily in exactly the same format;\\n - passing `format='mixed'`, and the format will be inferred for each element individually. You might want to use `dayfirst` alongside this.\"\r\nFAILED pandas/tests/tools/test_to_datetime.py::TestDaysInMonth::test_day_not_in_month_raise_value[True-2015-29-02-%Y-%d-%m-^day is out of range for month. You might want to try:\\\\n - passing `format` if your strings have a consistent format;\\\\n - passing `format=\\\\'ISO8601\\\\'` if your strings are all ISO8601 but not necessarily in exactly the same format;\\\\n - passing `format=\\\\'mixed\\\\'`, and the format will be inferred for each element individually. You might want to use `dayfirst` alongside this.$] - AssertionError: Regex pattern did not match.\r\n Regex: \"^day is out of range for month. You might want to try:\\\\n - passing `format` if your strings have a consistent format;\\\\n - passing `format=\\\\'ISO8601\\\\'` if your strings are all ISO8601 but not necessarily in exactly the same format;\\\\n - passing `format=\\\\'mixed\\\\'`, and the format will be inferred for each element individually. 
You might want to use `dayfirst` alongside this.$\"\r\n Input: \"day 29 must be in range 1..28 for month 2 in year 2015. You might want to try:\\n - passing `format` if your strings have a consistent format;\\n - passing `format='ISO8601'` if your strings are all ISO8601 but not necessarily in exactly the same format;\\n - passing `format='mixed'`, and the format will be inferred for each element individually. You might want to use `dayfirst` alongside this.\"\r\nFAILED pandas/tests/tools/test_to_datetime.py::TestDaysInMonth::test_day_not_in_month_raise_value[True-2015-04-31-%Y-%m-%d-^day is out of range for month. You might want to try:\\\\n - passing `format` if your strings have a consistent format;\\\\n - passing `format=\\\\'ISO8601\\\\'` if your strings are all ISO8601 but not necessarily in exactly the same format;\\\\n - passing `format=\\\\'mixed\\\\'`, and the format will be inferred for each element individually. You might want to use `dayfirst` alongside this.$] - AssertionError: Regex pattern did not match.\r\n Regex: \"^day is out of range for month. You might want to try:\\\\n - passing `format` if your strings have a consistent format;\\\\n - passing `format=\\\\'ISO8601\\\\'` if your strings are all ISO8601 but not necessarily in exactly the same format;\\\\n - passing `format=\\\\'mixed\\\\'`, and the format will be inferred for each element individually. You might want to use `dayfirst` alongside this.$\"\r\n Input: \"day 31 must be in range 1..30 for month 4 in year 2015. You might want to try:\\n - passing `format` if your strings have a consistent format;\\n - passing `format='ISO8601'` if your strings are all ISO8601 but not necessarily in exactly the same format;\\n - passing `format='mixed'`, and the format will be inferred for each element individually. You might want to use `dayfirst` alongside this.\"\r\nFAILED pandas/tests/tools/test_to_datetime.py::TestDaysInMonth::test_day_not_in_month_raise_value[True-2015-31-04-%Y-%d-%m-^day is out of range for month. You might want to try:\\\\n - passing `format` if your strings have a consistent format;\\\\n - passing `format=\\\\'ISO8601\\\\'` if your strings are all ISO8601 but not necessarily in exactly the same format;\\\\n - passing `format=\\\\'mixed\\\\'`, and the format will be inferred for each element individually. You might want to use `dayfirst` alongside this.$] - AssertionError: Regex pattern did not match.\r\n Regex: \"^day is out of range for month. You might want to try:\\\\n - passing `format` if your strings have a consistent format;\\\\n - passing `format=\\\\'ISO8601\\\\'` if your strings are all ISO8601 but not necessarily in exactly the same format;\\\\n - passing `format=\\\\'mixed\\\\'`, and the format will be inferred for each element individually. You might want to use `dayfirst` alongside this.$\"\r\n Input: \"day 31 must be in range 1..30 for month 4 in year 2015. You might want to try:\\n - passing `format` if your strings have a consistent format;\\n - passing `format='ISO8601'` if your strings are all ISO8601 but not necessarily in exactly the same format;\\n - passing `format='mixed'`, and the format will be inferred for each element individually. You might want to use `dayfirst` alongside this.\"\r\nFAILED pandas/tests/tools/test_to_datetime.py::TestDaysInMonth::test_day_not_in_month_raise_value[False-2015-02-29-%Y-%m-%d-^day is out of range for month. 
You might want to try:\\\\n - passing `format` if your strings have a consistent format;\\\\n - passing `format=\\\\'ISO8601\\\\'` if your strings are all ISO8601 but not necessarily in exactly the same format;\\\\n - passing `format=\\\\'mixed\\\\'`, and the format will be inferred for each element individually. You might want to use `dayfirst` alongside this.$] - AssertionError: Regex pattern did not match.\r\n Regex: \"^day is out of range for month. You might want to try:\\\\n - passing `format` if your strings have a consistent format;\\\\n - passing `format=\\\\'ISO8601\\\\'` if your strings are all ISO8601 but not necessarily in exactly the same format;\\\\n - passing `format=\\\\'mixed\\\\'`, and the format will be inferred for each element individually. You might want to use `dayfirst` alongside this.$\"\r\n Input: \"day 29 must be in range 1..28 for month 2 in year 2015. You might want to try:\\n - passing `format` if your strings have a consistent format;\\n - passing `format='ISO8601'` if your strings are all ISO8601 but not necessarily in exactly the same format;\\n - passing `format='mixed'`, and the format will be inferred for each element individually. You might want to use `dayfirst` alongside this.\"\r\nFAILED pandas/tests/tools/test_to_datetime.py::TestDaysInMonth::test_day_not_in_month_raise_value[False-2015-29-02-%Y-%d-%m-^day is out of range for month. You might want to try:\\\\n - passing `format` if your strings have a consistent format;\\\\n - passing `format=\\\\'ISO8601\\\\'` if your strings are all ISO8601 but not necessarily in exactly the same format;\\\\n - passing `format=\\\\'mixed\\\\'`, and the format will be inferred for each element individually. You might want to use `dayfirst` alongside this.$] - AssertionError: Regex pattern did not match.\r\n Regex: \"^day is out of range for month. You might want to try:\\\\n - passing `format` if your strings have a consistent format;\\\\n - passing `format=\\\\'ISO8601\\\\'` if your strings are all ISO8601 but not necessarily in exactly the same format;\\\\n - passing `format=\\\\'mixed\\\\'`, and the format will be inferred for each element individually. You might want to use `dayfirst` alongside this.$\"\r\n Input: \"day 29 must be in range 1..28 for month 2 in year 2015. You might want to try:\\n - passing `format` if your strings have a consistent format;\\n - passing `format='ISO8601'` if your strings are all ISO8601 but not necessarily in exactly the same format;\\n - passing `format='mixed'`, and the format will be inferred for each element individually. You might want to use `dayfirst` alongside this.\"\r\nFAILED pandas/tests/tools/test_to_datetime.py::TestDaysInMonth::test_day_not_in_month_raise_value[False-2015-04-31-%Y-%m-%d-^day is out of range for month. You might want to try:\\\\n - passing `format` if your strings have a consistent format;\\\\n - passing `format=\\\\'ISO8601\\\\'` if your strings are all ISO8601 but not necessarily in exactly the same format;\\\\n - passing `format=\\\\'mixed\\\\'`, and the format will be inferred for each element individually. You might want to use `dayfirst` alongside this.$] - AssertionError: Regex pattern did not match.\r\n Regex: \"^day is out of range for month. 
You might want to try:\\\\n - passing `format` if your strings have a consistent format;\\\\n - passing `format=\\\\'ISO8601\\\\'` if your strings are all ISO8601 but not necessarily in exactly the same format;\\\\n - passing `format=\\\\'mixed\\\\'`, and the format will be inferred for each element individually. You might want to use `dayfirst` alongside this.$\"\r\n Input: \"day 31 must be in range 1..30 for month 4 in year 2015. You might want to try:\\n - passing `format` if your strings have a consistent format;\\n - passing `format='ISO8601'` if your strings are all ISO8601 but not necessarily in exactly the same format;\\n - passing `format='mixed'`, and the format will be inferred for each element individually. You might want to use `dayfirst` alongside this.\"\r\nFAILED pandas/tests/tools/test_to_datetime.py::TestDaysInMonth::test_day_not_in_month_raise_value[False-2015-31-04-%Y-%d-%m-^day is out of range for month. You might want to try:\\\\n - passing `format` if your strings have a consistent format;\\\\n - passing `format=\\\\'ISO8601\\\\'` if your strings are all ISO8601 but not necessarily in exactly the same format;\\\\n - passing `format=\\\\'mixed\\\\'`, and the format will be inferred for each element individually. You might want to use `dayfirst` alongside this.$] - AssertionError: Regex pattern did not match.\r\n Regex: \"^day is out of range for month. You might want to try:\\\\n - passing `format` if your strings have a consistent format;\\\\n - passing `format=\\\\'ISO8601\\\\'` if your strings are all ISO8601 but not necessarily in exactly the same format;\\\\n - passing `format=\\\\'mixed\\\\'`, and the format will be inferred for each element individually. You might want to use `dayfirst` alongside this.$\"\r\n Input: \"day 31 must be in range 1..30 for month 4 in year 2015. You might want to try:\\n - passing `format` if your strings have a consistent format;\\n - passing `format='ISO8601'` if your strings are all ISO8601 but not necessarily in exactly the same format;\\n - passing `format='mixed'`, and the format will be inferred for each element individually. You might want to use `dayfirst` alongside this.\"\r\nFAILED pandas/tests/frame/indexing/test_setitem.py::TestDataFrameSetitemCopyViewSemantics::test_setitem_column_update_inplace - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/frame/indexing/test_xs.py::TestXS::test_xs_view - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/frame/methods/test_fillna.py::TestFillNA::test_fillna_on_column_view - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/frame/methods/test_interpolate.py::TestDataFrameInterpolate::test_interp_inplace - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/util/test_show_versions.py::test_show_versions - DeprecationWarning: codecs.open() is deprecated. Use open() instead.\r\nFAILED pandas/tests/util/test_show_versions.py::test_json_output_match - DeprecationWarning: codecs.open() is deprecated. Use open() instead.\r\nFAILED pandas/tests/io/parser/test_quoting.py::test_bad_quote_char[python-kwargs0-\"quotechar\" must be a(n)? 1-character string] - AssertionError: Regex pattern did not match.\r\n Regex: '\"quotechar\" must be a(n)? 
1-character string'\r\n Input: '\"quotechar\" must be a unicode character or None, not a string of length 3'\r\nFAILED pandas/tests/io/parser/test_quoting.py::test_bad_quote_char[python-kwargs2-\"quotechar\" must be string( or None)?, not int] - AssertionError: Regex pattern did not match.\r\n Regex: '\"quotechar\" must be string( or None)?, not int'\r\n Input: '\"quotechar\" must be a unicode character or None, not int'\r\nFAILED pandas/tests/io/parser/test_quoting.py::test_null_quote_char[python--0] - AssertionError: Regex pattern did not match.\r\n Regex: '\"quotechar\" must be a 1-character string'\r\n Input: '\"quotechar\" must be a unicode character or None, not a string of length 0'\r\nFAILED pandas/tests/io/test_common.py::test_codecs_encoding[csv-None] - DeprecationWarning: codecs.open() is deprecated. Use open() instead.\r\nFAILED pandas/tests/io/test_common.py::test_codecs_encoding[csv-utf-8] - DeprecationWarning: codecs.open() is deprecated. Use open() instead.\r\nFAILED pandas/tests/io/test_common.py::test_codecs_encoding[json-None] - DeprecationWarning: codecs.open() is deprecated. Use open() instead.\r\nFAILED pandas/tests/io/test_common.py::test_codecs_encoding[json-utf-8] - DeprecationWarning: codecs.open() is deprecated. Use open() instead.\r\nFAILED pandas/tests/frame/test_block_internals.py::TestDataFrameBlockInternals::test_stale_cached_series_bug_473 - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/frame/test_block_internals.py::test_update_inplace_sets_valid_block_values - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/generic/test_generic.py::TestGeneric::test_nonzero[DataFrame] - ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().\r\nFAILED pandas/tests/generic/test_generic.py::TestGeneric::test_nonzero[Series] - ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().\r\n= 72 failed, 167066 passed, 24154 skipped, 781 xfailed, 83 xpassed, 31 warnings in 453.44s (0:07:33) =\r\n```\r\n\r\n</details>\r\n\r\nBesides the tests looking for warnings but not seeing any, I see some failures due to new deprecations in Python, some that look like changes in the regex and datetime modules maybe and a few other failures that I can't classify just looking at the failure report.\r\n\r\n@jorisvandenbossche did you ever have time to look closer at generating the chained assignment warning on 3.14 since it was reported in April? Unfortunately we're probably past the time when we can get C API changes merged into CPython to support this use-case, so it may not be easily feasible to detect what you're looking for just based on refcounts in 3.14 and newer.",
"> @jorisvandenbossche did you ever have time to look closer at generating the chained assignment warning on 3.14 since it was reported in April? Unfortunately we're probably past the time when we can get C API changes merged into CPython to support this use-case, so it may not be easily feasible to detect what you're looking for just based on refcounts in 3.14 and newer.\r\n\r\nI didn't get to it yet, but now installed python 3.14 to try myself and took a first look. I added some more context to the issue https://github.com/pandas-dev/pandas/issues/61368. Based on that I am also afraid we won't be able to \"fix\" this (but let's further discuss that on the issue). \r\nBut in any case, to start testing Python 3.14, certainly fine to disable those warnings for now (and then the tests that currently check for the presence of a warning can just be skipped, I think)",
"OK, I think I've gotten everything except for the two test failures in `pandas/tests/generic/test_generic.py`, which I don't understand. It looks like `pytest.raises` is broken somehow or it's broken as a side effect of something else? Because the exception should be getting caught as far as I can see but it's not.\r\n\r\n<details>\r\n\r\n```\r\ngoldbaum at Nathans-MBP in ~/Documents/pandas on 3.14-ci\r\n± pytest pandas/tests/generic/test_generic.py\r\n============================= test session starts ==============================\r\nplatform darwin -- Python 3.14.0rc1, pytest-8.4.1, pluggy-1.6.0\r\nrootdir: /Users/goldbaum/Documents/pandas\r\nconfigfile: pyproject.toml\r\nplugins: xdist-3.8.0, hypothesis-6.136.4, cov-6.2.1, run-parallel-0.5.1.dev0\r\ncollected 79 items\r\nCollected 0 items to run in parallel\r\n\r\npandas/tests/generic/test_generic.py .........FF....................................................................\r\n\r\n=================================== FAILURES ===================================\r\n_____________________ TestGeneric.test_nonzero[DataFrame] ______________________\r\n\r\nself = <pandas.tests.generic.test_generic.TestGeneric object at 0x10aa25a90>\r\nframe_or_series = <class 'pandas.DataFrame'>\r\n\r\n def test_nonzero(self, frame_or_series):\r\n # GH 4633\r\n # look at the boolean/nonzero behavior for objects\r\n obj = construct(frame_or_series, shape=4)\r\n msg = f\"The truth value of a {frame_or_series.__name__} is ambiguous\"\r\n with pytest.raises(ValueError, match=msg):\r\n bool(obj == 0)\r\n with pytest.raises(ValueError, match=msg):\r\n bool(obj == 1)\r\n with pytest.raises(ValueError, match=msg):\r\n bool(obj)\r\n\r\n obj = construct(frame_or_series, shape=4, value=1)\r\n with pytest.raises(ValueError, match=msg):\r\n bool(obj == 0)\r\n with pytest.raises(ValueError, match=msg):\r\n bool(obj == 1)\r\n with pytest.raises(ValueError, match=msg):\r\n bool(obj)\r\n\r\n obj = construct(frame_or_series, shape=4, value=np.nan)\r\n with pytest.raises(ValueError, match=msg):\r\n bool(obj == 0)\r\n with pytest.raises(ValueError, match=msg):\r\n bool(obj == 1)\r\n with pytest.raises(ValueError, match=msg):\r\n bool(obj)\r\n\r\n # empty\r\n obj = construct(frame_or_series, shape=0)\r\n with pytest.raises(ValueError, match=msg):\r\n bool(obj)\r\n\r\n # invalid behaviors\r\n\r\n obj1 = construct(frame_or_series, shape=4, value=1)\r\n obj2 = construct(frame_or_series, shape=4, value=1)\r\n\r\n with pytest.raises(ValueError, match=msg):\r\n if obj1:\r\n pass\r\n\r\n with pytest.raises(ValueError, match=msg):\r\n> obj1 and obj2\r\n\r\npandas/tests/generic/test_generic.py:152:\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\n\r\nself = 0 1 2 3\r\n0 1.0 1.0 1.0 1.0\r\n1 1.0 1.0 1.0 1.0\r\n2 1.0 1.0 1.0 1.0\r\n3 1.0 1.0 1.0 1.0\r\n\r\n @final\r\n def __bool__(self) -> NoReturn:\r\n> raise ValueError(\r\n f\"The truth value of a {type(self).__name__} is ambiguous. \"\r\n \"Use a.empty, a.bool(), a.item(), a.any() or a.all().\"\r\n )\r\nE ValueError: The truth value of a DataFrame is ambiguous. 
Use a.empty, a.bool(), a.item(), a.any() or a.all().\r\n\r\npandas/core/generic.py:1503: ValueError\r\n_______________________ TestGeneric.test_nonzero[Series] _______________________\r\n\r\nself = <pandas.tests.generic.test_generic.TestGeneric object at 0x10aa25b80>\r\nframe_or_series = <class 'pandas.Series'>\r\n\r\n def test_nonzero(self, frame_or_series):\r\n # GH 4633\r\n # look at the boolean/nonzero behavior for objects\r\n obj = construct(frame_or_series, shape=4)\r\n msg = f\"The truth value of a {frame_or_series.__name__} is ambiguous\"\r\n with pytest.raises(ValueError, match=msg):\r\n bool(obj == 0)\r\n with pytest.raises(ValueError, match=msg):\r\n bool(obj == 1)\r\n with pytest.raises(ValueError, match=msg):\r\n bool(obj)\r\n\r\n obj = construct(frame_or_series, shape=4, value=1)\r\n with pytest.raises(ValueError, match=msg):\r\n bool(obj == 0)\r\n with pytest.raises(ValueError, match=msg):\r\n bool(obj == 1)\r\n with pytest.raises(ValueError, match=msg):\r\n bool(obj)\r\n\r\n obj = construct(frame_or_series, shape=4, value=np.nan)\r\n with pytest.raises(ValueError, match=msg):\r\n bool(obj == 0)\r\n with pytest.raises(ValueError, match=msg):\r\n bool(obj == 1)\r\n with pytest.raises(ValueError, match=msg):\r\n bool(obj)\r\n\r\n # empty\r\n obj = construct(frame_or_series, shape=0)\r\n with pytest.raises(ValueError, match=msg):\r\n bool(obj)\r\n\r\n # invalid behaviors\r\n\r\n obj1 = construct(frame_or_series, shape=4, value=1)\r\n obj2 = construct(frame_or_series, shape=4, value=1)\r\n\r\n with pytest.raises(ValueError, match=msg):\r\n if obj1:\r\n pass\r\n\r\n with pytest.raises(ValueError, match=msg):\r\n> obj1 and obj2\r\n\r\npandas/tests/generic/test_generic.py:152:\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\n\r\nself = 0 1.0\r\n1 1.0\r\n2 1.0\r\n3 1.0\r\ndtype: float64\r\n\r\n @final\r\n def __bool__(self) -> NoReturn:\r\n> raise ValueError(\r\n f\"The truth value of a {type(self).__name__} is ambiguous. \"\r\n \"Use a.empty, a.bool(), a.item(), a.any() or a.all().\"\r\n )\r\nE ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().\r\n\r\npandas/core/generic.py:1503: ValueError\r\n------ generated xml file: /Users/goldbaum/Documents/pandas/test-data.xml ------\r\n============================= slowest 30 durations =============================\r\n0.01s call pandas/tests/generic/test_generic.py::TestGeneric::test_truncate_out_of_bounds[DataFrame]\r\n\r\n(29 durations < 0.005s hidden. Use -vv to show these durations.)\r\n=========================== short test summary info ============================\r\nFAILED pandas/tests/generic/test_generic.py::TestGeneric::test_nonzero[DataFrame] - ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.boo...\r\nFAILED pandas/tests/generic/test_generic.py::TestGeneric::test_nonzero[Series] - ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool()...\r\n========================= 2 failed, 77 passed in 0.19s =====\r\n```\r\n\r\n</details>",
"> Because the exception should be getting caught as far as I can see but it's not.\r\n\r\nOK, here's a weird one. This script runs without error on Python 3.13 but dies with an uncaught `ValueError` on 3.14.0rc1:\r\n\r\n```python\r\nimport pandas as pd\r\nobj1 = pd.DataFrame({'0': [1, 1, 1, 1], '1': [1, 1, 1, 1]})\r\nobj2 = pd.DataFrame({'0': [1, 1, 1, 1], '1': [1, 1, 1, 1]})\r\ntry:\r\n obj1 and obj2\r\nexcept ValueError:\r\n pass\r\n```\r\n\r\n```\r\ngoldbaum at Nathans-MBP in ~/Documents/test\r\n○ python test.py\r\nTraceback (most recent call last):\r\n File \"/Users/goldbaum/Documents/test/test.py\", line 5, in <module>\r\n obj1 and obj2\r\n File \"/Users/goldbaum/.pyenv/versions/3.14.0rc1/lib/python3.14/site-packages/pandas/core/generic.py\", line 1577, in __nonzero__\r\n raise ValueError(\r\n ...<2 lines>...\r\n )\r\nValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().\r\n```\r\n\r\nSeems kinda like a Python bug to me?",
"@ngoldbaum I've been tracking this PR and happened to see your comments tonight and thought \"no way, that can't be\", but yeah I don't understand how that's possible without it being a python. Even if pandas is doing something wrong somehow it shouldn't be magically getting around a try/except.",
"> Seems kinda like a Python bug to me?\r\n\r\nI think this is https://github.com/python/cpython/issues/137288, which I think should be fixed in 3.14.0rc2. It's a little tricky to ignore these test failures because I can't actually catch these particular exceptions... I guess I can just skip them for `sys.version_info == (3, 14, 0, 'candidate', 1)` and then we can reassess when rc2 comes out?",
"@ngoldbaum thanks for finding that upstream issue, good to see it is already fixed. FWIW, numpy has the same issue (`obj1 and obj2` where those objects are numpy arrays also bypasses the `except ValueError`)",
"Yup! Not really surprising to me that the Pandas test suite caught the upstream bug but the NumPy tests missed it, Pandas has much more comprehensive tests...\n\nI think I might split off the changes for the new error messages and `codecs.open` into their own PR so they can be merged separately. If you decide we ultimately need to disable the warning and workarounds in C aren't possible, we can merge this and then work with @mpage to get a fix in for 3.14.1. But hopefully you figure out how to get the warning working again!",
"Actually on second thought I don't think it makes sense to PR the warnings changes without any 3.14 testing, so I'll leave that here. @jorisvandenbossche please feel free to cherry-pick fc51e5f6fa5a8573db4c7e00750f4d9499c029a7 if you end up coming up with a better approach. I'll go ahead and re-enable all the CI to make sure I didn't break anything on older Python versions.",
"Closing in favor of Joris' PRs. Please feel free to cherry-pick [180081b](https://github.com/pandas-dev/pandas/commit/180081b04fa9c18ebec787b1c40b321f93a0dce2)"
] |
3,263,504,724
| 61,949
|
BUG: border attribute used in html.py was deprecated in HTML5
|
open
| 2025-07-25T14:55:19
| 2025-07-26T20:50:00
| null |
https://github.com/pandas-dev/pandas/issues/61949
| true
| null | null |
JonahDelman
| 2
|
### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
if self.border is None:
border_attr = ""
else:
border_attr = f' border="{self.border}"'
```
### Issue Description
It still renders properly, but the HTML `border` attribute on tables is deprecated in HTML5 in favor of CSS styling.
### Expected Behavior
Inline CSS should be used instead.
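For concreteness, a hedged sketch of what the suggested change could look like (illustrative only, not an actual patch; the `1px solid` mapping of the old attribute value is an assumption):
```python
def _border_attr(border):
    # Hypothetical illustration: emit inline CSS instead of the deprecated
    # HTML border attribute; the "Npx solid" mapping is an assumption.
    if border is None:
        return ""
    return f' style="border: {border}px solid"'
```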
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : c888af6d0bb674932007623c0867e1fbd4bdc2c6
python : 3.12.11
python-bits : 64
OS : Darwin
OS-release : 24.5.0
Version : Darwin Kernel Version 24.5.0: Tue Apr 22 19:48:46 PDT 2025; root:xnu-11417.121.6~2/RELEASE_ARM64_T8103
machine : arm64
processor : arm
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.3.1
numpy : 2.3.1
pytz : 2025.2
dateutil : 2.9.0.post0
pip : 25.1
Cython : None
sphinx : None
IPython : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.13.4
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : 2025.7.0
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.6
lxml.etree : 6.0.0
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : 3.1.5
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 21.0.0
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.16.0
sqlalchemy : 2.0.41
tables : None
tabulate : None
xarray : None
xlrd : 2.0.2
xlsxwriter : 3.2.5
zstandard : 0.23.0
tzdata : 2025.2
qtpy : None
pyqt5 : None
None
</details>
|
[
"Bug",
"IO HTML"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks for the report! PRs to update are welcome!",
"take"
] |
3,263,423,538
| 61,948
|
BUG: Replacement fails after NA value with PyArrow-backed strings
|
closed
| 2025-07-25T14:28:33
| 2025-07-25T18:17:20
| 2025-07-25T18:17:20
|
https://github.com/pandas-dev/pandas/issues/61948
| true
| null | null |
rhshadrach
| 1
|
This does not occur on the main branch, only 2.3.x. I plan to run a git-bisect later today.
```python
import numpy as np
import pandas as pd

pd.set_option("infer_string", True)
ser = pd.Series(["a", np.nan, "a", "a"])
print(ser.replace({"a": "b"}))
# 0 b
# 1 NaN
# 2 a
# 3 b
# dtype: str
```
|
[
"Bug",
"replace",
"Arrow"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"This was https://github.com/pandas-dev/pandas/pull/57865. Might want to backport this, but will discuss in that PR. Closing."
] |
3,263,291,757
| 61,947
|
CI: enable doctest errors again + fixup categorical examples
|
closed
| 2025-07-25T13:46:35
| 2025-07-26T18:43:32
| 2025-07-26T17:14:21
|
https://github.com/pandas-dev/pandas/pull/61947
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61947
|
https://github.com/pandas-dev/pandas/pull/61947
|
jorisvandenbossche
| 1
|
Updating the categorical docstring examples after https://github.com/pandas-dev/pandas/pull/61891
This now closes https://github.com/pandas-dev/pandas/issues/61886 and enables the doctests again.
|
[
"Docs",
"CI",
"Strings"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks @jorisvandenbossche "
] |
3,262,892,048
| 61,946
|
BUG: Fix Series.str.contains with compiled regex on Arrow string dtype
|
closed
| 2025-07-25T11:21:33
| 2025-08-15T08:22:49
| 2025-08-14T08:27:10
|
https://github.com/pandas-dev/pandas/pull/61946
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61946
|
https://github.com/pandas-dev/pandas/pull/61946
|
Aniketsy
| 13
|
closes #61942
This PR fixes an issue in `Series.str.contains()` where passing a compiled regex object fails when the underlying string data is backed by PyArrow.
Please provide feedback if my approach is not correct; I would love to improve and contribute to this.
|
[
"Bug",
"Strings",
"Arrow"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Hi @mroeschke \r\nI've worked on the issue\r\nBUG: Fix Series.str.contains with compiled regex on Arrow string dtype ([#61942])\r\nand have opened a pull request for it.\r\n\r\nI'd appreciate it if you could take a look and share your feedback.\r\nPlease let me know if anything needs to be improved or clarified.\r\n\r\nThanks!",
"Thankyou for the feedback!\r\nI will update that.",
"Additionally, if this is something that is not implemented by pyarrow, we should not raise a NotImplementedError, but fall back on the python object implementation (you can see a similar pattern in some other str methods, like `ArrowStringArray._str_replace`)",
"@jorisvandenbossche Thank you for the feedback! I will update the PR accordingly.\r\n\r\nWould you mind letting me know the reason behind the one failing check (pre-commit.ci)?\r\nThanks again!",
"> Would you mind letting me know the reason behind the one failing check (pre-commit.ci)?\r\n\r\nruff is failing, which is used for auto formatting. I would recommend to install the pre-commit locally to avoid having this fail on CI: https://pandas.pydata.org/docs/dev/development/contributing_codebase.html#pre-commit",
"hi @jorisvandenbossche\r\nPlease review this PR, and if area needs changes please suggest.\r\nAlso I want to know if i would need to write unit test for this .\r\n\r\nThankyou!",
"Can you try to run the test you added locally? Then you can make sure to get it working correctly. Right now it is still failing according to CI",
"Sure, I will try to run tests locally and update this PR .\r\nThankyou !",
"Thanks @Aniketsy",
"Owee, I'm MrMeeseeks, Look at me.\n\nThere seem to be a conflict, please backport manually. Here are approximate instructions:\n\n1. Checkout backport branch and update it.\n\n```\ngit checkout 2.3.x\ngit pull\n```\n\n2. Cherry pick the first parent branch of the this PR on top of the older branch:\n```\ngit cherry-pick -x -m1 1d2233185083423b8ecb27986f11175b2d6e8fa6\n```\n\n3. You will likely have some merge/cherry-pick conflict here, fix them and commit:\n\n```\ngit commit -am 'Backport PR #61946: BUG: Fix Series.str.contains with compiled regex on Arrow string dtype'\n```\n\n4. Push to a named branch:\n\n```\ngit push YOURFORK 2.3.x:auto-backport-of-pr-61946-on-2.3.x\n```\n\n5. Create a PR against branch 2.3.x, I would have named this PR:\n\n> \"Backport PR #61946 on branch 2.3.x (BUG: Fix Series.str.contains with compiled regex on Arrow string dtype)\"\n\nAnd apply the correct labels and milestones.\n\nCongratulations — you did some good work! Hopefully your backport PR will be tested by the continuous integration and merged soon!\n\nRemember to remove the `Still Needs Manual Backport` label once the PR gets merged.\n\nIf these instructions are inaccurate, feel free to [suggest an improvement](https://github.com/MeeseeksBox/MeeseeksDev).\n ",
"@jorisvandenbossche \r\nBig thanks for patiently guiding me at every step and helping me get this right. I learned a lot from this, and I’m glad the PR is now merged!\r\nThankyou .",
"@Aniketsy you're welcome!",
"Manual backport -> https://github.com/pandas-dev/pandas/pull/62116"
] |
3,262,764,488
| 61,945
|
BUG: Fix Series.str.contains with compiled regex on Arrow strings (#61942)
|
closed
| 2025-07-25T10:32:26
| 2025-07-25T10:58:33
| 2025-07-25T10:35:39
|
https://github.com/pandas-dev/pandas/pull/61945
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61945
|
https://github.com/pandas-dev/pandas/pull/61945
|
Aniketsy
| 0
|
#61942
This PR improves the handling of this case:
If the compiled regex has no flags, we extract the pattern string and proceed.
If the regex includes flags, a clear NotImplementedError is raised with guidance.
This avoids a confusing low-level PyArrow TypeError and ensures consistent behavior with non-Arrow strings.
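A minimal sketch of the handling described above (a hypothetical helper, not the merged code; treating the default `re.UNICODE` flag as "no flags" is an assumption):
```python
import re

def _unwrap_compiled_regex(pat):
    # Hypothetical illustration of the approach described in this PR.
    if isinstance(pat, re.Pattern):
        # re.compile() sets re.UNICODE by default for str patterns,
        # so only flags beyond that count as user-supplied.
        if pat.flags & ~re.UNICODE:
            raise NotImplementedError(
                "compiled regex with flags is not supported for Arrow-backed strings"
            )
        return pat.pattern  # plain pattern string, acceptable to pyarrow
    return pat
```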
Please provide feedback if my approach is not correct; I would love to improve and fix this.
|
[] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[] |
3,261,825,366
| 61,944
|
DOC: Standardize noncompliant docstrings in pandas/io/html.py (flake8-docstrings)
|
open
| 2025-07-25T02:59:21
| 2025-07-29T04:57:25
| null |
https://github.com/pandas-dev/pandas/issues/61944
| true
| null | null |
gumus-g
| 7
|
### Pandas version checks
- [x] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
https://github.com/pandas-dev/pandas/blob/main/pandas/io/html.py
### Documentation problem
Several functions in `pandas/io/html.py` have docstrings that violate [PEP 257](https://peps.python.org/pep-0257/) and pandas documentation guidelines. Flagged violations include:
- `D400`: Docstring summary should end with a period
- `D205`: Docstring summary should be followed by a blank line
- `D401`: First line of docstring should be in imperative mood
These inconsistencies reduce clarity and hinder automated validation.
### Suggested fix for documentation
Standardize docstring formatting based on `flake8-docstrings` and `pydocstyle` feedback to meet pandas’ documentation standards. This includes:
- Adding missing punctuation and spacing
- Rewriting summaries for clarity and imperative voice
- Ensuring consistent style across the module
This fix is scoped to documentation and does not impact functionality.
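For concreteness, a hypothetical example (not taken from `html.py`) of a docstring that satisfies all three codes:
```python
def _example_helper(value):
    """Return the normalized form of ``value``.

    The one-line summary above is written in the imperative mood (D401),
    ends with a period (D400), and is separated from this extended
    description by a blank line (D205).
    """
    return value
```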
Suggested labels: doc, refactor, good first issue
|
[
"Docs",
"IO HTML"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks for raising this!\n\nCould you provide a list of the specific functions in `pandas/io/html.py` that are violating these rules? That'll help us confirm the issue and scope the fix appropriately.",
"Thanks for the quick response! \n\nI ran flake8 --select=D on pandas/io/html.py and mapped the violations to their corresponding functions. Here's the list of functions currently not compliant with PEP 257 and pandas docstring guidelines:\n\n- _remove_whitespace (line 70): D205, D400\n- _read (line 118): D400 (multiple occurrences at lines 389 and 564)\n- _build_xpath_expr (line 680): D205, D400\n- read_html (line 1028): D205, D400, D401\n\nThese cover missing periods, missing blank lines after summary, and summaries not written in imperative mood. Let me know if you'd like me to open a PR to standardize these — happy to help!\n",
"is this issue still open? I'd like to contribute.",
"Hey @gumus-g , thanks for the note! I checked `_remove_whitespace` and it doesn’t seem to be violating D205 or D400 (haven't checked the others). The summary ends with a period and there’s a blank line after it. Maybe your local version is outdated?",
"Thanks @arthurlw! Confirmed! I ran python -m pydocstyle pandas/io/html.py using version 6.3.0 and got consistent results across multiple functions:\n- _build_xpath_expr: D205, D400\n- _build_doc: D205, D400, D401\n- _equals_tag and _handle_hidden_tables: D400\n_remove_whitespace wasn’t flagged in my environment, so that may be version- or config-dependent.\nPlanning to scope docstring fixes to the functions above based on these validated violations.\nLet me know if you'd like me to include others for consistency! \n",
"I think `_build_xpath_expr` is already formatted correctly, but I see that the others are violating guidelines. (Also noticed that `_build_doc` doesn't have a summary at all!)\n\nConfirmed on main; PRs are welcome.",
"Thanks for confirming! I'm currently working on a PR to update formatting for `_build_xpath_expr`, add a summary for `_build_doc`, and clean up guideline issues across the other helpers. I’ll open it shortly—appreciate your openness to contributions!\n"
] |
3,261,706,347
| 61,943
|
BUG: series.groupby(level=0) and series.groupby(pd.Grouper(level=0)) produce different orders
|
open
| 2025-07-25T01:47:26
| 2025-07-30T21:30:04
| null |
https://github.com/pandas-dev/pandas/issues/61943
| true
| null | null |
sfc-gh-mvashishtha
| 4
|
### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
series = pd.Series([5, 7], index=['b', 'a'])
assert list(series.groupby(level=0).groups.keys()) == list(series.groupby(pd.Grouper(level=0)).groups.keys())
```
### Issue Description
series.groupby(level=0) and series.groupby(pd.Grouper(level=0)) produce different orders
### Expected Behavior
Both calls should produce groups in the same order. (As noted in the comments, `series.groupby(level=0)` defaults to `sort=True` while `pd.Grouper(level=0)` defaults to `sort=False`, which explains the discrepancy.)
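A sketch of an explicit-`sort` workaround, assuming the default mismatch discussed in the comments below:
```python
import pandas as pd

series = pd.Series([5, 7], index=["b", "a"])
# Passing sort=True explicitly makes both spellings agree.
g1 = series.groupby(level=0, sort=True).groups
g2 = series.groupby(pd.Grouper(level=0, sort=True)).groups
assert list(g1.keys()) == list(g2.keys()) == ["a", "b"]
```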
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : c888af6d0bb674932007623c0867e1fbd4bdc2c6
python : 3.13.5
python-bits : 64
OS : Darwin
OS-release : 24.5.0
Version : Darwin Kernel Version 24.5.0: Tue Apr 22 19:54:25 PDT 2025; root:xnu-11417.121.6~2/RELEASE_ARM64_T6020
machine : arm64
processor : arm
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.3.1
numpy : 2.3.2
pytz : 2025.2
dateutil : 2.9.0.post0
pip : 25.1
Cython : None
sphinx : None
IPython : 9.4.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : None
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : None
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
|
[
"Bug",
"Groupby",
"Needs Discussion",
"API - Consistency"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"The issue is caused by inconsistent default values for the sort parameter:\n\n - pd.Grouper.__init__ has sort: bool = False\n - Series.groupby has sort: bool = True\n\n This explains why:\n - series.groupby(level=0) returns keys in sorted order: ['a', 'b'] (uses sort=True default)\n - series.groupby(pd.Grouper(level=0)) returns keys in original index order: ['b', 'a'] (uses sort=False default)",
"Thanks for the report. I do think we should align these to default to `sort=True`. @jbrockmendel - do you agree?",
"agreed i would expect these to match, no real preference on what the default should be.",
"Main reason I would change Grouper is that I'd guess it sees a lot less use."
] |
3,261,452,864
| 61,942
|
BUG: Using `Series.str.contains()` with a compiled regex and arrow strings fails
|
closed
| 2025-07-24T22:44:11
| 2025-08-14T08:27:11
| 2025-08-14T08:27:11
|
https://github.com/pandas-dev/pandas/issues/61942
| true
| null | null |
Dr-Irv
| 0
|
### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
>>> import pandas as pd
>>> import re
>>> DATA = ["applep", "bananap", "Cherryp", "DATEp", "eGGpLANTp", "123p", "23.45p"]
>>> s = pd.Series(DATA)
>>> s.str.contains(re.compile(r"a"), regex=True)
0 True
1 True
2 False
3 False
4 False
5 False
6 False
dtype: bool
>>> pd.options.future.infer_string=True
>>> s = pd.Series(DATA)
>>> s.dtype
str
>>> s.str.contains(re.compile(r"a"), regex=True)
```
### Issue Description
The last line fails with:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Condadirs\envs\pandasstubs\lib\site-packages\pandas\core\strings\accessor.py", line 140, in wrapper
return func(self, *args, **kwargs)
File "C:\Condadirs\envs\pandasstubs\lib\site-packages\pandas\core\strings\accessor.py", line 1346, in contains
result = self._data.array._str_contains(pat, case, flags, na, regex)
File "C:\Condadirs\envs\pandasstubs\lib\site-packages\pandas\core\arrays\string_arrow.py", line 359, in _str_contains
return ArrowStringArrayMixin._str_contains(self, pat, case, flags, na, regex)
File "C:\Condadirs\envs\pandasstubs\lib\site-packages\pandas\core\arrays\_arrow_string_mixins.py", line 299, in _str_contains
result = pa_contains(self._pa_array, pat, ignore_case=not case)
File "C:\Condadirs\envs\pandasstubs\lib\site-packages\pyarrow\compute.py", line 265, in wrapper
options = _handle_options(func_name, options_class, options,
File "C:\Condadirs\envs\pandasstubs\lib\site-packages\pyarrow\compute.py", line 228, in _handle_options
return options_class(*args, **kwargs)
File "pyarrow\\_compute.pyx", line 1121, in pyarrow._compute.MatchSubstringOptions.__init__
File "pyarrow\\_compute.pyx", line 1104, in pyarrow._compute._MatchSubstringOptions._set_options
File "<stringsource>", line 15, in string.from_py.__pyx_convert_string_from_py_6libcpp_6string_std__in_string
TypeError: expected bytes, re.Pattern found
```
### Expected Behavior
No failure.
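A possible interim workaround, pending a fix, is to extract the pattern string from the compiled regex instead of passing the `re.Pattern` object itself (a minimal sketch, assuming the pattern's default flags are acceptable):
```python
import re

import pandas as pd

pd.options.future.infer_string = True
s = pd.Series(["applep", "bananap", "Cherryp", "DATEp", "eGGpLANTp", "123p", "23.45p"])

pat = re.compile(r"a")
# Hand the raw pattern string to str.contains so pyarrow never sees
# the compiled re.Pattern object.
result = s.str.contains(pat.pattern, regex=True)
print(result)
```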
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : c888af6d0bb674932007623c0867e1fbd4bdc2c6
python : 3.10.14
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.26100
machine : AMD64
processor : Intel64 Family 6 Model 183 Stepping 1, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : English_United States.1252
pandas : 2.3.1
numpy : 2.2.6
pytz : 2025.2
dateutil : 2.9.0.post0
pip : 24.2
Cython : None
sphinx : None
IPython : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.13.4
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : 1.1
hypothesis : None
gcsfs : None
jinja2 : 3.1.6
lxml.etree : 6.0.0
matplotlib : 3.10.3
numba : None
numexpr : 2.11.0
odfpy : None
openpyxl : 3.1.5
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 21.0.0
pyreadstat : 1.3.0
pytest : 8.4.1
python-calamine : None
pyxlsb : 1.0.10
s3fs : None
scipy : 1.15.3
sqlalchemy : 2.0.41
tables : 3.10.1
tabulate : 0.9.0
xarray : 2025.6.1
xlrd : 2.0.2
xlsxwriter : 3.2.5
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
|
[
"Bug",
"Strings",
"Arrow"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[] |
3,261,447,335
| 61,941
|
BUG: Fix TypeError in assert_index_equal when comparing CategoricalIndex with check_categorical=True and exact=False
|
open
| 2025-07-24T22:40:10
| 2025-08-22T06:25:38
| null |
https://github.com/pandas-dev/pandas/pull/61941
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61941
|
https://github.com/pandas-dev/pandas/pull/61941
|
Aniketsy
| 6
|
Fixes #61935
- Fixes a bug where `assert_index_equal` raises a `TypeError` instead of `AssertionError` when comparing two `CategoricalIndex` objects with `check_categorical=True` and `exact=False`.
- Ensures consistency with expected testing behavior by properly raising an `AssertionError` in these cases.
Please let me know if my approach or fix needs any improvements. I’m open to feedback and happy to make changes based on suggestions.
|
[
"Testing"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Hi @mroeschke\r\nI've opened a pull request addressing\r\nBUG: Fix TypeError in assert_index_equal when comparing CategoricalIndex with check_categorical=True and exact=False ([#61941])\r\nThe changes are ready for review.\r\n\r\nI'd really appreciate it if you could take a look and provide feedback .\r\nPlease let me know if anything needs to be improved or clarified.\r\n\r\nThanks!",
"Hi @mroeschke,\r\n\r\nThank you for your review. I’ve updated the PR based on your feedback ,please have a look when convenient.\r\n\r\nAdditionally, I noticed one check failure (pre-commit.ci-pr) and wanted to ask if you could help clarify the reason behind it. Apologies if this isn't the appropriate way to raise this, please do let me know the correct approach if needed.\r\n\r\nThanks again!",
"<img width=\"885\" height=\"477\" alt=\"Checks fail\" src=\"https://github.com/user-attachments/assets/f5ab18a6-533e-4641-a10c-8a533d596ab2\" />\r\n\r\nHi @jorisvandenbossche, I ran pre-commit locally and all hooks passed. However, the GitHub checks are still showing a failure. Could you please advise if I’ve missed something?\r\n",
"Hi @mroeschke\r\nWhen you have a moment, could you please review this PR? I've been working on resolving the check failure, but haven't been able to pinpoint the issue yet. Any insights or suggestions you could provide would be greatly appreciated.\r\n\r\nThank you!",
"Hi @mroeschke \r\nI just wanted to check in on this PR to see if there’s anything further you’d like me to update or improve.\r\nThankyou !",
"Hi @jorisvandenbossche, I’ve added the separate test as suggested. Please let me know if you’d prefer me to merge these into a single parametrized test instead."
] |
3,261,383,708
| 61,940
|
BUG: Using `/` operator with a `Path` and `Series` of string dtype fails
|
open
| 2025-07-24T22:02:26
| 2025-08-21T07:19:47
| null |
https://github.com/pandas-dev/pandas/issues/61940
| true
| null | null |
Dr-Irv
| 8
|
### Reproducible Example
```python
import pandas as pd
from pathlib import Path
pd.options.future.infer_string = True # Only needed with 2.3.1
folder = Path.cwd()
files = pd.Series(["a.png", "b.png"])
folder / files[0] # This works
folder / files # This raises an exception
```
### Issue Description
The `/` operator with `Path` works fine with 2.3.1 with strings being object dtype, but not with arrow strings. The last statement produces this stacktrace:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Condadirs\envs\pandasstubs\lib\site-packages\pandas\core\ops\common.py", line 76, in new_method
return method(self, other)
File "C:\Condadirs\envs\pandasstubs\lib\site-packages\pandas\core\arraylike.py", line 214, in __rtruediv__
return self._arith_method(other, roperator.rtruediv)
File "C:\Condadirs\envs\pandasstubs\lib\site-packages\pandas\core\series.py", line 6146, in _arith_method
return base.IndexOpsMixin._arith_method(self, other, op)
File "C:\Condadirs\envs\pandasstubs\lib\site-packages\pandas\core\base.py", line 1391, in _arith_method
result = ops.arithmetic_op(lvalues, rvalues, op)
File "C:\Condadirs\envs\pandasstubs\lib\site-packages\pandas\core\ops\array_ops.py", line 273, in arithmetic_op
res_values = op(left, right)
File "C:\Condadirs\envs\pandasstubs\lib\site-packages\pandas\core\roperator.py", line 27, in rtruediv
return right / left
File "C:\Condadirs\envs\pandasstubs\lib\site-packages\pandas\core\ops\common.py", line 76, in new_method
return method(self, other)
File "C:\Condadirs\envs\pandasstubs\lib\site-packages\pandas\core\arraylike.py", line 214, in __rtruediv__
return self._arith_method(other, roperator.rtruediv)
File "C:\Condadirs\envs\pandasstubs\lib\site-packages\pandas\core\arrays\arrow\array.py", line 836, in _arith_method
return self._evaluate_op_method(other, op, ARROW_ARITHMETIC_FUNCS)
File "C:\Condadirs\envs\pandasstubs\lib\site-packages\pandas\core\arrays\arrow\array.py", line 768, in _evaluate_op_method
other = self._box_pa(other)
File "C:\Condadirs\envs\pandasstubs\lib\site-packages\pandas\core\arrays\arrow\array.py", line 407, in _box_pa
return cls._box_pa_scalar(value, pa_type)
File "C:\Condadirs\envs\pandasstubs\lib\site-packages\pandas\core\arrays\string_arrow.py", line 154, in _box_pa_scalar
pa_scalar = super()._box_pa_scalar(value, pa_type)
File "C:\Condadirs\envs\pandasstubs\lib\site-packages\pandas\core\arrays\arrow\array.py", line 443, in _box_pa_scalar
pa_scalar = pa.scalar(value, type=pa_type, from_pandas=True)
File "pyarrow\\scalar.pxi", line 1670, in pyarrow.lib.scalar
File "pyarrow\\error.pxi", line 155, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow\\error.pxi", line 92, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Could not convert WindowsPath('c:/Code/pandas-stubs') with type WindowsPath: did not recognize Python value type when inferring an Arrow data type
```
While the error message mentions Windows, a similar error occurs on Linux.
### Expected Behavior
No exception thrown
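As an interim workaround (a sketch, not a proposed fix), mapping element-wise lets `Path.__truediv__` handle each string and reproduces the old object-dtype result of `Path` objects:
```python
import pandas as pd
from pathlib import Path

pd.options.future.infer_string = True  # Only needed with 2.3.1
folder = Path.cwd()
files = pd.Series(["a.png", "b.png"])

# Element-wise map: each string goes through Path.__truediv__, yielding
# an object-dtype Series of Path objects, matching the old behavior.
paths = files.map(lambda name: folder / name)
print(paths)
```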
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : c888af6d0bb674932007623c0867e1fbd4bdc2c6
python : 3.10.14
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.26100
machine : AMD64
processor : Intel64 Family 6 Model 183 Stepping 1, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : English_United States.1252
pandas : 2.3.1
numpy : 2.2.6
pytz : 2025.2
dateutil : 2.9.0.post0
pip : 24.2
Cython : None
sphinx : None
IPython : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.13.4
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : 1.1
hypothesis : None
gcsfs : None
jinja2 : 3.1.6
lxml.etree : 6.0.0
matplotlib : 3.10.3
numba : None
numexpr : 2.11.0
odfpy : None
openpyxl : 3.1.5
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 21.0.0
pyreadstat : 1.3.0
pytest : 8.4.1
python-calamine : None
pyxlsb : 1.0.10
s3fs : None
scipy : 1.15.3
sqlalchemy : 2.0.41
tables : 3.10.1
tabulate : 0.9.0
xarray : 2025.6.1
xlrd : 2.0.2
xlsxwriter : 3.2.5
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
|
[
"Enhancement",
"Strings"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"This is not actually specific to pyarrow, also with the object-backed string dtype this does not work, so this is a general issue of supporting `/` (`__truediv__`) for the string dtype.\n\n```python\n>>> folder = Path.cwd()\n>>> files = pd.Series([\"a.png\", \"b.png\"], dtype=pd.StringDtype(\"python\", na_value=np.nan))\n>>> folder / files\n...\nFile ~/scipy/repos/pandas/pandas/core/arraylike.py:217, in OpsMixin.__rtruediv__(self, other)\n 215 @unpack_zerodim_and_defer(\"__rtruediv__\")\n 216 def __rtruediv__(self, other):\n--> 217 return self._arith_method(other, roperator.rtruediv)\n\nFile ~/scipy/repos/pandas/pandas/core/arrays/string_.py:1057, in StringArray._cmp_method(self, other, op)\n 1054 valid = ~mask\n 1056 if not lib.is_scalar(other):\n-> 1057 if len(other) != len(self):\n 1058 # prevent improper broadcasting when other is 2D\n 1059 raise ValueError(\n 1060 f\"Lengths of operands do not match: {len(self)} != {len(other)}\"\n 1061 )\n 1063 # for array-likes, first filter out NAs before converting to numpy\n\nTypeError: object of type 'PosixPath' has no len()\n```\n\nFor object dtype this works, because in that case we just defer calling the operation on the individual objects, and then `str` will defer to Path to handle it. \nSo I think the question is if we want to support this specific case of `/` for string dtype (I am fine with that, as this seems a useful use case)",
"> So I think the question is if we want to support this specific case of `/` for string dtype (I am fine with that, as this seems a useful use case)\n\nIt was reported with `pandas-stubs` at https://github.com/pandas-dev/pandas-stubs/issues/682 (so people are doing it) and added to the tests there, which is how I uncovered this issue with the new `StringDtype`.",
"Do you expect this to return an array of strings or an object array of Path objects?",
"> Do you expect this to return an array of strings or an object array of Path objects?\n\nWith pandas now (without arrow string types), it returns an object array of Path objects. So I think that shouldn't change.\n",
"Any preferences between \"special case rtruediv with Path\" vs \"try operating regardless which may cause a object-conversion before raising\"?",
"> Any preferences between \"special case rtruediv with Path\" vs \"try operating regardless which may cause a object-conversion before raising\"?\n\nI think the latter is what is implemented now, so let's just make sure it works for arrow strings.\n\n",
"@jbrockmendel do you have a PR nearing ready? (just seeing what can make it in for 2.3.2)",
"No, i have a branch, but it is not on the verge of being pushed."
] |
3,261,317,149
| 61,939
|
DOC: Docstring for BooleanDtype inconsistent with the rest of repo
|
open
| 2025-07-24T21:27:51
| 2025-08-01T04:48:01
| null |
https://github.com/pandas-dev/pandas/issues/61939
| true
| null | null |
saguaro1234
| 6
|
### Pandas version checks
- [x] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
https://github.com/pandas-dev/pandas/blob/main/pandas/core/arrays/boolean.py
### Documentation problem
The docstring for the `BooleanDtype` class has very little information about the functionality of the class.
"""
Extension dtype for boolean data.
.. warning::
BooleanDtype is considered experimental. The implementation and
parts of the API may change without warning.
Attributes
----------
None
Methods
-------
None
See Also
--------
StringDtype : Extension dtype for string data.
Examples
--------
>>> pd.BooleanDtype()
BooleanDtype
"""
The pieces of the docstring that are included are sparse, and there is no Parameters section. To understand anything about what the class is used for, I have to read through the code, which defeats the purpose of having the docstring. The other class in the file has a far more developed docstring, with a multi-sentence summary of the class's uses, a Parameters section, and examples that use sample data to convey a use case.
### Suggested fix for documentation
I suggest that somebody identify the parameters in use and add a section for them, add a more specific Examples section, and extend the summary to describe the class's function in more detail.
I would be happy to start remedying these problems.
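For illustration, here is the kind of usage example the expanded docstring could include (a sketch of possible wording, not final text):
```python
import pandas as pd

# BooleanDtype supports missing values via pd.NA, unlike numpy's bool dtype.
s = pd.Series([True, False, None], dtype=pd.BooleanDtype())
print(s)
# 0     True
# 1    False
# 2     <NA>
# dtype: boolean
```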
|
[
"Docs"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks for the report. PRs to flesh out this documentation are welcome.",
"I'm willing to work on this and submit a PR, if its OK with everyone.",
"take",
"Is it still possible that I might be able to work on this?",
"Go for it! I'm forgetting how to unassign myself at the moment. I won't make a PR if you want to fix it.",
"Ok ill get on it"
] |
3,261,231,258
| 61,938
|
BUG: `metadata` on `np.ndarray` of `np.dtype("O")` dtype lost when roundtripped through `pandas.Series`
|
open
| 2025-07-24T20:57:13
| 2025-07-25T13:28:19
| null |
https://github.com/pandas-dev/pandas/issues/61938
| true
| null | null |
ilan-gold
| 2
|
### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd, numpy as np
arr = np.array(["a"], dtype=np.dtype("O", metadata={"foo":"bar"}))
assert pd.Series(arr).to_numpy().dtype.metadata is not None
```
### Issue Description
The above does not raise an assertion error under `pandas<3` (i.e., `2.3.1`), but it does on the displayed versions. I apologize if I'm not supposed to report this - I see the "I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas." box, but of course it doesn't appear to happen on `2.3.1`. This came up in https://github.com/pydata/xarray/pull/10564
It occurs to me now that 3.0.0 may not be what is on `main`, but that is the reported version, so I'm not sure whether this is a 3.0.0 problem or a `main` problem. Here is how I work around it:
https://github.com/pydata/xarray/pull/10564/files#diff-43c76e9be8425b5b6897dcecab4b240c32580447455c0c8c0b9b7fd84ce8a15dR221-R228
See https://github.com/pydata/xarray/issues/10553 for the origin of this issue
### Expected Behavior
`metadata` is maintained
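For reference, a workaround sketch that opts out of the new `str` inference by pinning the dtype explicitly; per the discussion below, the metadata then survives the roundtrip:
```python
import numpy as np
import pandas as pd

arr = np.array(["a"], dtype=np.dtype("O", metadata={"foo": "bar"}))
# Explicitly requesting object dtype bypasses the str inference,
# so the numpy dtype metadata is preserved.
s = pd.Series(arr, dtype=object)
print(s.to_numpy().dtype.metadata)  # mappingproxy({'foo': 'bar'})
```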
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : faf3bbb1d7831f7db8fc72b36f3e83e7179bb3f9
python : 3.12.3
python-bits : 64
OS : Darwin
OS-release : 24.1.0
Version : Darwin Kernel Version 24.1.0: Thu Oct 10 21:03:15 PDT 2024; root:xnu-11215.41.3~2/RELEASE_ARM64_T6000
machine : arm64
processor : arm
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 3.0.0.dev0+2259.gfaf3bbb1d7
numpy : 2.2.6
dateutil : 2.9.0.post0
pip : None
Cython : None
sphinx : 8.2.3
IPython : 9.4.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.13.4
bottleneck : 1.5.0
fastparquet : None
fsspec : 2025.7.0
html5lib : None
hypothesis : 6.136.2
gcsfs : None
jinja2 : 3.1.6
lxml.etree : 6.0.0
matplotlib : 3.10.3
numba : 0.61.2
numexpr : None
odfpy : None
openpyxl : None
psycopg2 : None
pymysql : None
pyarrow : 21.0.0
pyiceberg : None
pyreadstat : None
pytest : 8.4.1
python-calamine : None
pytz : 2025.2
pyxlsb : None
s3fs : None
scipy : 1.16.0
sqlalchemy : None
tables : None
tabulate : None
xarray : 2025.7.2.dev13+gfb49a3b0.d20250724
xlrd : None
xlsxwriter : None
zstandard : None
qtpy : None
pyqt5 : None
</details>
|
[
"Strings"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"The reason for this change is that with pandas nightly / upcoming 3.0, passing an object array of strings to `pd.Series` will be inferred as the new `str` dtype, and no longer retain the numpy object dtype:\n\n```python\n>>> arr = np.array([\"a\"], dtype=np.dtype(\"O\", metadata={\"foo\":\"bar\"}))\n... pd.Series(arr)\n0 a\ndtype: str # <-- no longer object dtype\n```\n\nIf you do not want this default inference, you can specify the dtype to the Series constructor being object dtype, and in that case it seems to preserve the object dtype metadata:\n\n```python\n>>> arr = np.array([\"a\"], dtype=np.dtype(\"O\", metadata={\"foo\":\"bar\"}))\n... pd.Series(arr, dtype=object).to_numpy().dtype.metadata\nmappingproxy({'foo': 'bar'})\n```",
"I would be ok closing this issue then. I don't really see how we could use the fix in our use-case without somehow checking the underlying type of the `object` array as strings, so I'd be ok closing this issue. If you have a good way to check that would be great. It seems `xarray` relies on the `object` casting to other types as well (at least it seems so because I tried your fix and it broke other conversions in https://github.com/pydata/xarray/pull/10564/files#diff-43c76e9be8425b5b6897dcecab4b240c32580447455c0c8c0b9b7fd84ce8a15dR219)"
] |
3,261,195,419
| 61,937
|
BUG: `Index` accepts lists as values
|
open
| 2025-07-24T20:44:35
| 2025-07-29T01:50:12
| null |
https://github.com/pandas-dev/pandas/issues/61937
| true
| null | null |
Dr-Irv
| 5
|
### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
>>> import pandas as pd
>>> ind=pd.Index([["str", "rts"]])
>>> ind
Index([['str', 'rts']], dtype='object')
```
### Issue Description
It seems that `Index` accepts a list of lists of strings, but a list is not hashable, and the docs say that an `Index` only accepts hashable values.
### Expected Behavior
This should be rejected during construction.
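A minimal sketch of the kind of validation the constructor could perform (illustrative only; the actual pandas implementation uses a khash-based engine that can hash more objects than plain `hash`, per the discussion below):
```python
import pandas as pd

def checked_index(values) -> pd.Index:
    # hash() is a stricter check than isinstance(..., Hashable): it also
    # rejects tuples that contain unhashable elements.
    for v in values:
        try:
            hash(v)
        except TypeError:
            raise TypeError(f"Index values must be hashable, got {v!r}") from None
    return pd.Index(values)

checked_index([["str", "rts"]])  # raises TypeError
```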
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 2cc37625532045f4ac55b27176454bbbc9baf213
python : 3.10.14
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.26100
machine : AMD64
processor : Intel64 Family 6 Model 183 Stepping 1, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : English_United States.1252
pandas : 2.3.0
numpy : 2.2.6
pytz : 2025.2
dateutil : 2.9.0.post0
pip : 24.2
Cython : None
sphinx : None
IPython : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.13.4
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : 1.1
hypothesis : None
gcsfs : None
jinja2 : 3.1.6
lxml.etree : 6.0.0
matplotlib : 3.10.3
numba : None
numexpr : 2.11.0
odfpy : None
openpyxl : 3.1.5
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 20.0.0
pyreadstat : 1.3.0
pytest : 8.4.1
python-calamine : None
pyxlsb : 1.0.10
s3fs : None
scipy : 1.15.3
sqlalchemy : 2.0.41
tables : 3.10.1
tabulate : 0.9.0
xarray : 2025.6.1
xlrd : 2.0.2
xlsxwriter : 3.2.5
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
|
[
"Bug",
"Needs Discussion",
"Index",
"Constructors"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"In the Index constructor ( __new__ in pandas/core/indexes/base.py), we can validate each element by checking if it’s hashable using hash(item). This is where the Index is first constructed from user input.\n\n",
"`Index` has always been special at accepting non-Python-hashable values since I think the khash indexing engine is able to hash more Python objects than just `hash`. Although it's an antipattern to store nested objects like this in an `Index` not sure if we should ban it ",
"Except at https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Index.html#pandas.Index we say \"An Index instance can only contain hashable objects.\"",
"xref #20285",
"This is effectively a duplicate of #60925"
] |
3,261,106,428
| 61,936
|
Issue #14601 fix - pytables.py put() and append() docstring fix
|
closed
| 2025-07-24T20:08:56
| 2025-07-25T16:01:32
| 2025-07-25T16:01:32
|
https://github.com/pandas-dev/pandas/pull/61936
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61936
|
https://github.com/pandas-dev/pandas/pull/61936
|
JoeDediop
| 5
|
**Problem Summary**
The current documentation for `min_itemsize` in HDFStore methods is unclear about:
1. How string length is calculated (bytes vs characters)
2. How encoding affects the required size
3. Proper usage with multi-byte characters
**Proposed Changes**
1. Enhanced `put()` method docstring
```python
def put(
self,
key: str,
value: DataFrame | Series,
format=None,
index: bool = True,
append: bool = False,
complib=None,
complevel: int | None = None,
min_itemsize: int | dict[str, int] | None = None,
nan_rep=None,
data_columns: Literal[True] | list[str] | None = None,
encoding=None,
errors: str = "strict",
track_times: bool = True,
dropna: bool = False,
) -> None:
"""
Store object in HDFStore.
Parameters
----------
key : str
value : {Series, DataFrame}
format : 'fixed(f)|table(t)', default is 'fixed'
Format to use when storing object in HDFStore. Value can be one of:
``'fixed'``
Fixed format. Fast writing/reading. Not-appendable, nor searchable.
``'table'``
Table format. Write as a PyTables Table structure which may perform
worse but allow more flexible operations like searching / selecting
subsets of the data.
index : bool, default True
Write DataFrame index as a column.
append : bool, default False
This will force Table format, append the input data to the existing.
min_itemsize : int, dict of str: int, or None, default None
Minimum size in bytes for string columns. This parameter is only used when
format='table'. Can be:
- int: Apply the same minimum size to all string columns
- dict: Map column names to their minimum sizes
- None: Use default sizing
**Important**: The size refers to the number of bytes after encoding, not
the number of characters. For multi-byte characters (e.g., Chinese, Arabic),
you need to account for the encoding. For example, the character '香' is
1 character but 3 bytes when encoded as UTF-8.
See examples below for proper usage with encoded strings.
data_columns : list of columns or True, default None
List of columns to create as data columns, or True to use all columns.
See `here
<https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html#query-via-data-columns>`__.
encoding : str, default None
Provide an encoding for strings. When using multi-byte characters,
this affects how min_itemsize should be calculated.
track_times : bool, default True
Parameter is propagated to 'create_table' method of 'PyTables'.
If set to False it enables to have the same h5 files (same hashes)
independent on creation time.
dropna : bool, default False, optional
Remove missing values.
Examples
--------
Basic usage with ASCII strings:
>>> df = pd.DataFrame([['hello', 'world']], columns=['A', 'B'])
>>> store = pd.HDFStore("store.h5", 'w') # doctest: +SKIP
>>> store.put('data', df, format='table', min_itemsize={'A': 10, 'B': 10}) # doctest: +SKIP
Usage with multi-byte characters:
>>> df_unicode = pd.DataFrame([['香港', '北京']], columns=['city1', 'city2']) # doctest: +SKIP
>>> # Each Chinese character is 3 bytes in UTF-8, so '香港' needs 6 bytes
>>> store.put('cities', df_unicode, format='table', # doctest: +SKIP
... min_itemsize={'city1': 12, 'city2': 12}, encoding='utf-8') # doctest: +SKIP
Determining the correct size for encoded strings:
>>> text = '香港' # doctest: +SKIP
>>> len(text) # Character length # doctest: +SKIP
2
>>> len(text.encode('utf-8')) # Byte length # doctest: +SKIP
6
>>> # Use the byte length for min_itemsize
"""
```
2. Enhanced `append()` method docstring
```python
def append(
self,
key: str,
value: DataFrame | Series,
format=None,
axes=None,
index: bool | list[str] = True,
append: bool = True,
complib=None,
complevel: int | None = None,
columns=None,
min_itemsize: int | dict[str, int] | None = None,
nan_rep=None,
chunksize: int | None = None,
expectedrows=None,
dropna: bool | None = None,
data_columns: Literal[True] | list[str] | None = None,
encoding=None,
errors: str = "strict",
) -> None:
"""
Append to Table in file.
Node must already exist and be Table format.
Parameters
----------
key : str
value : {Series, DataFrame}
format : 'table' is the default
Format to use when storing object in HDFStore. Value can be one of:
``'table'``
Table format. Write as a PyTables Table structure which may perform
worse but allow more flexible operations like searching / selecting
subsets of the data.
index : bool, default True
Write DataFrame index as a column.
append : bool, default True
Append the input data to the existing.
data_columns : list of columns, or True, default None
List of columns to create as indexed data columns for on-disk
queries, or True to use all columns. By default only the axes
of the object are indexed. See `here
<https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html#query-via-data-columns>`__.
min_itemsize : int, dict of str: int, or None, default None
Minimum size in bytes for string columns. Can be:
- int: Apply the same minimum size to all string columns
- dict: Map column names to their minimum sizes
- None: Use the existing table's column sizes
**Important**: This parameter is only effective when creating a new table.
If the table already exists, the column sizes are fixed and cannot be
changed. The size refers to the number of bytes after encoding, not
the number of characters.
For multi-byte characters, calculate the size using the encoded byte length.
For example: len('香'.encode('utf-8')) returns 3, not len('香') which returns 1.
nan_rep : str to use as str nan representation
chunksize : size to chunk the writing
expectedrows : expected TOTAL row size of this table
encoding : str, default None
Provide an encoding for strings. This should match the encoding used
when the table was initially created.
dropna : bool, default False, optional
Do not write an ALL nan row to the store settable
by the option 'io.hdf.dropna_table'.
Notes
-----
Does *not* check if data being appended overlaps with existing
data in the table, so be careful.
When appending to an existing table, the min_itemsize parameter has no effect
as column sizes are already fixed. Set min_itemsize when initially creating
the table with put() or the first append() call.
Examples
--------
Creating a table and appending data:
>>> df1 = pd.DataFrame([['short', 'text']], columns=['A', 'B'])
>>> store = pd.HDFStore("store.h5", 'w') # doctest: +SKIP
>>> # Set min_itemsize when creating the table
>>> store.put('data', df1, format='table', min_itemsize={'A': 20, 'B': 20}) # doctest: +SKIP
>>>
>>> df2 = pd.DataFrame([['longer text here', 'more text']], columns=['A', 'B'])
>>> store.append('data', df2) # doctest: +SKIP
>>> store.close() # doctest: +SKIP
Handling multi-byte characters:
>>> df_en = pd.DataFrame([['hello']], columns=['text'])
>>> df_zh = pd.DataFrame([['你好世界']], columns=['text']) # "Hello World" in Chinese
>>> store = pd.HDFStore("store.h5", 'w') # doctest: +SKIP
>>> # Calculate size needed: len('你好世界'.encode('utf-8')) = 12 bytes
>>> store.put('messages', df_en, format='table',
... min_itemsize={'text': 15}, encoding='utf-8') # doctest: +SKIP
>>> store.append('messages', df_zh) # doctest: +SKIP
>>> store.close() # doctest: +SKIP
Common error when min_itemsize is too small:
>>> df = pd.DataFrame([['香']], columns=['char']) # 3 bytes in UTF-8
>>> store = pd.HDFStore("store.h5", 'w') # doctest: +SKIP
>>> # This will raise ValueError: string length [3] exceeds limit [1]
>>> # store.put('test', df, format='table', min_itemsize={'char': 1})
>>> # Correct usage:
>>> store.put('test', df, format='table', min_itemsize={'char': 3}) # doctest: +SKIP
>>> store.close() # doctest: +SKIP
"""
```
**Key Improvements Made**
1. **Clear parameter type documentation**: Explicitly state that min_itemsize can be int, dict, or None
2. **Byte vs character clarification**: Emphasize that size refers to bytes after encoding
3. **Multi-byte character examples**: Show real examples with Chinese characters
4. **Encoding relationship**: Explain how encoding affects size calculations
5. **Timing clarification**: Explain when min_itemsize is effective vs ignored
6. **Error prevention**: Show common mistakes and how to avoid them
7. **Practical examples**: Include realistic use cases that users encounter
**Notes**
I'm a new contributor; I followed the [contribution guide](https://pandas.pydata.org/docs/dev/development/contributing.html#submitting-a-pull-request), and it looks like other modifications were made to the pytables.py file during that process. As far as manual changes go, the ones noted above are the only changes I implemented.
|
[] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Is this AI?",
"> Is this AI?\n\nHello! Yes this was written with AI assistance. I am a CS student in my junior year, the assignment was focused on simply submitting a PR so I utilized AI to aid in the process so I could meet my deadlines. \n\nHowever, if that causes problems I would love to work with you to adjust changes I've made so that it could be usable!",
"I want to add to my previous reply. I actually misunderstood my own assignment, I was not supposed to use AI to contribute toward a repo at all. However, I would like to start again fresh with helping resolve this issue as I think it would be a good learning experience for me. \n\nThis time around I wouldn't generate code with AI, I will likely use it to help me understand what the code is doing, but I will work toward contributing proper documentation written in my own words that follows your style and guidelines, at that point I'd make a new PR with the new work. \n\nSo I'll start by asking, does my original PR do a good job at summarizing the issue and proposing a decent solution? Or is it off track on what the variable 'min_itemsize' is used for? Once I know that I can work toward fully understanding the code so I can help create better documentation, perhaps for more than just the two methods I changed! \n\nLet me know and I'll get to work right away! ",
"Please understand you're asking me to put more time and effort into reviewing this PR than you put into making it. If you'd like to start over without AI and a ton of unrelated (and wrong) edits, I'll take a look at that.",
"That sounds perfectly reasonable. I will work over the next couple weeks to better understand the code and style guidelines! I'll close this PR so I can start fresh. \r\n\r\nI'll work harder to properly understand how min_itemsize works and then once I have a better idea of how it works I'll write clear documentation in my own words. Then I'll make sure the new PR is more polished than this rushed one. Thank you for the feedback, I can definitely understand why you would want a proper level of effort for the contributions to a project this large! "
] |
3,260,721,290
| 61,935
|
BUG: `assert_index_equal(CategoricalIndex, CategoricalIndex, check_categorical=True, exact=False)` raises TypeError instead of AssertionError
|
open
| 2025-07-24T17:48:56
| 2025-07-24T17:49:27
| null |
https://github.com/pandas-dev/pandas/issues/61935
| true
| null | null |
mroeschke
| 0
|
### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
In [1]: import pandas as pd
In [2]: p_left = pd.Index([1, 2, 3], name="a", dtype="category")
In [3]: p_right = pd.Index([1, 2, 6], name="a", dtype="category")
In [4]: pd.testing.assert_index_equal(p_left, p_right, check_categorical=True, exact=False)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[10], line 1
----> 1 pd.testing.assert_index_equal(p_left, p_right, check_categorical=True, exact=False)
[... skipping hidden 1 frame]
File ~/pandas/core/ops/common.py:70, in _unpack_zerodim_and_defer.<locals>.new_method(self, other)
66 return NotImplemented
68 other = item_from_zerodim(other)
---> 70 return method(self, other)
File ~/pandas/core/arrays/categorical.py:143, in _cat_compare_op.<locals>.func(self, other)
141 msg = "Categoricals can only be compared if 'categories' are the same."
142 if not self._categories_match_up_to_permutation(other):
--> 143 raise TypeError(msg)
145 if not self.ordered and not self.categories.equals(other.categories):
146 # both unordered and different order
147 other_codes = recode_for_categories(
148 other.codes, other.categories, self.categories, copy=False
149 )
TypeError: Categoricals can only be compared if 'categories' are the same.
```
### Issue Description
pandas asserters should always raise an `AssertionError`.
### Expected Behavior
I would expect this to raise an `AssertionError`.
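A minimal sketch of the expected wrapping (illustrative; the real fix would live inside the asserter rather than at the call site):
```python
import pandas as pd

left = pd.Index([1, 2, 3], name="a", dtype="category")
right = pd.Index([1, 2, 6], name="a", dtype="category")

try:
    pd.testing.assert_index_equal(left, right, check_categorical=True, exact=False)
except TypeError as err:
    # Surface the categorical-comparison failure as an assertion failure.
    raise AssertionError(f"Index are different: {err}") from err
```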
### Installed Versions
<details>
Replace this line with the output of pd.show_versions()
</details>
|
[
"Bug",
"Testing"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[] |
3,260,664,864
| 61,934
|
Backport PR #61933 on branch 2.3.x (unpin scipy since statsmodels was fixed)
|
closed
| 2025-07-24T17:29:41
| 2025-07-26T08:03:30
| 2025-07-24T20:02:20
|
https://github.com/pandas-dev/pandas/pull/61934
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61934
|
https://github.com/pandas-dev/pandas/pull/61934
|
Dr-Irv
| 0
|
Backport of #61933
|
[
"Dependencies"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[] |
3,260,304,428
| 61,933
|
unpin scipy since statsmodels was fixed
|
closed
| 2025-07-24T15:22:34
| 2025-07-26T08:03:49
| 2025-07-24T16:14:39
|
https://github.com/pandas-dev/pandas/pull/61933
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61933
|
https://github.com/pandas-dev/pandas/pull/61933
|
Dr-Irv
| 3
|
In #61750 and #61754 @jorisvandenbossche pinned scipy due to a `statsmodels` issue. That has apparently been fixed, so this unpins the upper bound on scipy.
|
[
"Dependencies"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks @Dr-Irv ",
"Owee, I'm MrMeeseeks, Look at me.\n\nThere seem to be a conflict, please backport manually. Here are approximate instructions:\n\n1. Checkout backport branch and update it.\n\n```\ngit checkout 2.3.x\ngit pull\n```\n\n2. Cherry pick the first parent branch of the this PR on top of the older branch:\n```\ngit cherry-pick -x -m1 e72c8a1e0ad421c1b8a7b918d995f24bed595cc3\n```\n\n3. You will likely have some merge/cherry-pick conflict here, fix them and commit:\n\n```\ngit commit -am 'Backport PR #61933: unpin scipy since statsmodels was fixed'\n```\n\n4. Push to a named branch:\n\n```\ngit push YOURFORK 2.3.x:auto-backport-of-pr-61933-on-2.3.x\n```\n\n5. Create a PR against branch 2.3.x, I would have named this PR:\n\n> \"Backport PR #61933 on branch 2.3.x (unpin scipy since statsmodels was fixed)\"\n\nAnd apply the correct labels and milestones.\n\nCongratulations — you did some good work! Hopefully your backport PR will be tested by the continuous integration and merged soon!\n\nRemember to remove the `Still Needs Manual Backport` label once the PR gets merged.\n\nIf these instructions are inaccurate, feel free to [suggest an improvement](https://github.com/MeeseeksBox/MeeseeksDev).\n ",
"Manual backport -> https://github.com/pandas-dev/pandas/pull/61934"
] |
3,259,108,800
| 61,932
|
BUG: Unexpected Code Segment Executed, Causing Logical Error
|
closed
| 2025-07-24T09:04:20
| 2025-07-24T20:04:14
| 2025-07-24T20:04:05
|
https://github.com/pandas-dev/pandas/issues/61932
| true
| null | null |
LAUNAHC
| 2
|
### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
# Sample data
df = pd.DataFrame({
'product_code': ['X', 'X', 'X', 'Y', 'Y'],
'units': ['P', 'P', 'Q', 'Q', 'Q']
})
df2 = df.head(2)
df2 = df2.sort_values('product_code', ascending=False)\
.groupby(['product_code',
'unit_name'])\
.first().reset_index(drop=True)
print(df2)
```
### Issue Description
When `df` has only 2 rows of data, this code runs successfully even though the `unit_name` column does not exist. So far we have not observed this behavior with other row counts.
### Expected Behavior
This behavior also exists in pandas version 2.2.2.
### Installed Versions
<details>
Replace this line with the output of pd.show_versions()
</details>
|
[
"Bug",
"Groupby"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Hi @mroeschke \nI've also observed the same issue. I’d like to work on this issue and submit a PR to fix it.",
"Thanks for the report, but this is expected behavior. When you provide an iterable the same length as the data being grouped, pandas will use that as the groups.\n\n```python\ndf = pd.DataFrame({\"a\": [1, 2, 3]})\nprint(df.groupby([4, 5, 5]).sum())\n# a\n# 4 1\n# 5 5\n```\n\nClosing."
] |
3,258,438,675
| 61,931
|
DOC: Add tzdata to README dependencies list
|
closed
| 2025-07-24T04:23:43
| 2025-08-12T03:21:43
| 2025-07-25T16:37:24
|
https://github.com/pandas-dev/pandas/pull/61931
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61931
|
https://github.com/pandas-dev/pandas/pull/61931
|
n-dett
| 1
|
- [x] closes #61927
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Added tzdata to the list of dependencies in the README in order to match official documentation.
|
[
"Docs"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks @n-dett "
] |
3,256,650,953
| 61,930
|
PERF: `DataFrame.copy(deep=True)` returns a view on the original pyarrow buffer
|
open
| 2025-07-23T14:37:26
| 2025-07-25T18:26:17
| null |
https://github.com/pandas-dev/pandas/issues/61930
| true
| null | null |
TomAugspurger
| 9
|
### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this issue exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this issue exists on the main branch of pandas.
### Reproducible Example
Over in https://github.com/dask/dask/issues/12022#issuecomment-3104950072, I'm debugging a test failure with dask and pandas 3.x that comes down to the behavior of `DataFrame.copy(deep=True)` with an arrow-backed extension array.
In https://github.com/pandas-dev/pandas/blob/628c7fb1ec33bc02f1f0010381b03cb3d10f87df/pandas/core/arrays/arrow/array.py#L1092, we deliberately return a *shallow* copy (a new object with a view on the original buffers) of the backing array. For correctness, this is fine since pyarrow arrays are immutable, so copying should be unnecessary. However, it does mean that after a `DataFrame.copy(deep=True)`, you'll still have a reference back to the original buffer. If the output of the `.copy(deep=True)` is the only one with a reference to the original buffer, then it won't be garbage collected. Consider:
```python
import pandas as pd
import pyarrow as pa
pool = pa.default_memory_pool()
print("before", pool.bytes_allocated()) # 0
df = pd.DataFrame({"a": ["a", "b", "c"] * 1000})
print("df", pool.bytes_allocated()) # 27200
del df
print("df", pool.bytes_allocated()) # 0
df2 = pd.DataFrame({"a": ["a", "b", "c"] * 1000})
clone = df2.iloc[:0].copy(deep=True)
print("df2", pool.bytes_allocated()) # 27200
del df2
print("after - clone", pool.bytes_allocated()) # 27200
```
Maybe this is fine. We can probably find a workaround in dask (in this case we're using the empty dataframe as a kind of schema object, so we could do something other than `df.iloc[:0].copy(deep=True)`). But perhaps pandas could consider changing the behavior here.
The downside is that `df.copy(deep=True)` will become more expensive and use more memory.
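For reference, a sketch of the dask-side workaround discussed in the comments below, rebuilding each Arrow-backed column via `combine_chunks()` so the clone stops referencing the original buffers (assumes Arrow-backed columns, e.g. the pandas 3.x `str` dtype):
```python
import pandas as pd
import pyarrow as pa

df2 = pd.DataFrame({"a": ["a", "b", "c"] * 1000})
clone = df2.iloc[:0].copy(deep=True)

for k, v in clone.items():
    if isinstance(v.array, pd.arrays.ArrowExtensionArray):
        # combine_chunks() materializes a fresh buffer, dropping the
        # reference back to df2's data.
        values = pa.chunked_array([v.array]).combine_chunks()
        clone[k] = pd.array(values, dtype=v.array.dtype)

del df2
print(pa.default_memory_pool().bytes_allocated())  # back to ~0
```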
### Installed Versions
<details>
```
In [4]: pd.show_versions()
INSTALLED VERSIONS
------------------
commit : 962168f06d15d1aced28b414eb82909d3c930916
python : 3.12.8
python-bits : 64
OS : Darwin
OS-release : 24.5.0
Version : Darwin Kernel Version 24.5.0: Tue Apr 22 19:53:27 PDT 2025; root:xnu-11417.121.6~2/RELEASE_ARM64_T6041
machine : arm64
processor : arm
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 3.0.0.dev0+2254.g962168f06d
numpy : 2.4.0.dev0+git20250717.d02611a
dateutil : 2.9.0.post0
pip : None
Cython : None
sphinx : None
IPython : 9.4.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
bottleneck : None
fastparquet : None
fsspec : 2025.7.0
html5lib : None
hypothesis : 6.136.1
gcsfs : None
jinja2 : 3.1.6
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
psycopg2 : None
pymysql : None
pyarrow : 21.0.0
pyiceberg : None
pyreadstat : None
pytest : 8.4.1
python-calamine : None
pytz : 2025.2
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
qtpy : None
pyqt5 : None
```
</details>
### Prior Performance
_No response_
|
[
"Performance",
"Needs Discussion",
"Copy / view semantics"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"cc @mroeschke if you have any thoughts here. I'll start looking for a workaround in dask though.",
"Using `pyarrow.compute.take` with an empty array seems to work:\n\n```python\n out = x.iloc[:0].copy(deep=True)\n\n for k, v in out.items():\n if isinstance(v.array, pd.arrays.ArrowExtensionArray):\n values = pyarrow.compute.take(pyarrow.array(v.array), pyarrow.array([], type=\"int32\"))\n out[k] = v._constructor(pd.array(values, dtype=v.array.dtype), index=v.index, name=v.name)\n```\n\nSo I'd probably be fine with leaving the behavior as is (not having to copy the data is nice, in many cases).",
"I would currently lean towards maintaining the current behavior of `copy` being a shallow copy because the underlying array being immutable, but open to other thoughts cc @jbrockmendel @jorisvandenbossche \n\nAnother solution, if you don't care about preserving the original chunking layout like in `take` method, is to just `combine_chunks()` of the underlying array in the `pd.arrays.ArrowExtensionArray` i.e. `values = pa.array(v.array).combine_chunks()`",
"Do we expect end-users to have issues with this or just libraries like dask (who we can trust to handle this on their own)?",
"> Another solution, if you don't care about preserving the original chunking layout like in take method, is to just combine_chunks() of the underlying array \n\nThanks.\n\n> Do we expect end-users to have issues with this\n\nI'm not sure, but my guess is that this that the vast majority of users are better off with the current behavior (deep copy not actually copying): The only place this is really (negatively) observable is when you have a copy of a slice of a DataFrame that outlives the original DataFrame. Dask does this a lot (for its schemas). I'm guessing it's not too common.\n\nI'll plan to close this later today if no one pushes strongly for changing the behavior.",
"I think we should maybe reconsider this. Certainly now we try to make much less copies within pandas (with CoW), I do think that there is value in an expected / gauranteed \"deep copy\" behaviour. ",
"Either works for me. The [workaround in dask](https://github.com/dask/dask/pull/12025/files#diff-d805dac0ba6fd4e90a18a453851ab73e8257c52b88b20b1ae1fd4c0aa46a0fe3R201-R205) isn't free, but it ends up being relatively cheap since we're declining with size-0 arrays.\n\nThis is pretty subtle. I'm not sure how many people are using `.copy()` to break references vs. `.copy()` to get a mutable copy of the data (and if we're being honest, the majority of the `.copy()` calls in the wild are probably attempts to avoid a SettingWithCopy warning, who won't be so performance / memory sensitive).",
"Another use case that comes to mind for wanting an actual copy is to de-chunk a highly fragmented ChunkedArray.",
"> Another use case that comes to mind for wanting an actual copy is to de-chunk a highly fragmented ChunkedArray.\n\nIndeed, we just hit that [here](https://github.com/dask/dask/pull/12025).\n\n> Another solution [...] values = pa.array(v.array).combine_chunks()\n\nThis ended up being\n\n```\n values = pa.chunked_array([v.array]).combine_chunks()\n v._constructor(\n pd.array(values, dtype=v.array.dtype), index=v.index, name=v.name\n )\n```\n\ni.e. using the `pa.chunked_array` function with a (length-1) list of arrays."
] |
3,256,028,981
| 61,929
|
Updating error message to include more details about dataframe
|
open
| 2025-07-23T11:39:47
| 2025-08-07T19:53:30
| null |
https://github.com/pandas-dev/pandas/pull/61929
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61929
|
https://github.com/pandas-dev/pandas/pull/61929
|
hasanrashid
| 3
|
- [x] closes #61834 .
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) test_indexing.py passed
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). pre-commit run --hook-stage manual --all-files passed
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Updated error message from
`"Must have equal len keys and value when setting with an iterable"`
to
`f"Length mismatch when setting Dataframe with an iterable: " f"{len(pi)} keys vs {len(value)} values. " f"Keys: {pi}, Values: {value}. " f"Both must have the same length."`
|
[
"Error Reporting"
] | 0
| 0
| 0
| 0
| 0
| 1
| 0
| 0
|
[
"Actually, there are some failing test-indexing.py. Forgot to update the message in the testcase",
"Thank you! I am sure this will make it much easier for me to figure out what I need to change!",
"Is there anything I need to do before this is reviewed?"
] |
3,255,574,178
| 61,928
|
indexing with a `Categorical` of `Interval`s is inefficient
|
open
| 2025-07-23T09:10:02
| 2025-07-24T16:25:37
| null |
https://github.com/pandas-dev/pandas/issues/61928
| true
| null | null |
flying-sheep
| 3
|
This line converts the `IntervalIndex` into a numpy object array:
https://github.com/pandas-dev/pandas/blob/faf3bbb1d7831f7db8fc72b36f3e83e7179bb3f9/pandas/core/dtypes/dtypes.py#L520
Then, in this block, a TypeError is raised, which causes that object array to be converted into strings:
> TypeError: (-0.00872, 0.439] of type <class 'pandas._libs.interval.Interval'> is not a valid type for hashing, must be string or null
https://github.com/pandas-dev/pandas/blob/faf3bbb1d7831f7db8fc72b36f3e83e7179bb3f9/pandas/core/util/hashing.py#L333-L339
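A repro sketch of this path (adapted from the discussion below):
```python
import pandas as pd

df = pd.DataFrame(dict(a=range(3)), index=pd.cut(range(3), 3))
assert df.index.is_unique  # the hashing path only triggers for unique indexes

# Indexing with a Categorical of Intervals hashes the dtype, hits the
# TypeError above, and falls back to a stringified object-array copy.
df.loc[df.index.categories[:2]]
```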
|
[
"Performance",
"Categorical",
"Interval"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Does hash array get called in indexing?",
"Yeah, when the indexed data frame’s `.index` is unique:\n\n```py\nimport pandas as pd\n\ndf = pd.DataFrame(dict(a=range(3)), pd.cut(range(3), 3))\nassert df.index.is_unique # bug only triggers if this is the case\n\ndf.loc[df.index.categories[:2]]\n```\n\nset a breakpoint in the `except TypeError` branch in `_hash_ndarray` and execute the above in a debugger, and the breakpoint will be hit.\n\nI discovered this because in some older versions of pandas or numpy, the `vals.astype(str).astype(object)` raises a RuntimeWarning about “invalid values encountered in cast”. This no longer happens, but I think the casting should probably not happen here.",
"Looks like in a .equals check we go through categories_match_up_to_permutation, which checks the hash of each dtype, which goes through path in the OP."
] |
3,254,677,663
| 61,927
|
DOC: Add tzdata to dependencies section in README
|
closed
| 2025-07-23T03:11:10
| 2025-07-25T16:37:25
| 2025-07-25T16:37:25
|
https://github.com/pandas-dev/pandas/issues/61927
| true
| null | null |
n-dett
| 1
|
### Pandas version checks
- [x] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
[https://github.com/pandas-dev/pandas/blob/main/README.md](https://github.com/pandas-dev/pandas/blob/main/README.md)
### Documentation problem
tzdata is listed as a required dependency in the [installation documentation](https://pandas.pydata.org/pandas-docs/stable/getting_started/install.html#dependencies), but is not listed with the other required dependencies in the README.
### Suggested fix for documentation
Add tzdata to the list of dependencies in the README so that the README matches the most current and accurate documentation. I intend to work on this issue.
|
[
"Docs"
] | 1
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"take"
] |
3,254,654,777
| 61,926
|
BUG: Merge fails on pyarrow datetime columns
|
open
| 2025-07-23T02:57:47
| 2025-07-31T02:20:23
| null |
https://github.com/pandas-dev/pandas/issues/61926
| true
| null | null |
Meet1995
| 6
|
### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
# Create a datetime index
t = pd.date_range("2025-07-06", periods=3, freq="h")
# Left dataframe: one row per timestamp
df1 = pd.DataFrame({"time": t, "val1": [1, 2, 3]})
# Right dataframe: two rows per timestamp (duplicates)
df2 = pd.DataFrame({"time": t.repeat(2), "val2": [10, 20, 30, 40, 50, 60]})
# This works
print(pd.merge(df1, df2, on="time", how="left"))
# This fails
print(
pd.merge(
df1.convert_dtypes(dtype_backend="pyarrow"),
df2.convert_dtypes(dtype_backend="pyarrow"),
on="time", # pyarrow datetime column causes error
how="left",
)
)
```
### Issue Description
Error message:
`ValueError: Length mismatch: Expected axis has 6 elements, new values have 3 elements`
### Expected Behavior
The merge should succeed and return 6 rows, like it does when not using `dtype_backend="pyarrow"`.
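As an interim workaround (a sketch, not a proposed fix), casting the join key back to a numpy `datetime64` dtype restores the working fast path:
```python
import pandas as pd

t = pd.date_range("2025-07-06", periods=3, freq="h")
df1 = pd.DataFrame({"time": t, "val1": [1, 2, 3]}).convert_dtypes(dtype_backend="pyarrow")
df2 = pd.DataFrame({"time": t.repeat(2), "val2": [10, 20, 30, 40, 50, 60]}).convert_dtypes(dtype_backend="pyarrow")

# Cast only the key column so the monotonic-join fast path can view it as i8.
df1["time"] = df1["time"].astype("datetime64[ns]")
df2["time"] = df2["time"].astype("datetime64[ns]")
print(pd.merge(df1, df2, on="time", how="left"))  # 6 rows, as expected
```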
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : c888af6d0bb674932007623c0867e1fbd4bdc2c6
python : 3.12.11
python-bits : 64
OS : Darwin
OS-release : 24.5.0
Version : Darwin Kernel Version 24.5.0: Tue Apr 22 19:54:29 PDT 2025; root:xnu-11417.121.6~2/RELEASE_ARM64_T6030
machine : arm64
processor : arm
byteorder : little
LC_ALL : None
LANG : None
LOCALE : C.UTF-8
pandas : 2.3.1
numpy : 2.3.1
pytz : 2025.2
dateutil : 2.9.0.post0
pip : 25.1.1
Cython : None
sphinx : None
IPython : 9.4.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : 2025.5.1
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.6
lxml.etree : None
matplotlib : 3.10.3
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 20.0.0
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.16.0
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
|
[
"Bug",
"Datetime",
"Reshaping",
"Arrow"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks for the report! This is an issue in `pandas.core.indexes.base.Index._get_join_target`. There we convert to NumPy for PyArrow-backed data, but do not view as `i8`. However for NumPy-backed datetimes, we view as `i8` in `DatetimeIndexOpsMixin._get_engine_target`. \n\n@jbrockmendel - any suggested design for solving this? It seems we could either add logic specifically in `_get_join_target` or perhaps add/use a method on the Arrow EAs.",
"Hi @rhshadrach ,\nThanks for identifying the root cause! I’d like to help with this issue. I'm happy to implement the fix in _get_join_target or via Arrow EA method, depending on which design is preferred.",
"I think you're right. in Index.join we have a try/except for self._join_monotonic. That raises bc we don't cast to i8 and so falls through to self._join_via_get_indexer, which returns a result with only 3 elements.\n\nPatching _get_join_target fixes the OP example, but I'm confused by join_via_get_indexer. The 3 elements it returns match what i expect a left-join to look like. Is my \"join\" intuition off? Or do I need more caffeine?",
"@jbrockmendel - \n\n> The 3 elements it returns match what i expect a left-join to look like. Is my \"join\" intuition off? Or do I need more caffeine?\n\nDuplicates on the right will cause there to be more rows.",
"My understanding is that join_monotonic is a fastpath but shouldn't actually have different behavior than join_via_get_indexer. ",
"@jbrockmendel - I haven't checked the history here, but my guess is that `_join_via_get_indexer` was only meant to be called when both `self` and `other` are unique. From\n\nhttps://github.com/pandas-dev/pandas/blob/d4ae6494f2c4489334be963e1bdc371af7379cd5/pandas/core/indexes/base.py#L4435-L4450\n\nI suspect we should use `_join_non_unique` in this case when `_join_monotonic` fails."
] |
3,254,290,694
| 61,925
|
CoW: add readonly flag to ExtensionArrays, return read-only EA/ndarray in .array/EA.to_numpy()
|
open
| 2025-07-22T22:51:04
| 2025-08-19T15:28:43
| null |
https://github.com/pandas-dev/pandas/pull/61925
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61925
|
https://github.com/pandas-dev/pandas/pull/61925
|
jorisvandenbossche
| 4
|
Addresses one of the remaining TODO items from https://github.com/pandas-dev/pandas/issues/48998
Similar as https://github.com/pandas-dev/pandas/pull/51082 and some follow-up PRs, ensuring we also mark EAs as read-only like we do for numpy arrays, when the user gets the underlying EA from a pandas object.
For that purpose, added a `_readonly` attribute to the EA class that is False by default.
Still need to add more tests and fix a bunch of tests
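For context, a sketch of the existing numpy-array behavior this PR mirrors (hedged: the exact flags depend on Copy-on-Write being active, as in pandas 3.x):
```python
import pandas as pd

ser = pd.Series([1.0, 2.0, 3.0])
arr = ser.to_numpy()
print(arr.flags.writeable)  # False under Copy-on-Write
# The PR extends the same idea to extension arrays via a _readonly flag,
# so ser.array would likewise reject in-place mutation.
```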
|
[
"Copy / view semantics"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"i get why .values and .array are made read-only, but why are we bothering with to_numpy?",
"That's a good question, I didn't really think about it deeply .. But so for the non-extension dtypes, we also did it for `.values` / `__array__` and `to_numpy()` (https://github.com/pandas-dev/pandas/pull/51082), and so followed along here.\r\n\r\nI do think there is value in being consistent in those different ways to get a numpy array from the pandas object. So could also ask, why _not_ for `to_numpy()`? And then compared to `.values`, `to_numpy()` actually gives you more control with the ability to ask for a copy. \r\n(in practice the implementation of `__array__` and `to_numpy()` are also quite overlapping for the EAs.\r\n",
"> So could also ask, why not for to_numpy()? \r\n\r\nI don't feel strongly about this, but asked in the first place because it seems most of the code complexity in this PR is driven by to_numpy changes. Without that, most of this is just boilerplate edits to `__getitem__` methods.\r\n\r\nThe main reason i can think of to treat to_numpy different from .array and .values is that it has an explicit `copy` keyword. With copy=False, the user ideally understands that they are getting a view on existing data.",
"> asked in the first place because it seems most of the code complexity in this PR is driven by to_numpy changes.\r\n\r\nLooking at the diff again, I think it is a bit 50/50 between `to_numpy()` and `__array__`. But `to_numpy()` also reuses the result from `__array__` in some cases, so if we would then want to have `to_numpy()` consistently not return readonly data, that would also requires some changes in `to_numpy()`. So regarding the implementation, not entirely sure this would be a lot simpler (but didn't look in detail).\r\n\r\n> The main reason i can think of to treat to_numpy different from .array and .values is that it has an explicit `copy` keyword. With copy=False, the user ideally understands that they are getting a view on existing data.\r\n\r\nYeah, we could potentially also make the default of `copy` to be None instead of False, with the same meaning (i.e. avoid a copy if possible), and so then if someone explicitly passes `copy=False`, then we wouldn't set the readonly flag. \r\n\r\nFrom previous discussions (maybe https://github.com/pandas-dev/pandas/issues/52823), I seem to remember that we at some point did bring up whether it would be worth having a keyword to control this behaviour, i.e. so there would be a way that you could ask for a numpy array that was guaranteed to be mutable. Of course you could do `to_numpy(copy=True)` which also guarantees that, but that doesn't cover the case where you want to get the data zero-copy if possible, and you know that mutating it is fine (for example because the holding dataframe or series is dismissed after converting). \r\nAt the moment, the documentation (https://pandas.pydata.org/docs/dev/user_guide/copy_on_write.html#read-only-numpy-arrays) suggests to manually reset the readonly flag:\r\n\r\n```\r\narr = ser.to_numpy()\r\narr.flags.writeable = True\r\n```\r\n\r\ninstead of adding a keyword like `arr = ser.to_numpy(ensure_writable=True)`. But so in theory `copy=False` could also cover that.\r\n\r\n(but this is probably a discussion for https://github.com/pandas-dev/pandas/issues/52823)"
] |
3,254,048,673
| 61,924
|
BUG: fix indexing with ArrowExtensionArray in .iloc
|
closed
| 2025-07-22T20:52:12
| 2025-08-15T16:28:22
| 2025-08-05T17:21:47
|
https://github.com/pandas-dev/pandas/pull/61924
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61924
|
https://github.com/pandas-dev/pandas/pull/61924
|
skonda29
| 4
|
- [X] closes #61311
- [X] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [X] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [X] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
The core problem is that when using .iloc with PyArrow-backed DataFrames, pandas' indexing validation calls min() and max() methods on the ArrowExtensionArray for bounds checking, but these methods were not implemented, resulting in AttributeError: 'ArrowExtensionArray' object has no attribute 'max'. This breaks basic indexing functionality that works with regular pandas DataFrames, creating an inconsistency in the PyArrow backend experience.
Proposed Solution -
My proposed solution modifies `_validate_key` in pandas/core/indexing.py to detect ExtensionArrays and convert them to numpy arrays using `to_numpy()` or `np.asarray()`. A test case reproducing the issue was added in pandas/tests/indexing/test_iloc.py to verify the implementation.
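For illustration, a hedged sketch of the failure mode being fixed (assuming pyarrow is installed; the exact reproducer lives in #61311):
```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3]}, dtype="int64[pyarrow]")
indexer = pd.array([0, 2], dtype="int64[pyarrow]")
# bounds checking used to call indexer.min()/indexer.max(), which
# ArrowExtensionArray did not implement, raising AttributeError
df.iloc[indexer]
```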
|
[
"Bug",
"Indexing",
"Arrow"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"@mroeschke Thank you for your suggestion. I will rework on this PR to implement the _validate_key fix instead.\r\n",
"@mroeschke Would you mind taking a look at this PR when you get a chance? I've added a conversion to NumPy in _validate_key, and included a test case.\r\n\r\nFeedback is appreciated!",
"@mroeschke Please take a look at this implementation. I've implemented your suggestions",
"Thanks @skonda29 "
] |
3,252,361,944
| 61,923
|
fix: list numbering in roadmap.md
|
closed
| 2025-07-22T11:52:06
| 2025-07-22T16:28:28
| 2025-07-22T16:28:28
|
https://github.com/pandas-dev/pandas/pull/61923
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61923
|
https://github.com/pandas-dev/pandas/pull/61923
|
michellesweering
| 1
|
- [x] closes #60913
|
[] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks for the PR, but we recently removed the contents of the roadmap as the points were stale in https://github.com/pandas-dev/pandas/pull/61892\r\n\r\nClosing, but happy to have contributions to other issues labeled `good first issue`"
] |
3,252,117,914
| 61,922
|
BUG: `date_range` gives different output ends for fixed `end` and varying `start` when `freq="B"`
|
closed
| 2025-07-22T10:30:51
| 2025-07-22T20:28:05
| 2025-07-22T20:27:57
|
https://github.com/pandas-dev/pandas/issues/61922
| true
| null | null |
torext
| 2
|
### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import datetime as dt
import pandas as pd
end = dt.datetime(2025, 7, 26, 6) # Saturday early morning
for d in range(18, 22): # Friday, Saturday, Sunday & Monday
    start = dt.datetime(2025, 7, d, 10)
    print(f"start={start}: {pd.date_range(start, end, freq='B')}\n")
```
### Issue Description
Running `date_range` with varying `start` and fixed `end` gives outputs with differing final entries when `freq="B"`. Specifically, in the above example, the first and last iterations are missing an entry for `2025-07-25 10:00:00`.
### Expected Behavior
The last three iterations in the above code should produce the same output, and the first iteration should differ from the other three only in that it additionally includes `2025-07-18 10:00:00` as a first entry. Instead the output is the following:
```
start=2025-07-18 10:00:00: DatetimeIndex(['2025-07-18 10:00:00', '2025-07-21 10:00:00',
'2025-07-22 10:00:00', '2025-07-23 10:00:00',
'2025-07-24 10:00:00'],
dtype='datetime64[ns]', freq='B')
start=2025-07-19 10:00:00: DatetimeIndex(['2025-07-21 10:00:00', '2025-07-22 10:00:00',
'2025-07-23 10:00:00', '2025-07-24 10:00:00',
'2025-07-25 10:00:00'],
dtype='datetime64[ns]', freq='B')
start=2025-07-20 10:00:00: DatetimeIndex(['2025-07-21 10:00:00', '2025-07-22 10:00:00',
'2025-07-23 10:00:00', '2025-07-24 10:00:00',
'2025-07-25 10:00:00'],
dtype='datetime64[ns]', freq='B')
start=2025-07-21 10:00:00: DatetimeIndex(['2025-07-21 10:00:00', '2025-07-22 10:00:00',
'2025-07-23 10:00:00', '2025-07-24 10:00:00'],
dtype='datetime64[ns]', freq='B')
```
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.12.9
python-bits : 64
OS : Linux
OS-release : 4.18.0-553.36.1.el8_10.x86_64
Version : #1 SMP Wed Jan 22 03:07:54 EST 2025
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.3
numpy : 2.1.3
pytz : 2024.1
dateutil : 2.9.0.post0
pip : 25.0.1
Cython : None
sphinx : None
IPython : 9.1.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.13.3
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : 2025.3.2
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.6
lxml.etree : 5.3.2
matplotlib : 3.10.1
numba : 0.61.0
numexpr : 2.10.2
odfpy : None
openpyxl : 3.1.5
pandas_gbq : None
psycopg2 : 2.9.9
pymysql : None
pyarrow : 17.0.0
pyreadstat : None
pytest : 8.3.5
python-calamine : None
pyxlsb : None
s3fs : 2025.3.2
scipy : 1.15.2
sqlalchemy : 2.0.40
tables : 3.10.2
tabulate : 0.9.0
xarray : None
xlrd : None
xlsxwriter : None
zstandard : 0.23.0
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
|
[
"Bug",
"Datetime"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"It looks like this very old issue https://github.com/pandas-dev/pandas/issues/35342 might be related to this, as in addition to the example I gave above I've observed other variations of it where the output is expected etc.",
"Thanks for the report! I am seeing all end on `2025-07-25 10:00:00` on main. There are a number of fixes in this regard coming in 3.0; closing."
] |
3,251,462,170
| 61,921
|
DOC: explicitly mention new str dtype is no longer a numpy dtype in migration guide
|
closed
| 2025-07-22T07:28:09
| 2025-07-26T09:19:34
| 2025-07-26T09:19:22
|
https://github.com/pandas-dev/pandas/pull/61921
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61921
|
https://github.com/pandas-dev/pandas/pull/61921
|
jorisvandenbossche
| 1
|
Triggered by https://github.com/pandas-dev/pandas/issues/61915, making it more explicit that the dtype is no longer a np.dtype object.
|
[
"Docs"
] | 1
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"@jorisvandenbossche this will close https://github.com/pandas-dev/pandas/issues/61915?"
] |
3,251,415,957
| 61,920
|
BUG: IntervalIndex.unique() only contains the first interval if all interval borders are negative
|
open
| 2025-07-22T07:12:08
| 2025-08-01T07:30:58
| null |
https://github.com/pandas-dev/pandas/pull/61920
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61920
|
https://github.com/pandas-dev/pandas/pull/61920
|
khemkaran10
| 2
|
Fixes: #61917
Before FIx ❌:
```python
idx_neg = pd.IntervalIndex.from_tuples([(-4, -3), (-4, -3), (-3, -2), (-3, -2), (-2, -1), (-2, -1)])
print(idx_neg.unique())
# Output:
# IntervalIndex([(-4, -3]], dtype='interval[int64, right]')
```
After Fix ✅:
```python
idx_neg = pd.IntervalIndex.from_tuples([(-4, -3), (-4, -3), (-3, -2), (-3, -2), (-2, -1), (-2, -1)])
print(idx_neg.unique())
# output:
# IntervalIndex([(-4, -3], (-3, -2], (-2, -1]], dtype='interval[int64, right]')
```
- Fixes incorrect deduplication of negative-valued intervals when using .unique().
- Previously used .view("complex128"), which failed for negative floats/ints.
- [x] closes #61917
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
|
[
"Bug",
"Algos",
"Interval"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Hello @khemkaran10, I have tried your fix and I have the following issue. When i run with this branch this code:\r\n`import pandas as pd\r\nidx_neg = pd.IntervalIndex.from_tuples([(-4, -3), (-4, -3), (-3, -2), (-3, -2), (-2, -1), (-2, -1)])\r\nprint(idx_neg.unique())`\r\nI get from the last line: E ValueError: left side of interval must be <= right side\r\nI have tried to debug the problem and I found that:\r\nunique() from interval.py calls at the return self._from_combined(nc). the _from_combined() method from interval.py sets nc to nc = combined.view(\"i8\").reshape(-1, 2). Here is the problem, because:\r\narray([[-4.-3.j],\r\n [-3.-2.j],\r\n [-2.-1.j]])\r\ngets transformed to:\r\narray([[-4607182418800017408, -4609434218613702656],\r\n [-4609434218613702656, -4611686018427387904],\r\n [-4611686018427387904, -4616189618054758400]])\r\nand then the method delivers an invalid Intervalindex, where left side is greater then the right side.\r\n\r\nGould you try to run the code again and assure, that idx_neg.unique() really returns \"IntervalIndex([(-4, -3], (-3, -2], (-2, -1]], dtype='interval[int64, right]')\"? In my tests the return is \"IntervalIndex([([-4607182418800017408, -4609434218613702656], (-4609434218613702656, -4611686018427387904], (-4611686018427387904, -4616189618054758400]], dtype='interval[int64, right]')\" which triggers the error: \"E ValueError: left side of interval must be <= right side\"\r\n",
"@il1sf4 Thanks for pointing out. I have update the PR."
] |
3,250,175,829
| 61,919
|
Update v3.0.0.rst to use Month XX, 2025 instead of 2024
|
closed
| 2025-07-21T22:12:06
| 2025-07-22T02:14:18
| 2025-07-22T02:14:12
|
https://github.com/pandas-dev/pandas/pull/61919
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61919
|
https://github.com/pandas-dev/pandas/pull/61919
|
corneliusroemer
| 1
|
2024 is over and pandas v3 hasn't happened yet, so update year to 2025
|
[
"Docs"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks @corneliusroemer "
] |
3,250,024,508
| 61,918
|
QST: Global future flag
|
closed
| 2025-07-21T20:55:32
| 2025-07-21T21:27:25
| 2025-07-21T21:27:25
|
https://github.com/pandas-dev/pandas/issues/61918
| true
| null | null |
eicchen
| 1
|
### Research
- [x] I have searched the [[pandas] tag](https://stackoverflow.com/questions/tagged/pandas) on StackOverflow for similar questions.
- [x] I have asked my usage related question on [StackOverflow](https://stackoverflow.com).
### Link to question on StackOverflow
N/A
### Question about pandas
I remember talking about it during the last contributor meeting, but have unfortunately forgotten the specifics.
I know there is currently a 'future_stack' flag for the stack function to opt into upcoming behaviour. During the discussion, there was a consensus to use a global flag, rather than function-specific variables, for introducing new behaviour to existing functions that would otherwise warrant a deprecation warning first.
Is that implemented yet? I can't seem to find anything currently in the code but could've sworn we talked about one existing.
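For reference, a short sketch of the per-change flags that do exist today (option names taken from `config_init.py`):
```python
import pandas as pd

# each future behaviour gets its own opt-in option, e.g.:
pd.set_option("future.infer_string", True)
pd.set_option("future.no_silent_downcasting", True)
```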
|
[
"Usage Question"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"The decision was to implement a global flag on a per-change basis, not a single global flag. Here are some examples:\n\nhttps://github.com/pandas-dev/pandas/blob/bd9f060999c282907861a3f077513eef7f374579/pandas/core/config_init.py#L880-L899\n\nClosing for now. Just comment here if you have any follow up questions."
] |
3,249,292,932
| 61,917
|
BUG: `IntervalIndex.unique()` only contains the first interval if all interval borders are negative
|
open
| 2025-07-21T16:29:54
| 2025-07-26T11:39:54
| null |
https://github.com/pandas-dev/pandas/issues/61917
| true
| null | null |
johannes-mueller
| 3
|
### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
print(pd.__version__)
idx_pos = pd.IntervalIndex.from_tuples([(3, 4), (3, 4), (2, 3), (2, 3), (1, 2), (1, 2)])
print(idx_pos.unique())
assert idx_pos.unique().shape == (3,) # succeeds
idx_neg = pd.IntervalIndex.from_tuples([(-4, -3), (-4, -3), (-3, -2), (-3, -2), (-2, -1), (-2, -1)])
print(idx_neg.unique())
assert idx_neg.unique().shape == (3,), f"Actual shape: {idx_neg.unique().shape}"
```
### Issue Description
Output with current main:
```
3.0.0.dev0+2250.g13f7b8b7e3
IntervalIndex([(3, 4], (2, 3], (1, 2]], dtype='interval[int64, right]')
IntervalIndex([(-4, -3]], dtype='interval[int64, right]')
Traceback (most recent call last):
File "/home/jmu3si/tmp/pd-demo.py", line 12, in <module>
assert idx_neg.unique().shape == (3,), f"Actual shape: {idx_neg.unique().shape}"
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: Actual shape: (1,)
```
Only the interval `(-4, -3]` appears in the uniqued index.
A couple of other observations:
* The same result occurs with `closed="left"`
* Intervals that are not fully negative, e.g. `(-2, 0]`, do appear in the uniqued index
* This does not seem to be a regression. I reproduced it all the way back to pandas-1.4.3
### Expected Behavior
Expect correct unique index for `index_neg` to be
`IntervalIndex([(-4, -3], (-3, -2], (-2, -1]], dtype='interval[int64, right]')` as it correctly did with the positive interval index.
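A minimal sketch of the suspected root cause (see the analysis in the comments below): the uniquing path reinterprets the int64 left/right pairs as `complex128`, and the bit patterns of negative int64 values decode as NaN floats, so all intervals hash alike:
```python
import numpy as np

pairs = np.array([[-4, -3], [-3, -2]], dtype="int64")
# each row of two int64s is reinterpreted as one complex128;
# negative int64 bit patterns decode as NaN components
print(pairs.view("complex128"))  # [[nan+nanj], [nan+nanj]]
```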
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 13f7b8b7e3dc6695c4e4b00afd0cccbd754210bd
python : 3.13.2
python-bits : 64
OS : Linux
OS-release : 6.8.0-60-generic
Version : #63~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Tue Apr 22 19:00:15 UTC 2
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : de_DE.UTF-8
LOCALE : de_DE.UTF-8
pandas : 3.0.0.dev0+2250.g13f7b8b7e3
numpy : 2.2.6
dateutil : 2.9.0.post0
pip : 25.1.1
Cython : 3.1.2
sphinx : 8.2.3
IPython : 9.4.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.13.4
bottleneck : 1.5.0
fastparquet : 2024.11.0
fsspec : 2025.7.0
html5lib : 1.1
hypothesis : 6.136.1
gcsfs : 2025.7.0
jinja2 : 3.1.6
lxml.etree : 6.0.0
matplotlib : 3.10.3
numba : 0.61.2
numexpr : 2.11.0
odfpy : None
openpyxl : 3.1.5
psycopg2 : 2.9.10
pymysql : 1.4.6
pyarrow : 21.0.0
pyiceberg : 0.9.1
pyreadstat : 1.3.0
pytest : 8.4.1
python-calamine : None
pytz : 2025.2
pyxlsb : 1.0.10
s3fs : 2025.7.0
scipy : 1.16.0
sqlalchemy : 2.0.41
tables : 3.10.2
tabulate : 0.9.0
xarray : 2025.7.1
xlrd : 2.0.2
xlsxwriter : 3.2.5
zstandard : 0.23.0
qtpy : None
pyqt5 : None
</details>
|
[
"Bug",
"Algos",
"Interval"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"I think the issue is with this logic:\n\nhttps://github.com/pandas-dev/pandas/blob/073710f6be25ff7b402314be40af4e5c80e522d3/pandas/core/arrays/interval.py#L1992-L2000\n\nTest Script:\n```python\nimport pandas as pd\nidx_pos = pd.IntervalIndex.from_tuples([(3, 4), (3, 4), (2, 3), (2, 3), (1, 2), (1, 2)])\nprint(idx_pos)\n\nia = idx_pos._data\n\nprint(\"\\nCombined (ia._combined):\")\nprint(ia._combined)\n\ncombined_view = ia._combined.view(\"complex128\")\nprint(\"\\ncombined_view (as complex128):\")\nprint(combined_view)\n\nprint(\"idx_pos unique()\")\nprint(idx_pos.unique())\n\nprint(\"-------------------------------------------------------------------\")\n\nidx_neg = pd.IntervalIndex.from_tuples([(-4, -3), (-4, -3), (-3, -2), (-3, -2), (-2, -1), (-2, -1)])\nprint(idx_neg)\nia = idx_neg._data\n\nprint(\"\\nCombined (ia._combined):\")\nprint(ia._combined)\n\ncombined_view = ia._combined.view(\"complex128\")\nprint(\"\\ncombined_view (as complex128):\")\nprint(combined_view)\nprint(\"idx_neg unique()\")\nprint(idx_neg.unique())\n\n```\n\nOutput:\n\n```\nIntervalIndex([(3, 4], (3, 4], (2, 3], (2, 3], (1, 2], (1, 2]], dtype='interval[int64, right]')\n\nCombined (ia._combined):\n[[3 4]\n [3 4]\n [2 3]\n [2 3]\n [1 2]\n [1 2]]\n\ncombined_view (as complex128):\n[[1.5e-323+2.0e-323j]\n [1.5e-323+2.0e-323j]\n [9.9e-324+1.5e-323j]\n [9.9e-324+1.5e-323j]\n [4.9e-324+9.9e-324j]\n [4.9e-324+9.9e-324j]]\n\nidx_pos unique()\nIntervalIndex([(3, 4], (2, 3], (1, 2]], dtype='interval[int64, right]')\n-------------------------------------------------------------------\nIntervalIndex([(-4, -3], (-4, -3], (-3, -2], (-3, -2], (-2, -1], (-2, -1]], dtype='interval[int64, right]')\n\nCombined (ia._combined):\n[[-4 -3]\n [-4 -3]\n [-3 -2]\n [-3 -2]\n [-2 -1]\n [-2 -1]]\n\ncombined_view (as complex128):\n[[nan+nanj]\n [nan+nanj]\n [nan+nanj]\n [nan+nanj]\n [nan+nanj]\n [nan+nanj]]\n\nidx_neg unique()\nIntervalIndex([(-4, -3]], dtype='interval[int64, right]')\n```",
"Thanks for the report. Confirmed on main. Further investigations and PRs to fix are welcome.",
"@jbrockmendel @mroeschke can you please re-review this [PR](https://github.com/pandas-dev/pandas/pull/61920) and let me know if any changes are required."
] |
3,248,660,362
| 61,916
|
String dtype: backwards compatibility of selecting "object" vs "str" columns in `select_dtypes`
|
open
| 2025-07-21T13:24:05
| 2025-08-21T07:19:47
| null |
https://github.com/pandas-dev/pandas/issues/61916
| true
| null | null |
jorisvandenbossche
| 2
|
We provide the `DataFrame.select_dtypes()` method to easily subset columns based on data types (groups). See https://pandas.pydata.org/pandas-docs/version/2.3/user_guide/basics.html#selecting-columns-based-on-dtype
At the moment, as documented, to select string columns you must use the `object` dtype:
```python
>>> pd.options.future.infer_string = False
>>> df = pd.DataFrame(
... {
... "string": list("abc"),
... "int64": list(range(1, 4)),
... }
... )
>>> df.dtypes
string object
int64 int64
dtype: object
>>> df.select_dtypes(include=[object])
string
0 a
1 b
2 c
```
On current main, with the string dtype enabled, the above dataframe now has a `str` column, and so selecting `object` dtype columns gives an empty result. One can use `str` instead:
```python
>>> pd.options.future.infer_string = True
>>> df = pd.DataFrame(
... {
... "string": list("abc"),
... "int64": list(range(1, 4)),
... }
... )
>>> df.dtypes
string str
int64 int64
dtype: object
>>> df.select_dtypes(include=[object])
Empty DataFrame
Columns: []
Index: [0, 1, 2]
>>> df.select_dtypes(include=[str])
string
0 a
1 b
2 c
```
On the one hand, that is an "obvious" behaviour change as a consequence of the column now having a different dtype. But on the other hand, this will also break all code currently using `select_dtypes` to select string columns (and potentially silently, since it just no longer selects them).
**How to write compatible code?**
One can select both object and string dtypes, so that those columns are selected in both older and newer pandas. One gotcha is that `df.select_dtypes(include=[str])` is not allowed in pandas<=2.3 (_"string dtypes are not allowed, use 'object' instead"_), so one has to use "string" instead of "str" (although the default dtype is `str` ..). This will select opt-in nullable string columns as well, but also the new default str dtype:
```python
# this gives the same result in both infer_string=True or False
>>> df.select_dtypes(include=[object, "string"])
string
0 a
1 b
2 c
```
TODO: this should be added to the migration guide in https://pandas.pydata.org/docs/dev/user_guide/migration-3-strings.html#the-dtype-is-no-longer-object-dtype
**Can we make this upgrade experience smoother?**
Given that this will essentially break every use case of `select_dtypes` that involves selecting string columns (and given the fact this is a method, so we are more flexible compared to `ser.dtype == object`), I am wondering if we should provide some better upgrading behaviour. Some options:
- For now let `select_dtypes(include=[object])` keep selecting string columns as well, for backwards compatibility (and we can (later) add a warning we will stop doing that in the future)
- When a user does `select_dtypes(include=[object])` in pandas 3.0, and we see that there are `str` columns, raise a warning mentioning to the user they likely want to do `include=[str]` instead.
For both cases, it gets annoying if you _actually_ want to select `object` columns, because then you have a (false positive) warning that you can't really do anything about (except ignoring/suppressing)
And in any case, we should probably still add a warning to pandas 2.3 about this when the string mode is enabled (for if we do a 2.3.2 release)
|
[
"Strings"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Since a lot of systems likely rely on `select_dtypes(include=[object])` returning string columns, I think we should maintain backwards compatibility in 3.0, but emit a `FutureWarning` when `str` columns are implicitly selected. That avoids silent breakage while giving users time to update. In future versions, we can deprecate this behavior cleanly.",
"Briefly discussed this at the dev meeting as well, and general agreement with your suggestion. Let's keep the behaviour of `object` selecting string columns, and warn if that happens"
] |
3,247,701,580
| 61,915
|
BUG: Cannot interpret string dtype as a valid data type
|
closed
| 2025-07-21T08:31:47
| 2025-07-26T09:20:33
| 2025-07-26T09:20:33
|
https://github.com/pandas-dev/pandas/issues/61915
| true
| null | null |
jeremiedbb
| 3
|
### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
import numpy as np
df = pd.DataFrame([[1,2], [3,4]], columns=["first", "second"])
np.dtype(df.columns.dtype)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[10], line 1
----> 1 np.dtype(df.columns.dtype)
TypeError: Cannot interpret '<StringDtype(storage='python', na_value=nan)>' as a data type
```
### Issue Description
Hi, this issue was caught in scikit-learn's CI (for instance [here](https://dev.azure.com/scikit-learn/scikit-learn/_build/results?buildId=78378&view=logs&jobId=dfe99b15-50db-5d7b-b1e9-4105c42527cf&j=dfe99b15-50db-5d7b-b1e9-4105c42527cf&t=ef785ae2-496b-5b02-9f0e-07a6c3ab3081)) a couple of days ago and only involves the dev version of pandas. It looks like there was a recent change in pandas string dtypes that makes them no longer recognized as numpy dtypes.
ping @jorisvandenbossche, I saw that you merged several PRs about pandas strings last week :)
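A hedged sketch of a version-agnostic check, for anyone wanting to detect string columns without round-tripping through `np.dtype`:
```python
import pandas as pd

df = pd.DataFrame([[1, 2]], columns=["first", "second"])
dtype = df.columns.dtype
# pandas extension dtypes are not numpy dtypes, so avoid np.dtype(dtype);
# isinstance checks against pandas dtype classes keep working
print(isinstance(dtype, pd.StringDtype) or dtype == object)
```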
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 930c8a479d3e4644cb71de34770271f49f4862ff
python : 3.13.5
python-bits : 64
OS : Linux
OS-release : 6.8.0-60-generic
Version : #63~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Tue Apr 22 19:00:15 UTC 2
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : fr_FR.UTF-8
LOCALE : fr_FR.UTF-8
pandas : 3.0.0.dev0+2249.g930c8a479d
numpy : 2.3.1
dateutil : 2.9.0.post0
pip : 25.1.1
Cython : 3.1.2
sphinx : None
IPython : 9.4.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
bottleneck : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : None
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
psycopg2 : None
pymysql : None
pyarrow : None
pyiceberg : None
pyreadstat : None
pytest : 8.4.1
python-calamine : None
pytz : None
pyxlsb : None
s3fs : None
scipy : 1.16.0
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
qtpy : None
pyqt5 : None
</details>
|
[
"Strings"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"This is actually an intentional change (see https://pandas.pydata.org/docs/dev/whatsnew/v2.3.0.html#upcoming-changes-in-pandas-3-0), and with the default string dtype being a pandas extension dtype, it is then expected behaviour that you get this error in numpy functions that are expecting a `np.dtype`.\n\nIn the migration guide, there is a section about the consequence of the dtype no longer being object dtype (https://pandas.pydata.org/docs/dev/user_guide/migration-3-strings.html#the-dtype-is-no-longer-object-dtype), but probably we should make it more explicit there that it is also no longer a _numpy_ dtype object.\n\nWe already have several other data types in pandas that are pandas extension dtypes (eg categorical), so in general sklearn is already equipped to handle those, but I certainly expect there will be some corner cases where this is not yet handled (and specifically the error in the column transformer is for the dtype of the `columns`, where it is currently much less likely to have a pandas dtype).\n\nWill open a PR with some fixes on the sklearn side",
"> In the migration guide, there is a section about the consequence of the dtype no longer being object dtype (https://pandas.pydata.org/docs/dev/user_guide/migration-3-strings.html#the-dtype-is-no-longer-object-dtype), but probably we should make it more explicit there that it is also no longer a _numpy_ dtype object.\n\n-> https://github.com/pandas-dev/pandas/pull/61921",
"Going to close this as there is further no action to be taken on the pandas side (apart from better documenting, which is one in https://github.com/pandas-dev/pandas/pull/61921)"
] |
3,246,738,525
| 61,914
|
BUG: Converters not running when reading a dataframe from excel using calamine engine
|
open
| 2025-07-20T22:18:12
| 2025-07-21T16:09:49
| null |
https://github.com/pandas-dev/pandas/issues/61914
| true
| null | null |
ramadanomar
| 2
|
### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
#### Warning: an external xlsx file is downloaded. I'm not sure how to provide a minimal reproducible example without an Excel file.
```python
import pandas as pd
converters = {
0: lambda v: str(v),
1: lambda v: str(v),
}
df = pd.read_excel("https://finlearn.ro/wp-content/uploads/2025/07/bugreport.xlsx",
header=None,
converters=converters,
engine="calamine")
df.iloc[130:140]
```
### Issue Description
Both converters and dtype fields are ignored when reading an excel file using the `python-calamine` engine
### Expected Behavior
Converters / data types should be preserved.
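As a possible interim workaround (a sketch, not the intended fix): cast after reading so every cell ends up as a string regardless of how the engine parsed it:
```python
import pandas as pd

df = pd.read_excel("bugreport.xlsx", header=None, engine="calamine")
# fill missing values first so NaN does not become the literal string "nan"
df = df.fillna("").astype(str)
```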
### Installed Versions
<details>
/usr/local/lib/python3.11/dist-packages/_distutils_hack/__init__.py:31: UserWarning: Setuptools is replacing distutils. Support for replacing an already imported distutils is deprecated. In the future, this condition will fail. Register concerns at https://github.com/pypa/setuptools/issues/new?template=distutils-deprecation.yml
warnings.warn(
INSTALLED VERSIONS
------------------
commit : d9cdd2ee5a58015ef6f4d15c7226110c9aab8140
python : 3.11.13.final.0
python-bits : 64
OS : Linux
OS-release : 6.1.123+
Version : #1 SMP PREEMPT_DYNAMIC Sun Mar 30 16:01:29 UTC 2025
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : en_US.UTF-8
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.2
numpy : 2.0.2
pytz : 2025.2
dateutil : 2.9.0.post0
setuptools : 75.2.0
pip : 24.1.2
Cython : 3.0.12
pytest : 8.3.5
hypothesis : None
sphinx : 8.2.3
blosc : None
feather : None
xlsxwriter : None
lxml.etree : 5.4.0
html5lib : 1.1
pymysql : None
psycopg2 : 2.9.10
jinja2 : 3.1.6
IPython : 7.34.0
pandas_datareader : 0.10.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.13.4
bottleneck : 1.4.2
dataframe-api-compat : None
fastparquet : None
fsspec : 2025.3.2
gcsfs : 2025.3.2
matplotlib : 3.10.0
numba : 0.60.0
numexpr : 2.11.0
odfpy : None
openpyxl : 3.1.5
pandas_gbq : 0.29.2
pyarrow : 18.1.0
pyreadstat : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.15.3
sqlalchemy : 2.0.41
tables : 3.10.2
tabulate : 0.9.0
xarray : 2025.3.1
xlrd : 2.0.2
zstandard : 0.23.0
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
<details>
INSTALLED VERSIONS
------------------
commit : c888af6d0bb674932007623c0867e1fbd4bdc2c6
python : 3.11.10
python-bits : 64
OS : Linux
OS-release : 6.14.0-24-generic
Version : #24~24.04.3-Ubuntu SMP PREEMPT_DYNAMIC Mon Jul 7 16:39:17 UTC 2
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.3.1
numpy : 2.3.1
pytz : 2025.2
dateutil : 2.9.0.post0
pip : 24.0
Cython : None
sphinx : None
IPython : 9.4.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : None
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : 3.1.5
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : None
pyreadstat : None
pytest : 8.4.1
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : 2.0.2
xlsxwriter : None
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
|
[
"Bug",
"Needs Triage"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"@ramadanomar Converters are not applied to missing values (NaN in this case), so the data from _read_excel_ converts non-missing values to strings but does not change the dtype of NaN. IMO this is the expected behavior in pandas.\n\n```python\nimport pandas as pd\n\nconverters = {\n 0: lambda v: str(v),\n 1: lambda v: str(v),\n}\n\ndf = pd.read_excel(\"https://finlearn.ro/wp-content/uploads/2025/07/bugreport.xlsx\",\n header=None,\n converters=converters,\n engine=\"calamine\")\ndf.iloc[130:140].map(type)\n```\nOutput:\n\n0 | 1\n-- | --\n<class 'str'> | <class 'str'>\n<class 'float'> | <class 'str'>\n<class 'float'> | <class 'str'>\n<class 'float'> | <class 'str'>\n<class 'str'> | <class 'str'>\n<class 'float'> | <class 'str'>\n<class 'float'> | <class 'str'>\n<class 'str'> | <class 'str'>\n<class 'str'> | <class 'float'>\n<class 'str'> | <class 'float'>\n\n",
"It's about not applying the convertors at all:\n\n<img width=\"437\" height=\"156\" alt=\"Image\" src=\"https://github.com/user-attachments/assets/1018749b-3f7f-411a-843a-3d4f5d79704b\" />\n\nThis row is being read as \n\n<img width=\"663\" height=\"885\" alt=\"Image\" src=\"https://github.com/user-attachments/assets/3d453da6-76a5-4801-a865-2a7cbb42b81d\" />\n\nI want to force pandas to read the cell value as a string, without any room for interpretation. If convertors or dtype is not the intended way of achiving this result let me know.\n\nThanks for the help!"
] |
3,246,261,681
| 61,913
|
CI: properly enable the string dtype also for custom CI builds
|
closed
| 2025-07-20T10:57:02
| 2025-07-21T19:43:58
| 2025-07-21T16:34:21
|
https://github.com/pandas-dev/pandas/pull/61913
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61913
|
https://github.com/pandas-dev/pandas/pull/61913
|
jorisvandenbossche
| 1
|
Small follow-up on https://github.com/pandas-dev/pandas/pull/61722, where I forgot to set the default of the env variable to 1 if not specified in the matrix (I included it in the main matrix, but then all custom builds that were explicitly listed in the `include:` section of the matrix would not have the variable, and therefore still defaulted to turning off the string dtype)
|
[
"CI",
"Strings"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks @jorisvandenbossche "
] |
3,246,237,183
| 61,912
|
DOC: mention .attrs are preserved in Parquet IO for pyarrow engine
|
open
| 2025-07-20T10:19:28
| 2025-08-06T12:16:40
| null |
https://github.com/pandas-dev/pandas/pull/61912
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61912
|
https://github.com/pandas-dev/pandas/pull/61912
|
imramraja
| 3
|
This PR adds documentation to `DataFrame.to_parquet` and `pandas.read_parquet` highlighting that `DataFrame.attrs` are preserved when using the "pyarrow" engine.
This behavior is already implemented in `pandas/io/parquet/pyarrow.py`, but was undocumented. This PR improves discoverability for users.
- [x] Added `Notes` section in both docstrings
- [ ] (Optional) Will add test in follow-up if needed
First-time contributor 😊
|
[
"Docs",
"IO Parquet"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks for the PR, but I don't think we need to necessarily document this so closing. Happy to have your contributions on other issues labeled `good first issue`",
"> I don't think we need to necessarily document this\r\n\r\nGiven that this is a behaviour that is rather unique to the parquet format (most other IO methods in pandas don't preserve attrs, I think?), and also something that differs between both engines, this seems worth mentioning in the docs?",
"I’m fine with this if the author can get the CI green"
] |
3,246,075,387
| 61,911
|
DOC: fix doctests for datetimelike.py files for the new string dtype
|
closed
| 2025-07-20T06:50:55
| 2025-07-20T13:59:37
| 2025-07-20T11:03:14
|
https://github.com/pandas-dev/pandas/pull/61911
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61911
|
https://github.com/pandas-dev/pandas/pull/61911
|
DevastatingRPG
| 0
|
~~- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)~~
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
~~- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.~~
~~- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.~~
Part of #61886
|
[
"Docs",
"Strings"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[] |
3,246,036,129
| 61,910
|
BUG: .describe() doesn't work for EAs #61707
|
open
| 2025-07-20T06:10:36
| 2025-08-20T16:57:21
| null |
https://github.com/pandas-dev/pandas/pull/61910
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61910
|
https://github.com/pandas-dev/pandas/pull/61910
|
kernelism
| 3
|
This PR fixes a bug where Series.describe() fails on certain `ExtensionArray` dtypes such as `pint[kg]`, due to attempting to cast the result to `Float64Dtype`. This is because some of the produced statistics are not castable to float, which raises errors like DimensionalityError.
We now avoid forcing a Float64Dtype return dtype when the EA’s scalar values cannot be safely cast. Instead:
If the EA produces outputs with mixed dtypes, the result is returned with `dtype=None`.
- [x] closes #61707
- [x] Adds a regression test.
- [x] pre-commit checks passed
- [x] Adds type annotations
- [x] Adds a whatsnew entry
@jbrockmendel This is a new PR with the comments from [61760](https://github.com/pandas-dev/pandas/pull/61760) addressed. Apologies for the duplication.
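A sketch of the failing case this PR targets (assumes the third-party pint-pandas package is installed):
```python
import pandas as pd
import pint_pandas  # registers the pint[...] extension dtype

ser = pd.Series([1.0, 2.0, 3.0], dtype="pint[kg]")
# previously raised DimensionalityError while casting the mixed
# describe() statistics to Float64Dtype
print(ser.describe())
```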
|
[
"Bug",
"ExtensionArray"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Looks like this breaks some other tests",
"I think the EA needs an attribute to tell describe what the dtype should be to avoid using object",
"> I think the EA needs an attribute to tell describe what the dtype should be to avoid using object\r\n\r\ni suspect this _may_ be a use case for the just-introduced EA._cast_pointwise_result"
] |
3,245,538,156
| 61,909
|
BUG: fix fill value for grouped sum in case of unobserved categories for string dtype (empty string instead of 0)
|
closed
| 2025-07-19T18:28:34
| 2025-08-21T07:18:12
| 2025-07-22T07:20:02
|
https://github.com/pandas-dev/pandas/pull/61909
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61909
|
https://github.com/pandas-dev/pandas/pull/61909
|
jorisvandenbossche
| 2
|
I ran into one more case where the sum of empty / all-NaN string data uses 0 instead of the empty string (https://github.com/pandas-dev/pandas/issues/60229), specifically when effectively introducing empty groups with categorical data with observed=False.
Follow-up on https://github.com/pandas-dev/pandas/pull/60936
Passing `is_string` through several layers is a bit annoying, but it is effectively needed to, for now, only change this for the string dtype and not for object dtype in general (which is what we did for now in the other PR related to this).
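A sketch of the behaviour this fixes (hypothetical example; the unobserved category "z" introduces an empty group):
```python
import pandas as pd

ser = pd.Series(["a", "b"], dtype="str")
cats = pd.Categorical(["x", "y"], categories=["x", "y", "z"])
# the empty group for "z" should sum to "" for string dtype, not 0
print(ser.groupby(cats, observed=False).sum())
```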
|
[
"Bug",
"Groupby",
"Strings"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Owee, I'm MrMeeseeks, Look at me.\n\nThere seem to be a conflict, please backport manually. Here are approximate instructions:\n\n1. Checkout backport branch and update it.\n\n```\ngit checkout 2.3.x\ngit pull\n```\n\n2. Cherry pick the first parent branch of the this PR on top of the older branch:\n```\ngit cherry-pick -x -m1 27928edc61f5b01e933036a99549636425e5a557\n```\n\n3. You will likely have some merge/cherry-pick conflict here, fix them and commit:\n\n```\ngit commit -am 'Backport PR #61909: BUG: fix fill value for gouped sum in case of unobserved categories for string dtype (empty string instead of 0)'\n```\n\n4. Push to a named branch:\n\n```\ngit push YOURFORK 2.3.x:auto-backport-of-pr-61909-on-2.3.x\n```\n\n5. Create a PR against branch 2.3.x, I would have named this PR:\n\n> \"Backport PR #61909 on branch 2.3.x (BUG: fix fill value for gouped sum in case of unobserved categories for string dtype (empty string instead of 0))\"\n\nAnd apply the correct labels and milestones.\n\nCongratulations — you did some good work! Hopefully your backport PR will be tested by the continuous integration and merged soon!\n\nRemember to remove the `Still Needs Manual Backport` label once the PR gets merged.\n\nIf these instructions are inaccurate, feel free to [suggest an improvement](https://github.com/MeeseeksBox/MeeseeksDev).\n ",
"Manual backport -> https://github.com/pandas-dev/pandas/pull/61963"
] |
3,245,525,510
| 61,908
|
DOC: fix doctests for pandas/core/strings/accessor.py for new string dtype
|
closed
| 2025-07-19T18:17:39
| 2025-07-26T09:05:40
| 2025-07-25T13:28:58
|
https://github.com/pandas-dev/pandas/pull/61908
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61908
|
https://github.com/pandas-dev/pandas/pull/61908
|
arthurlw
| 5
|
- [ ] ~closes #xxxx (Replace xxxx with the GitHub issue number)~
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] ~Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.~
- [ ] ~Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.~
Part of #61886
|
[
"Docs"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"cc @jorisvandenbossche \r\n\r\nI noticed that some of the docstrings highlight differences between returning `NaN` and `False` for boolean operations (e.g., [here](https://github.com/arthurlw/pandas/blob/1613ce992f0ab4a34ab4b0db1a421efd63b2fd54/pandas/core/strings/accessor.py#L1296)), which are now outdated. These should be updated, though it might be better to open a separate issue for that.\r\n\r\nOther files may have similar cases, though I haven't done a full check yet.",
"> I noticed that some of the docstrings highlight differences between returning `NaN` and `False` for boolean operations (e.g., [here](https://github.com/arthurlw/pandas/blob/1613ce992f0ab4a34ab4b0db1a421efd63b2fd54/pandas/core/strings/accessor.py#L1296)), which are now outdated. These should be updated, though it might be better to open a separate issue for that.\r\n\r\nGood catch. That was an intentional change, see https://github.com/pandas-dev/pandas/issues/54805 / https://github.com/pandas-dev/pandas/pull/59616, so it is fine to update the docstrings here while updating them to use the string dtype.",
"@arthurlw thanks for the PR!",
"Going to merge this, so we can enable the doctests again. ~Will open an issue for the remaining follow-up task~ (not actually an issue, see inline comment above)",
"Thanks @arthurlw!"
] |
3,245,497,141
| 61,907
|
DOC: fix doctests for pandas/core/generic.py for new string dtype
|
closed
| 2025-07-19T17:49:17
| 2025-07-21T11:29:09
| 2025-07-21T11:10:44
|
https://github.com/pandas-dev/pandas/pull/61907
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61907
|
https://github.com/pandas-dev/pandas/pull/61907
|
arthurlw
| 1
|
- [ ] ~closes #xxxx (Replace xxxx with the GitHub issue number)~
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] ~Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.~
- [ ] ~Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.~
Part of #61886
|
[
"Docs",
"Strings"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks!"
] |
3,245,171,453
| 61,906
|
ENH: Make attributes saved by default
|
closed
| 2025-07-19T10:03:08
| 2025-07-21T03:04:33
| 2025-07-21T03:02:18
|
https://github.com/pandas-dev/pandas/issues/61906
| true
| null | null |
acampove
| 3
|
### Feature Type
- [x] Adding new functionality to pandas
- [ ] Changing existing functionality in pandas
- [ ] Removing existing functionality in pandas
### Problem Description
Hi,
When doing
```python
df.attrs['array'] = array
df.to_parquet('file.parquet')
```
I see that I am not saving the array, could this be implemented?
Cheers
### Feature Description
The code above would save `array` and load it back when loading the parquet file into a dataframe.
### Alternative Solutions
I guess doing it myself separately with some helper function
### Additional Context
_No response_
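For the record, a sketch of what already works with the pyarrow engine in pandas >= 2.1 (see the resolution in the comments):
```python
import pandas as pd

df = pd.DataFrame({"x": [1, 2]})
df.attrs["array"] = [1, 2, 3]
df.to_parquet("file.parquet")  # pyarrow engine stores .attrs in the metadata
print(pd.read_parquet("file.parquet").attrs)  # {'array': [1, 2, 3]}
```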
|
[
"Enhancement",
"IO Parquet",
"metadata",
"Needs Triage"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Hi! I’d like to work on this issue as my first contribution to pandas. Please assign it to me.\n\nI’ve already started exploring the codebase and implemented a prototype that stores `DataFrame.attrs` in Parquet file metadata using `pyarrow`. I plan to support restoring it in `read_parquet()` as well.\nLooking forward to your feedback and guidance!\n",
"> Hi! I’d like to work on this issue as my first contribution to pandas. Please assign it to me.\n> \n> I’ve already started exploring the codebase and implemented a prototype that stores `DataFrame.attrs` in Parquet file metadata using `pyarrow`. I plan to support restoring it in `read_parquet()` as well. Looking forward to your feedback and guidance!\n\nIm not a pandas maintainer, but you might want to also implement it in other formats. One can save to JSON, CSV, etc. Saving the extra attributes to parquet should not be hard. However I am not sure if there is an easy maintainable way to put it in the other formats such that it does not break anything. The way I see this, the `attrs` are metadata and I would add a metadata field in the JSON file. For CSV, I do not know how it can be done.",
"xref #54321 \n\nHi thanks for raising this! Saving `.attrs` attributes to parquet files is already supported in pandas 2.1.0 and above (See the issue linked above).\n\nClosing this for now, but feel free to open another issue if you still encounter issues!"
] |
3,245,115,855
| 61,905
|
DOC: fix doctests for pandas/core/base.py for new string dtype
|
closed
| 2025-07-19T08:38:03
| 2025-07-22T22:55:28
| 2025-07-22T22:55:27
|
https://github.com/pandas-dev/pandas/pull/61905
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61905
|
https://github.com/pandas-dev/pandas/pull/61905
|
paulbehrisch
| 0
|
This is an attempt to fix the failing doctests for `pandas/core/base.py` referenced in issue #61886
Test run
```
================================================================================================================================ test session starts =================================================================================================================================
platform darwin -- Python 3.13.5, pytest-8.4.1, pluggy-1.6.0
PyQt5 5.15.11 -- Qt runtime 5.15.17 -- Qt compiled 5.15.14
rootdir: /Users/paul/code/a/pandas
configfile: pyproject.toml
plugins: anyio-4.9.0, xdist-3.8.0, localserver-0.9.0.post0, cov-6.2.1, qt-4.5.0, hypothesis-6.135.32, cython-0.3.1
collected 21 items
pandas/core/base.py .....................
------------------------------------------------------------------------------------------------------------ generated xml file: /Users/paul/code/a/pandas/test-data.xml -------------------------------------------------------------------------------------------------------------
================================================================================================================================ slowest 30 durations ================================================================================================================================
(30 durations < 0.005s hidden. Use -vv to show these durations.)
================================================================================================================================= 21 passed in 0.03s =================================================================================================================================
```
|
[
"Docs",
"Strings"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[] |
3,245,006,696
| 61,904
|
DOC: Clarify to_numeric behavior for numeric dtypes
|
closed
| 2025-07-19T06:46:56
| 2025-07-21T20:53:56
| 2025-07-21T20:53:49
|
https://github.com/pandas-dev/pandas/pull/61904
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61904
|
https://github.com/pandas-dev/pandas/pull/61904
|
chilin0525
| 1
|
- [x] closes #61903
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
[
"Docs",
"Downcasting"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks @chilin0525!"
] |
3,244,997,539
| 61,903
|
DOC: Clarify to_numeric behavior for numeric dtypes
|
closed
| 2025-07-19T06:39:17
| 2025-07-21T20:53:50
| 2025-07-21T20:53:50
|
https://github.com/pandas-dev/pandas/issues/61903
| true
| null | null |
chilin0525
| 2
|
### Pandas version checks
- [x] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
https://pandas.pydata.org/docs/dev/reference/api/pandas.to_numeric.html#pandas-to-numeric
### Documentation problem
The docstring for the `to_numeric` function needs to be improved for clarity and accuracy. The current documentation states, "The default return dtype is float64 or int64," which can be misleading. This statement doesn't account for cases where the input data is already of a numeric ExtensionDtype (e.g., Int32, Float32, or Arrow dtypes where `_is_numeric` is `True`). In these instances, `to_numeric` correctly preserves the original dtype rather than converting it, making the current documentation incomplete.
### Suggested fix for documentation
1. If the input is already of a numeric dtype, its dtype is preserved.
2. The conversion to a default float64 or int64 dtype primarily applies to non-numeric inputs.
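A short illustration of the two cases described above:
```python
import pandas as pd

# non-numeric input: converted to a default numpy dtype
print(pd.to_numeric(pd.Series(["1", "2"])).dtype)             # int64
# already-numeric extension dtype: preserved, not converted
print(pd.to_numeric(pd.Series([1, 2], dtype="Int32")).dtype)  # Int32
```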
|
[
"Docs",
"Downcasting"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Related to https://github.com/pandas-dev/pandas/issues/61641#issuecomment-3013740740",
"take"
] |
3,244,994,973
| 61,902
|
BUG: Raise ParserWarning when on_bad_lines is callable and index_col is set (GH#61882)
|
open
| 2025-07-19T06:36:25
| 2025-08-22T00:08:08
| null |
https://github.com/pandas-dev/pandas/pull/61902
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61902
|
https://github.com/pandas-dev/pandas/pull/61902
|
AbhijitPatill
| 4
|
This PR fixes issue #61882. It raises a `ParserWarning` when `on_bad_lines` is a callable and `index_col` is set, since this combination can lead to unexpected behavior.
A test was added in `test_read_csv_warn.py` to ensure the warning is raised appropriately.
- [x] closes #61882
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests)
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
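A hedged sketch of the intended behaviour (hypothetical data; the callable form of `on_bad_lines` requires the python engine):
```python
from io import StringIO
import pandas as pd

data = "a,b,c\n1,2,3\n4,5,6,7\n"
# after this change, combining a callable on_bad_lines with index_col
# should emit a ParserWarning about possible row misalignment
df = pd.read_csv(
    StringIO(data),
    engine="python",
    on_bad_lines=lambda fields: fields[:3],  # hypothetical handler
    index_col=0,
)
```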
|
[
"Stale"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Hi, this PR fixes Issue #[61882]. All pre-commit checks pass locally. The CI failures appear unrelated to this change (macOS/Windows builds, Numpy dev, etc.). Please let me know if anything else is needed.\r\n\r\n",
"@AbhijitPatill The PR includes over 5,000 file changes — could you double-check if the branch was created from main correctly?\r\n\r\nAlso, I suggest waiting for confirmation from a core member before opening a PR for any issue.",
"> This PR fixes issue #61882.\r\n\r\nthat is a merged PR relating to datetime. What's the issue with csv parsing?",
"This pull request is stale because it has been open for thirty days with no activity. Please [update](https://pandas.pydata.org/pandas-docs/stable/development/contributing.html#updating-your-pull-request) and respond to this comment if you're still interested in working on this."
] |
3,243,915,646
| 61,901
|
DEPR: maybe_infer_ndim
|
closed
| 2025-07-18T17:41:21
| 2025-07-21T17:09:43
| 2025-07-21T16:44:14
|
https://github.com/pandas-dev/pandas/pull/61901
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61901
|
https://github.com/pandas-dev/pandas/pull/61901
|
jbrockmendel
| 1
|
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
[
"Deprecate"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks @jbrockmendel "
] |
3,243,872,893
| 61,900
|
BUG: fix to_json() with JSON Table Schema work correctly with string dtype
|
closed
| 2025-07-18T17:23:32
| 2025-07-26T19:09:03
| 2025-07-26T11:27:37
|
https://github.com/pandas-dev/pandas/pull/61900
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61900
|
https://github.com/pandas-dev/pandas/pull/61900
|
khemkaran10
| 3
|
Fixes: #61889
To ensure consistent behavior for to_json() when dtype="str" is used, it will now output _"type": "string"_ instead of _"type": "any"_.
Before Fix:
```
>>> pd.Series(["a", "b", None], dtype="str").to_json(orient="table", index=False)
'{"schema":{"fields":[{"name":"values","type":"any","extDtype":"str"}],"pandas_version":"1.4.0"},"data":[{"values":"a"},{"values":"b"},{"values":null}]}'
```
After Fix:
```
>>> pd.Series(["a", "b", None], dtype="str").to_json(orient="table", index=False)
'{"schema":{"fields":[{"name":"values","type":"string","extDtype":"str"}],"pandas_version":"1.4.0"},"data":[{"values":"a"},{"values":"b"},{"values":null}]}'
```
- [x] closes #61889
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
[
"Bug",
"IO JSON",
"Strings"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Owee, I'm MrMeeseeks, Look at me.\n\nThere seem to be a conflict, please backport manually. Here are approximate instructions:\n\n1. Checkout backport branch and update it.\n\n```\ngit checkout 2.3.x\ngit pull\n```\n\n2. Cherry pick the first parent branch of the this PR on top of the older branch:\n```\ngit cherry-pick -x -m1 a067fff92540118cd784349b6e79021087347768\n```\n\n3. You will likely have some merge/cherry-pick conflict here, fix them and commit:\n\n```\ngit commit -am 'Backport PR #61900: BUG: fix to_json() with JSON Table Schema work correctly with string dtype'\n```\n\n4. Push to a named branch:\n\n```\ngit push YOURFORK 2.3.x:auto-backport-of-pr-61900-on-2.3.x\n```\n\n5. Create a PR against branch 2.3.x, I would have named this PR:\n\n> \"Backport PR #61900 on branch 2.3.x (BUG: fix to_json() with JSON Table Schema work correctly with string dtype)\"\n\nAnd apply the correct labels and milestones.\n\nCongratulations — you did some good work! Hopefully your backport PR will be tested by the continuous integration and merged soon!\n\nRemember to remove the `Still Needs Manual Backport` label once the PR gets merged.\n\nIf these instructions are inaccurate, feel free to [suggest an improvement](https://github.com/MeeseeksBox/MeeseeksDev).\n ",
"Thanks @khemkaran10!",
"Manual backport -> https://github.com/pandas-dev/pandas/pull/61965"
] |
3,243,815,941
| 61,899
|
FIX: Correct clip behavior test for lower > upper case
|
closed
| 2025-07-18T16:58:32
| 2025-07-19T03:35:22
| 2025-07-19T03:35:22
|
https://github.com/pandas-dev/pandas/pull/61899
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61899
|
https://github.com/pandas-dev/pandas/pull/61899
|
utsab345
| 0
|
## Description:
This PR adds a test case to verify the behavior of DataFrame.clip() when the lower bound is greater than the upper bound. The test confirms that the method applies clipping boundaries as-is without swapping them, which results in partially clipped values consistent with current pandas behavior.
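A minimal sketch of the case under test (the values here are illustrative, not taken from the test itself):
```python
import pandas as pd

df = pd.DataFrame({"a": [1, 5, 10]})
# lower=8 > upper=3: the bounds are applied as-is rather than swapped,
# so the result follows the order in which pandas applies the two comparisons
print(df.clip(lower=8, upper=3))
```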
## Issue:
This fixes the test logic related to issue [#61856](https://github.com/pandas-dev/pandas/issues/61856)
|
[] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[] |
3,243,700,044
| 61,898
|
BUG: `AttributeError` in `pd.eval()` when calling attribute after binary operation
|
closed
| 2025-07-18T16:08:05
| 2025-07-18T16:13:34
| 2025-07-18T16:13:34
|
https://github.com/pandas-dev/pandas/issues/61898
| true
| null | null |
ryantam626
| 2
|
### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
series1 = pd.Series([1,2,3,4,5])
series2 = pd.Series([2,3,5,1,2])
pd.eval(
"(a / b).cumsum()",
local_dict={"a": series1, "b": series2}
)
```
### Issue Description
```
AttributeError: 'BinOp' object has no attribute 'value'
```
is raised.
### Expected Behavior
```
0 0.500000
1 1.166667
2 1.766667
3 5.766667
4 8.266667
dtype: float64
```
The call should yield this result.
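As a possible workaround until this is fixed, here is a sketch that splits the attribute call out of the evaluated expression:
```python
import pandas as pd

series1 = pd.Series([1, 2, 3, 4, 5])
series2 = pd.Series([2, 3, 5, 1, 2])
# evaluate only the binary op with pd.eval, then call the method in Python
result = pd.eval("a / b", local_dict={"a": series1, "b": series2}).cumsum()
```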
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : c888af6d0bb674932007623c0867e1fbd4bdc2c6
python : 3.11.10
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.19045
machine : AMD64
processor : Intel64 Family 6 Model 183 Stepping 1, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : English_United States.1252
pandas : 2.3.1
numpy : 2.3.1
pytz : 2025.2
dateutil : 2.9.0.post0
pip : 25.1.1
Cython : None
sphinx : None
IPython : 9.4.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : None
lxml.etree : None
matplotlib : None
numba : None
numexpr : 2.11.0
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : None
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
|
[
"Bug",
"Needs Triage"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Also an issue for \n```\nimport pandas as pd\n\n\nseries1 = pd.Series([0,1,0,2,3,4,5],dtype=bool)\n\npd.eval(\n \"(~a).cumsum()\",\n local_dict={\"a\": series1}\n)\n```\n\n\n```\nAttributeError: 'UnaryOp' object has no attribute 'value'\n```\nraised \n\nExcepted\n```\n0 1\n1 1\n2 2\n3 2\n4 2\n5 2\n6 2\ndtype: int32\n```",
"Actually fixed in a82307f15630690fc288afeed27c549a028d94e9 - sorry for the noise."
] |
3,243,647,605
| 61,897
|
DEPR: PeriodDtype.freq
|
open
| 2025-07-18T15:47:52
| 2025-08-21T00:07:09
| null |
https://github.com/pandas-dev/pandas/pull/61897
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61897
|
https://github.com/pandas-dev/pandas/pull/61897
|
jbrockmendel
| 6
|
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Motivated by #47227.
First (and by a wide margin smallest) of several PRs to deprecate using `freq` to refer to a Period's resolution. If we go down this route, the other PRs will incrementally do the same for `freq` in the Period constructor, then `freq` as a Period attribute, then the same for PeriodArray/Index. We shouldn't merge this until we're agreed to do this across the board.
This uses "unit" as the replacement, as that is also what we use for Timestamp/Timedelta. _But_ while it has roughly the same semantic meaning, Period.unit does return a different type from Timestamp.unit, which might be a reason to use a third term? Also "unit" is overloaded in to_datetime/to_timedelta. So while "unit" is the best idea I've had, I'll understand if people want to bikeshed.
cc @mroeschke @jorisvandenbossche
|
[
"Period",
"Deprecate",
"Stale"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Could do \"offset\" for the BaseOffset object and \"unit\" for the string?",
"> Period.unit does return a different type from Timestamp.unit, which might be a reason to use a third term?\r\n\r\nI think this is a good reason to use a different term. I think \"offset\" or \"interval\" would be good terms",
"> I think this is a good reason to use a different term. I think \"offset\" or \"interval\" would be good terms\r\n\r\nI'd like to avoid \"interval\" since that already has a meaning in pandas. offset im happy with.\r\n\r\n@jorisvandenbossche are you on board with this plan?",
"Agreed that we should ideally avoid overloading `unit`.\r\n\r\nWhile I definitely understand the issue for DatetimeIndex vs PeriodIndex `freq` attribute, and that it would be nice that those two cases would have the same meaning, I am not entirely sure that I feel that this is worth deprecating `freq` in all the period-related places (and the code churn for user of it). Certainly if we think the alternative we can come up with is not necessarily a better name (I don't know if that is the case though, `freq` is also not the ideal term) \r\n(if we do this, IMO we could start with adding the alternatives before actually deprecating)\r\n\r\nAn alternative would be to add a new method to DTI/PI to avoid the conflict? \r\n\r\n---\r\n\r\nOn alternative names for Period's `freq`: while \"offset\" is an obvious choice given that we use this term in our implementation, and therefore I am also used to hearing this term, I am not actually sure this is a very clear term for newcomers? Outside of the context of pandas, I wouldn't directly think about those kind of periods when hearing \"offset\" (and for \"time offset\" I would mostly thinking about the +/-HH:SS offset for tz ware timestamps)\r\n\r\nLooking at R's lubridate / Java's JodaTime, they have the concept of durations, periods and intervals, where the duration is essentially our timedelta (absolute length of time in seconds), a period is the calendar time length, and interval is span with specific start/stop instant (so like our Interval but specific to timestamps). \r\nSo in that sense, you could say that the `pandas.Period` represents a \"period\" (day, month, etc) at a certain point in time. But using `pd.Period.period` is probably not going to be less confusing ..? \r\n\r\nArrow (and some SQL systems) calls the period from JodaTime/lubridate an \"interval\" , so that could also be an option as mentioned above, but indeed that then conflicts with our `pd.Interval` ..\r\n\r\nAssume we would add a pandas extension dtype for Arrow's interval type of data (i.e. what you can now store as `pd.ArrowDtype(pa.month_day_nano_interval())`), how would we call that?\r\n",
"I think I am also coming around to liking `unit` .. (\"the unit of time that each period represents\") \r\nWhile it is annoying that the _type_ is different (str vs object), it does match in meaning. Timestamps just represents an instant point in time and only supports small (fixed-size) units, and periods represents the span for the same units, and then additionally also supports larger (and relative) units. \r\n\r\nBut as Brock mentioned in the top post, `unit` is then also already overloaded in `to_datetime`\r\n\r\nYet another idea: \"span\", indicating the time span of the Period (although I don't know if people's first connotation for \"time span\" is the absolute vs the relative version .., while here it would of course represent a relative time span)\r\n\r\n(sorry, not being very helpful here in coming closer to a decision ..)",
"This pull request is stale because it has been open for thirty days with no activity. Please [update](https://pandas.pydata.org/pandas-docs/stable/development/contributing.html#updating-your-pull-request) and respond to this comment if you're still interested in working on this."
] |
3,242,624,544
| 61,896
|
BUG: `rank` gives incorrect outputs on Windows with PyArrow backend
|
open
| 2025-07-18T09:46:04
| 2025-07-19T17:13:31
| null |
https://github.com/pandas-dev/pandas/issues/61896
| true
| null | null |
MarcoGorelli
| 3
|
### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
df = pd.DataFrame({'a': [3,6,1,1,None,6]}, dtype='Int64[pyarrow]')
df['a_mask'] = df['a'].isna()
print(df.groupby('a_mask').rank(method='min'))
```
### Issue Description
On Windows, this outputs
```
a
0 4.0
1 1.0
2 1.0
3 1.0
4 <NA>
5 5.0
```
### Expected Behavior
On Linux, it outputs
```
a
0 3.0
1 4.0
2 1.0
3 1.0
4 <NA>
5 4.0
```
### Installed Versions
Windows:
<details>
INSTALLED VERSIONS
------------------
commit : c888af6d0bb674932007623c0867e1fbd4bdc2c6
python : 3.11.9
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.26100
machine : AMD64
processor : Intel64 Family 6 Model 141 Stepping 1, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : English_United Kingdom.1252
pandas : 2.3.1
numpy : 2.3.1
pytz : 2025.2
dateutil : 2.9.0.post0
pip : None
Cython : None
sphinx : None
IPython : 9.4.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : 6.135.32
gcsfs : None
jinja2 : 3.1.6
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 21.0.0
pyreadstat : None
pytest : 8.4.1
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
Linux:
<details>
INSTALLED VERSIONS
------------------
commit : c888af6d0bb674932007623c0867e1fbd4bdc2c6
python : 3.12.8
python-bits : 64
OS : Linux
OS-release : 6.6.87.2-microsoft-standard-WSL2
Version : #1 SMP PREEMPT_DYNAMIC Thu Jun 5 18:30:46 UTC 2025
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : C.UTF-8
LOCALE : C.UTF-8
pandas : 2.3.1
numpy : 2.3.1
pytz : 2025.2
dateutil : 2.9.0.post0
pip : None
Cython : None
sphinx : None
IPython : 9.1.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : 2025.7.0
html5lib : None
hypothesis : 6.135.32
gcsfs : None
jinja2 : 3.1.6
lxml.etree : None
matplotlib : 3.10.3
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 21.0.0
pyreadstat : None
pytest : 8.4.1
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.16.0
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
|
[
"Bug",
"Groupby",
"Upstream issue",
"Transformations"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Note that downgrading PyArrow to version 20 resolves this\n\nI think this may be related to the PyArrow 21 release from 9 hours ago? https://pypi.org/project/pyarrow/21.0.0/",
"Took a quick look over the changelog for PyArrow 21, nothing jumped out. It'd probably be helpful to narrow down what functionality caused this. From Windows, are you able to post the output of:\n\n```python\ndf = pd.DataFrame({'a': [3,6,1,1,None,6]}, dtype='Int64[pyarrow]')\ndf['a_mask'] = df['a'].isna()\ngb = df.groupby('a_mask')\nprint(df['a_mask'])\nprint(gb._grouper.result_index)\nprint(gb._grouper.codes)\nprint(gb._grouper.groupings[0].codes)\n```\n\nIf the result_index / codes look right, my best guess it's something `ArrowExtensionArray._groupby_op`.",
"sure, here you go\n```python\n0 False\n1 False\n2 False\n3 False\n4 True\n5 False\nName: a_mask, dtype: bool\nIndex([False, True], dtype='bool', name='a_mask')\n[array([0, 0, 0, 0, 1, 0])]\n[0 0 0 0 1 0]\n```"
] |
3,241,608,318
| 61,895
|
Adding examples _typing.py
|
closed
| 2025-07-18T02:18:23
| 2025-07-18T16:40:14
| 2025-07-18T16:40:14
|
https://github.com/pandas-dev/pandas/pull/61895
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61895
|
https://github.com/pandas-dev/pandas/pull/61895
|
hiyuk1
| 1
|
Added simple examples to Renamer
|
[] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks for the PR but this doesn't add much. Closing, but feel free to tackle issues labeled `good first issue`"
] |
3,240,626,835
| 61,894
|
BUG: fix padding for string categories in CategoricalIndex repr
|
closed
| 2025-07-17T19:10:04
| 2025-07-19T10:34:36
| 2025-07-19T10:34:28
|
https://github.com/pandas-dev/pandas/pull/61894
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61894
|
https://github.com/pandas-dev/pandas/pull/61894
|
jorisvandenbossche
| 2
|
Resolving some xfails: getting back the same padding as we had before.
On current main with string dtype:
```
>>> pd.CategoricalIndex(["a", "bb", "ccc"] * 10)
CategoricalIndex([ 'a', 'bb', 'ccc', 'a', 'bb', 'ccc', 'a', 'bb',
'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc', 'a',
'bb', 'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc',
'a', 'bb', 'ccc', 'a', 'bb', 'ccc'],
categories=['a', 'bb', 'ccc'], ordered=False, dtype='category')
```
With this PR and what it looks like with object dtype:
```
>>> pd.CategoricalIndex(["a", "bb", "ccc"] * 10)
CategoricalIndex(['a', 'bb', 'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc', 'a',
'bb', 'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc', 'a', 'bb',
'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc'],
categories=['a', 'bb', 'ccc'], ordered=False, dtype='category')
```
|
[
"Bug",
"Output-Formatting"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"On second look, I retract my claim that the old padding is nicer. No preference.",
"I think the non-aligned version (so how it was before / is with object dtype) is better, especially for cases where your categories have different length. The example here only has 1 vs 3 characters, but for example:\r\n\r\n```\r\n# on main with str dtype / without this PR\r\n>>> pd.CategoricalIndex([\"low\", \"intermediate\", \"high\", \"low\"] * 10)\r\nCategoricalIndex([ 'low', 'intermediate', 'high',\r\n 'low', 'low', 'intermediate',\r\n 'high', 'low', 'low',\r\n 'intermediate', 'high', 'low',\r\n 'low', 'intermediate', 'high',\r\n 'low', 'low', 'intermediate',\r\n 'high', 'low', 'low',\r\n 'intermediate', 'high', 'low',\r\n 'low', 'intermediate', 'high',\r\n 'low', 'low', 'intermediate',\r\n 'high', 'low', 'low',\r\n 'intermediate', 'high', 'low',\r\n 'low', 'intermediate', 'high',\r\n 'low'],\r\n categories=[high, intermediate, low], ordered=False, dtype='category')\r\n\r\n```\r\n\r\nvs \r\n\r\n```\r\n# with object dtype / with str dtype with this PR\r\n>>> pd.CategoricalIndex([\"low\", \"intermediate\", \"high\", \"low\"] * 10)\r\nCategoricalIndex(['low', 'intermediate', 'high', 'low', 'low', 'intermediate',\r\n 'high', 'low', 'low', 'intermediate', 'high', 'low', 'low',\r\n 'intermediate', 'high', 'low', 'low', 'intermediate', 'high',\r\n 'low', 'low', 'intermediate', 'high', 'low', 'low',\r\n 'intermediate', 'high', 'low', 'low', 'intermediate', 'high',\r\n 'low', 'low', 'intermediate', 'high', 'low', 'low',\r\n 'intermediate', 'high', 'low'],\r\n categories=['high', 'intermediate', 'low'], ordered=False, dtype='category')\r\n```\r\n\r\nOf course this can also happen with non-strings like integers, but I think it is a lot less common "
] |
3,240,570,216
| 61,893
|
BUG: Fix concat dtype preservation through concat
|
closed
| 2025-07-17T18:53:02
| 2025-07-22T15:32:00
| 2025-07-22T15:32:00
|
https://github.com/pandas-dev/pandas/pull/61893
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61893
|
https://github.com/pandas-dev/pandas/pull/61893
|
adrienpacifico
| 2
|
- [x] closes #51362
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
---
Expected behavior:
- An empty categorical concatenated with a non-empty categorical should yield a categorical (see the sketch below).
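A minimal sketch of the behaviour this addresses (on current pandas the result dtype falls back to object):
```python
import pandas as pd

cat = pd.Series(pd.Categorical(["x", "y"]))
empty = pd.Series(pd.Categorical([]))
# expected: 'category'; currently this falls back to object dtype
print(pd.concat([empty, cat]).dtype)
```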
---
Linked issues:
#13524 see https://github.com/pandas-dev/pandas/issues/13524#issuecomment-233792648
https://github.com/pandas-dev/pandas/issues/39443
#14177 (seems a dev call agreed on putting this as an option in `concat`)
https://github.com/pandas-dev/pandas/pull/13767
|
[
"Bug",
"Reshaping",
"Categorical"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"As mentioned in #51362, this is not a bug, but we are willing to support this via a keyword in pd.concat.\r\n\r\nI think there are also some older related issues this would close. Can you make sure those are listed in the OP.",
"Closing this for now — I ran into more complexity than expected. Might revisit later with a cleaner approach."
] |
3,239,767,398
| 61,892
|
WEB: Remove Roadmap points pending a PDEP section from Roadmap
|
closed
| 2025-07-17T14:16:07
| 2025-07-22T17:11:00
| 2025-07-22T16:27:33
|
https://github.com/pandas-dev/pandas/pull/61892
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61892
|
https://github.com/pandas-dev/pandas/pull/61892
|
simonjayhawkins
| 2
|
closes https://github.com/pandas-dev/pandas/issues/60913
In https://github.com/pandas-dev/pandas/issues/51471#issuecomment-3075831365 @jbrockmendel wrote
> No evidence the ecosystem wants this, closing.
in https://pandas.pydata.org/about/roadmap.html#roadmap-points-pending-a-pdep
> pandas is in the process of moving roadmap points to PDEPs (implemented in August 2022). During the transition, some roadmap points will exist as PDEPs, while others will exist as sections below.
If we don't have actionable plans than can be written up as a PDEP, let's remove this from the roadmap.
---
In https://github.com/pandas-dev/pandas/pull/27478 @TomAugspurger wrote
> Do we want this? Roadmaps tend to go stale. How can we keep this up to date?
So maybe there are other sections that we also want to remove now?
@pandas-dev/pandas-core
|
[
"Web",
"Roadmap"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"following suggestion by @mroeschke and thumbs up from @jbrockmendel i've removed all Roadmap points pending a PDEP.\r\n",
"Thanks @simonjayhawkins "
] |
3,238,615,955
| 61,891
|
Output formatting: preserve quoting for string categories
|
closed
| 2025-07-17T08:18:05
| 2025-07-26T19:14:46
| 2025-07-25T13:26:36
|
https://github.com/pandas-dev/pandas/pull/61891
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61891
|
https://github.com/pandas-dev/pandas/pull/61891
|
jorisvandenbossche
| 3
|
POC for https://github.com/pandas-dev/pandas/issues/61890 if we want to keep the quoting behaviour
|
[
"Output-Formatting",
"Strings",
"Categorical"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"It's an easy change, so let's add to 2.3.x (I might just not backport all test fixes)",
"Owee, I'm MrMeeseeks, Look at me.\n\nThere seem to be a conflict, please backport manually. Here are approximate instructions:\n\n1. Checkout backport branch and update it.\n\n```\ngit checkout 2.3.x\ngit pull\n```\n\n2. Cherry pick the first parent branch of the this PR on top of the older branch:\n```\ngit cherry-pick -x -m1 c849d39c4c956495c0c86b85ab561bf74bf5df8d\n```\n\n3. You will likely have some merge/cherry-pick conflict here, fix them and commit:\n\n```\ngit commit -am 'Backport PR #61891: Output formatting: preserve quoting for string categories'\n```\n\n4. Push to a named branch:\n\n```\ngit push YOURFORK 2.3.x:auto-backport-of-pr-61891-on-2.3.x\n```\n\n5. Create a PR against branch 2.3.x, I would have named this PR:\n\n> \"Backport PR #61891 on branch 2.3.x (Output formatting: preserve quoting for string categories)\"\n\nAnd apply the correct labels and milestones.\n\nCongratulations — you did some good work! Hopefully your backport PR will be tested by the continuous integration and merged soon!\n\nRemember to remove the `Still Needs Manual Backport` label once the PR gets merged.\n\nIf these instructions are inaccurate, feel free to [suggest an improvement](https://github.com/MeeseeksBox/MeeseeksDev).\n ",
"Manual backport -> https://github.com/pandas-dev/pandas/pull/61966"
] |
3,238,501,704
| 61,890
|
Output formatting: the repr of the Categorical categories (quoted or unquoted strings?)
|
closed
| 2025-07-17T07:41:48
| 2025-08-13T19:57:15
| 2025-08-13T19:57:02
|
https://github.com/pandas-dev/pandas/issues/61890
| true
| null | null |
jorisvandenbossche
| 7
|
Because of the new string dtype, we also implicitly changed the representation of the unique categories in the Categorical dtype repr (aside from the `object` -> `str` change for the dtype):
```python
>>> pd.options.future.infer_string = False
>>> pd.Categorical(list("abca"))
['a', 'b', 'c', 'a']
Categories (3, object): ['a', 'b', 'c']
>>> pd.options.future.infer_string = True
>>> pd.Categorical(list("abca"))
['a', 'b', 'c', 'a']
Categories (3, str): [a, b, c]
```
So the actual array values are always quoted, but the list of unique categories in the dtype repr goes from `['a', 'b', 'c']` to `[a, b, c]`.
Brock already fixed a bunch of xfails in the tests because of this in https://github.com/pandas-dev/pandas/pull/61727. And we also run into this issue for the failing doctests (https://github.com/pandas-dev/pandas/issues/61886).
@jbrockmendel mentioned there:
> It isn't 100% obvious that the new repr for Categoricals is an improvement, but it's non-crazy.
With which I agree, also no strong opinion either way.
But before we also go fixing doctests, let's confirm that we are OK with this change. Because if we don't have a strong opinion that it is an improvement, we could also leave it how it was originally (and avoiding _some_ breakage because of this for downstream projects or users (eg who also have doctests))
|
[
"Output-Formatting",
"Categorical"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"The technical explanation of this change is that for `Categorical.__repr__`, we have a `Categorical._repr_categories` helper method that creates this data (called from `Categorical._get_repr_footer`, which is used in the categorical repr but also in the Series repr if the dtype is categorical).\n\n\nThis function calls `format_array` with `QUOTE_NONNUMERIC`:\n\n```python\n>>> from pandas.io.formats import format as fmt\n>>> from csv import QUOTE_NONNUMERIC\n \n>>> fmt.format_array(np.array([\"a\", \"b\"], dtype=object), formatter=None, quoting=QUOTE_NONNUMERIC)\n[\" 'a'\", \" 'b'\"]\n>>> fmt.format_array(pd.array([\"a\", \"b\"], dtype=\"str\"), formatter=None, quoting=QUOTE_NONNUMERIC)\n[' a', ' b']\n```\n\nBut in the case of the string dtype, being an extension dtype, this `format_array` uses the `values._formatter(boxed=True)` of the ExtensionArray, and in the case of strings, when `boxed=True`, those values are not quoted (eg as used in the Series repr, in contrast to the array repr). And so for extension dtypes, the `QUOTE_NONNUMERIC` is also ignored.\n\nSo given that we also don't quote (or do show the \"boxed\" repr) for other types, like we don't use quoted strings to represent timestamp categories, the new behaviour seems a little bit more consistent. \nBut right now we essentially already special case strings in the categorical repr by passing `QUOTE_NONNUMERIC`. Thus I think it is also perfectly reasonable to update that existing special case to cover the string dtype as well in addition to object dtype, to preserve the existing behaviour and minimize the repr changes.\n\n\n",
"There are some remaining CategoricalIndex repr tests that are xfailed bc the padding changes. Would re-enabling the special casing here also get us the old padding behavior? I think it is a little nicer.",
"I think that is still something else, because that is in formatting the data part of the array/index, not the dtype",
"> I think that is still something else, because that is in formatting the data part of the array/index, not the dtype\n\nBut it turned out to be a simple fix -> https://github.com/pandas-dev/pandas/pull/61894",
"And FWIW I also have a PR with the necessary small change to preserve the special-case quoting for string categories in https://github.com/pandas-dev/pandas/pull/61891 (only, if we want that, I have to update the tests again to get that PR green)",
"From the dev meeting: slight preference for the quoted version",
"Closed by https://github.com/pandas-dev/pandas/pull/61891"
] |
3,238,470,835
| 61,889
|
BUG: make to_json with JSON Table Schema work correctly with string dtype
|
closed
| 2025-07-17T07:30:55
| 2025-07-26T11:27:38
| 2025-07-26T11:27:38
|
https://github.com/pandas-dev/pandas/issues/61889
| true
| null | null |
jorisvandenbossche
| 4
|
(noticed because of some doctest failures cfr https://github.com/pandas-dev/pandas/issues/61886)
Currently, for strings stored as object dtype, it seems that we assume the object-dtype values are actually strings, and encode them as such in the schema part of the JSON Table Schema output:
```python
>>> pd.Series(["a", "b", None], dtype=object).to_json(orient="table", index=False)
'{"schema":{"fields":[{"name":"values","type":"string"}],"pandas_version":"1.4.0"},"data":[{"values":"a"},{"values":"b"},{"values":null}]}'
```
But for the now-default string dtype, this is still seen as some custom extension dtype:
```python
>>> pd.Series(["a", "b", None], dtype="str").to_json(orient="table", index=False)
'{"schema":{"fields":[{"name":"values","type":"any","extDtype":"str"}],"pandas_version":"1.4.0"},"data":[{"values":"a"},{"values":"b"},{"values":null}]}'
```
(note the `"type":"string"` vs `"type":"any","extDtype":"str"`)
Given that the Table Schema spec has a "string" type, let's also use that when exporting our string dtype.
|
[
"Bug",
"IO JSON",
"Strings"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Changing the order in the [as_json_table_type](https://github.com/pandas-dev/pandas/blob/6537afe3701f832a0d29e1598a05c471d789f172/pandas/io/json/_table_schema.py#L53C5-L53C23) function (by moving the **_is_string_dtype_** check before the **_ExtensionDtype_** check):\n\n```python\nelif is_string_dtype(x):\n return \"string\"\nelif isinstance(x, ExtensionDtype):\n return \"any\"\nelse:\n return \"any\"\n```\nseems to fix the issue. but I am not sure this is the best fix. ",
"@khemkaran10 that looks like a good fix! Feel free to open a PR for this",
"take",
"@jorisvandenbossche can you please review the [PR](https://github.com/pandas-dev/pandas/pull/61900) and let me know if any changes are needed."
] |
3,238,455,184
| 61,888
|
ENH: Images embedded in cells. The DISPIMG function of WPS
|
closed
| 2025-07-17T07:26:02
| 2025-08-05T16:28:29
| 2025-08-05T16:28:29
|
https://github.com/pandas-dev/pandas/issues/61888
| true
| null | null |
lunavexxx
| 11
|
### Feature Type
- [x] Adding new functionality to pandas
- [ ] Changing existing functionality in pandas
- [ ] Removing existing functionality in pandas
### Problem Description
Hi!
I found that there is an issue with the WPS image. The software allows images to be directly embedded into cells, and the format is similar to `=DISPIMG ("ID5BA4F81A0D674C7AA8849A79AC5645C8", 1)`.
<img width="691" height="431" alt="Image" src="https://github.com/user-attachments/assets/e32caa36-9729-44ca-8a46-477aec421e79" />
Therefore, such an image cannot be accessed through `worksheet._images`.
If we unzip the Excel file, we can find all the images under `xl/media`, and the image indexes are in `xl/_rels/cellimages.xml.rels` and `xl/cellimages.xml`.
This is a unique feature of WPS, at least I haven't found it in Office.
I found a similar [implementation](https://github.com/wangguanquan/eec/issues/363)
### Feature Description
This is my code, which decompresses the Excel file, reads the relationship metadata, and returns a mapping from image name to image path:
```python
import os
import zipfile
import xml.etree.ElementTree as ET


def wps_embed_images(file_path, save_path) -> dict:
    """Map WPS DISPIMG image names to the extracted image file paths."""
    img_map = {}
    # unzip the workbook so we can read xl/media and the relationship files
    with zipfile.ZipFile(file_path, "r") as zip_ref:
        zip_ref.extractall(save_path)
    # relationship Id -> extracted image path
    id2target = {}
    rels = os.path.join(save_path, "xl", "_rels", "cellimages.xml.rels")
    tree = ET.parse(rels)
    root = tree.getroot()
    for child in root:
        id2target[child.attrib.get("Id")] = os.path.join(
            save_path, "xl", child.attrib.get("Target")
        )
    namespaces = {
        'etc': 'http://www.wps.cn/officeDocument/2017/etCustomData',
        'xdr': 'http://schemas.openxmlformats.org/drawingml/2006/spreadsheetDrawing',
        'a': 'http://schemas.openxmlformats.org/drawingml/2006/main',
        'r': 'http://schemas.openxmlformats.org/officeDocument/2006/relationships',
    }
    # walk cellimages.xml and link each DISPIMG name to its relationship Id
    cellimages = os.path.join(save_path, "xl", "cellimages.xml")
    tree = ET.parse(cellimages)
    root = tree.getroot()
    for cell_image in root.findall('etc:cellImage', namespaces):
        c_nv_pr = cell_image.find('.//xdr:cNvPr', namespaces)
        image_name = c_nv_pr.get('name') if c_nv_pr is not None else None
        blip = cell_image.find('.//a:blip', namespaces)
        embed_id = blip.get(f'{{{namespaces["r"]}}}embed') if blip is not None else None
        if image_name and embed_id:
            img_map[image_name] = id2target[embed_id]
    return img_map
```
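A hypothetical usage sketch (the paths are illustrative; the ID matches the `DISPIMG` formula shown above):
```python
# hypothetical paths; the ID comes from the =DISPIMG(...) formula in the cell
img_map = wps_embed_images("book.xlsx", "extracted")
image_path = img_map.get("ID5BA4F81A0D674C7AA8849A79AC5645C8")
```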
### Alternative Solutions
We leave it as it is and I continue using the solution shown above.
### Additional Context
_No response_
|
[
"Enhancement",
"Needs Info",
"Needs Triage",
"Closing Candidate"
] | 1
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"I’m all for adding WPS image‑in‑cell support to pandas.",
"I’m not clear on what you’re asking for in pandas. A new method?",
"> I’m not clear on what you’re asking for in pandas. A new method?\n\nYou can download WPS on this [website](https://www.wps.com/). Embedding images into tables is a unique feature of WPS",
"That doesn’t answer my question.",
"> That doesn’t answer my question.\n\nemmm. I think this should be a special method. Do you have any suggestions?",
"Without a clear ask there's not much I can do here. Is the idea that you want `print(dataframe)` to display an image inside a cell?",
"> Without a clear ask there's not much I can do here. Is the idea that you want `print(dataframe)` to display an image inside a cell?\n\nI want a function to help me get the image corresponding to `DISPIMG`.\n\neg:\nThe content of the cell is “=DISPIMG(\"ID_942E830D12C74E8EAAB0A16BAE896F7E\",1)”\n\n> def find_inside_image(e_id str) -> PILImage\n> \"\"\"find the image inside a cell\"\"\"\n\nThen I will use `find_inside_image(\"ID_942E830D12C74E8EAAB0A16BAE896F7E\")` to obtain PILImage",
"So you have a pd.DataFrame object and a cell containing a string `'=DISPIMG(\"ID_942E830D12C74E8EAAB0A16BAE896F7E\",1)'` and you want pandas to do [something] to give you a PILImage object corresponding to that string?",
"> So you have a pd.DataFrame object and a cell containing a string `'=DISPIMG(\"ID_942E830D12C74E8EAAB0A16BAE896F7E\",1)'` and you want pandas to do [something] to give you a PILImage object corresponding to that string?\n\nright",
"OK, I don't think that belongs in pandas, but can leave this open in case other maintainers want to weigh in.",
"Thanks, but also agreed that this is not practical to support in pandas natively so closing"
] |
3,238,410,906
| 61,887
|
DOC: fix doctests for string dtype changes (top-level)
|
closed
| 2025-07-17T07:08:32
| 2025-07-17T18:27:38
| 2025-07-17T14:59:03
|
https://github.com/pandas-dev/pandas/pull/61887
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61887
|
https://github.com/pandas-dev/pandas/pull/61887
|
jorisvandenbossche
| 1
|
Part of https://github.com/pandas-dev/pandas/issues/61886. Some first doctest fixes for the new string dtype, for things that pytest sees as part of `pandas/__init__.py` and for some files that only had one failure.
|
[
"Docs",
"Strings"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks @jorisvandenbossche "
] |
3,238,405,429
| 61,886
|
DOC: fix doctests for repr changes with the new string dtype
|
closed
| 2025-07-17T07:06:43
| 2025-07-26T17:14:23
| 2025-07-26T17:14:22
|
https://github.com/pandas-dev/pandas/issues/61886
| true
| null | null |
jorisvandenbossche
| 9
|
Now that the string dtype is turned on by default (https://github.com/pandas-dev/pandas/pull/61722), we also have to fix the doctests to match the new behaviour (the doctests are currently, and temporarily, allowed to fail to avoid a red CI until this issue is fixed).
The failures can be seen in the current doctests logs, for example at https://github.com/pandas-dev/pandas/actions/runs/16332737970/job/46138722939#step:6:23
There are two main groups of failures:
- `dtype: object` that needs to become `dtype: str` in Series output (or object->str in some other reprs, and a few None->NaN changes)
- The representation of Categorical no longer using quoted values
I would propose to first start with the first bullet point (we should maybe reconsider if the categorical repr change is actually what we want -> https://github.com/pandas-dev/pandas/issues/61890), and the failing files are:
- [x] `pandas/core/base.py` (https://github.com/pandas-dev/pandas/pull/61905)
- [x] `pandas/core/generic.py`
- [x] `pandas/core/strings/accessor.py`
- [x] `pandas/core/arrays/datetimelike.py`, `pandas/core/arrays/datetimes.py` and `pandas/core/indexes/datetimelike.py`
Let's do one PR per bullet point here.
You can run the doctest and verify changes with for example:
```
pytest --doctest-modules pandas/core/base.py
```
Example PR for some fixes in other files: https://github.com/pandas-dev/pandas/pull/61887
|
[
"Docs",
"Strings"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"@Nadav-Zilberberg as I mentioned in the issue, please wait with the categorical failures (discussion for that in https://github.com/pandas-dev/pandas/issues/61890). \nAlso do you have a PR for this that is \"ready for review\"? I don't see one, and the issue linking here is just a copy of this issue in your own repo, and so this looks more like spam.",
"@jorisvandenbossche added blocker tags since I think we want to revert the changes in #61722 for the doc build `continue-on-error: true` before release?",
"It's not critical for the RC, but ideally those are fixed for the final release (note the doc build itself is already fixed and running again, this is only the doctests)",
"take",
"@arthurlw can you comment on which file you are working? (@paulbehrisch already did the first one in the meantime)",
"I’ll take on the second one then (`pandas/core/generic.py`). Happy to coordinate if anyone else wants to work on the remaining files!",
"I'll take on the final bullet (final 3 files, arrays and indexes) if nobody's working on it @arthurlw ",
"Sounds good @DevastatingRPG 👍",
"> It's not critical for the RC, but ideally those are fixed for the final release (note the doc build itself is already fixed and running again, this is only the doctests)\n\nthanks @jorisvandenbossche for the info. yes I see that #61864 addresses the concern I had regarding the doc build and the reason I added the blocker tags. So therefore I am happy to now remove them."
] |
3,238,123,884
| 61,885
|
Fix warning for extra fields in read_csv with on_bad_lines callable
|
closed
| 2025-07-17T05:01:15
| 2025-07-29T01:01:52
| 2025-07-29T01:01:52
|
https://github.com/pandas-dev/pandas/pull/61885
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61885
|
https://github.com/pandas-dev/pandas/pull/61885
|
tisjayy
| 2
|
- [ ] closes #61837 (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
[] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"trailing white space is annoying",
"pre-commit.ci autofix\r\n"
] |
3,237,427,949
| 61,884
|
API: Index.__cmp__(Series) return NotImplemented
|
closed
| 2025-07-16T21:42:36
| 2025-07-17T14:18:24
| 2025-07-16T23:50:26
|
https://github.com/pandas-dev/pandas/pull/61884
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61884
|
https://github.com/pandas-dev/pandas/pull/61884
|
jbrockmendel
| 1
|
- [x] closes #36759 (Replace xxxx with the GitHub issue number)
- [x] closes #54475
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
[
"Numeric Operations"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks @jbrockmendel "
] |
3,237,298,600
| 61,883
|
BUG: Timedelta with invalid keyword
|
closed
| 2025-07-16T20:38:53
| 2025-07-16T21:41:16
| 2025-07-16T21:40:23
|
https://github.com/pandas-dev/pandas/pull/61883
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61883
|
https://github.com/pandas-dev/pandas/pull/61883
|
jbrockmendel
| 1
|
- [x] closes #53801 (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
[
"Error Reporting",
"Timedelta"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks @jbrockmendel "
] |
3,237,278,110
| 61,882
|
BUG: disallow exotic np.datetime64 unit
|
closed
| 2025-07-16T20:31:24
| 2025-07-17T20:13:15
| 2025-07-17T19:21:20
|
https://github.com/pandas-dev/pandas/pull/61882
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61882
|
https://github.com/pandas-dev/pandas/pull/61882
|
jbrockmendel
| 1
|
- [x] closes #25611 (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
This doesn't work locally for reasons described [here](https://github.com/pandas-dev/pandas/issues/25611#issuecomment-3079363946). @seberg says it should work, so let's see if the CI can prove him right.
|
[
"Datetime"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks @jbrockmendel "
] |
3,237,191,289
| 61,881
|
API: boolean ops with float dtype
|
open
| 2025-07-16T20:00:48
| 2025-07-17T17:48:56
| null |
https://github.com/pandas-dev/pandas/issues/61881
| true
| null | null |
jbrockmendel
| 3
|
```python
# from #60234 OP
ser1 = pd.Series([False, False])
ser2 = pd.Series([0.0, 0.1])
ser1 | ser2 # <- works by casting floats to bool
ser2 | ser1 # <- raises TypeError
# We also have special-casing for NaNs among floats
ser3 = pd.Series([np.nan, 1.0])
ser3 & ser3 # <- raises bc you can't do float & float
ser3[:-1] & ser3[:-1] # has no non-NaN floats so we special-case
```
Logical ops (`&`, `|`, `^`) have inconsistent behavior. I think we should simplify this significantly and more closely resemble the NumPy behavior.
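For reference, a sketch of the corresponding NumPy behaviour, where operand order doesn't matter:
```python
import numpy as np

a = np.array([False, False])
b = np.array([0.0, 0.1])
# both orderings raise TypeError: ufunc 'bitwise_or' not supported
# for the input types (no implicit float -> bool cast)
a | b
```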
|
[
"Numeric Operations",
"Needs Discussion",
"API - Consistency"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"> I think we should simplify this significantly and more closely resemble the numpy behavior.\n\nAnd the numpy behaviour is to simply raise an error in all those cases?\n\nThat was also my first reaction to seeing this issue: personally I don't really see a reason that those ops are allowed to work. _If_ you want to do logical ops on floats, you can always cast the float to bool manually first.",
"> And the numpy behaviour is to simply raise an error in all those cases?\n\nYes.\n\n> personally I don't really see a reason that those ops are allowed to work\n\nLooking at what tests fail when i add a deprecation warning, i think it is mostly about ops that do silent reindexing which casts int/bool to float/object.",
"Looks like a bunch of issues tentatively related to the silent casting/filling we do: #51267, #23191, #40565, #60234, #52597, #41764. (Some of these are for arithmetic ops which go through a different path than logical ops).\n\nIf I'm right that the main motivation for this behavior is reindex-based, then two options come to mind:\n\n1) wait for PDEP16 when after which reindexing won't cast ints/bools and then deprecate this behavior entirely (or if numpy dtypes aren't supported at all, we'll never go through these paths and can rip them out)\n2) In silent-alignment cases, instead of doing an outer-join and then operating, we could do an inner-join, operate, then reindex to the outer-join index. Then deprecate/change this behavior for all non-alignment cases.\n\nUpdate: looks like we already do something similar to 2 for some DataFrame ops in _arith_method_with_reindex"
] |
3,237,151,939
| 61,880
|
Set up ty in CI
|
closed
| 2025-07-16T19:46:36
| 2025-07-28T17:14:13
| 2025-07-28T17:14:13
|
https://github.com/pandas-dev/pandas/pull/61880
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61880
|
https://github.com/pandas-dev/pandas/pull/61880
|
tqa236
| 1
|
Hello,
This PR sets up [`ty`](https://github.com/astral-sh/ty), a type checker developed by the creator of `ruff`, in CI.
I'm aware that `ty` is still in preview, but there are a couple of benefits to adding it now.
- It's already very fast and can help us debug faster. `mypy` is quite slow and disrupts the coding flow.
- The cost of setup is pretty cheap, as it's mostly similar to `mypy` and `pyright`. Currently I ignore all errors so that we can fix and enable them gradually, like the other linting and typing errors.
- This also allows us to evaluate if/when `ty` can be a replacement of `mypy` and `pyright`.
Let me know what you think.
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
[
"CI",
"Typing"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks for the PR but closing until ty is more matrue"
] |