Dataset columns (types and value-length stats, as shown in the dataset viewer):
repo: string (5–51 chars) | instance_id: string (11–56) | base_commit: string (40) | patch: string (400–56.6k) | test_patch: string (0–895k) | problem_statement: string (27–55.6k) | hints_text: string (0–72k) | created_at: int64 (1,447B–1,739B) | labels: sequence (0–7, nullable) | category: string (4 classes) | edit_functions: sequence (1–10) | added_functions: sequence (0–19) | edit_functions_length: int64 (1–10) | __index_level_0__: int64 (1–659)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Zulko/moviepy | Zulko__moviepy-2253 | c88852f6f3753469d4aeed677dd0b772764ccf42 | diff --git a/moviepy/video/io/ffmpeg_reader.py b/moviepy/video/io/ffmpeg_reader.py
index f871bd8fd..536024371 100644
--- a/moviepy/video/io/ffmpeg_reader.py
+++ b/moviepy/video/io/ffmpeg_reader.py
@@ -35,8 +35,10 @@ def __init__(
decode_file=decode_file,
print_infos=print_infos,
)
- self.fps = infos["video_fps"]
- self.size = infos["video_size"]
+ # If framerate is unavailable, assume 1.0 FPS to avoid divide-by-zero errors.
+ self.fps = infos.get("video_fps", 1.0)
+ # If frame size is unavailable, set 1x1 to avoid divide-by-zero errors.
+ self.size = infos.get("video_size", (1, 1))
# ffmpeg automatically rotates videos if rotation information is
# available, so exchange width and height
@@ -55,10 +57,10 @@ def __init__(
self.size = target_resolution
self.resize_algo = resize_algo
- self.duration = infos["video_duration"]
- self.ffmpeg_duration = infos["duration"]
- self.n_frames = infos["video_n_frames"]
- self.bitrate = infos["video_bitrate"]
+ self.duration = infos.get("video_duration", 0.0)
+ self.ffmpeg_duration = infos.get("duration", 0.0)
+ self.n_frames = infos.get("video_n_frames", 0)
+ self.bitrate = infos.get("video_bitrate", 0)
self.infos = infos
@@ -556,8 +558,11 @@ def parse(self):
# last input file, must be included in self.result
if self._current_input_file:
self._current_input_file["streams"].append(self._current_stream)
- # include their chapters, if there are
- if len(input_chapters) == self._current_input_file["input_number"] + 1:
+ # include their chapters, if there are any
+ if (
+ "input_number" in self._current_input_file
+ and len(input_chapters) == self._current_input_file["input_number"] + 1
+ ):
self._current_input_file["chapters"] = input_chapters[
self._current_input_file["input_number"]
]
@@ -565,13 +570,13 @@ def parse(self):
# some video duration utilities
if self.result["video_found"] and self.check_duration:
+ self.result["video_duration"] = self.result["duration"]
self.result["video_n_frames"] = int(
- self.result["duration"] * self.result["video_fps"]
+ self.result["duration"] * self.result.get("video_fps", 0)
)
- self.result["video_duration"] = self.result["duration"]
else:
- self.result["video_n_frames"] = 1
- self.result["video_duration"] = None
+ self.result["video_n_frames"] = 0
+ self.result["video_duration"] = 0.0
# We could have also recomputed duration from the number of frames, as follows:
# >>> result['video_duration'] = result['video_n_frames'] / result['video_fps']
| MoviePy 2.0 throws an exception on loading a video that the previous version worked with
#### Expected Behavior
MoviePy should continue to work with the same videos it did previously, even if those videos aren't fully compliant (e.g. are missing some metadata).
#### Actual Behavior
The same video crashes on MoviePy 2.0 but works with MoviePy 1.x.
#### Steps to Reproduce the Problem
See the `corrupt_video.mp4` file from https://github.com/Breakthrough/PySceneDetect/tree/resources/tests/resources and the associated unit test in https://github.com/Breakthrough/PySceneDetect/blob/95d20ddca57bb8cba77354697cc092643bd04afb/tests/test_video_stream.py#L359
The issue seems to come from the ffmpeg info parser assuming the presence of certain metadata fields. Instead of failing, MoviePy should probably try to set some reasonable default value for these fields if they are not critical, to improve compatibility with media files.
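The merged patch above follows exactly this approach, replacing direct key lookups with `dict.get()` plus a safe fallback. A minimal sketch of the pattern (the keys and default values mirror the diff; `read_video_infos` is a hypothetical stand-in for the reader's initialization, not MoviePy API):
```python
def read_video_infos(infos: dict) -> dict:
    # Direct indexing (infos["video_fps"]) raises KeyError on files with
    # missing metadata; .get() with a safe default keeps extraction alive.
    return {
        "fps": infos.get("video_fps", 1.0),       # avoid divide-by-zero later
        "size": infos.get("video_size", (1, 1)),  # 1x1 placeholder frame size
        "duration": infos.get("video_duration", 0.0),
        "n_frames": infos.get("video_n_frames", 0),
        "bitrate": infos.get("video_bitrate", 0),
    }
```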
#### Specifications
Tested on a wide variety of OS and Python versions:
- os: [macos-13, macos-14, ubuntu-20.04, ubuntu-latest, windows-latest]
- python-version: ["3.7", "3.8", "3.9", "3.10", "3.11", "3.12", "3.13"]
| 1,732,332,301,000 | null | Bug Report | [
"moviepy/video/io/ffmpeg_reader.py:FFMPEG_VideoReader.__init__",
"moviepy/video/io/ffmpeg_reader.py:FFmpegInfosParser.parse"
] | [] | 2 | 484 |
||
yt-dlp/yt-dlp | yt-dlp__yt-dlp-11827 | 2037a6414f81db8080ca724dca506fde91974c5d | diff --git a/yt_dlp/update.py b/yt_dlp/update.py
index ca2ec5f376a0..dfab132afdfe 100644
--- a/yt_dlp/update.py
+++ b/yt_dlp/update.py
@@ -525,11 +525,16 @@ def filename(self):
@functools.cached_property
def cmd(self):
"""The command-line to run the executable, if known"""
+ argv = None
# There is no sys.orig_argv in py < 3.10. Also, it can be [] when frozen
if getattr(sys, 'orig_argv', None):
- return sys.orig_argv
+ argv = sys.orig_argv
elif getattr(sys, 'frozen', False):
- return sys.argv
+ argv = sys.argv
+ # linux_static exe's argv[0] will be /tmp/staticx-NNNN/yt-dlp_linux if we don't fixup here
+ if argv and os.getenv('STATICX_PROG_PATH'):
+ argv = [self.filename, *argv[1:]]
+ return argv
def restart(self):
"""Restart the executable"""
| noise downloads
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a bug unrelated to a specific site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Provide a description that is worded well enough to be understood
Old quality controls are skipped, and some professional downloads are far too noisy, as if they are being amplified on the fly due to the high download speeds.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
root@cc6299b89843:/Application# yt-dlp -vU --restrict-filenames --audio-format mp3 --prefer-ffmpeg https://www.youtube.com/watch?v=rqRjv32l1FM
[debug] Command-line config: ['-vU', '--restrict-filenames', '--audio-format', 'mp3', '--prefer-ffmpeg', 'https://www.youtube.com/watch?v=rqRjv32l1FM']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp (linux_exe)
[debug] Python 3.11.11 (CPython x86_64 64bit) - Linux-6.1.0-28-amd64-x86_64-with (OpenSSL 3.1.7 3 Sep 2024)
[debug] exe versions: ffmpeg 5.1.6-0 (setts), ffprobe 5.1.6-0
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, curl_cffi-0.7.1, mutagen-1.47.0, requests-2.32.3, secretstorage-3.3.3, sqlite3-3.44.2, urllib3-2.2.3, websockets-14.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
[debug] Downloading _update_spec from https://github.com/yt-dlp/yt-dlp/releases/latest/download/_update_spec
[debug] Downloading SHA2-256SUMS from https://github.com/yt-dlp/yt-dlp/releases/download/2024.12.13/SHA2-256SUMS
Current version: [email protected] from yt-dlp/yt-dlp
Latest version: [email protected] from yt-dlp/yt-dlp
Current Build Hash: feca08aa6623e786be628d5f1a72fb2f4fce1ccb7af5b6429a06f5e79b14fead
Updating to [email protected] from yt-dlp/yt-dlp ...
[debug] Downloading yt-dlp_linux from https://github.com/yt-dlp/yt-dlp/releases/download/2024.12.13/yt-dlp_linux
Updated yt-dlp to [email protected] from yt-dlp/yt-dlp
[debug] Restarting: /tmp/staticx-JhoGDA/yt-dlp_linux -vU --restrict-filenames --audio-format mp3 --prefer-ffmpeg 'https://www.youtube.com/watch?v=rqRjv32l1FM'
[debug] Command-line config: ['-vU', '--restrict-filenames', '--audio-format', 'mp3', '--prefer-ffmpeg', 'https://www.youtube.com/watch?v=rqRjv32l1FM']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp (linux_exe)
[debug] Python 3.11.11 (CPython x86_64 64bit) - Linux-6.1.0-28-amd64-x86_64-with (OpenSSL 3.1.7 3 Sep 2024)
[debug] exe versions: ffmpeg 5.1.6-0 (setts), ffprobe 5.1.6-0
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, curl_cffi-0.7.1, mutagen-1.47.0, requests-2.32.3, secretstorage-3.3.3, sqlite3-3.44.2, urllib3-2.2.3, websockets-14.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
[debug] Downloading _update_spec from https://github.com/yt-dlp/yt-dlp/releases/latest/download/_update_spec
[debug] Downloading SHA2-256SUMS from https://github.com/yt-dlp/yt-dlp/releases/download/2024.12.13/SHA2-256SUMS
Current version: [email protected] from yt-dlp/yt-dlp
Latest version: [email protected] from yt-dlp/yt-dlp
Current Build Hash: e7788b8556c73e409d5713a4fdb71df12734b0853353f2da2352540dc9d22c95
Updating to [email protected] from yt-dlp/yt-dlp ...
[debug] Downloading yt-dlp_linux from https://github.com/yt-dlp/yt-dlp/releases/download/2024.12.13/yt-dlp_linux
Updated yt-dlp to [email protected] from yt-dlp/yt-dlp
[debug] Restarting: /tmp/staticx-JhoGDA/yt-dlp_linux -vU --restrict-filenames --audio-format mp3 --prefer-ffmpeg 'https://www.youtube.com/watch?v=rqRjv32l1FM'
[debug] Command-line config: ['-vU', '--restrict-filenames', '--audio-format', 'mp3', '--prefer-ffmpeg', 'https://www.youtube.com/watch?v=rqRjv32l1FM']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp (linux_exe)
[debug] Python 3.11.11 (CPython x86_64 64bit) - Linux-6.1.0-28-amd64-x86_64-with (OpenSSL 3.1.7 3 Sep 2024)
[debug] exe versions: ffmpeg 5.1.6-0 (setts), ffprobe 5.1.6-0
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, curl_cffi-0.7.1, mutagen-1.47.0, requests-2.32.3, secretstorage-3.3.3, sqlite3-3.44.2, urllib3-2.2.3, websockets-14.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
[debug] Downloading _update_spec from https://github.com/yt-dlp/yt-dlp/releases/latest/download/_update_spec
[debug] Downloading SHA2-256SUMS from https://github.com/yt-dlp/yt-dlp/releases/download/2024.12.13/SHA2-256SUMS
Current version: [email protected] from yt-dlp/yt-dlp
Latest version: [email protected] from yt-dlp/yt-dlp
Current Build Hash: e7788b8556c73e409d5713a4fdb71df12734b0853353f2da2352540dc9d22c95
Updating to [email protected] from yt-dlp/yt-dlp ...
[debug] Downloading yt-dlp_linux from https://github.com/yt-dlp/yt-dlp/releases/download/2024.12.13/yt-dlp_linux
^C
ERROR: Interrupted by user
ERROR: Interrupted by user
ERROR: Interrupted by user
ERROR: Interrupted by user
ERROR: Interrupted by user
ERROR: Interrupted by user
root@cc6299b89843:/Application#
ERROR: Interrupted by user
ERROR: Interrupted by user
ERROR: Interrupted by user
ERROR: Interrupted by user
^C
```
| 
Using `--audio-format mp3` alongside `--extract-audio` instructs yt-dlp to convert the audio track to mp3. This is lossy. To get the files as streamed by the site, don't pass `--audio-format`.
Do note that nearly all sites don't offer uncompressed audio, so the files downloaded will have the same compression artifacts as present when playing in a web browser.
@seproDev How do I get only the audio from the container then? (I am a developer, don't use many of yt-dlp's commands, and only want the best-quality audio track available from the container.)
Now, without `--audio-format`, it's an 8.6 GB download instead of 200 MB.

For YouTube specifically, there usually exist both an Opus and an AAC audio format. yt-dlp prefers Opus since it is a newer codec.
You can download the opus audio format with
```
yt-dlp -x "URL"
```
If you instead want the AAC format, use
```
yt-dlp -x -S acodec:aac "URL"
```
`-x` is short for `--extract-audio` | 1,734,290,043,000 | null | Bug Report | [
"yt_dlp/update.py:Updater.cmd"
] | [] | 1 | 485 |
|
yt-dlp/yt-dlp | yt-dlp__yt-dlp-11821 | 2037a6414f81db8080ca724dca506fde91974c5d | diff --git a/yt_dlp/extractor/youtube.py b/yt_dlp/extractor/youtube.py
index fd9c7107c7f7..b12a22d852ab 100644
--- a/yt_dlp/extractor/youtube.py
+++ b/yt_dlp/extractor/youtube.py
@@ -1495,7 +1495,7 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
},
# Age-gate videos. See https://github.com/yt-dlp/yt-dlp/pull/575#issuecomment-888837000
{
- 'note': 'Embed allowed age-gate video',
+ 'note': 'Embed allowed age-gate video; works with web_embedded',
'url': 'https://youtube.com/watch?v=HtVdAasjOgU',
'info_dict': {
'id': 'HtVdAasjOgU',
@@ -1525,7 +1525,6 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
'heatmap': 'count:100',
'timestamp': 1401991663,
},
- 'skip': 'Age-restricted; requires authentication',
},
{
'note': 'Age-gate video with embed allowed in public site',
@@ -3983,10 +3982,20 @@ def append_client(*client_names):
else:
prs.append(pr)
+ # web_embedded can work around age-gate and age-verification for some embeddable videos
+ if self._is_agegated(pr) and variant != 'web_embedded':
+ append_client(f'web_embedded.{base_client}')
+ # Unauthenticated users will only get web_embedded client formats if age-gated
+ if self._is_agegated(pr) and not self.is_authenticated:
+ self.to_screen(
+ f'{video_id}: This video is age-restricted; some formats may be missing '
+ f'without authentication. {self._login_hint()}', only_once=True)
+
''' This code is pointless while web_creator is in _DEFAULT_AUTHED_CLIENTS
# EU countries require age-verification for accounts to access age-restricted videos
# If account is not age-verified, _is_agegated() will be truthy for non-embedded clients
- if self.is_authenticated and self._is_agegated(pr):
+ embedding_is_disabled = variant == 'web_embedded' and self._is_unplayable(pr)
+ if self.is_authenticated and (self._is_agegated(pr) or embedding_is_disabled):
self.to_screen(
f'{video_id}: This video is age-restricted and YouTube is requiring '
'account age-verification; some formats may be missing', only_once=True)
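In effect, the patch registers a retry: whenever a client's player response comes back age-gated, the embedded variant of that client is queued as a fallback. A simplified, hypothetical sketch of that loop, where `fetch` and `is_agegated` stand in for the extractor's internals:
```python
def gather_player_responses(clients, fetch, is_agegated):
    queue, responses = list(clients), []
    while queue:
        client = queue.pop(0)
        pr = fetch(client)
        responses.append(pr)
        # web_embedded can work around the age-gate for embeddable videos,
        # so retry age-gated responses once with the embedded variant.
        if is_agegated(pr) and not client.startswith('web_embedded'):
            base = client.split('.')[-1]
            queue.append(f'web_embedded.{base}')
    return responses
```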
| [youtube] Age-restricted videos now always require sign-in
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
_No response_
### Provide a description that is worded well enough to be understood
I'm unable to download any age-restricted videos without signing in. This wasn't a problem until a week ago.
I'm on the latest nightly and it fails with or without the AGP plugin.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
C:\Users\Casey>yt-dlp https://www.youtube.com/watch?v=7Do70nztRNE -vU
[debug] Command-line config: ['https://www.youtube.com/watch?v=7Do70nztRNE', '-vU']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp-nightly-builds [679c68240] (pip)
[debug] Python 3.11.2 (CPython AMD64 64bit) - Windows-10-10.0.19044-SP0 (OpenSSL 1.1.1s 1 Nov 2022)
[debug] exe versions: ffmpeg 2023-07-06-git-f00222e81f-essentials_build-www.gyan.dev (setts), ffprobe 2023-07-06-git-f00222e81f-essentials_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.19.0, brotli-1.1.0, certifi-2022.12.07, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.39.4, urllib3-1.26.18, websockets-13.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Extractor Plugins: AGB (YoutubeIE)
[debug] Plugin directories: ['C:\\Users\\Casey\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\yt_dlp_plugins']
[debug] Loaded 1838 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
Latest version: [email protected] from yt-dlp/yt-dlp-nightly-builds
yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp-nightly-builds)
[youtube+AGB] Extracting URL: https://www.youtube.com/watch?v=7Do70nztRNE
[youtube+AGB] 7Do70nztRNE: Downloading webpage
[youtube+AGB] 7Do70nztRNE: Downloading ios player API JSON
[youtube+AGB] 7Do70nztRNE: This video is age-restricted; some formats may be missing without authentication. Use --cookies-from-browser or --cookies for the authentication. See https://github.com/yt-dlp/yt-dlp/wiki/FAQ#how-do-i-pass-cookies-to-yt-dlp for how to manually pass cookies
[youtube+AGB] 7Do70nztRNE: Downloading tv embedded player API JSON
[youtube+AGB] 7Do70nztRNE: Downloading mweb player API JSON
[youtube+AGB] 7Do70nztRNE: Downloading Zerody API JSON
WARNING: [youtube+AGB] Unable to download JSON metadata: HTTP Error 502: Bad Gateway
ERROR: [youtube+AGB] 7Do70nztRNE: Sign in to confirm your age. This video may be inappropriate for some users.
File "C:\Users\Casey\AppData\Local\Programs\Python\Python311\Lib\site-packages\yt_dlp\extractor\common.py", line 741, in extract
ie_result = self._real_extract(url)
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Casey\AppData\Local\Programs\Python\Python311\Lib\site-packages\yt_dlp\extractor\youtube.py", line 4468, in _real_extract
self.raise_no_formats(reason, expected=True)
File "C:\Users\Casey\AppData\Local\Programs\Python\Python311\Lib\site-packages\yt_dlp\extractor\common.py", line 1275, in raise_no_formats
raise ExtractorError(msg, expected=expected, video_id=video_id)
```
| please provide a log without the plugin. run `set YTDLP_NO_PLUGINS=1` and then your download command again
Log without the AGP plugin
```
C:\Users\Casey>yt-dlp https://www.youtube.com/watch?v=7Do70nztRNE -vU
[debug] Command-line config: ['https://www.youtube.com/watch?v=7Do70nztRNE', '-vU']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp-nightly-builds [679c68240] (pip)
[debug] Python 3.11.2 (CPython AMD64 64bit) - Windows-10-10.0.19044-SP0 (OpenSSL 1.1.1s 1 Nov 2022)
[debug] exe versions: ffmpeg 2023-07-06-git-f00222e81f-essentials_build-www.gyan.dev (setts), ffprobe 2023-07-06-git-f00222e81f-essentials_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.19.0, brotli-1.1.0, certifi-2022.12.07, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.39.4, urllib3-1.26.18, websockets-13.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1838 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
Latest version: [email protected] from yt-dlp/yt-dlp-nightly-builds
yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp-nightly-builds)
[youtube] Extracting URL: https://www.youtube.com/watch?v=7Do70nztRNE
[youtube] 7Do70nztRNE: Downloading webpage
[youtube] 7Do70nztRNE: Downloading ios player API JSON
[youtube] 7Do70nztRNE: This video is age-restricted; some formats may be missing without authentication. Use --cookies-from-browser or --cookies for the authentication. See https://github.com/yt-dlp/yt-dlp/wiki/FAQ#how-do-i-pass-cookies-to-yt-dlp for how to manually pass cookies
[youtube] 7Do70nztRNE: Downloading tv embedded player API JSON
[youtube] 7Do70nztRNE: Downloading mweb player API JSON
ERROR: [youtube] 7Do70nztRNE: Sign in to confirm your age. This video may be inappropriate for some users.
File "C:\Users\Casey\AppData\Local\Programs\Python\Python311\Lib\site-packages\yt_dlp\extractor\common.py", line 741, in extract
ie_result = self._real_extract(url)
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Casey\AppData\Local\Programs\Python\Python311\Lib\site-packages\yt_dlp\extractor\youtube.py", line 4468, in _real_extract
self.raise_no_formats(reason, expected=True)
File "C:\Users\Casey\AppData\Local\Programs\Python\Python311\Lib\site-packages\yt_dlp\extractor\common.py", line 1275, in raise_no_formats
raise ExtractorError(msg, expected=expected, video_id=video_id)
```
> ERROR: [youtube] 7Do70nztRNE: Sign in to confirm your age. This video may be inappropriate for some users.
The `tv_embedded` client that yt-dlp was using to work around the age-restriction now requires sign-in for every video, so it is no longer useful for this purpose.
The [linked pull request that will close this issue](https://github.com/yt-dlp/yt-dlp/pull/11297) merely removes the broken age-gate workaround and now-misleading warning message.
**There will likely not be a solution to this besides authenticating with cookies or oauth.**
(Obligatory mention of the [**risk you would be taking if you use cookies or oauth**](https://github.com/yt-dlp/yt-dlp/issues/10085))
---
> [youtube+AGB] 7Do70nztRNE: Downloading Zerody API JSON
> WARNING: [youtube+AGB] Unable to download JSON metadata: HTTP Error 502: Bad Gateway
As for the plugin error, that is out-of-scope for this issue tracker. It looks like an issue with the AGB API (the server, not the plugin code). There's likely nothing that the maintainer of the yt-dlp plugin can do about that either.
This is disappointing. I've already had an account blocked by Google when using oauth so downloading without signing in was the only real option for me.
So is there any workaround that anyone knows?
@eytay Your options are:
- Provide cookies to yt-dlp at the risk of your account getting blocked if you download an excessive amount
- Try the [YTAgeGateBypass](https://github.com/pukkandan/yt-dlp-YTAgeGateBypass) plugin (as you already did). This relies on a public account proxy and might not work.
I am surprised the bypass yt-dlp used worked for as long as it did.
It can't read my cookies, whether from the browser or from the cookies.sqlite file.
```
[ytdl_hook] ERROR: 'utf-8' codec can't decode byte 0x80 in position 16: invalid start byte
[ytdl_hook] ERROR: 'utf-8' codec can't decode byte 0x80 in position 16: invalid start byte
[ytdl_hook] youtube-dl failed: unexpected error occurred
Failed to recognize file format.
```
@laichiaheng `--cookies` requires a netscape cookie file. NOT the sqlite file your browser uses. If you want yt-dlp to read the cookies directly from your browser use `--cookies-from-browser`. If you need further help please open a new issue with a complete verbose log.
> @laichiaheng `--cookies` requires a netscape cookie file. NOT the sqlite file your browser uses. If you want yt-dlp to read the cookies directly from your browser use `--cookies-from-browser`. If you need further help please open a new issue with a complete verbose log.
I did, but it showed me the same error.
Nevermind, it seems to be fixed with the latest update; I guess I can play the video with cookies from the browser now.
Workaround for age-restricted and similar issues. This uses a personal throwaway Google account, and optionally uses Tor.
#### start a browser with a temp/throwaway profile (optionally with tor).
`torsocks chromium --temp-profile`
#### add an extension to export cookies.txt - "Get cookies.txt LOCALLY".
chromewebstore.google.com/detail/get-cookiestxt-locally/cclelndahbckbenkjhflpdbgdldlbecc
#### pin that extension to the taskbar.
#### create a google account using the temp-profile browser instance.
#### make a note of login credentials, in case you need to access the account later.
#### go to youtube, this sets YT cookies in the browser.
#### find an age-restricted video, and verify that you want to see it.
#### use the `Get cookies.txt LOCALLY` extension to "Export All Cookies".
#### (optional) re-name the saved cookies file to something more meaningful:
`mv -vi ~/Downloads/cookies.txt ~/Downloads/cookies-throwaway.txt`
#### (optional) check that only a minimal set of domains is stored in that cookie file:
```
awk '{print $1}' ~/Downloads/cookies-throwaway.txt | sort -u
#
accounts.google.com
.chromewebstore.google.com
chromewebstore.google.com
.google.com
.google.co.nz
ogs.google.com
www.google.com
.youtube.com
```
#### profit:
`torsocks yt-dlp --cookies ~/Downloads/cookies-throwaway.txt ...`
#### (optional) save the login credentials where they can be found when needed. DO NOT ADD THIS TO THE COOKIES.TXT FILE.
```
cat ~/Downloads/cookies-throwaway-login.txt
name: xxxxxx
login: xxxxxx
pass: xxxxxx
```
@atom-smasher
Thank you for this. But isn't creating a Google account without any personal information (e.g. SMS verification) a challenge in itself?
> Thank you for this. But isn't creating a Google account without any personal information (e.g. SMS verification) a challenge in itself?
It can be, but as I was going through step-by-step to document the process, it did not ask for any SMS verification.
I don't remember… It may have asked for something like a phone-number or backup email address, for account recovery purposes, but if it did, I was able to “skip” past them.
@atom-smasher How exactly do you make your throwaway accounts? What browser, VPN server (or real IP country), third-party recovery email (if any), choice of a new Gmail vs. a third-party address as the main email, believable name, birthdate, etc. were you using? I think all of this and more probably affects whether you get forced phone verification or not. I had an old burner account I made with a VPN and Edge, but it got blocked when I tried it now.
"yt_dlp/extractor/youtube.py:YoutubeIE._extract_player_responses"
] | [] | 1 | 486 |
|
yt-dlp/yt-dlp | yt-dlp__yt-dlp-11819 | 2037a6414f81db8080ca724dca506fde91974c5d | diff --git a/yt_dlp/update.py b/yt_dlp/update.py
index ca2ec5f376a0..9ccd44b5e77d 100644
--- a/yt_dlp/update.py
+++ b/yt_dlp/update.py
@@ -65,9 +65,14 @@ def _get_variant_and_executable_path():
machine = '_legacy' if version_tuple(platform.mac_ver()[0]) < (10, 15) else ''
else:
machine = f'_{platform.machine().lower()}'
+ is_64bits = sys.maxsize > 2**32
# Ref: https://en.wikipedia.org/wiki/Uname#Examples
if machine[1:] in ('x86', 'x86_64', 'amd64', 'i386', 'i686'):
- machine = '_x86' if platform.architecture()[0][:2] == '32' else ''
+ machine = '_x86' if not is_64bits else ''
+ # platform.machine() on 32-bit raspbian OS may return 'aarch64', so check "64-bitness"
+ # See: https://github.com/yt-dlp/yt-dlp/issues/11813
+ elif machine[1:] == 'aarch64' and not is_64bits:
+ machine = '_armv7l'
# sys.executable returns a /tmp/ path for staticx builds (linux_static)
# Ref: https://staticx.readthedocs.io/en/latest/usage.html#run-time-information
if static_exe_path := os.getenv('STATICX_PROG_PATH'):
| --update flag updates to the wrong software architecture
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a bug unrelated to a specific site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Provide a description that is worded well enough to be understood
I was using `yt-dlp_linux_armv7l`, but using the `--update` flag updated me to the `yt-dlp_linux_aarch64` binary.
Attempting to run the updated binary doesn't work because it uses the wrong software architecture:
```
$ ./yt-dlp_linux_armv7l --help
bash: ./yt-dlp_linux_armv7l: No such file or directory
```
Steps to reproduce:
1. Download version `2024.12.06/yt-dlp_linux_armv7l` and confirm it is the right binary:
1. `mkdir test ; cd test`
2. `wget https://github.com/yt-dlp/yt-dlp/releases/download/2024.12.06/yt-dlp_linux_armv7l ; chmod a+x ./yt-dlp_linux_armv7l`
3. `sha256sum ./yt-dlp_linux_armv7l`
4. Observe the sha256 output `ed7ce4a5508dbecb5e0272ae57023eae243b4ac73d0969a498844fc3e111d8b4` is correct
5. `file ./yt-dlp_linux_armv7l`
6. Observe output is correct: `./yt-dlp_linux_armv7l: ELF 32-bit LSB pie executable, ARM, EABI5 version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux-armhf.so.3, BuildID[sha1]=9a7fa40acc4aaeaf3d330fc2fc510872be2db480, for GNU/Linux 3.2.0, stripped`
2. Update yt-dlp and observe it now uses the wrong architecture
1. `./yt-dlp_linux_armv7l -vU` (verbose log pasted below)
2. `sha256sum ./yt-dlp_linux_armv7l`
3. Observe the sha256 output `d55bb8356ce48facdd0d1c34a54fc947824210a2bf67c9e2569b1b59080df7c1` corresponds to the linux_aarch64 architecture now rather than linux_armv7l
4. `file ./yt-dlp_linux_armv7l`
5. Observe output confirms we have the wrong architecture now: `./yt-dlp_linux_armv7l: ELF 64-bit LSB executable, ARM aarch64, version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux-aarch64.so.1, BuildID[sha1]=165c3840e46a056d08c976cddc9073109cf26ee7, for GNU/Linux 3.7.0, stripped`
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp (linux_aarch64_exe)
[debug] Python 3.9.5 (CPython aarch64 32bit) - Linux-6.1.21-v8+-aarch64-with-glibc2.31 (OpenSSL 1.1.1f 31 Mar 2020, glibc 2.31)
[debug] exe versions: ffmpeg 4.3.8-0, ffprobe 4.3.8-0
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, mutagen-1.47.0, requests-2.32.3, sqlite3-3.31.1, urllib3-2.2.3, websockets-14.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
[debug] Downloading _update_spec from https://github.com/yt-dlp/yt-dlp/releases/latest/download/_update_spec
[debug] Downloading SHA2-256SUMS from https://github.com/yt-dlp/yt-dlp/releases/download/2024.12.13/SHA2-256SUMS
Current version: [email protected] from yt-dlp/yt-dlp
Latest version: [email protected] from yt-dlp/yt-dlp
Current Build Hash: ed7ce4a5508dbecb5e0272ae57023eae243b4ac73d0969a498844fc3e111d8b4
Updating to [email protected] from yt-dlp/yt-dlp ...
[debug] Downloading yt-dlp_linux_aarch64 from https://github.com/yt-dlp/yt-dlp/releases/download/2024.12.13/yt-dlp_linux_aarch64
Updated yt-dlp to [email protected] from yt-dlp/yt-dlp
```
| Is the verbose log you've provided the armv7l binary updating to the aarch64 binary?
If so, the armv7l binary is detecting itself as being the aarch64 binary
>Is the verbose log you've provided the armv7l binary updating to the aarch64 binary?
Yes.
>If so, the armv7l binary is detecting itself as being the aarch64
Agreed -- very weird
Do you have python installed on your armv7l machine? If so, could you show the output of this command:
```
python -c "import platform; print(platform.machine())"
```
Here is the output you requested:
```
% python3 -c "import platform; print(platform.machine())"
aarch64
```
I'm using a Raspberry Pi 4B running the 32-bit Raspbian OS.
The output of `uname -a` is misleading on this system: the `aarch64` output makes it seem like a 64-bit OS, because 32-bit Raspbian on the Pi 4 runs a 64-bit kernel, and `uname` (like Python's `platform.machine()`) reports the kernel architecture rather than the userland's:
```
% uname -a
Linux piwall 6.1.21-v8+ #1642 SMP PREEMPT Mon Apr 3 17:24:16 BST 2023 aarch64 GNU/Linux
```
However, this output shows that it is actually a 32 bit OS:
```
% getconf LONG_BIT
32
```
In case it's helpful, here's some more output from my system:
```
% cat /etc/os-release
PRETTY_NAME="Raspbian GNU/Linux 11 (bullseye)"
NAME="Raspbian GNU/Linux"
VERSION_ID="11"
VERSION="11 (bullseye)"
VERSION_CODENAME=bullseye
ID=raspbian
ID_LIKE=debian
HOME_URL="http://www.raspbian.org/"
SUPPORT_URL="http://www.raspbian.org/RaspbianForums"
BUG_REPORT_URL="http://www.raspbian.org/RaspbianBugs"
```
Thanks. Could you also show the output of this command please?
```
python3 -c "import sys; print(sys.maxsize > 2**32)"
```
Sure:
```
% python3 -c "import sys; print(sys.maxsize > 2**32)"
False
```
I think we could patch it like this:
```diff
diff --git a/yt_dlp/update.py b/yt_dlp/update.py
index ca2ec5f37..9ccd44b5e 100644
--- a/yt_dlp/update.py
+++ b/yt_dlp/update.py
@@ -65,9 +65,14 @@ def _get_variant_and_executable_path():
machine = '_legacy' if version_tuple(platform.mac_ver()[0]) < (10, 15) else ''
else:
machine = f'_{platform.machine().lower()}'
+ is_64bits = sys.maxsize > 2**32
# Ref: https://en.wikipedia.org/wiki/Uname#Examples
if machine[1:] in ('x86', 'x86_64', 'amd64', 'i386', 'i686'):
- machine = '_x86' if platform.architecture()[0][:2] == '32' else ''
+ machine = '_x86' if not is_64bits else ''
+ # platform.machine() on 32-bit raspbian OS may return 'aarch64', so check "64-bitness"
+ # See: https://github.com/yt-dlp/yt-dlp/issues/11813
+ elif machine[1:] == 'aarch64' and not is_64bits:
+ machine = '_armv7l'
# sys.executable returns a /tmp/ path for staticx builds (linux_static)
# Ref: https://staticx.readthedocs.io/en/latest/usage.html#run-time-information
if static_exe_path := os.getenv('STATICX_PROG_PATH'):
```
@Grub4K what do you think? | 1,734,229,904,000 | null | Bug Report | [
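For reference, here is the same proposed logic as a standalone, runnable function (a sketch, assuming the returned suffix strings match the release asset names the updater expects):
```python
import platform
import sys

def machine_suffix():
    machine = platform.machine().lower()
    # sys.maxsize reflects the interpreter's pointer width, so it reports the
    # userland bitness even when a 64-bit kernel makes uname say 'aarch64'.
    is_64bits = sys.maxsize > 2**32
    if machine in ('x86', 'x86_64', 'amd64', 'i386', 'i686'):
        return '_x86' if not is_64bits else ''
    if machine == 'aarch64' and not is_64bits:
        return '_armv7l'  # 32-bit userland on a 64-bit ARM kernel (e.g. Raspbian)
    return f'_{machine}'
```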
"yt_dlp/update.py:_get_variant_and_executable_path"
] | [] | 1 | 487 |
|
yt-dlp/yt-dlp | yt-dlp__yt-dlp-11818 | 2037a6414f81db8080ca724dca506fde91974c5d | diff --git a/yt_dlp/extractor/youtube.py b/yt_dlp/extractor/youtube.py
index fd9c7107c7f7..e12f728ea323 100644
--- a/yt_dlp/extractor/youtube.py
+++ b/yt_dlp/extractor/youtube.py
@@ -518,11 +518,12 @@ def ucid_or_none(self, ucid):
return self._search_regex(rf'^({self._YT_CHANNEL_UCID_RE})$', ucid, 'UC-id', default=None)
def handle_or_none(self, handle):
- return self._search_regex(rf'^({self._YT_HANDLE_RE})$', handle, '@-handle', default=None)
+ return self._search_regex(rf'^({self._YT_HANDLE_RE})$', urllib.parse.unquote(handle or ''),
+ '@-handle', default=None)
def handle_from_url(self, url):
return self._search_regex(rf'^(?:https?://(?:www\.)?youtube\.com)?/({self._YT_HANDLE_RE})',
- url, 'channel handle', default=None)
+ urllib.parse.unquote(url or ''), 'channel handle', default=None)
def ucid_from_url(self, url):
return self._search_regex(rf'^(?:https?://(?:www\.)?youtube\.com)?/({self._YT_CHANNEL_UCID_RE})',
@@ -2801,6 +2802,35 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
'extractor_args': {'youtube': {'player_client': ['ios'], 'player_skip': ['webpage']}},
},
},
+ {
+ # uploader_id has non-ASCII characters that are percent-encoded in YT's JSON
+ 'url': 'https://www.youtube.com/shorts/18NGQq7p3LY',
+ 'info_dict': {
+ 'id': '18NGQq7p3LY',
+ 'ext': 'mp4',
+ 'title': '아이브 이서 장원영 리즈 삐끼삐끼 챌린지',
+ 'description': '',
+ 'uploader': 'ㅇㅇ',
+ 'uploader_id': '@으아-v1k',
+ 'uploader_url': 'https://www.youtube.com/@으아-v1k',
+ 'channel': 'ㅇㅇ',
+ 'channel_id': 'UCC25oTm2J7ZVoi5TngOHg9g',
+ 'channel_url': 'https://www.youtube.com/channel/UCC25oTm2J7ZVoi5TngOHg9g',
+ 'thumbnail': r're:https?://.+/.+\.jpg',
+ 'playable_in_embed': True,
+ 'age_limit': 0,
+ 'duration': 3,
+ 'timestamp': 1724306170,
+ 'upload_date': '20240822',
+ 'availability': 'public',
+ 'live_status': 'not_live',
+ 'view_count': int,
+ 'like_count': int,
+ 'channel_follower_count': int,
+ 'categories': ['People & Blogs'],
+ 'tags': [],
+ },
+ },
]
_WEBPAGE_TESTS = [
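The root cause is that YouTube percent-encodes non-ASCII handles in its JSON, so the `@-handle` regex never matched the raw value; decoding first fixes it. A quick demonstration of the decoding step the patch adds (the encoded string below is the UTF-8 percent-encoding of the handle from the test case):
```python
from urllib.parse import unquote

encoded = '/@%EC%9C%BC%EC%95%84-v1k'  # as it appears in YouTube's JSON
print(unquote(encoded))               # -> /@으아-v1k
```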
| The `uploader_id` template does not print Asian characters or letters with diacritical marks on the YouTube site.
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
_No response_
### Provide a description that is worded well enough to be understood
The `uploader_id` template doesn't print Asian characters or letters with diacritical marks on the YouTube site, while the `uploader` template does.
_Example Channel URL:_
1. https://www.youtube.com/@으아-v1k
2. https://www.youtube.com/c/CONTROLMÁS-SUILERALTAMIRANO
_Example Command:_
`yt-dlp -o "./%(uploader)s %(uploader_id)s/Videos/%(upload_date)s %(uploader_id)s %(title)s [%(id)s].%(ext)s" "Example_Channel_URL"`
_Output:_
Names of the created folders:
1. `ㅇㅇ#`
2. `Suiler Altamirano - Control + NA`
Using the `--print` argument.
_Command:_
`yt-dlp -vU --print uploader,uploader_id "https://www.youtube.com/shorts/18NGQq7p3LY"`
_Ouput:_
```
ㅇㅇ
NA
```
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', '--print', 'uploader,uploader_id', 'https://www.youtube.com/shorts/18NGQq7p3LY']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [542166962] (win_exe)
[debug] Python 3.10.11 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1t 7 Feb 2023)
[debug] exe versions: ffmpeg 2024-07-07-git-0619138639-full_build-www.gyan.dev (setts), ffprobe 2024-07-07-git-0619138639-full_build-www.gyan.dev, phantomjs 2.1.1
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.40.1, urllib3-2.2.3, websockets-14.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Extractor Plugins:
[debug] Post-Processor Plugins:
[debug] Plugin directories: []
[debug] Loaded 1839 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: [email protected] from yt-dlp/yt-dlp
yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp)
[youtube] Extracting URL: https://www.youtube.com/shorts/18NGQq7p3LY
[youtube] 18NGQq7p3LY: Downloading webpage
[youtube] 18NGQq7p3LY: Downloading ios player API JSON
[youtube] 18NGQq7p3LY: Downloading mweb player API JSON
[debug] Loading youtube-nsig.f8f53e1a from cache
[debug] [youtube] Decrypted nsig uq0JCV23R3b7atLWxO4 => 9vigDFaIWXoXwA
[debug] Loading youtube-nsig.f8f53e1a from cache
[debug] [youtube] Decrypted nsig -T_DSjpoOPpyTJKSn0b => N9TXRFGASCEhbA
[youtube] 18NGQq7p3LY: Downloading m3u8 information
[debug] Sort order given by extractor: quality, res, fps, hdr:12, source, vcodec, channels, acodec, lang, proto
[debug] Formats sorted by: hasvid, ie_pref, quality, res, fps, hdr:12(7), source, vcodec, channels, acodec, lang, proto, size, br, asr, vext, aext, hasaud, id
[debug] Default format spec: bestvideo*+bestaudio/best
[info] 18NGQq7p3LY: Downloading 1 format(s): 315+251
ㅇㅇ
NA
```
| 1,734,228,450,000 | null | Bug Report | [
"yt_dlp/extractor/youtube.py:YoutubeBaseInfoExtractor.handle_or_none",
"yt_dlp/extractor/youtube.py:YoutubeBaseInfoExtractor.handle_from_url"
] | [] | 2 | 488 |
||
yt-dlp/yt-dlp | yt-dlp__yt-dlp-11782 | 6fef824025b3c2f0ca8af7ac9fa04b10d09a3591 | diff --git a/yt_dlp/extractor/youtube.py b/yt_dlp/extractor/youtube.py
index e69373ba2f42..0814d0a0621b 100644
--- a/yt_dlp/extractor/youtube.py
+++ b/yt_dlp/extractor/youtube.py
@@ -5282,6 +5282,7 @@ def _extract_entries(self, parent_renderer, continuation_list):
'channelRenderer': lambda x: self._grid_entries({'items': [{'channelRenderer': x}]}),
'hashtagTileRenderer': lambda x: [self._hashtag_tile_entry(x)],
'richGridRenderer': lambda x: self._extract_entries(x, continuation_list),
+ 'lockupViewModel': lambda x: [self._extract_lockup_view_model(x)],
}
for key, renderer in isr_content.items():
if key not in known_renderers:
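The one-line fix works because `_extract_entries` dispatches on renderer type through a dict of handlers, and unrecognized keys are silently skipped, which is why playlist searches returned 0 results once YouTube switched to the new `lockupViewModel` renderer. A hypothetical, simplified sketch of that dispatch pattern:
```python
def extract_entries(isr_content: dict, known_renderers: dict):
    for key, renderer in isr_content.items():
        handler = known_renderers.get(key)
        if handler is None:
            continue  # unrecognized renderer types yield no entries
        yield from handler(renderer)
```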
| [Youtube] Playlist search broken
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
Germany
### Provide a description that is worded well enough to be understood
Using the YouTube search functionality with the playlist filter enabled does not work anymore.
It worked on previous versions. This should be related to the current playlist issues which return 0 results.
The expected behavior is getting the JSON result of the playlist search results.
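A minimal reproduction with yt-dlp's Python API (same URL as in the log below; `extract_flat` mirrors `--flat-playlist`):
```python
import yt_dlp

url = 'https://www.youtube.com/results?search_query=cute+cat&sp=EgIQAw%253D%253D'
with yt_dlp.YoutubeDL({'extract_flat': True, 'quiet': True}) as ydl:
    info = ydl.extract_info(url, download=False)
# On affected versions this prints 0; previously it listed the playlist entries
print(len(list(info.get('entries') or [])))
```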
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vvvv', '-4', '--no-warnings', '--no-check-certificate', '--dump-json', '--playlist-start', '1', '--playlist-end', '50', '--flat-playlist', 'https://www.youtube.com/results?search_query=cute+cat&sp=EgIQAw%253D%253D']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [7ea278792] (linux_exe)
[debug] Lazy loading extractors is disabled
[debug] Python 3.9.2 (CPython x86_64 64bit) - Linux-4.19.0-24-amd64-x86_64-with-glibc2.31 (OpenSSL 1.1.1w 11 Sep 2023, glibc 2.31)
[debug] exe versions: ffmpeg 4.3.8-0, ffprobe 4.3.8-0
[debug] Optional libraries: Cryptodome-3.9.7, certifi-2020.06.20, requests-2.25.1, secretstorage-3.3.1, sqlite3-3.34.1, urllib3-1.26.5
[debug] Proxy map: {}
[debug] Request Handlers: urllib
[debug] Loaded 1837 extractors
[youtube:search_url] Extracting URL: https://www.youtube.com/results?search_query=cute+cat&sp=EgIQAw%253D%253D
[download] Downloading playlist: cute cat
[youtube:search_url] query "cute cat": Downloading web client config
[youtube:search_url] query "cute cat" page 1: Downloading API JSON
[youtube:search_url] query "cute cat" page 2: Downloading API JSON
[youtube:search_url] query "cute cat" page 3: Downloading API JSON
[youtube:search_url] query "cute cat" page 4: Downloading API JSON
[youtube:search_url] query "cute cat" page 5: Downloading API JSON
[youtube:search_url] query "cute cat" page 6: Downloading API JSON
[youtube:search_url] query "cute cat" page 7: Downloading API JSON
[youtube:search_url] query "cute cat" page 8: Downloading API JSON
[youtube:search_url] query "cute cat" page 9: Downloading API JSON
[youtube:search_url] query "cute cat" page 10: Downloading API JSON
[youtube:search_url] query "cute cat" page 11: Downloading API JSON
[youtube:search_url] query "cute cat" page 12: Downloading API JSON
[youtube:search_url] query "cute cat" page 13: Downloading API JSON
[youtube:search_url] query "cute cat" page 14: Downloading API JSON
[youtube:search_url] query "cute cat" page 15: Downloading API JSON
[youtube:search_url] query "cute cat" page 16: Downloading API JSON
[youtube:search_url] query "cute cat" page 17: Downloading API JSON
[youtube:search_url] query "cute cat" page 18: Downloading API JSON
[youtube:search_url] query "cute cat" page 19: Downloading API JSON
[youtube:search_url] query "cute cat" page 20: Downloading API JSON
[youtube:search_url] query "cute cat" page 21: Downloading API JSON
[youtube:search_url] query "cute cat" page 22: Downloading API JSON
[youtube:search_url] query "cute cat" page 23: Downloading API JSON
[youtube:search_url] query "cute cat" page 24: Downloading API JSON
[youtube:search_url] Playlist cute cat: Downloading 0 items
[debug] The information of all playlist entries will be held in memory
[download] Finished downloading playlist: cute cat
```
| Did you update to nightly/master like the issue template told you to, though?
Yes, I am on master and compiled it myself.
> Yes, I am on master and compiled it myself.
that verbose log tells me that you are on yt-dlp stable branch and not nightly/master branch
How can I be on stable if I clone the master branch and compile it? I am on the master branch.
If you want to extract the flat-playlist from e.g.
https://www.youtube.com/results?search_query=cute+cat&sp=EgIQAw%253D%253D
it's still broken in master.
> If you want to extract the flat-playlist from e.g. https://www.youtube.com/results?search_query=cute+cat&sp=EgIQAw%253D%253D
>
> it's still broken in master.
You need to change the search link to this: https://m.youtube.com/results?search_query=your+search+terms
Well that’s a potential workaround but no fix.
If YouTube is doing site/layout changes, then the parsers have to be updated, not just pointed to a mobile site, which is likely to get changed as well.
> > If you want to extract the flat-playlist from e.g. https://www.youtube.com/results?search_query=cute+cat&sp=EgIQAw%253D%253D
> > it's still broken in master.
>
> You need to change the search link to this: https://m.youtube.com/results?search_query=your+search+terms
For the record:
the suggested change is also broken.
https://m.youtube.com/results?search_query=cute+cat&sp=EgIQAw%253D%253D
does not work either, as playlist parsing in general is broken and we are only searching for playlists (via the "&sp=EgIQAw%253D%253D" search filter).
> > > If you want to extract the flat-playlist from e.g. https://www.youtube.com/results?search_query=cute+cat&sp=EgIQAw%253D%253D
> > > it's still broken in master.
> >
> >
> > You need to change the search link to this: https://m.youtube.com/results?search_query=your+search+terms
>
> For the record:
>
> the suggested change is also broken.
>
> https://m.youtube.com/results?search_query=cute+cat&sp=EgIQAw%253D%253D
>
> does not work either, as playlist parsing in general is broken and we are only searching for playlists (via the "&sp=EgIQAw%253D%253D" search filter).
A "cute cat" playlist matching your search doesn't exist.
I do not search for a specific playlist. That is a general search request to YouTube to get a list of all available playlists that YouTube returns for the query.
This worked before, and for years: being able to search for type "playlist" only in the search results and get it back as a JSON response by using --flat-playlist and --dump-json.
> I do not search for a specific playlist. That is a general search request to YouTube to get a list of all available playlists that YouTube returns for the query. This worked before, and for years: being able to search for type "playlist" only in the search results and get it back as a JSON response by using --flat-playlist and --dump-json.
The playlists from YouTube's results for "cute cat" that you describe don't exist now.
So then what's this?
https://www.youtube.com/results?search_query=cute+cat&sp=EgIQAw%253D%253D
It's a list of playlists for the search query "cute cat", which yt-dlp previously parsed into a JSON list of playlist URLs, with titles, the number of videos in each playlist, the creator, etc.
I think you don't get the point of what we are trying to achieve here, which worked fine before.
I am currently debugging this issue and found out that the search query calls the `_search_results` function first, and then `_extract_entries` is called.
Inside `_extract_entries` there is a definition of known renderers:
```
known_renderers = {
'playlistVideoListRenderer': self._playlist_entries,
'gridRenderer': self._grid_entries,
'reelShelfRenderer': self._grid_entries,
'shelfRenderer': self._shelf_entries,
'musicResponsiveListItemRenderer': lambda x: [self._music_reponsive_list_entry(x)],
'backstagePostThreadRenderer': self._post_thread_entries,
'videoRenderer': lambda x: [self._video_entry(x)],
'playlistRenderer': lambda x: self._grid_entries({'items': [{'playlistRenderer': x}]}),
'channelRenderer': lambda x: self._grid_entries({'items': [{'channelRenderer': x}]}),
'hashtagTileRenderer': lambda x: [self._hashtag_tile_entry(x)],
'richGridRenderer': lambda x: self._extract_entries(x, continuation_list),
}
```
which is being looked up.
I added prints to see what's happening:
```
for key, renderer in isr_content.items():
if key not in known_renderers:
print("key NOT found in known_renderers: " + key)
continue
for entry in known_renderers[key](renderer):
print("found renderer: " + entry)
```
which results in this output:
```
[youtube:search_url] query "cute cat" page 1: Downloading API JSON
=====================_extract_entries=====================
key NOT found in known_renderers: adSlotRenderer
key NOT found in known_renderers: lockupViewModel
key NOT found in known_renderers: lockupViewModel
key NOT found in known_renderers: adSlotRenderer
key NOT found in known_renderers: lockupViewModel
key NOT found in known_renderers: lockupViewModel
key NOT found in known_renderers: adSlotRenderer
key NOT found in known_renderers: lockupViewModel
key NOT found in known_renderers: lockupViewModel
key NOT found in known_renderers: lockupViewModel
key NOT found in known_renderers: lockupViewModel
key NOT found in known_renderers: lockupViewModel
key NOT found in known_renderers: adSlotRenderer
key NOT found in known_renderers: lockupViewModel
key NOT found in known_renderers: lockupViewModel
key NOT found in known_renderers: lockupViewModel
key NOT found in known_renderers: lockupViewModel
key NOT found in known_renderers: lockupViewModel
key NOT found in known_renderers: adSlotRenderer
key NOT found in known_renderers: lockupViewModel
key NOT found in known_renderers: lockupViewModel
key NOT found in known_renderers: lockupViewModel
key NOT found in known_renderers: lockupViewModel
key NOT found in known_renderers: lockupViewModel
key NOT found in known_renderers: lockupViewModel
```
To me this looks like the search results contain `lockupViewModel` entries, which are missing from `known_renderers`, so no parsing is done anymore; the `continue` just skips them, since no renderer is defined for this case.
Any ideas on how to speed up fixing this? Otherwise I have to look through all the code myself.
This issue is probably closely related to:
https://github.com/yt-dlp/yt-dlp/pull/11615
I am looking into how this already-merged pull request can be used to solve this issue too.
How about
```diff
diff --git a/yt_dlp/extractor/youtube.py b/yt_dlp/extractor/youtube.py
index e69373ba2..0814d0a06 100644
--- a/yt_dlp/extractor/youtube.py
+++ b/yt_dlp/extractor/youtube.py
@@ -5282,6 +5282,7 @@ def _extract_entries(self, parent_renderer, continuation_list):
'channelRenderer': lambda x: self._grid_entries({'items': [{'channelRenderer': x}]}),
'hashtagTileRenderer': lambda x: [self._hashtag_tile_entry(x)],
'richGridRenderer': lambda x: self._extract_entries(x, continuation_list),
+ 'lockupViewModel': lambda x: [self._extract_lockup_view_model(x)],
}
for key, renderer in isr_content.items():
if key not in known_renderers:
```
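For context, a standalone sketch of how a `lockupViewModel` could be mapped to a playlist entry; the field paths below are assumptions inferred from the page JSON, not yt-dlp's actual `_extract_lockup_view_model`:
```python
# Standalone sketch; field names are assumptions, not yt-dlp's real code.
def parse_lockup_view_model(view_model):
    content_id = view_model.get('contentId')
    if not content_id:
        return None
    title = (view_model.get('metadata', {})
             .get('lockupMetadataViewModel', {})
             .get('title', {})
             .get('content'))
    # Hand the playlist URL off to the tab/playlist extractor, like the
    # existing `playlistRenderer` handling does
    return {
        '_type': 'url',
        'ie_key': 'YoutubeTab',
        'id': content_id,
        'title': title,
        'url': f'https://www.youtube.com/playlist?list={content_id}',
    }
```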
> How about
>
> ```diff
> diff --git a/yt_dlp/extractor/youtube.py b/yt_dlp/extractor/youtube.py
> index e69373ba2..0814d0a06 100644
> --- a/yt_dlp/extractor/youtube.py
> +++ b/yt_dlp/extractor/youtube.py
> @@ -5282,6 +5282,7 @@ def _extract_entries(self, parent_renderer, continuation_list):
> 'channelRenderer': lambda x: self._grid_entries({'items': [{'channelRenderer': x}]}),
> 'hashtagTileRenderer': lambda x: [self._hashtag_tile_entry(x)],
> 'richGridRenderer': lambda x: self._extract_entries(x, continuation_list),
> + 'lockupViewModel': lambda x: [self._extract_lockup_view_model(x)],
> }
> for key, renderer in isr_content.items():
> if key not in known_renderers:
> ```
this does work!
output:
```
=====================_extract_entries=====================
[youtube:search_url] Playlist cute cat: Downloading 50 items
[debug] The information of all playlist entries will be held in memory
[download] Downloading item 1 of 50
{"title": "Cute Cat of NI", "thumbnails": [{"url": "https://i.ytimg.com/vi/phXmMI35fIo/hqdefault.jpg?sqp=-oaymwEWCKgBEF5IWvKriqkDCQgBFQAAiEIYAQ==&rs=AOn4CLCexppnbWUPXQgz341lRGXxTwGGww", "height": 94, "width": 168}, {"url": "https://i.ytimg.com/vi/phXmMI35fIo/hqdefault.jpg?sqp=-oaymwEWCMQBEG5IWvKriqkDCQgBFQAAiEIYAQ==&rs=AOn4CLCNJgQ2qs-2znEem1B90IIx-h8QcA", "height": 110, "width": 196}, {"url": "https://i.ytimg.com/vi/phXmMI35fIo/hqdefault.jpg?sqp=-oaymwEXCPYBEIoBSFryq4qpAwkIARUAAIhCGAE=&rs=AOn4CLCpjHdPSkoqpwTrF9zPElW4ICJwkw", "height": 138, "width": 246}, {"url": "https://i.ytimg.com/vi/phXmMI35fIo/hqdefault.jpg?sqp=-oaymwEXCNACELwBSFryq4qpAwkIARUAAIhCGAE=&rs=AOn4CLAgoA5ZH_HfuzDBg1QYHVu4R_kRzQ", "height": 188, "width": 336}], "ie_key": "YoutubeTab", "id": "PLBFAOHoKf1k6-tWa_yjDJKxnR6srQXhSH", "_type": "url", "url": "https://www.youtube.com/playlist?list=PLBFAOHoKf1k6-tWa_yjDJKxnR6srQXhSH", "__x_forwarded_for_ip": null, "webpage_url": "https://www.youtube.com/playlist?list=PLBFAOHoKf1k6-tWa_yjDJKxnR6srQXhSH", "original_url": "https://www.youtube.com/playlist?list=PLBFAOHoKf1k6-tWa_yjDJKxnR6srQXhSH", "webpage_url_basename": "playlist", "webpage_url_domain": "youtube.com", "extractor": "youtube:tab", "extractor_key": "YoutubeTab", "playlist_count": null, "playlist": "cute cat", "playlist_id": "cute cat", "playlist_title": "cute cat", "playlist_uploader": null, "playlist_uploader_id": null, "playlist_channel": null, "playlist_channel_id": null, "playlist_webpage_url": "https://www.youtube.com/results?search_query=cute+cat&sp=EgIQAw%253D%253D", "n_entries": 50, "playlist_index": 1, "__last_playlist_index": 50, "playlist_autonumber": 1, "epoch": 1733840411, "release_year": null, "_version": {"version": "2024.12.06", "current_git_head": null, "release_git_head": "4bd2655398aed450456197a6767639114a24eac2", "repository": "yt-dlp/yt-dlp"}}
[download] Downloading item 2 of 50
{"title": "Cute cat \u2618\ufe0f \u3010Cute Lofi Mix\ud83c\udf52\u3011\ud83c\udf3crelax / study / sleep / work / aesthetic", "thumbnails": [{"url": "https://i.ytimg.com/vi/hW9g7To610w/hqdefault.jpg?sqp=-oaymwEWCKgBEF5IWvKriqkDCQgBFQAAiEIYAQ==&rs=AOn4CLDPa1XYgae4ZMThWSTnPuckBqOwYg", "height": 94, "width": 168}, {"url": "https://i.ytimg.com/vi/hW9g7To610w/hqdefault.jpg?sqp=-oaymwEWCMQBEG5IWvKriqkDCQgBFQAAiEIYAQ==&rs=AOn4CLDA0cKLczY3zRxwelwc4LBRfz_FOA", "height": 110, "width": 196}, {"url": "https://i.ytimg.com/vi/hW9g7To610w/hqdefault.jpg?sqp=-oaymwEXCPYBEIoBSFryq4qpAwkIARUAAIhCGAE=&rs=AOn4CLDd60KIWDS1XMf99cU-L93sTYWa1w", "height": 138, "width": 246}, {"url": "https://i.ytimg.com/vi/hW9g7To610w/hqdefault.jpg?sqp=-oaymwEXCNACELwBSFryq4qpAwkIARUAAIhCGAE=&rs=AOn4CLC2dbcOvc9nlk_OrOPTyeklrmxFXg", "height": 188, "width": 336}], "ie_key": "YoutubeTab", "id": "PLumUPw_clK5HHuZIUFXApW7jShX5JPmMJ", "_type": "url", "url": "https://www.youtube.com/playlist?list=PLumUPw_clK5HHuZIUFXApW7jShX5JPmMJ", "__x_forwarded_for_ip": null,"webpage_url": "https://www.youtube.com/playlist?list=PLumUPw_clK5HHuZIUFXApW7jShX5JPmMJ", "original_url": "https://www.youtube.com/playlist?list=PLumUPw_clK5HHuZIUFXApW7jShX5JPmMJ", "webpage_url_basename": "playlist", "webpage_url_domain": "youtube.com", "extractor": "youtube:tab", "extractor_key": "YoutubeTab", "playlist_count": null, "playlist": "cute cat", "playlist_id": "cute cat", "playlist_title": "cute cat", "playlist_uploader": null, "playlist_uploader_id": null, "playlist_channel": null, "playlist_channel_id": null, "playlist_webpage_url": "https://www.youtube.com/results?search_query=cute+cat&sp=EgIQAw%253D%253D", "n_entries": 50, "playlist_index": 2, "__last_playlist_index": 50, "playlist_autonumber": 2, "epoch": 1733840411, "release_year": null, "_version": {"version": "2024.12.06", "current_git_head": null, "release_git_head": "4bd2655398aed450456197a6767639114a24eac2", "repository": "yt-dlp/yt-dlp"}}
[download] Downloading item 3 of 50
{"title": "Cute Kitten, Cute Cat | Little Kittens", "thumbnails": [{"url": "https://i.ytimg.com/vi/nAvtX22KNTg/hqdefault.jpg?sqp=-oaymwEWCKgBEF5IWvKriqkDCQgBFQAAiEIYAQ==&rs=AOn4CLA37XsAbQxF7--pdSd9V-8Mgdvk-Q", "height": 94, "width": 168}, {"url": "https://i.ytimg.com/vi/nAvtX22KNTg/hqdefault.jpg?sqp=-oaymwEWCMQBEG5IWvKriqkDCQgBFQAAiEIYAQ==&rs=AOn4CLDgP2JXJQQ9eKyjRq_kzHH-PvSjZQ", "height": 110, "width": 196}, {"url": "https://i.ytimg.com/vi/nAvtX22KNTg/hqdefault.jpg?sqp=-oaymwEXCPYBEIoBSFryq4qpAwkIARUAAIhCGAE=&rs=AOn4CLD7sRLFMJcsZyb78yGmcrvD3JIZmw", "height": 138, "width": 246}, {"url": "https://i.ytimg.com/vi/nAvtX22KNTg/hqdefault.jpg?sqp=-oaymwEXCNACELwBSFryq4qpAwkIARUAAIhCGAE=&rs=AOn4CLBE3bQuBq3UpaLpq71JYp9bmfRZnQ", "height": 188, "width": 336}], "ie_key": "YoutubeTab", "id": "PLlBfWe1gfOezsfNKeEE00vzCjgBgBVeE-", "_type": "url", "url": "https://www.youtube.com/playlist?list=PLlBfWe1gfOezsfNKeEE00vzCjgBgBVeE-", "__x_forwarded_for_ip": null, "webpage_url": "https://www.youtube.com/playlist?list=PLlBfWe1gfOezsfNKeEE00vzCjgBgBVeE-", "original_url": "https://www.youtube.com/playlist?list=PLlBfWe1gfOezsfNKeEE00vzCjgBgBVeE-", "webpage_url_basename": "playlist", "webpage_url_domain": "youtube.com", "extractor": "youtube:tab", "extractor_key": "YoutubeTab", "playlist_count": null, "playlist": "cute cat", "playlist_id": "cute cat", "playlist_title": "cute cat", "playlist_uploader": null, "playlist_uploader_id": null, "playlist_channel": null, "playlist_channel_id": null, "playlist_webpage_url": "https://www.youtube.com/results?search_query=cute+cat&sp=EgIQAw%253D%253D", "n_entries": 50, "playlist_index": 3, "__last_playlist_index": 50, "playlist_autonumber": 3, "epoch": 1733840411, "release_year": null, "_version": {"version": "2024.12.06", "current_git_head": null, "release_git_head": "4bd2655398aed450456197a6767639114a24eac2", "repository": "yt-dlp/yt-dlp"}}
[download] Downloading item 4 of 50
{"title": "Cute Cat Videos", "thumbnails": [{"url": "https://i.ytimg.com/vi/nMeSViR6Uoc/hqdefault.jpg?sqp=-oaymwEWCKgBEF5IWvKriqkDCQgBFQAAiEIYAQ==&rs=AOn4CLDZyAud_wlAIhfjJsELZG7jgfRlpw", "height": 94, "width": 168}, {"url": "https://i.ytimg.com/vi/nMeSViR6Uoc/hqdefault.jpg?sqp=-oaymwEWCMQBEG5IWvKriqkDCQgBFQAAiEIYAQ==&rs=AOn4CLDqn_ufZK0wAMo5ahDJOSvyiOFLfw", "height": 110, "width": 196}, {"url": "https://i.ytimg.com/vi/nMeSViR6Uoc/hqdefault.jpg?sqp=-oaymwEXCPYBEIoBSFryq4qpAwkIARUAAIhCGAE=&rs=AOn4CLAkHUkEJUTHwdwphBPzCBmis1Ga-A", "height": 138, "width": 246}, {"url": "https://i.ytimg.com/vi/nMeSViR6Uoc/hqdefault.jpg?sqp=-oaymwEXCNACELwBSFryq4qpAwkIARUAAIhCGAE=&rs=AOn4CLDLyEqB0fjSAVjRU2YOLzsztNKgvA", "height": 188, "width": 336}], "ie_key": "YoutubeTab", "id": "PL7-x6iMIEpgc1Vhq-FASNb186TWXVS_u2", "_type": "url", "url": "https://www.youtube.com/playlist?list=PL7-x6iMIEpgc1Vhq-FASNb186TWXVS_u2", "__x_forwarded_for_ip": null, "webpage_url": "https://www.youtube.com/playlist?list=PL7-x6iMIEpgc1Vhq-FASNb186TWXVS_u2", "original_url": "https://www.youtube.com/playlist?list=PL7-x6iMIEpgc1Vhq-FASNb186TWXVS_u2", "webpage_url_basename": "playlist", "webpage_url_domain": "youtube.com", "extractor": "youtube:tab", "extractor_key": "YoutubeTab", "playlist_count": null, "playlist": "cute cat", "playlist_id": "cute cat", "playlist_title": "cute cat", "playlist_uploader": null, "playlist_uploader_id": null, "playlist_channel": null, "playlist_channel_id": null, "playlist_webpage_url": "https://www.youtube.com/results?search_query=cute+cat&sp=EgIQAw%253D%253D", "n_entries": 50, "playlist_index": 4, "__last_playlist_index": 50, "playlist_autonumber": 4, "epoch": 1733840411, "release_year": null, "_version": {"version": "2024.12.06", "current_git_head": null, "release_git_head": "4bd2655398aed450456197a6767639114a24eac2", "repository": "yt-dlp/yt-dlp"}}
[download] Downloading item 5 of 50
{"title": "Cute cat and puppy world", "thumbnails": [{"url": "https://i.ytimg.com/vi/GCnQYjXvV7Q/hqdefault.jpg?sqp=-oaymwEWCKgBEF5IWvKriqkDCQgBFQAAiEIYAQ==&rs=AOn4CLBd6bosQTSVHcMWS8miLbZT2gq_ig", "height": 94, "width": 168}, {"url": "https://i.ytimg.com/vi/GCnQYjXvV7Q/hqdefault.jpg?sqp=-oaymwEWCMQBEG5IWvKriqkDCQgBFQAAiEIYAQ==&rs=AOn4CLDV7lb4j_K5SUFVZQ5qRuAcU4MWNQ", "height": 110, "width": 196}, {"url": "https://i.ytimg.com/vi/GCnQYjXvV7Q/hqdefault.jpg?sqp=-oaymwEXCPYBEIoBSFryq4qpAwkIARUAAIhCGAE=&rs=AOn4CLBqjGracCqwvT3R-4U19k5GUiAY4w", "height": 138, "width": 246}, {"url": "https://i.ytimg.com/vi/GCnQYjXvV7Q/hqdefault.jpg?sqp=-oaymwEXCNACELwBSFryq4qpAwkIARUAAIhCGAE=&rs=AOn4CLCotimRQTnkRcbXiEDjnTTJEOwGzQ", "height": 188, "width": 336}], "ie_key": "YoutubeTab", "id": "PLdL74Q19adY0dh-ymKIxIHrwSRyIFL5a1", "_type": "url", "url": "https://www.youtube.com/playlist?list=PLdL74Q19adY0dh-ymKIxIHrwSRyIFL5a1", "__x_forwarded_for_ip": null, "webpage_url": "https://www.youtube.com/playlist?list=PLdL74Q19adY0dh-ymKIxIHrwSRyIFL5a1", "original_url": "https://www.youtube.com/playlist?list=PLdL74Q19adY0dh-ymKIxIHrwSRyIFL5a1", "webpage_url_basename": "playlist", "webpage_url_domain": "youtube.com", "extractor": "youtube:tab", "extractor_key": "YoutubeTab", "playlist_count": null, "playlist": "cute cat", "playlist_id": "cute cat", "playlist_title": "cute cat", "playlist_uploader": null, "playlist_uploader_id": null, "playlist_channel": null, "playlist_channel_id": null, "playlist_webpage_url": "https://www.youtube.com/results?search_query=cute+cat&sp=EgIQAw%253D%253D", "n_entries": 50, "playlist_index": 5, "__last_playlist_index": 50, "playlist_autonumber": 5, "epoch": 1733840411, "release_year": null, "_version": {"version": "2024.12.06", "current_git_head": null, "release_git_head": "4bd2655398aed450456197a6767639114a24eac2", "repository": "yt-dlp/yt-dlp"}}
[download] Downloading item 6 of 50
{"title": "Duet Cats Cute Popcat Music - all SONG, CATS and FOOD", "thumbnails": [{"url": "https://i.ytimg.com/vi/X5-AkEhhjho/hqdefault.jpg?sqp=-oaymwEWCKgBEF5IWvKriqkDCQgBFQAAiEIYAQ==&rs=AOn4CLBhqwVi5L8OHVcLOc6KHVavr3Cplg", "height": 94, "width": 168}, {"url": "https://i.ytimg.com/vi/X5-AkEhhjho/hqdefault.jpg?sqp=-oaymwEWCMQBEG5IWvKriqkDCQgBFQAAiEIYAQ==&rs=AOn4CLBFRO7M6WkW6tPC_5Y__Ze9xM8M2A", "height": 110, "width": 196}, {"url": "https://i.ytimg.com/vi/X5-AkEhhjho/hqdefault.jpg?sqp=-oaymwEXCPYBEIoBSFryq4qpAwkIARUAAIhCGAE=&rs=AOn4CLA1IVtb2WvU1ZVY4AYhDP3uT7n99A", "height": 138, "width": 246}, {"url": "https://i.ytimg.com/vi/X5-AkEhhjho/hqdefault.jpg?sqp=-oaymwEXCNACELwBSFryq4qpAwkIARUAAIhCGAE=&rs=AOn4CLDolJQfj3Qoim5rPS8LWifAerl5Aw", "height": 188, "width": 336}], "ie_key": "YoutubeTab", "id": "PL7DH5KKQ3-8LkwNGLHF_shz4Pmew1vtVN", "_type": "url", "url": "https://www.youtube.com/playlist?list=PL7DH5KKQ3-8LkwNGLHF_shz4Pmew1vtVN", "__x_forwarded_for_ip": null, "webpage_url": "https://www.youtube.com/playlist?list=PL7DH5KKQ3-8LkwNGLHF_shz4Pmew1vtVN", "original_url": "https://www.youtube.com/playlist?list=PL7DH5KKQ3-8LkwNGLHF_shz4Pmew1vtVN", "webpage_url_basename": "playlist", "webpage_url_domain": "youtube.com", "extractor": "youtube:tab", "extractor_key": "YoutubeTab", "playlist_count": null, "playlist": "cute cat", "playlist_id": "cute cat", "playlist_title": "cute cat", "playlist_uploader": null, "playlist_uploader_id": null, "playlist_channel": null, "playlist_channel_id": null, "playlist_webpage_url": "https://www.youtube.com/results?search_query=cute+cat&sp=EgIQAw%253D%253D", "n_entries": 50, "playlist_index": 6, "__last_playlist_index": 50, "playlist_autonumber": 6, "epoch": 1733840411, "release_year": null, "_version": {"version": "2024.12.06", "current_git_head": null, "release_git_head": "4bd2655398aed450456197a6767639114a24eac2", "repository": "yt-dlp/yt-dlp"}}
[download] Downloading item 7 of 50
{"title": "Cute cat", "thumbnails": [{"url": "https://i.ytimg.com/vi/w9g6_xz8hUQ/hqdefault.jpg?sqp=-oaymwEWCKgBEF5IWvKriqkDCQgBFQAAiEIYAQ==&rs=AOn4CLDWIKpFazsSR-TQR4NRvvEoI2dXRA", "height": 94, "width": 168}, {"url": "https://i.ytimg.com/vi/w9g6_xz8hUQ/hqdefault.jpg?sqp=-oaymwEWCMQBEG5IWvKriqkDCQgBFQAAiEIYAQ==&rs=AOn4CLB2gg6WiCJO7rSggnpyY_btM8318w", "height": 110, "width": 196}, {"url": "https://i.ytimg.com/vi/w9g6_xz8hUQ/hqdefault.jpg?sqp=-oaymwEXCPYBEIoBSFryq4qpAwkIARUAAIhCGAE=&rs=AOn4CLDVYR5pdbk9qoXhAF-kLDZT1mpIzA", "height": 138, "width": 246}, {"url": "https://i.ytimg.com/vi/w9g6_xz8hUQ/hqdefault.jpg?sqp=-oaymwEXCNACELwBSFryq4qpAwkIARUAAIhCGAE=&rs=AOn4CLACrgV-nDaVlyDlUmPN8bCGwTffyQ", "height": 188, "width": 336}], "ie_key": "YoutubeTab", "id": "PL-_BwsNAuul_t0zp8SPKxbD-ktDqxiG5W", "_type": "url", "url": "https://www.youtube.com/playlist?list=PL-_BwsNAuul_t0zp8SPKxbD-ktDqxiG5W", "__x_forwarded_for_ip": null, "webpage_url": "https://www.youtube.com/playlist?list=PL-_BwsNAuul_t0zp8SPKxbD-ktDqxiG5W", "original_url": "https://www.youtube.com/playlist?list=PL-_BwsNAuul_t0zp8SPKxbD-ktDqxiG5W", "webpage_url_basename": "playlist", "webpage_url_domain": "youtube.com", "extractor": "youtube:tab", "extractor_key": "YoutubeTab", "playlist_count": null, "playlist": "cute cat", "playlist_id": "cute cat", "playlist_title": "cute cat", "playlist_uploader": null, "playlist_uploader_id": null, "playlist_channel": null, "playlist_channel_id": null, "playlist_webpage_url": "https://www.youtube.com/results?search_query=cute+cat&sp=EgIQAw%253D%253D", "n_entries": 50, "playlist_index": 7, "__last_playlist_index": 50, "playlist_autonumber": 7, "epoch": 1733840411, "release_year": null, "_version": {"version": "2024.12.06", "current_git_head": null, "release_git_head": "4bd2655398aed450456197a6767639114a24eac2", "repository": "yt-dlp/yt-dlp"}}
[download] Downloading item 8 of 50
...
``` | 1,733,842,458,000 | null | Bug Report | [
"yt_dlp/extractor/youtube.py:YoutubeTabBaseInfoExtractor._extract_entries"
] | [] | 1 | 489 |
|
yt-dlp/yt-dlp | yt-dlp__yt-dlp-11734 | 354cb4026cf2191e1a130ec2a627b95cabfbc60a | diff --git a/yt_dlp/extractor/bilibili.py b/yt_dlp/extractor/bilibili.py
index 91619d9d5ca9..2db951a6084d 100644
--- a/yt_dlp/extractor/bilibili.py
+++ b/yt_dlp/extractor/bilibili.py
@@ -681,12 +681,6 @@ def _real_extract(self, url):
old_video_id = format_field(aid, None, f'%s_part{part_id or 1}')
cid = traverse_obj(video_data, ('pages', part_id - 1, 'cid')) if part_id else video_data.get('cid')
- play_info = (
- traverse_obj(
- self._search_json(r'window\.__playinfo__\s*=', webpage, 'play info', video_id, default=None),
- ('data', {dict}))
- or self._download_playinfo(video_id, cid, headers=headers, query={'try_look': 1}))
-
festival_info = {}
if is_festival:
festival_info = traverse_obj(initial_state, {
@@ -724,6 +718,13 @@ def _real_extract(self, url):
duration=traverse_obj(initial_state, ('videoData', 'duration', {int_or_none})),
__post_extractor=self.extract_comments(aid))
+ play_info = None
+ if self.is_logged_in:
+ play_info = traverse_obj(
+ self._search_json(r'window\.__playinfo__\s*=', webpage, 'play info', video_id, default=None),
+ ('data', {dict}))
+ if not play_info:
+ play_info = self._download_playinfo(video_id, cid, headers=headers, query={'try_look': 1})
formats = self.extract_formats(play_info)
if video_data.get('is_upower_exclusive'):
| [BiliBili] extract 720p/1080p format without logging in by passing `'try_look': 1` to the api
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm requesting a site-specific feature
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
China/Out of China
### Example URLs
https://www.bilibili.com/video/BV1fK4y1t7hj/?spm_id_from=333.337.search-card.all.click&vd_source=...c145ee572cfa536d2947
### Provide a description that is worded well enough to be understood
As mentioned in https://github.com/yt-dlp/yt-dlp/pull/9117#discussion_r1608974583, it is possible to extract 720p/1080p formats (`80` & `64`) without logging in by passing the parameter `'try_look': 1` to the API.
(though premium formats `120` & `116` are still not accessible)
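A standalone sketch of the API call described above (the `bvid`/`cid` values come from the verbose log below; note that yt-dlp actually WBI-signs these parameters via `_sign_wbi`, so an unsigned request like this may be rejected depending on Bilibili's enforcement):
```python
import requests

params = {
    'bvid': 'BV1fK4y1t7hj',  # video from the example URL above
    'cid': 196018899,        # cid from the verbose log below
    'fnval': 4048,           # request DASH formats
    'try_look': 1,           # unlocks quality 64/80 (720p/1080p) while logged out
}
resp = requests.get(
    'https://api.bilibili.com/x/player/wbi/playurl', params=params,
    headers={'Referer': 'https://www.bilibili.com/'})
data = resp.json().get('data') or {}
print([fmt.get('quality') for fmt in data.get('support_formats', [])])
```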
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[BiliBili] Extracting URL: https://www.bilibili.com/video/BV1fK4y1t7hj/?spm_id_from=333.337.search-card.all.click&vd_source=...c145ee572cfa536d2947
[BiliBili] 1fK4y1t7hj: Downloading webpage
[BiliBili] BV1fK4y1t7hj: Extracting videos in anthology
[BiliBili] BV1fK4y1t7hj: Downloading wbi sign
[BiliBili] BV1fK4y1t7hj: Downloading video formats for cid 196018899
[BiliBili] Format(s) 4K 超清, 1080P 60帧, 1080P 高清, 720P 高清 are missing; you have to login or become a premium member to download them. Use --cookies-from-browser or --cookies for the authentication. See https://github.com/yt-dlp/yt-dlp/wiki/FAQ#how-do-i-pass-cookies-to-yt-dlp for how to manually pass cookies
[BiliBili] 883362563: Extracting chapters
[info] Available formats for BV1fK4y1t7hj:
ID EXT RESOLUTION FPS │ FILESIZE TBR PROTO │ VCODEC VBR ACODEC ABR
─────────────────────────────────────────────────────────────────────────────────────
30216 m4a audio only │ ≈ 1.71MiB 49k https │ audio only mp4a.40.5 49k
30232 m4a audio only │ ≈ 4.72MiB 134k https │ audio only mp4a.40.2 134k
30280 m4a audio only │ ≈10.10MiB 287k https │ audio only mp4a.40.2 287k
30016 mp4 640x290 29 │ ≈12.24MiB 348k https │ avc1.64001E 348k video only
100022 mp4 792x360 30 │ ≈ 7.11MiB 202k https │ av01.0.04M.08 202k video only
30011 mp4 792x360 30 │ ≈10.98MiB 312k https │ hev1.1.6.L120 312k video only
30032 mp4 854x388 29 │ ≈24.20MiB 688k https │ avc1.64001E 688k video only
100023 mp4 1056x480 30 │ ≈15.72MiB 447k https │ av01.0.04M.08 447k video only
30033 mp4 1056x480 30 │ ≈10.57MiB 300k https │ hev1.1.6.L120 300k video only
```
| With `'try_look': 1` passed to the API, it gives:
```
[BiliBili] Extracting URL: https://www.bilibili.com/video/BV1fK4y1t7hj/?spm_id_from=333.337.search-card.all.click&vd_source=...c145ee572cfa536d2947
[BiliBili] 1fK4y1t7hj: Downloading webpage
[BiliBili] BV1fK4y1t7hj: Extracting videos in anthology
[BiliBili] BV1fK4y1t7hj: Downloading wbi sign
[BiliBili] BV1fK4y1t7hj: Downloading video formats for cid 196018899
[BiliBili] Format(s) 4K 超清, 1080P 60帧 are missing; you have to login or become a premium member to download them. Use --cookies-from-browser or --cookies for the authentication. See https://github.com/yt-dlp/yt-dlp/wiki/FAQ#how-do-i-pass-cookies-to-yt-dlp for how to manually pass cookies
[BiliBili] 883362563: Extracting chapters
[info] Available formats for BV1fK4y1t7hj:
ID EXT RESOLUTION FPS │ FILESIZE TBR PROTO │ VCODEC VBR ACODEC ABR
───────────────────────────────────────────────────────────────────────────────────────
30216 m4a audio only │ ≈ 1.71MiB 49k https │ audio only mp4a.40.5 49k
30232 m4a audio only │ ≈ 4.72MiB 134k https │ audio only mp4a.40.2 134k
30280 m4a audio only │ ≈10.10MiB 287k https │ audio only mp4a.40.2 287k
30016 mp4 640x290 29 │ ≈12.24MiB 348k https │ avc1.64001E 348k video only
100022 mp4 792x360 30 │ ≈ 7.11MiB 202k https │ av01.0.04M.08 202k video only
30011 mp4 792x360 30 │ ≈10.98MiB 312k https │ hev1.1.6.L120 312k video only
30032 mp4 854x388 29 │ ≈24.20MiB 688k https │ avc1.64001E 688k video only
100023 mp4 1056x480 30 │ ≈15.72MiB 447k https │ av01.0.04M.08 447k video only
30033 mp4 1056x480 30 │ ≈10.57MiB 300k https │ hev1.1.6.L120 300k video only
30064 mp4 1280x580 29 │ ≈48.29MiB 1373k https │ avc1.64001F 1373k video only
100024 mp4 1584x720 30 │ ≈35.36MiB 1005k https │ av01.0.08M.08 1005k video only
30066 mp4 1584x720 30 │ ≈17.87MiB 508k https │ hev1.1.6.L120 508k video only
30080 mp4 1920x872 29 │ ≈70.08MiB 1992k https │ avc1.640032 1992k video only
100026 mp4 2378x1080 30 │ ≈50.91MiB 1447k https │ av01.0.12M.08 1447k video only
30077 mp4 2378x1080 30 │ ≈42.76MiB 1215k https │ hev1.1.6.L150 1215k video only
```
* When passing a normal logged-in (non-premium) account cookie, the premium formats are still not provided.
My patch:
```diff
diff --git a/yt_dlp/extractor/bilibili.py b/yt_dlp/extractor/bilibili.py
index a84b7a6f7..8e53f59dc 100644
--- a/yt_dlp/extractor/bilibili.py
+++ b/yt_dlp/extractor/bilibili.py
@@ -164,14 +164,12 @@ def _sign_wbi(self, params, video_id):
params['w_rid'] = hashlib.md5(f'{query}{self._get_wbi_key(video_id)}'.encode()).hexdigest()
return params
- def _download_playinfo(self, bvid, cid, headers=None, qn=None):
- params = {'bvid': bvid, 'cid': cid, 'fnval': 4048}
- if qn:
- params['qn'] = qn
+ def _download_playinfo(self, bvid, cid, headers=None, **kwargs):
+ params = {'bvid': bvid, 'cid': cid, 'fnval': 4048, **kwargs}
return self._download_json(
'https://api.bilibili.com/x/player/wbi/playurl', bvid,
query=self._sign_wbi(params, bvid), headers=headers,
- note=f'Downloading video formats for cid {cid} {qn or ""}')['data']
+ note=f'Downloading video formats for cid {cid} {kwargs.get("qn", "")}')['data']
def json2srt(self, json_data):
srt_data = ''
@@ -723,6 +721,7 @@ def _real_extract(self, url):
duration=traverse_obj(initial_state, ('videoData', 'duration', {int_or_none})),
__post_extractor=self.extract_comments(aid))
else:
+ play_info = self._download_playinfo(video_id, cid, headers=headers, try_look=1)
formats = self.extract_formats(play_info)
if not traverse_obj(play_info, ('dash')):
```
Seems like Bilibili has added `window.__playinfo__` back onto the webpage. That explains https://github.com/yt-dlp/yt-dlp/issues/11665#issuecomment-2516376014
Should we always use the API even when playinfo is embedded?
Is there any benefit to using the `window.__playinfo__` JSON object besides saving a request?
> Is there any benefit to using the `window.__playinfo__` JSON object besides saving a request?
For `BilibiliIE`, no.
<details><summary>
Patch: remove playinfo extraction from `window.__playinfo__` and always download it _after_ the `is_interactive` check (`_get_interactive_entries` doesn't need playinfo)
</summary>
```diff
diff --git a/yt_dlp/extractor/bilibili.py b/yt_dlp/extractor/bilibili.py
index 91619d9d5..b121324de 100644
--- a/yt_dlp/extractor/bilibili.py
+++ b/yt_dlp/extractor/bilibili.py
@@ -681,12 +681,6 @@ def _real_extract(self, url):
old_video_id = format_field(aid, None, f'%s_part{part_id or 1}')
cid = traverse_obj(video_data, ('pages', part_id - 1, 'cid')) if part_id else video_data.get('cid')
- play_info = (
- traverse_obj(
- self._search_json(r'window\.__playinfo__\s*=', webpage, 'play info', video_id, default=None),
- ('data', {dict}))
- or self._download_playinfo(video_id, cid, headers=headers, query={'try_look': 1}))
-
festival_info = {}
if is_festival:
festival_info = traverse_obj(initial_state, {
@@ -724,6 +718,7 @@ def _real_extract(self, url):
duration=traverse_obj(initial_state, ('videoData', 'duration', {int_or_none})),
__post_extractor=self.extract_comments(aid))
+ play_info = self._download_playinfo(video_id, cid, headers=headers, query={'try_look': 1})
formats = self.extract_formats(play_info)
if video_data.get('is_upower_exclusive'):
```
</details>
<details><summary>test log</summary>
```log
[debug] Command-line config: ['-vF', '--no-simulate', '--test', '--no-playlist', 'https://www.bilibili.com/video/BV1jL41167ZG/', 'bilisearch:4k60', 'https://www.bilibili.com/video/BV1GJ411x7h7/?']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [2b67ac300] (source)
[debug] Lazy loading extractors is disabled
[debug] Git HEAD: cfa76f35d
[debug] Python 3.13.0 (CPython x86_64 64bit) - Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.35 (OpenSSL 3.0.2 15 Mar 2022, glibc 2.35)
[debug] exe versions: ffmpeg 4.4.2 (setts), ffprobe 4.4.2
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, curl_cffi-0.7.1, mutagen-1.47.0, requests-2.32.3, secretstorage-3.3.3, sqlite3-3.37.2, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[BiliBili] Extracting URL: https://www.bilibili.com/video/BV1jL41167ZG/
[BiliBili] 1jL41167ZG: Downloading webpage
[BiliBili] BV1jL41167ZG: Extracting videos in anthology
[BiliBili] BV1jL41167ZG: Downloading wbi sign
[BiliBili] BV1jL41167ZG: Downloading video formats for cid 1131949939
WARNING: [BiliBili] BV1jL41167ZG: This is a supporter-only video, only the preview will be extracted: 该视频为「高级充电回馈」专属视频,开通「18元档包月充电」即可观看. Use --cookies-from-browser or --cookies for the authentication. See https://github.com/yt-dlp/yt-dlp/wiki/FAQ#how-do-i-pass-cookies-to-yt-dlp for how to manually pass cookies
[BiliBili] 443708639: Extracting chapters
[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec, channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id
[info] Available formats for BV1jL41167ZG:
ID EXT RESOLUTION │ FILESIZE PROTO │ VCODEC ACODEC MORE INFO
────────────────────────────────────────────────────────────────
32 mp4 unknown │ 517.19KiB https │ unknown unknown 试看
[debug] Default format spec: bestvideo*+bestaudio/best
[info] BV1jL41167ZG: Downloading 1 format(s): 32
[debug] Invoking http downloader on "https://upos-sz-mirroraliov.bilivideo.com/upgcxcode/39/99/1131949939/1131949939_da4-1-29.mp4?e=ig8euxZM2rNcNbRVhwdVhwdlhWdVhwdVhoNvNC8BqJIzNbfqXBvEqxTEto8BTrNvN0GvT90W5JZMkX_YN0MvXg8gNEV4NC8xNEV4N03eN0B5tZlqNxTEto8BTrNvNeZVuJ10Kj_g2UB02J0mN0B5tZlqNCNEto8BTrNvNC7MTX502C8f2jmMQJ6mqF2fka1mqx6gqj0eN0B599M=&uipk=5&nbs=1&deadline=1733350193&gen=playurlv2&os=aliovbv&oi=3526874561&trid=77bba2f6fdcf4782b811b76b64d3fe25u&mid=0&platform=pc&og=cos&upsig=70dd6bf862a0de04001044100002d805&uparams=e,uipk,nbs,deadline,gen,os,oi,trid,mid,platform,og&bvc=vod&nettype=0&orderid=0,2&buvid=BDC5BB95-117A-73BD-2A2C-4F150EBDC1A690723infoc&build=0&f=u_0_0&agrr=1&bw=52960&logo=80000000"
[download] 一场大火引发的离奇死亡!古典推理经典短篇集《不可能犯罪诊断书》! [BV1jL41167ZG].mp4 has already been downloaded
[download] 100% of 10.00KiB
[BiliBiliSearch] Extracting URL: bilisearch:4k60
[download] Downloading playlist: 4k60
[BiliBiliSearch] 4k60: Extracting results from page 1
[BiliBiliSearch] Playlist 4k60: Downloading 1 items of 1
[download] Downloading item 1 of 1
[BiliBili] Extracting URL: http://www.bilibili.com/video/av286406916
[BiliBili] 286406916: Downloading webpage
[BiliBili] BV1yf4y1R7mU: Extracting videos in anthology
[BiliBili] Downloading just the video BV1yf4y1R7mU because of --no-playlist
[BiliBili] BV1yf4y1R7mU: Downloading video formats for cid 214423511
[BiliBili] Format(s) 4K 超清, 1080P 60帧 are missing; you have to become a premium member to download them. Use --cookies-from-browser or --cookies for the authentication. See https://github.com/yt-dlp/yt-dlp/wiki/FAQ#how-do-i-pass-cookies-to-yt-dlp for how to manually pass cookies
[BiliBili] 286406916: Extracting chapters
[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec, channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id
[info] Available formats for BV1yf4y1R7mU_p1:
ID EXT RESOLUTION FPS │ FILESIZE TBR PROTO │ VCODEC VBR ACODEC ABR
───────────────────────────────────────────────────────────────────────────────────────
30216 m4a audio only │ ≈ 2.66MiB 51k https │ audio only mp4a.40.5 51k
30232 m4a audio only │ ≈ 5.93MiB 113k https │ audio only mp4a.40.2 113k
30280 m4a audio only │ ≈ 5.93MiB 113k https │ audio only mp4a.40.2 113k
30016 mp4 490x360 29 │ ≈13.50MiB 258k https │ avc1.64001E 258k video only
100109 mp4 490x360 30 │ ≈ 9.30MiB 178k https │ hev1.1.6.L120 178k video only
100022 mp4 490x360 30 │ ≈10.64MiB 203k https │ av01.0.01M.08 203k video only
30032 mp4 654x480 29 │ ≈23.25MiB 444k https │ avc1.64001E 444k video only
100023 mp4 654x480 30 │ ≈21.79MiB 416k https │ av01.0.04M.08 416k video only
100110 mp4 654x480 30 │ ≈12.79MiB 244k https │ hev1.1.6.L120 244k video only
30064 mp4 982x720 29 │ ≈43.64MiB 833k https │ avc1.64001F 833k video only
100024 mp4 982x720 30 │ ≈41.55MiB 793k https │ av01.0.05M.08 793k video only
100111 mp4 982x720 30 │ ≈20.49MiB 391k https │ hev1.1.6.L120 391k video only
30080 mp4 1472x1080 29 │ ≈70.07MiB 1337k https │ avc1.640032 1337k video only
100026 mp4 1472x1080 30 │ ≈55.07MiB 1051k https │ av01.0.08M.08 1051k video only
100113 mp4 1472x1080 30 │ ≈43.98MiB 839k https │ hev1.1.6.L120 839k video only
[debug] Default format spec: bestvideo*+bestaudio/best
[info] BV1yf4y1R7mU_p1: Downloading 1 format(s): 100113+30280
[debug] Invoking http downloader on "https://upos-sz-mirroraliov.bilivideo.com/upgcxcode/11/35/214423511/214423511-1-100113.m4s?e=ig8euxZM2rNcNbdlhoNvNC8BqJIzNbfqXBvEqxTEto8BTrNvN0GvT90W5JZMkX_YN0MvXg8gNEV4NC8xNEV4N03eN0B5tZlqNxTEto8BTrNvNeZVuJ10Kj_g2UB02J0mN0B5tZlqNCNEto8BTrNvNC7MTX502C8f2jmMQJ6mqF2fka1mqx6gqj0eN0B599M=&uipk=5&nbs=1&deadline=1733350196&gen=playurlv2&os=aliovbv&oi=3526874561&trid=1a192ecbe746440fb857d62727058149u&mid=0&platform=pc&og=hw&upsig=a024db055bba6ef60c8e86a21523f86f&uparams=e,uipk,nbs,deadline,gen,os,oi,trid,mid,platform,og&bvc=vod&nettype=0&orderid=0,2&buvid=BDC5BB95-117A-73BD-2A2C-4F150EBDC1A690723infoc&build=0&f=u_0_0&agrr=1&bw=105008&logo=80000000"
[download] Destination: 【4K60帧】群星《北京欢迎你》2008原版+宽屏版 AI修复高清收藏版 p01 【AI修复】08年原版4_3比例 [BV1yf4y1R7mU_p1].f100113.mp4
[download] 100% of 10.00KiB in 00:00:00 at 17.48KiB/s
[debug] Invoking http downloader on "https://upos-sz-mirroraliov.bilivideo.com/upgcxcode/11/35/214423511/214423511_nb3-1-30280.m4s?e=ig8euxZM2rNcNbdlhoNvNC8BqJIzNbfqXBvEqxTEto8BTrNvN0GvT90W5JZMkX_YN0MvXg8gNEV4NC8xNEV4N03eN0B5tZlqNxTEto8BTrNvNeZVuJ10Kj_g2UB02J0mN0B5tZlqNCNEto8BTrNvNC7MTX502C8f2jmMQJ6mqF2fka1mqx6gqj0eN0B599M=&uipk=5&nbs=1&deadline=1733350196&gen=playurlv2&os=aliovbv&oi=3526874561&trid=1a192ecbe746440fb857d62727058149u&mid=0&platform=pc&og=hw&upsig=a4cb8ca94574f8425cb20fece02eb6f5&uparams=e,uipk,nbs,deadline,gen,os,oi,trid,mid,platform,og&bvc=vod&nettype=0&orderid=0,2&buvid=BDC5BB95-117A-73BD-2A2C-4F150EBDC1A690723infoc&build=0&f=u_0_0&agrr=1&bw=14173&logo=80000000"
[download] Destination: 【4K60帧】群星《北京欢迎你》2008原版+宽屏版 AI修复高清收藏版 p01 【AI修复】08年原版4_3比例 [BV1yf4y1R7mU_p1].f30280.m4a
[download] 100% of 10.00KiB in 00:00:00 at 38.68KiB/s
[Merger] Merging formats into "【4K60帧】群星《北京欢迎你》2008原版+宽屏版 AI修复高清收藏版 p01 【AI修复】08年原版4_3比例 [BV1yf4y1R7mU_p1].mp4"
[debug] ffmpeg command line: ffmpeg -y -loglevel repeat+info -i 'file:【4K60帧】群星《北京欢迎你》2008原版+宽屏版 AI修复高清收藏版 p01 【AI修复】08年原版4_3比例 [BV1yf4y1R7mU_p1].f100113.mp4' -i 'file:【4K60帧】群星《北京欢迎你》2008原版+宽屏版 AI修复高清收藏版 p01 【AI修复】08年原版4_3比例 [BV1yf4y1R7mU_p1].f30280.m4a' -c copy -map 0:v:0 -map 1:a:0 -movflags +faststart 'file:【4K60帧】群星《北京欢迎你》2008原版+宽屏版 AI修复高清收藏版 p01 【AI修复】08年原版4_3比例 [BV1yf4y1R7mU_p1].temp.mp4'
Deleting original file 【4K60帧】群星《北京欢迎你》2008原版+宽屏版 AI修复高清收藏版 p01 【AI修复】08年原版4_3比例 [BV1yf4y1R7mU_p1].f30280.m4a (pass -k to keep)
Deleting original file 【4K60帧】群星《北京欢迎你》2008原版+宽屏版 AI修复高清收藏版 p01 【AI修复】08年原版4_3比例 [BV1yf4y1R7mU_p1].f100113.mp4 (pass -k to keep)
[download] Finished downloading playlist: 4k60
[BiliBili] Extracting URL: https://www.bilibili.com/video/BV1GJ411x7h7/?
[BiliBili] 1GJ411x7h7: Downloading webpage
ERROR: [BiliBili] 1GJ411x7h7: This video may be deleted or geo-restricted. You might want to try a VPN or a proxy server (with --proxy)
File "/home/user/yt-dlp_dev/yt-dlp-fork/yt_dlp/extractor/common.py", line 742, in extract
ie_result = self._real_extract(url)
File "/home/user/yt-dlp_dev/yt-dlp-fork/yt_dlp/extractor/bilibili.py", line 649, in _real_extract
raise ExtractorError(
'This video may be deleted or geo-restricted. '
'You might want to try a VPN or a proxy server (with --proxy)', expected=True)
```
</details> | 1,733,343,965,000 | null | Feature Request | [
"yt_dlp/extractor/bilibili.py:BiliBiliIE._real_extract"
] | [] | 1 | 490 |
|
yt-dlp/yt-dlp | yt-dlp__yt-dlp-11711 | d8fb3490863653182864d2a53522f350d67a9ff8 | diff --git a/yt_dlp/extractor/bilibili.py b/yt_dlp/extractor/bilibili.py
index 72d5f20cf36b..e538e5308946 100644
--- a/yt_dlp/extractor/bilibili.py
+++ b/yt_dlp/extractor/bilibili.py
@@ -652,13 +652,6 @@ def _real_extract(self, url):
else:
video_data = initial_state['videoData']
- if video_data.get('is_upower_exclusive'):
- high_level = traverse_obj(initial_state, ('elecFullInfo', 'show_info', 'high_level', {dict})) or {}
- raise ExtractorError(
- 'This is a supporter-only video: '
- f'{join_nonempty("title", "sub_title", from_dict=high_level, delim=",")}. '
- f'{self._login_hint()}', expected=True)
-
video_id, title = video_data['bvid'], video_data.get('title')
# Bilibili anthologies are similar to playlists but all videos share the same video ID as the anthology itself.
@@ -726,62 +719,72 @@ def _real_extract(self, url):
self._get_interactive_entries(video_id, cid, metainfo, headers=headers), **metainfo,
duration=traverse_obj(initial_state, ('videoData', 'duration', {int_or_none})),
__post_extractor=self.extract_comments(aid))
- else:
- formats = self.extract_formats(play_info)
-
- if not traverse_obj(play_info, ('dash')):
- # we only have legacy formats and need additional work
- has_qn = lambda x: x in traverse_obj(formats, (..., 'quality'))
- for qn in traverse_obj(play_info, ('accept_quality', lambda _, v: not has_qn(v), {int})):
- formats.extend(traverse_obj(
- self.extract_formats(self._download_playinfo(video_id, cid, headers=headers, qn=qn)),
- lambda _, v: not has_qn(v['quality'])))
- self._check_missing_formats(play_info, formats)
- flv_formats = traverse_obj(formats, lambda _, v: v['fragments'])
- if flv_formats and len(flv_formats) < len(formats):
- # Flv and mp4 are incompatible due to `multi_video` workaround, so drop one
- if not self._configuration_arg('prefer_multi_flv'):
- dropped_fmts = ', '.join(
- f'{f.get("format_note")} ({f.get("format_id")})' for f in flv_formats)
- formats = traverse_obj(formats, lambda _, v: not v.get('fragments'))
- if dropped_fmts:
- self.to_screen(
- f'Dropping incompatible flv format(s) {dropped_fmts} since mp4 is available. '
- 'To extract flv, pass --extractor-args "bilibili:prefer_multi_flv"')
- else:
- formats = traverse_obj(
- # XXX: Filtering by extractor-arg is for testing purposes
- formats, lambda _, v: v['quality'] == int(self._configuration_arg('prefer_multi_flv')[0]),
- ) or [max(flv_formats, key=lambda x: x['quality'])]
-
- if traverse_obj(formats, (0, 'fragments')):
- # We have flv formats, which are individual short videos with their own timestamps and metainfo
- # Binary concatenation corrupts their timestamps, so we need a `multi_video` workaround
- return {
- **metainfo,
- '_type': 'multi_video',
- 'entries': [{
- 'id': f'{metainfo["id"]}_{idx}',
- 'title': metainfo['title'],
- 'http_headers': metainfo['http_headers'],
- 'formats': [{
- **fragment,
- 'format_id': formats[0].get('format_id'),
- }],
- 'subtitles': self.extract_subtitles(video_id, cid) if idx == 0 else None,
- '__post_extractor': self.extract_comments(aid) if idx == 0 else None,
- } for idx, fragment in enumerate(formats[0]['fragments'])],
- 'duration': float_or_none(play_info.get('timelength'), scale=1000),
- }
- else:
- return {
- **metainfo,
- 'formats': formats,
- 'duration': float_or_none(play_info.get('timelength'), scale=1000),
- 'chapters': self._get_chapters(aid, cid),
- 'subtitles': self.extract_subtitles(video_id, cid),
- '__post_extractor': self.extract_comments(aid),
- }
+
+ formats = self.extract_formats(play_info)
+
+ if video_data.get('is_upower_exclusive'):
+ high_level = traverse_obj(initial_state, ('elecFullInfo', 'show_info', 'high_level', {dict})) or {}
+ msg = f'{join_nonempty("title", "sub_title", from_dict=high_level, delim=",")}. {self._login_hint()}'
+ if not formats:
+ raise ExtractorError(f'This is a supporter-only video: {msg}', expected=True)
+ if '试看' in traverse_obj(play_info, ('accept_description', ..., {str})):
+ self.report_warning(
+ f'This is a supporter-only video, only the preview will be extracted: {msg}',
+ video_id=video_id)
+
+ if not traverse_obj(play_info, 'dash'):
+ # we only have legacy formats and need additional work
+ has_qn = lambda x: x in traverse_obj(formats, (..., 'quality'))
+ for qn in traverse_obj(play_info, ('accept_quality', lambda _, v: not has_qn(v), {int})):
+ formats.extend(traverse_obj(
+ self.extract_formats(self._download_playinfo(video_id, cid, headers=headers, qn=qn)),
+ lambda _, v: not has_qn(v['quality'])))
+ self._check_missing_formats(play_info, formats)
+ flv_formats = traverse_obj(formats, lambda _, v: v['fragments'])
+ if flv_formats and len(flv_formats) < len(formats):
+ # Flv and mp4 are incompatible due to `multi_video` workaround, so drop one
+ if not self._configuration_arg('prefer_multi_flv'):
+ dropped_fmts = ', '.join(
+ f'{f.get("format_note")} ({f.get("format_id")})' for f in flv_formats)
+ formats = traverse_obj(formats, lambda _, v: not v.get('fragments'))
+ if dropped_fmts:
+ self.to_screen(
+ f'Dropping incompatible flv format(s) {dropped_fmts} since mp4 is available. '
+ 'To extract flv, pass --extractor-args "bilibili:prefer_multi_flv"')
+ else:
+ formats = traverse_obj(
+ # XXX: Filtering by extractor-arg is for testing purposes
+ formats, lambda _, v: v['quality'] == int(self._configuration_arg('prefer_multi_flv')[0]),
+ ) or [max(flv_formats, key=lambda x: x['quality'])]
+
+ if traverse_obj(formats, (0, 'fragments')):
+ # We have flv formats, which are individual short videos with their own timestamps and metainfo
+ # Binary concatenation corrupts their timestamps, so we need a `multi_video` workaround
+ return {
+ **metainfo,
+ '_type': 'multi_video',
+ 'entries': [{
+ 'id': f'{metainfo["id"]}_{idx}',
+ 'title': metainfo['title'],
+ 'http_headers': metainfo['http_headers'],
+ 'formats': [{
+ **fragment,
+ 'format_id': formats[0].get('format_id'),
+ }],
+ 'subtitles': self.extract_subtitles(video_id, cid) if idx == 0 else None,
+ '__post_extractor': self.extract_comments(aid) if idx == 0 else None,
+ } for idx, fragment in enumerate(formats[0]['fragments'])],
+ 'duration': float_or_none(play_info.get('timelength'), scale=1000),
+ }
+
+ return {
+ **metainfo,
+ 'formats': formats,
+ 'duration': float_or_none(play_info.get('timelength'), scale=1000),
+ 'chapters': self._get_chapters(aid, cid),
+ 'subtitles': self.extract_subtitles(video_id, cid),
+ '__post_extractor': self.extract_comments(aid),
+ }
class BiliBiliBangumiIE(BilibiliBaseIE):
| [bilibili] supporter-only videos broken after 239f5f3
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
CN
### Provide a description that is worded well enough to be understood
[account-needed]
After 239f5f3, yt-dlp raises an `ExtractorError` on every supporter-only video, regardless of whether the user has logged in as a supporter. But I don't have a supporter's account; an account with access to _any_ supporter-only video on the site would help.
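For reference, a minimal standalone sketch of the fixed control flow (mirroring the patch above with plain data; the function name and messages are illustrative, not yt-dlp's API):
```python
def classify_upower(formats, accept_description):
    if not formats:
        # No formats at all: a supporter account is required, hard error
        return 'error: supporter-only video'
    if '试看' in accept_description:  # '试看' means "preview"
        # Only the free preview clip is exposed: warn, but still extract it
        return 'warning: only the preview will be extracted'
    return 'ok: full formats available'

print(classify_upower([], []))                  # error case
print(classify_upower([{'id': 32}], ['试看']))  # preview case (format 32, '试看')
```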
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['--test', 'https://www.bilibili.com/video/BV1jL41167ZG/', '-vF', '--no-simulate']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [7ea278792] (source)
[debug] Lazy loading extractors is disabled
[debug] Git HEAD: 62cba8a1b
[debug] Python 3.13.0 (CPython x86_64 64bit) - Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.35 (OpenSSL 3.0.2 15 Mar 2022, glibc 2.35)
[debug] exe versions: ffmpeg 4.4.2 (setts), ffprobe 4.4.2
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, curl_cffi-0.7.1, mutagen-1.47.0, requests-2.32.3, secretstorage-3.3.3, sqlite3-3.37.2, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[BiliBili] Extracting URL: https://www.bilibili.com/video/BV1jL41167ZG/
[BiliBili] 1jL41167ZG: Downloading webpage
ERROR: [BiliBili] 1jL41167ZG: This is a supporter-only video: 该视频为「高级充电回馈」专属视频,开通「18元档包月充电」即可观看. Use --cookies-from-browser or --cookies for the authentication. See https://github.com/yt-dlp/yt-dlp/wiki/FAQ#how-do-i-pass-cookies-to-yt-dlp for how to manually pass cookies
File "/home/user/yt-dlp_dev/yt-dlp-fork/yt_dlp/extractor/common.py", line 742, in extract
ie_result = self._real_extract(url)
File "/home/user/yt-dlp_dev/yt-dlp-fork/yt_dlp/extractor/bilibili.py", line 657, in _real_extract
raise ExtractorError(
...<2 lines>...
f'{self._login_hint()}', expected=True)
```
| Well, there's the problem. If you don't have a supporter account, of course it can't download a video meant for supporters only. | 1,733,169,578,000 | null | Bug Report | [
"yt_dlp/extractor/bilibili.py:BiliBiliIE._real_extract"
] | [] | 1 | 492 |
|
yt-dlp/yt-dlp | yt-dlp__yt-dlp-11683 | 00dcde728635633eee969ad4d498b9f233c4a94e | diff --git a/yt_dlp/extractor/mitele.py b/yt_dlp/extractor/mitele.py
index 3573a2a3fd72..76fef337a2ea 100644
--- a/yt_dlp/extractor/mitele.py
+++ b/yt_dlp/extractor/mitele.py
@@ -80,9 +80,9 @@ class MiTeleIE(TelecincoBaseIE):
def _real_extract(self, url):
display_id = self._match_id(url)
webpage = self._download_webpage(url, display_id)
- pre_player = self._parse_json(self._search_regex(
- r'window\.\$REACTBASE_STATE\.prePlayer_mtweb\s*=\s*({.+})',
- webpage, 'Pre Player'), display_id)['prePlayer']
+ pre_player = self._search_json(
+ r'window\.\$REACTBASE_STATE\.prePlayer_mtweb\s*=',
+ webpage, 'Pre Player', display_id)['prePlayer']
title = pre_player['title']
video_info = self._parse_content(pre_player['video'], url)
content = pre_player.get('content') or {}
| [MiTele]: Failed to parse JSON (caused by JSONDecodeError('Extra data in \'d":false}}</script> \': line 1 column 9378 (char 9377)'));
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
Spain
### Provide a description that is worded well enough to be understood
Similar to some other supported websites, mitele is also having an issue downloading a TV show (I was not able to test the site's entire selection of TV shows). The content is geo-restricted, but the user is in Spain. The error follows:
_Failed to parse JSON (caused by JSONDecodeError('Extra data in \'d":false}}</script> \': line 1 column 9378 (char 9377)'));_
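For reference, a minimal standalone sketch (plain `json`/`re`, not yt-dlp's actual helpers) of why this happens: a greedy `({.+})` regex over-captures when more data follows the object on the same line, while a decoder-based scan, the approach behind the `_search_json` change in the patch above, stops at the end of the first balanced JSON value.

```python
import json
import re

html = 'window.$REACTBASE_STATE.prePlayer_mtweb = {"prePlayer": {"title": "x"}}</script> {"other": 1}'

# The greedy regex captures up to the LAST '}', dragging in the trailing
# markup, so json.loads() fails with "Extra data".
blob = re.search(r'prePlayer_mtweb\s*=\s*({.+})', html).group(1)
try:
    json.loads(blob)
except json.JSONDecodeError as e:
    print('greedy regex fails:', e)

# raw_decode() parses exactly one JSON value and ignores what follows.
start = html.index('=') + 1
obj, _end = json.JSONDecoder().raw_decode(html[start:].lstrip())
print('raw_decode succeeds:', obj['prePlayer']['title'])
```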
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', 'https://www.mitele.es/programas-tv/horizonte/temporada-5/programa-181-40_014084253/player/']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp-nightly-builds [00dcde728] (zip)
[debug] Python 3.10.12 (CPython x86_64 64bit) - Linux-6.8.0-49-generic-x86_64-with-glibc2.35 (OpenSSL 3.0.2 15 Mar 2022, glibc 2.35)
[debug] exe versions: ffmpeg 4.4.2 (setts), ffprobe 4.4.2
[debug] Optional libraries: Cryptodome-3.11.0, brotli-1.0.9, certifi-2020.06.20, mutagen-1.45.1, requests-2.32.3, secretstorage-3.3.1, sqlite3-3.37.2, urllib3-1.26.5, websockets-9.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
Latest version: [email protected] from yt-dlp/yt-dlp-nightly-builds
yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp-nightly-builds)
[MiTele] Extracting URL: https://www.mitele.es/programas-tv/horizonte/temporada-5/programa-181-40_014084253/player/
[MiTele] programa-181-40_014084253: Downloading webpage
ERROR: [MiTele] programa-181-40_014084253: programa-181-40_014084253: Failed to parse JSON (caused by JSONDecodeError('Extra data in \'d":false}}</script> \': line 1 column 9378 (char 9377)')); please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
File "/usr/local/bin/yt-dlp/yt_dlp/extractor/common.py", line 742, in extract
ie_result = self._real_extract(url)
File "/usr/local/bin/yt-dlp/yt_dlp/extractor/mitele.py", line 83, in _real_extract
pre_player = self._parse_json(self._search_regex(
File "/usr/local/bin/yt-dlp/yt_dlp/extractor/common.py", line 1094, in _parse_json
self.__print_error('Failed to parse JSON' if errnote is None else errnote, fatal, video_id, ve)
File "/usr/local/bin/yt-dlp/yt_dlp/extractor/common.py", line 1077, in __print_error
raise ExtractorError(f'{video_id}: {errnote}', cause=err)
File "/usr/local/bin/yt-dlp/yt_dlp/utils/_utils.py", line 565, in decode
File "/usr/lib/python3.10/json/decoder.py", line 340, in decode
raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 1 column 9378 (char 9377)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/bin/yt-dlp/yt_dlp/extractor/common.py", line 1091, in _parse_json
return json.loads(
File "/usr/lib/python3.10/json/__init__.py", line 359, in loads
return cls(**kw).decode(s)
File "/usr/local/bin/yt-dlp/yt_dlp/utils/_utils.py", line 573, in decode
json.decoder.JSONDecodeError: Extra data in 'd":false}}</script> ': line 1 column 9378 (char 9377)
```
| 1,732,917,815,000 | null | Bug Report | [
"yt_dlp/extractor/mitele.py:MiTeleIE._real_extract"
] | [] | 1 | 493 |
||
yt-dlp/yt-dlp | yt-dlp__yt-dlp-11667 | 00dcde728635633eee969ad4d498b9f233c4a94e | diff --git a/yt_dlp/extractor/bilibili.py b/yt_dlp/extractor/bilibili.py
index 02ea67707fcd..f01befcc0b6f 100644
--- a/yt_dlp/extractor/bilibili.py
+++ b/yt_dlp/extractor/bilibili.py
@@ -18,7 +18,6 @@
InAdvancePagedList,
OnDemandPagedList,
bool_or_none,
- clean_html,
determine_ext,
filter_dict,
float_or_none,
@@ -639,31 +638,27 @@ def _real_extract(self, url):
headers['Referer'] = url
initial_state = self._search_json(r'window\.__INITIAL_STATE__\s*=', webpage, 'initial state', video_id)
+
+ if traverse_obj(initial_state, ('error', 'trueCode')) == -403:
+ self.raise_login_required()
+ if traverse_obj(initial_state, ('error', 'trueCode')) == -404:
+ raise ExtractorError(
+ 'This video may be deleted or geo-restricted. '
+ 'You might want to try a VPN or a proxy server (with --proxy)', expected=True)
+
is_festival = 'videoData' not in initial_state
if is_festival:
video_data = initial_state['videoInfo']
else:
- play_info_obj = self._search_json(
- r'window\.__playinfo__\s*=', webpage, 'play info', video_id, fatal=False)
- if not play_info_obj:
- if traverse_obj(initial_state, ('error', 'trueCode')) == -403:
- self.raise_login_required()
- if traverse_obj(initial_state, ('error', 'trueCode')) == -404:
- raise ExtractorError(
- 'This video may be deleted or geo-restricted. '
- 'You might want to try a VPN or a proxy server (with --proxy)', expected=True)
- play_info = traverse_obj(play_info_obj, ('data', {dict}))
- if not play_info:
- if traverse_obj(play_info_obj, 'code') == 87007:
- toast = get_element_by_class('tips-toast', webpage) or ''
- msg = clean_html(
- f'{get_element_by_class("belongs-to", toast) or ""},'
- + (get_element_by_class('level', toast) or ''))
- raise ExtractorError(
- f'This is a supporter-only video: {msg}. {self._login_hint()}', expected=True)
- raise ExtractorError('Failed to extract play info')
video_data = initial_state['videoData']
+ if video_data.get('is_upower_exclusive'):
+ high_level = traverse_obj(initial_state, ('elecFullInfo', 'show_info', 'high_level', {dict})) or {}
+ raise ExtractorError(
+ 'This is a supporter-only video: '
+ f'{join_nonempty("title", "sub_title", from_dict=high_level, delim=",")}. '
+ f'{self._login_hint()}', expected=True)
+
video_id, title = video_data['bvid'], video_data.get('title')
# Bilibili anthologies are similar to playlists but all videos share the same video ID as the anthology itself.
@@ -689,10 +684,14 @@ def _real_extract(self, url):
old_video_id = format_field(aid, None, f'%s_part{part_id or 1}')
cid = traverse_obj(video_data, ('pages', part_id - 1, 'cid')) if part_id else video_data.get('cid')
+ play_info = (
+ traverse_obj(
+ self._search_json(r'window\.__playinfo__\s*=', webpage, 'play info', video_id, default=None),
+ ('data', {dict}))
+ or self._download_playinfo(video_id, cid, headers=headers))
+
festival_info = {}
if is_festival:
- play_info = self._download_playinfo(video_id, cid, headers=headers)
-
festival_info = traverse_obj(initial_state, {
'uploader': ('videoInfo', 'upName'),
'uploader_id': ('videoInfo', 'upMid', {str_or_none}),
| [BiliBili] unable to extract play info
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
China
### Provide a description that is worded well enough to be understood
I was able to download videos normally at first, but this afternoon downloads started to fail. I tested a video on three different IP hosts; it downloaded correctly yesterday, but today it fails with an error everywhere. I tried different types of URLs and also attempted to pass cookies, but none of it worked. I suspect that Bilibili may have updated its anti-scraping mechanism. And I am very sure that my yt-dlp is the latest version.
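As the eventual patch above suggests, the inline `window.__playinfo__` blob can no longer be relied upon, and play info has to be fetched from the playurl API instead. A rough standalone sketch of that fallback logic; `fetch_playinfo_api` is a hypothetical callable standing in for yt-dlp's `_download_playinfo`:

```python
import json
import re

def extract_play_info(webpage, fetch_playinfo_api, video_id, cid):
    """Prefer the inline window.__playinfo__ blob; fall back to the API."""
    m = re.search(r'window\.__playinfo__\s*=\s*({.+?})\s*</script>', webpage)
    if m:
        try:
            data = json.loads(m.group(1)).get('data')
            if isinstance(data, dict):
                return data
        except json.JSONDecodeError:
            pass
    # Inline blob missing (as in this report) or unusable: query the API
    return fetch_playinfo_api(video_id, cid)
```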
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
yt-dlp -Uv --extract-audio --audio-format mp3 -o "%(id)s-%(title)s.%(ext)s" "https://www.bilibi
li.com/video/BV1HB4y1N7CY/" --ffmpeg-location "D:\workhome\ffmpeg\ffmpeg-master-latest-win64-gpl\bin"
[debug] Command-line config: ['-Uv', '--extract-audio', '--audio-format', 'mp3', '-o', '%(id)s-%(title)s.%(ext)s', 'https://www.bilibili.com/video/BV1HB4y1N7CY/', '--ffmpeg-location', 'D:\\workhome\\ffmpeg\\ffmpeg-master-latest-win64-gpl\\bin']
[debug] Encodings: locale cp936, fs utf-8, pref cp936, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [7ea278792] (pip)
[debug] Python 3.11.0 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 1.1.1w 11 Sep 2023)
[debug] exe versions: ffmpeg N-117770-g322b240cea-20241114 (setts), ffprobe N-117770-g322b240cea-20241114
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.08.30, requests-2.32.3, sqlite3-3.45.3, urllib3-1.26.20, websockets-10.4
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: [email protected] from yt-dlp/yt-dlp
yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp)
[BiliBili] Extracting URL: https://www.bilibili.com/video/BV1HB4y1N7CY/
[BiliBili] 1HB4y1N7CY: Downloading webpage
WARNING: [BiliBili] unable to extract play info; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
ERROR: [BiliBili] 1HB4y1N7CY: Failed to extract play info; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
File "D:\workhome\anaconda3\envs\crawl11\Lib\site-packages\yt_dlp\extractor\common.py", line 742, in extract
ie_result = self._real_extract(url)
^^^^^^^^^^^^^^^^^^^^^^^
File "D:\workhome\anaconda3\envs\crawl11\Lib\site-packages\yt_dlp\extractor\bilibili.py", line 664, in _real_extract
raise ExtractorError('Failed to extract play info')
```
| I also encountered the same problem
> I also encountered the same problem
The output I get is this:
```
[BiliBili] Extracting URL: https://www.bilibili.com/video/BV1ALzVYUEZf
[BiliBili] 1ALzVYUEZf: Downloading webpage
WARNING: [BiliBili] unable to extract play info; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
ERROR: [BiliBili] 1ALzVYUEZf: Failed to extract play info; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
```
seems like `window.__playinfo__` is removed from the page source now | 1,732,786,310,000 | null | Bug Report | [
"yt_dlp/extractor/bilibili.py:BiliBiliIE._real_extract"
] | [] | 1 | 494 |
|
yt-dlp/yt-dlp | yt-dlp__yt-dlp-11645 | 4b5eec0aaa7c02627f27a386591b735b90e681a8 | diff --git a/yt_dlp/extractor/tiktok.py b/yt_dlp/extractor/tiktok.py
index ba15f08b6d85..9e53b3407220 100644
--- a/yt_dlp/extractor/tiktok.py
+++ b/yt_dlp/extractor/tiktok.py
@@ -413,15 +413,6 @@ def extract_addr(addr, add_meta={}):
for f in formats:
self._set_cookie(urllib.parse.urlparse(f['url']).hostname, 'sid_tt', auth_cookie.value)
- thumbnails = []
- for cover_id in ('cover', 'ai_dynamic_cover', 'animated_cover', 'ai_dynamic_cover_bak',
- 'origin_cover', 'dynamic_cover'):
- for cover_url in traverse_obj(video_info, (cover_id, 'url_list', ...)):
- thumbnails.append({
- 'id': cover_id,
- 'url': cover_url,
- })
-
stats_info = aweme_detail.get('statistics') or {}
music_info = aweme_detail.get('music') or {}
labels = traverse_obj(aweme_detail, ('hybrid_label', ..., 'text'), expected_type=str)
@@ -467,7 +458,17 @@ def extract_addr(addr, add_meta={}):
'formats': formats,
'subtitles': self.extract_subtitles(
aweme_detail, aweme_id, traverse_obj(author_info, 'uploader', 'uploader_id', 'channel_id')),
- 'thumbnails': thumbnails,
+ 'thumbnails': [
+ {
+ 'id': cover_id,
+ 'url': cover_url,
+ 'preference': -1 if cover_id in ('cover', 'origin_cover') else -2,
+ }
+ for cover_id in (
+ 'cover', 'ai_dynamic_cover', 'animated_cover',
+ 'ai_dynamic_cover_bak', 'origin_cover', 'dynamic_cover')
+ for cover_url in traverse_obj(video_info, (cover_id, 'url_list', ...))
+ ],
'duration': (traverse_obj(video_info, (
(None, 'download_addr'), 'duration', {int_or_none(scale=1000)}, any))
or traverse_obj(music_info, ('duration', {int_or_none}))),
@@ -600,11 +601,15 @@ def _parse_aweme_video_web(self, aweme_detail, webpage_url, video_id, extract_fl
'repost_count': 'shareCount',
'comment_count': 'commentCount',
}), expected_type=int_or_none),
- 'thumbnails': traverse_obj(aweme_detail, (
- (None, 'video'), ('thumbnail', 'cover', 'dynamicCover', 'originCover'), {
- 'url': ({url_or_none}, {self._proto_relative_url}),
- },
- )),
+ 'thumbnails': [
+ {
+ 'id': cover_id,
+ 'url': self._proto_relative_url(cover_url),
+ 'preference': -2 if cover_id == 'dynamicCover' else -1,
+ }
+ for cover_id in ('thumbnail', 'cover', 'dynamicCover', 'originCover')
+ for cover_url in traverse_obj(aweme_detail, ((None, 'video'), cover_id, {url_or_none}))
+ ],
}
| [TikTok] ERROR: Postprocessing: Conversion failed! when embedding thumbnail
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
_No response_
### Provide a description that is worded well enough to be understood
The main problem is the "Conversion failed" post-processor error; lots of videos fail to download because of it. There are also errors about "skipping unsupported chunk: ANMF" and "Nothing was written into output file, because at least one of its streams received no packets. Conversion failed!"
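The ffmpeg output below shows it skipping `ANIM`/`ANMF` chunks; those are the animation chunks of the WebP container, i.e. the cover yt-dlp picked is an animated WebP, which ffmpeg's still-image PNG conversion cannot decode. A minimal sketch (not part of yt-dlp) for spotting such files by peeking at the RIFF chunk IDs:

```python
def is_animated_webp(path):
    """Return True if the file looks like an animated WebP.

    WebP is a RIFF container; animated files carry ANIM/ANMF chunks,
    which is exactly what ffmpeg reports skipping in the log below.
    """
    with open(path, 'rb') as f:
        header = f.read(12)
        if header[:4] != b'RIFF' or header[8:12] != b'WEBP':
            return False
        return b'ANMF' in f.read()  # naive chunk scan, fine for a quick check
```

Running something like `is_animated_webp('cover.webp')` on the written thumbnail would flag the problematic covers before any embed attempt.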
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['https://www.tiktok.com/@cooperspamsasf/video/7432045283686632710', '--download-archive', 'F:\\yt-dlp tiktok likes\\archive.txt', '--write-thumbnail', '--embed-thumbnail', '--verbose', '--cookies-from-browser', 'firefox']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error cp1252 (No VT), screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [7ea278792] (pip)
[debug] Python 3.11.9 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 3.3.2 3 Sep 2024)
[debug] exe versions: ffmpeg 7.0.2-full_build-www.gyan.dev (setts), ffprobe 7.0.2-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.0.9, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Extracting cookies from: "C:\Users\J\AppData\Roaming\Mozilla\Firefox\Profiles\c2ty66d6.default-release\cookies.sqlite"
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Loading archive file 'F:\\yt-dlp tiktok likes\\archive.txt'
[debug] [TikTok] Found universal data for rehydration
[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec, channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id
[debug] Default format spec: bestvideo*+bestaudio/best
[debug] Invoking http downloader on "https://v19-webapp-prime.tiktok.com/video/tos/useast2a/tos-useast2a-pve-0068/oAXOvcjeEAZzgjgfgQLKR5SGzeNrxA9ICICxHI/?a=1988&bti=ODszNWYuMDE6&ch=0&cr=3&dr=0&lr=all&cd=0%7C0%7C0%7C&cv=1&br=2404&bt=1202&cs=2&ds=4&ft=4fUEKMk88Zmo0WRLZb4jVaThrpWrKsd.&mime_type=video_mp4&qs=15&rc=NzNpZWU8OzRmNzs0Nzk1aUBpam93dnY5cnh4djMzNzczM0AtMS0uNS41NTIxMTBhXzEyYSNmZW9uMmRjbGVgLS1kMTZzcw%3D%3D&btag=e00088000&expire=1732609535&l=2024112602251903ACD4E62348E641B01E&ply_type=2&policy=2&signature=1e746658933c8ee3a81756c4afee15d3&tk=tt_chain_token"
[debug] ffmpeg command line: ffmpeg -y -loglevel repeat+info -f image2 -pattern_type none -i "file:HALLOWEEN TYPEE #halloweencostume #swat #duo #blonde #brunette #thatassperfectbaby #soccergirls [7432045283686632710].webp" -update 1 -movflags +faststart "file:HALLOWEEN TYPEE #halloweencostume #swat #duo #blonde #brunette #thatassperfectbaby #soccergirls [7432045283686632710].png"
[debug] ffmpeg version 7.0.2-full_build-www.gyan.dev Copyright (c) 2000-2024 the FFmpeg developers
built with gcc 13.2.0 (Rev5, Built by MSYS2 project)
configuration: --enable-gpl --enable-version3 --enable-static --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-bzlib --enable-lzma --enable-libsnappy --enable-zlib --enable-librist --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray --enable-libcaca --enable-sdl2 --enable-libaribb24 --enable-libaribcaption --enable-libdav1d --enable-libdavs2 --enable-libuavs3d --enable-libxevd --enable-libzvbi --enable-librav1e --enable-libsvtav1 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs2 --enable-libxeve --enable-libxvid --enable-libaom --enable-libjxl --enable-libopenjpeg --enable-libvpx --enable-mediafoundation --enable-libass --enable-frei0r --enable-libfreetype --enable-libfribidi --enable-libharfbuzz --enable-liblensfun --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-dxva2 --enable-d3d11va --enable-d3d12va --enable-ffnvcodec --enable-libvpl --enable-nvdec --enable-nvenc --enable-vaapi --enable-libshaderc --enable-vulkan --enable-libplacebo --enable-opencl --enable-libcdio --enable-libgme --enable-libmodplug --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libshine --enable-libtheora --enable-libtwolame --enable-libvo-amrwbenc --enable-libcodec2 --enable-libilbc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-ladspa --enable-libbs2b --enable-libflite --enable-libmysofa --enable-librubberband --enable-libsoxr --enable-chromaprint
libavutil 59. 8.100 / 59. 8.100
libavcodec 61. 3.100 / 61. 3.100
libavformat 61. 1.100 / 61. 1.100
libavdevice 61. 1.100 / 61. 1.100
libavfilter 10. 1.100 / 10. 1.100
libswscale 8. 1.100 / 8. 1.100
libswresample 5. 1.100 / 5. 1.100
libpostproc 58. 1.100 / 58. 1.100
[webp @ 000001545c4432c0] skipping unsupported chunk: ANIM
[webp @ 000001545c4432c0] skipping unsupported chunk: ANMF
[webp @ 000001545c4432c0] skipping unsupported chunk: ANMF
[webp @ 000001545c4432c0] skipping unsupported chunk: ANMF
[webp @ 000001545c4432c0] skipping unsupported chunk: ANMF
[webp @ 000001545c4432c0] skipping unsupported chunk: ANMF
[webp @ 000001545c4432c0] skipping unsupported chunk: ANMF
[webp @ 000001545c4432c0] skipping unsupported chunk: ANMF
[webp @ 000001545c4432c0] skipping unsupported chunk: ANMF
[webp @ 000001545c4432c0] skipping unsupported chunk: ANMF
[webp @ 000001545c4432c0] image data not found
[image2 @ 000001545c441940] Could not find codec parameters for stream 0 (Video: webp, none): unspecified size
Consider increasing the value for the 'analyzeduration' (0) and 'probesize' (5000000) options
Input #0, image2, from 'file:HALLOWEEN TYPEE #halloweencostume #swat #duo #blonde #brunette #thatassperfectbaby #soccergirls [7432045283686632710].webp':
Duration: 00:00:00.04, start: 0.000000, bitrate: N/A
Stream #0:0: Video: webp, none, 25 fps, 25 tbr, 25 tbn
Stream mapping:
Stream #0:0 -> #0:0 (webp (native) -> png (native))
Press [q] to stop, [?] for help
[webp @ 000001545c469fc0] skipping unsupported chunk: ANIM
[webp @ 000001545c469fc0] skipping unsupported chunk: ANMF
[webp @ 000001545c469fc0] skipping unsupported chunk: ANMF
[webp @ 000001545c469fc0] skipping unsupported chunk: ANMF
[webp @ 000001545c469fc0] skipping unsupported chunk: ANMF
[webp @ 000001545c469fc0] skipping unsupported chunk: ANMF
[webp @ 000001545c469fc0] skipping unsupported chunk: ANMF
[webp @ 000001545c469fc0] skipping unsupported chunk: ANMF
[webp @ 000001545c469fc0] skipping unsupported chunk: ANMF
[webp @ 000001545c469fc0] skipping unsupported chunk: ANMF
[webp @ 000001545c469fc0] image data not found
[vist#0:0/webp @ 000001545c4432c0] [dec:webp @ 000001545c44c440] Decoding error: Invalid data found when processing input
[vist#0:0/webp @ 000001545c4432c0] [dec:webp @ 000001545c44c440] Decode error rate 1 exceeds maximum 0.666667
[vist#0:0/webp @ 000001545c4432c0] [dec:webp @ 000001545c44c440] Task finished with error code: -1145393733 (Error number -1145393733 occurred)
[vist#0:0/webp @ 000001545c4432c0] [dec:webp @ 000001545c44c440] Terminating thread with return code -1145393733 (Error number -1145393733 occurred)
Cannot determine format of input 0:0 after EOF
[vf#0:0 @ 000001545c44ac80] Task finished with error code: -1094995529 (Invalid data found when processing input)
[vf#0:0 @ 000001545c44ac80] Terminating thread with return code -1094995529 (Invalid data found when processing input)
[vost#0:0/png @ 000001545c448c00] Could not open encoder before EOF
[vost#0:0/png @ 000001545c448c00] Task finished with error code: -22 (Invalid argument)
[vost#0:0/png @ 000001545c448c00] Terminating thread with return code -22 (Invalid argument)
[out#0/image2 @ 000001545c467e40] Nothing was written into output file, because at least one of its streams received no packets.
frame= 0 fps=0.0 q=0.0 Lsize= 0KiB time=N/A bitrate=N/A speed=N/A
Conversion failed!
ERROR: Postprocessing: Conversion failed!
Traceback (most recent call last):
File "C:\Users\J\miniconda3\Lib\site-packages\yt_dlp\YoutubeDL.py", line 3556, in process_info
replace_info_dict(self.post_process(dl_filename, info_dict, files_to_move))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\J\miniconda3\Lib\site-packages\yt_dlp\YoutubeDL.py", line 3740, in post_process
info = self.run_all_pps('post_process', info, additional_pps=info.get('__postprocessors'))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\J\miniconda3\Lib\site-packages\yt_dlp\YoutubeDL.py", line 3722, in run_all_pps
info = self.run_pp(pp, info)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\J\miniconda3\Lib\site-packages\yt_dlp\YoutubeDL.py", line 3700, in run_pp
files_to_delete, infodict = pp.run(infodict)
^^^^^^^^^^^^^^^^
File "C:\Users\J\miniconda3\Lib\site-packages\yt_dlp\postprocessor\common.py", line 22, in run
ret = func(self, info, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\J\miniconda3\Lib\site-packages\yt_dlp\postprocessor\common.py", line 127, in wrapper
return func(self, info)
^^^^^^^^^^^^^^^^
File "C:\Users\J\miniconda3\Lib\site-packages\yt_dlp\postprocessor\embedthumbnail.py", line 84, in run
thumbnail_filename = convertor.convert_thumbnail(thumbnail_filename, 'png')
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\J\miniconda3\Lib\site-packages\yt_dlp\postprocessor\ffmpeg.py", line 1107, in convert_thumbnail
self.real_run_ffmpeg(
File "C:\Users\J\miniconda3\Lib\site-packages\yt_dlp\postprocessor\ffmpeg.py", line 367, in real_run_ffmpeg
raise FFmpegPostProcessorError(stderr.strip().splitlines()[-1])
yt_dlp.postprocessor.ffmpeg.FFmpegPostProcessorError: Conversion failed!
[ERROR] Failed to process URL: https://www.tiktok.com/@cooperspamsasf/video/7432045283686632710
[debug] Command-line config: ['https://www.tiktok.com/@bris.main/video/7439516415444536606', '--download-archive', 'F:\\yt-dlp tiktok likes\\archive.txt', '--write-thumbnail', '--embed-thumbnail', '--verbose', '--cookies-from-browser', 'firefox']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error cp1252 (No VT), screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [7ea278792] (pip)
[debug] Python 3.11.9 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 3.3.2 3 Sep 2024)
[debug] exe versions: ffmpeg 7.0.2-full_build-www.gyan.dev (setts), ffprobe 7.0.2-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.0.9, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Extracting cookies from: "C:\Users\J\AppData\Roaming\Mozilla\Firefox\Profiles\c2ty66d6.default-release\cookies.sqlite"
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Loading archive file 'F:\\yt-dlp tiktok likes\\archive.txt'
[debug] Command-line config: ['https://www.tiktok.com/@leilaaaaaaaaa34/video/7430073853495299350', '--download-archive', 'F:\\yt-dlp tiktok likes\\archive.txt', '--write-thumbnail', '--embed-thumbnail', '--verbose', '--cookies-from-browser', 'firefox']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error cp1252 (No VT), screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [7ea278792] (pip)
[debug] Python 3.11.9 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 3.3.2 3 Sep 2024)
[debug] exe versions: ffmpeg 7.0.2-full_build-www.gyan.dev (setts), ffprobe 7.0.2-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.0.9, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Extracting cookies from: "C:\Users\J\AppData\Roaming\Mozilla\Firefox\Profiles\c2ty66d6.default-release\cookies.sqlite"
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Loading archive file 'F:\\yt-dlp tiktok likes\\archive.txt'
[debug] Command-line config: ['https://www.tiktok.com/@erindottie/video/7428505324375559457', '--download-archive', 'F:\\yt-dlp tiktok likes\\archive.txt', '--write-thumbnail', '--embed-thumbnail', '--verbose', '--cookies-from-browser', 'firefox']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error cp1252 (No VT), screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [7ea278792] (pip)
[debug] Python 3.11.9 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 3.3.2 3 Sep 2024)
[debug] exe versions: ffmpeg 7.0.2-full_build-www.gyan.dev (setts), ffprobe 7.0.2-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.0.9, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Extracting cookies from: "C:\Users\J\AppData\Roaming\Mozilla\Firefox\Profiles\c2ty66d6.default-release\cookies.sqlite"
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Loading archive file 'F:\\yt-dlp tiktok likes\\archive.txt'
[debug] Command-line config: ['https://www.tiktok.com/@user415387491623/video/7434688554627910968', '--download-archive', 'F:\\yt-dlp tiktok likes\\archive.txt', '--write-thumbnail', '--embed-thumbnail', '--verbose', '--cookies-from-browser', 'firefox']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error cp1252 (No VT), screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [7ea278792] (pip)
[debug] Python 3.11.9 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 3.3.2 3 Sep 2024)
[debug] exe versions: ffmpeg 7.0.2-full_build-www.gyan.dev (setts), ffprobe 7.0.2-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.0.9, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Extracting cookies from: "C:\Users\J\AppData\Roaming\Mozilla\Firefox\Profiles\c2ty66d6.default-release\cookies.sqlite"
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Loading archive file 'F:\\yt-dlp tiktok likes\\archive.txt'
[debug] Command-line config: ['https://www.tiktok.com/@elsa.vikstrom/video/7431528033044942102', '--download-archive', 'F:\\yt-dlp tiktok likes\\archive.txt', '--write-thumbnail', '--embed-thumbnail', '--verbose', '--cookies-from-browser', 'firefox']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error cp1252 (No VT), screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [7ea278792] (pip)
[debug] Python 3.11.9 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 3.3.2 3 Sep 2024)
[debug] exe versions: ffmpeg 7.0.2-full_build-www.gyan.dev (setts), ffprobe 7.0.2-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.0.9, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Extracting cookies from: "C:\Users\J\AppData\Roaming\Mozilla\Firefox\Profiles\c2ty66d6.default-release\cookies.sqlite"
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Loading archive file 'F:\\yt-dlp tiktok likes\\archive.txt'
[debug] Command-line config: ['https://www.tiktok.com/@ellatomine2/video/7440197178603228449', '--download-archive', 'F:\\yt-dlp tiktok likes\\archive.txt', '--write-thumbnail', '--embed-thumbnail', '--verbose', '--cookies-from-browser', 'firefox']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error cp1252 (No VT), screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [7ea278792] (pip)
[debug] Python 3.11.9 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 3.3.2 3 Sep 2024)
[debug] exe versions: ffmpeg 7.0.2-full_build-www.gyan.dev (setts), ffprobe 7.0.2-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.0.9, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Extracting cookies from: "C:\Users\J\AppData\Roaming\Mozilla\Firefox\Profiles\c2ty66d6.default-release\cookies.sqlite"
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Loading archive file 'F:\\yt-dlp tiktok likes\\archive.txt'
[debug] Command-line config: ['https://www.tiktok.com/@elena__blondie/video/7440396119076506912', '--download-archive', 'F:\\yt-dlp tiktok likes\\archive.txt', '--write-thumbnail', '--embed-thumbnail', '--verbose', '--cookies-from-browser', 'firefox']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error cp1252 (No VT), screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [7ea278792] (pip)
[debug] Python 3.11.9 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 3.3.2 3 Sep 2024)
[debug] exe versions: ffmpeg 7.0.2-full_build-www.gyan.dev (setts), ffprobe 7.0.2-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.0.9, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Extracting cookies from: "C:\Users\J\AppData\Roaming\Mozilla\Firefox\Profiles\c2ty66d6.default-release\cookies.sqlite"
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Loading archive file 'F:\\yt-dlp tiktok likes\\archive.txt'
[debug] Command-line config: ['https://www.tiktok.com/@johaanssson/video/7440864222747086112', '--download-archive', 'F:\\yt-dlp tiktok likes\\archive.txt', '--write-thumbnail', '--embed-thumbnail', '--verbose', '--cookies-from-browser', 'firefox']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error cp1252 (No VT), screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [7ea278792] (pip)
[debug] Python 3.11.9 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 3.3.2 3 Sep 2024)
[debug] exe versions: ffmpeg 7.0.2-full_build-www.gyan.dev (setts), ffprobe 7.0.2-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.0.9, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Extracting cookies from: "C:\Users\J\AppData\Roaming\Mozilla\Firefox\Profiles\c2ty66d6.default-release\cookies.sqlite"
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Loading archive file 'F:\\yt-dlp tiktok likes\\archive.txt'
[debug] [TikTok] Found universal data for rehydration
[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec, channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id
[debug] Default format spec: bestvideo*+bestaudio/best
[debug] Invoking http downloader on "https://v19-webapp-prime.tiktok.com/video/tos/useast2a/tos-useast2a-ve-0068-euttp/ok6GJnAQE2q0AFfyAaPQIQDhK0KQBwD1EIcfR4/?a=1988&bti=ODszNWYuMDE6&ch=0&cr=3&dr=0&lr=all&cd=0%7C0%7C0%7C&cv=1&br=1346&bt=673&cs=2&ds=4&eid=256&ft=4fUEKMk88Zmo0bRLZb4jVHCurpWrKsd.&mime_type=video_mp4&qs=15&rc=ZDVnOzplaWlpZzdmNmdpOUBpM3ZuM3Q5cndudzMzZjczM0AxY2A0LzZjNTMxLTAwY2JfYSNgLTBoMmQ0MS5gLS1kMWNzcw%3D%3D&btag=e00088000&expire=1732609546&l=202411260225363935CAF2808D524710A5&ply_type=2&policy=2&signature=c15a759aebb22c7a55843e0c19030be4&tk=tt_chain_token"
[debug] ffmpeg command line: ffmpeg -y -loglevel repeat+info -f image2 -pattern_type none -i "file:Ghettooo #fyp #viral #trend [7440864222747086112].webp" -update 1 -movflags +faststart "file:Ghettooo #fyp #viral #trend [7440864222747086112].png"
[debug] ffmpeg version 7.0.2-full_build-www.gyan.dev Copyright (c) 2000-2024 the FFmpeg developers
built with gcc 13.2.0 (Rev5, Built by MSYS2 project)
configuration: --enable-gpl --enable-version3 --enable-static --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-bzlib --enable-lzma --enable-libsnappy --enable-zlib --enable-librist --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray --enable-libcaca --enable-sdl2 --enable-libaribb24 --enable-libaribcaption --enable-libdav1d --enable-libdavs2 --enable-libuavs3d --enable-libxevd --enable-libzvbi --enable-librav1e --enable-libsvtav1 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs2 --enable-libxeve --enable-libxvid --enable-libaom --enable-libjxl --enable-libopenjpeg --enable-libvpx --enable-mediafoundation --enable-libass --enable-frei0r --enable-libfreetype --enable-libfribidi --enable-libharfbuzz --enable-liblensfun --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-dxva2 --enable-d3d11va --enable-d3d12va --enable-ffnvcodec --enable-libvpl --enable-nvdec --enable-nvenc --enable-vaapi --enable-libshaderc --enable-vulkan --enable-libplacebo --enable-opencl --enable-libcdio --enable-libgme --enable-libmodplug --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libshine --enable-libtheora --enable-libtwolame --enable-libvo-amrwbenc --enable-libcodec2 --enable-libilbc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-ladspa --enable-libbs2b --enable-libflite --enable-libmysofa --enable-librubberband --enable-libsoxr --enable-chromaprint
libavutil 59. 8.100 / 59. 8.100
libavcodec 61. 3.100 / 61. 3.100
libavformat 61. 1.100 / 61. 1.100
libavdevice 61. 1.100 / 61. 1.100
libavfilter 10. 1.100 / 10. 1.100
libswscale 8. 1.100 / 8. 1.100
libswresample 5. 1.100 / 5. 1.100
libpostproc 58. 1.100 / 58. 1.100
[webp @ 000001c6cbb512c0] skipping unsupported chunk: ANIM
[webp @ 000001c6cbb512c0] skipping unsupported chunk: ANMF
[webp @ 000001c6cbb512c0] skipping unsupported chunk: ANMF
[webp @ 000001c6cbb512c0] skipping unsupported chunk: ANMF
[webp @ 000001c6cbb512c0] skipping unsupported chunk: ANMF
[webp @ 000001c6cbb512c0] skipping unsupported chunk: ANMF
[webp @ 000001c6cbb512c0] skipping unsupported chunk: ANMF
[webp @ 000001c6cbb512c0] skipping unsupported chunk: ANMF
[webp @ 000001c6cbb512c0] skipping unsupported chunk: ANMF
[webp @ 000001c6cbb512c0] skipping unsupported chunk: ANMF
[webp @ 000001c6cbb512c0] image data not found
[image2 @ 000001c6cbb569c0] Could not find codec parameters for stream 0 (Video: webp, none): unspecified size
Consider increasing the value for the 'analyzeduration' (0) and 'probesize' (5000000) options
Input #0, image2, from 'file:Ghettooo #fyp #viral #trend [7440864222747086112].webp':
Duration: 00:00:00.04, start: 0.000000, bitrate: N/A
Stream #0:0: Video: webp, none, 25 fps, 25 tbr, 25 tbn
Stream mapping:
Stream #0:0 -> #0:0 (webp (native) -> png (native))
Press [q] to stop, [?] for help
[webp @ 000001c6cbb61cc0] skipping unsupported chunk: ANIM
[webp @ 000001c6cbb61cc0] skipping unsupported chunk: ANMF
[webp @ 000001c6cbb61cc0] skipping unsupported chunk: ANMF
[webp @ 000001c6cbb61cc0] skipping unsupported chunk: ANMF
[webp @ 000001c6cbb61cc0] skipping unsupported chunk: ANMF
[webp @ 000001c6cbb61cc0] skipping unsupported chunk: ANMF
[webp @ 000001c6cbb61cc0] skipping unsupported chunk: ANMF
[webp @ 000001c6cbb61cc0] skipping unsupported chunk: ANMF
[webp @ 000001c6cbb61cc0] skipping unsupported chunk: ANMF
[webp @ 000001c6cbb61cc0] skipping unsupported chunk: ANMF
[webp @ 000001c6cbb61cc0] image data not found
[vist#0:0/webp @ 000001c6cbb512c0] [dec:webp @ 000001c6cbb59e00] Decoding error: Invalid data found when processing input
[vist#0:0/webp @ 000001c6cbb512c0] [dec:webp @ 000001c6cbb59e00] Decode error rate 1 exceeds maximum 0.666667
[vist#0:0/webp @ 000001c6cbb512c0] [dec:webp @ 000001c6cbb59e00] Task finished with error code: -1145393733 (Error number -1145393733 occurred)
[vist#0:0/webp @ 000001c6cbb512c0] [dec:webp @ 000001c6cbb59e00] Terminating thread with return code -1145393733 (Error number -1145393733 occurred)
Cannot determine format of input 0:0 after EOF
[vf#0:0 @ 000001c6cbb53140] Task finished with error code: -1094995529 (Invalid data found when processing input)
[vf#0:0 @ 000001c6cbb53140] Terminating thread with return code -1094995529 (Invalid data found when processing input)
[vost#0:0/png @ 000001c6cbb6f7c0] Could not open encoder before EOF
[vost#0:0/png @ 000001c6cbb6f7c0] Task finished with error code: -22 (Invalid argument)
[vost#0:0/png @ 000001c6cbb6f7c0] Terminating thread with return code -22 (Invalid argument)
[out#0/image2 @ 000001c6cbb6ef40] Nothing was written into output file, because at least one of its streams received no packets.
frame= 0 fps=0.0 q=0.0 Lsize= 0KiB time=N/A bitrate=N/A speed=N/A
Conversion failed!
ERROR: Postprocessing: Conversion failed!
Traceback (most recent call last):
File "C:\Users\J\miniconda3\Lib\site-packages\yt_dlp\YoutubeDL.py", line 3556, in process_info
replace_info_dict(self.post_process(dl_filename, info_dict, files_to_move))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\J\miniconda3\Lib\site-packages\yt_dlp\YoutubeDL.py", line 3740, in post_process
info = self.run_all_pps('post_process', info, additional_pps=info.get('__postprocessors'))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\J\miniconda3\Lib\site-packages\yt_dlp\YoutubeDL.py", line 3722, in run_all_pps
info = self.run_pp(pp, info)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\J\miniconda3\Lib\site-packages\yt_dlp\YoutubeDL.py", line 3700, in run_pp
files_to_delete, infodict = pp.run(infodict)
^^^^^^^^^^^^^^^^
File "C:\Users\J\miniconda3\Lib\site-packages\yt_dlp\postprocessor\common.py", line 22, in run
ret = func(self, info, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\J\miniconda3\Lib\site-packages\yt_dlp\postprocessor\common.py", line 127, in wrapper
return func(self, info)
^^^^^^^^^^^^^^^^
File "C:\Users\J\miniconda3\Lib\site-packages\yt_dlp\postprocessor\embedthumbnail.py", line 84, in run
thumbnail_filename = convertor.convert_thumbnail(thumbnail_filename, 'png')
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\J\miniconda3\Lib\site-packages\yt_dlp\postprocessor\ffmpeg.py", line 1107, in convert_thumbnail
self.real_run_ffmpeg(
File "C:\Users\J\miniconda3\Lib\site-packages\yt_dlp\postprocessor\ffmpeg.py", line 367, in real_run_ffmpeg
raise FFmpegPostProcessorError(stderr.strip().splitlines()[-1])
yt_dlp.postprocessor.ffmpeg.FFmpegPostProcessorError: Conversion failed!
[ERROR] Failed to process URL: https://www.tiktok.com/@johaanssson/video/7440864222747086112
[debug] Command-line config: ['https://www.tiktok.com/@filippasekesan0/video/7440543183844560150', '--download-archive', 'F:\\yt-dlp tiktok likes\\archive.txt', '--write-thumbnail', '--embed-thumbnail', '--verbose', '--cookies-from-browser', 'firefox']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error cp1252 (No VT), screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [7ea278792] (pip)
[debug] Python 3.11.9 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 3.3.2 3 Sep 2024)
[debug] exe versions: ffmpeg 7.0.2-full_build-www.gyan.dev (setts), ffprobe 7.0.2-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.0.9, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Extracting cookies from: "C:\Users\J\AppData\Roaming\Mozilla\Firefox\Profiles\c2ty66d6.default-release\cookies.sqlite"
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Loading archive file 'F:\\yt-dlp tiktok likes\\archive.txt'
[debug] Command-line config: ['https://www.tiktok.com/@elana.maguire15/video/7439872632234708257', '--download-archive', 'F:\\yt-dlp tiktok likes\\archive.txt', '--write-thumbnail', '--embed-thumbnail', '--verbose', '--cookies-from-browser', 'firefox']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error cp1252 (No VT), screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [7ea278792] (pip)
[debug] Python 3.11.9 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 3.3.2 3 Sep 2024)
[debug] exe versions: ffmpeg 7.0.2-full_build-www.gyan.dev (setts), ffprobe 7.0.2-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.0.9, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Extracting cookies from: "C:\Users\J\AppData\Roaming\Mozilla\Firefox\Profiles\c2ty66d6.default-release\cookies.sqlite"
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Loading archive file 'F:\\yt-dlp tiktok likes\\archive.txt'
[debug] Command-line config: ['https://www.tiktok.com/@smostervik/video/7434809831665503520', '--download-archive', 'F:\\yt-dlp tiktok likes\\archive.txt', '--write-thumbnail', '--embed-thumbnail', '--verbose', '--cookies-from-browser', 'firefox']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error cp1252 (No VT), screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [7ea278792] (pip)
[debug] Python 3.11.9 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 3.3.2 3 Sep 2024)
[debug] exe versions: ffmpeg 7.0.2-full_build-www.gyan.dev (setts), ffprobe 7.0.2-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.0.9, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Extracting cookies from: "C:\Users\J\AppData\Roaming\Mozilla\Firefox\Profiles\c2ty66d6.default-release\cookies.sqlite"
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Loading archive file 'F:\\yt-dlp tiktok likes\\archive.txt'
[debug] Command-line config: ['https://www.tiktok.com/@bille.135/video/7439449253501603104', '--download-archive', 'F:\\yt-dlp tiktok likes\\archive.txt', '--write-thumbnail', '--embed-thumbnail', '--verbose', '--cookies-from-browser', 'firefox']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error cp1252 (No VT), screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [7ea278792] (pip)
[debug] Python 3.11.9 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 3.3.2 3 Sep 2024)
[debug] exe versions: ffmpeg 7.0.2-full_build-www.gyan.dev (setts), ffprobe 7.0.2-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.0.9, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Extracting cookies from: "C:\Users\J\AppData\Roaming\Mozilla\Firefox\Profiles\c2ty66d6.default-release\cookies.sqlite"
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Loading archive file 'F:\\yt-dlp tiktok likes\\archive.txt'
[debug] Command-line config: ['https://www.tiktok.com/@kristal.329/video/7435311238092950815', '--download-archive', 'F:\\yt-dlp tiktok likes\\archive.txt', '--write-thumbnail', '--embed-thumbnail', '--verbose', '--cookies-from-browser', 'firefox']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error cp1252 (No VT), screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [7ea278792] (pip)
[debug] Python 3.11.9 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 3.3.2 3 Sep 2024)
[debug] exe versions: ffmpeg 7.0.2-full_build-www.gyan.dev (setts), ffprobe 7.0.2-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.0.9, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Extracting cookies from: "C:\Users\J\AppData\Roaming\Mozilla\Firefox\Profiles\c2ty66d6.default-release\cookies.sqlite"
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Loading archive file 'F:\\yt-dlp tiktok likes\\archive.txt'
[debug] Command-line config: ['https://www.tiktok.com/@johanna_nordstrand/video/7440174704758983969', '--download-archive', 'F:\\yt-dlp tiktok likes\\archive.txt', '--write-thumbnail', '--embed-thumbnail', '--verbose', '--cookies-from-browser', 'firefox']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error cp1252 (No VT), screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [7ea278792] (pip)
[debug] Python 3.11.9 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 3.3.2 3 Sep 2024)
[debug] exe versions: ffmpeg 7.0.2-full_build-www.gyan.dev (setts), ffprobe 7.0.2-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.0.9, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Extracting cookies from: "C:\Users\J\AppData\Roaming\Mozilla\Firefox\Profiles\c2ty66d6.default-release\cookies.sqlite"
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Loading archive file 'F:\\yt-dlp tiktok likes\\archive.txt'
[debug] Command-line config: ['https://www.tiktok.com/@cassidyannpayne/video/7440590041866456362', '--download-archive', 'F:\\yt-dlp tiktok likes\\archive.txt', '--write-thumbnail', '--embed-thumbnail', '--verbose', '--cookies-from-browser', 'firefox']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error cp1252 (No VT), screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [7ea278792] (pip)
[debug] Python 3.11.9 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 3.3.2 3 Sep 2024)
[debug] exe versions: ffmpeg 7.0.2-full_build-www.gyan.dev (setts), ffprobe 7.0.2-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.0.9, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Extracting cookies from: "C:\Users\J\AppData\Roaming\Mozilla\Firefox\Profiles\c2ty66d6.default-release\cookies.sqlite"
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Loading archive file 'F:\\yt-dlp tiktok likes\\archive.txt'
[debug] Command-line config: ['https://www.tiktok.com/@backup_josefinelykk/video/7440092940057267488', '--download-archive', 'F:\\yt-dlp tiktok likes\\archive.txt', '--write-thumbnail', '--embed-thumbnail', '--verbose', '--cookies-from-browser', 'firefox']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error cp1252 (No VT), screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [7ea278792] (pip)
[debug] Python 3.11.9 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 3.3.2 3 Sep 2024)
[debug] exe versions: ffmpeg 7.0.2-full_build-www.gyan.dev (setts), ffprobe 7.0.2-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.0.9, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Extracting cookies from: "C:\Users\J\AppData\Roaming\Mozilla\Firefox\Profiles\c2ty66d6.default-release\cookies.sqlite"
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Loading archive file 'F:\\yt-dlp tiktok likes\\archive.txt'
[debug] Command-line config: ['https://www.tiktok.com/@elina.pp3/video/7439466484176391456', '--download-archive', 'F:\\yt-dlp tiktok likes\\archive.txt', '--write-thumbnail', '--embed-thumbnail', '--verbose', '--cookies-from-browser', 'firefox']
```
| The thumbnail that yt-dlp is attempting to embed is an animated webp, and ffmpeg is choking on it.
We could deprioritize them like this:
```diff
diff --git a/yt_dlp/extractor/tiktok.py b/yt_dlp/extractor/tiktok.py
index ba15f08b6..721d36e49 100644
--- a/yt_dlp/extractor/tiktok.py
+++ b/yt_dlp/extractor/tiktok.py
@@ -420,6 +420,7 @@ def extract_addr(addr, add_meta={}):
thumbnails.append({
'id': cover_id,
'url': cover_url,
+ 'preference': -1 if cover_id in ('cover', 'origin_cover') else -2,
})
stats_info = aweme_detail.get('statistics') or {}
@@ -572,11 +573,21 @@ def _parse_aweme_video_web(self, aweme_detail, webpage_url, video_id, extract_fl
'uploader_id': (('authorId', 'uid', 'id'), {str_or_none}),
}), get_all=False)
+ thumbnails = []
+ for cover_id in ('thumbnail', 'cover', 'dynamicCover', 'originCover'):
+ for cover_url in traverse_obj(aweme_detail, ((None, 'video'), cover_id, {url_or_none})):
+ thumbnails.append({
+ 'id': cover_id,
+ 'url': self._proto_relative_url(cover_url),
+ 'preference': -2 if cover_id == 'dynamicCover' else -1,
+ })
+
return {
'id': video_id,
'formats': None if extract_flat else self._extract_web_formats(aweme_detail),
'subtitles': None if extract_flat else self.extract_subtitles(aweme_detail, video_id, None),
'http_headers': {'Referer': webpage_url},
+ 'thumbnails': thumbnails,
**author_info,
'channel_url': format_field(author_info, 'channel_id', self._UPLOADER_URL_FORMAT, default=None),
'uploader_url': format_field(
@@ -600,11 +611,6 @@ def _parse_aweme_video_web(self, aweme_detail, webpage_url, video_id, extract_fl
'repost_count': 'shareCount',
'comment_count': 'commentCount',
}), expected_type=int_or_none),
- 'thumbnails': traverse_obj(aweme_detail, (
- (None, 'video'), ('thumbnail', 'cover', 'dynamicCover', 'originCover'), {
- 'url': ({url_or_none}, {self._proto_relative_url}),
- },
- )),
}
``` | 1,732,592,483,000 | null | Bug Report | [
"yt_dlp/extractor/tiktok.py:TikTokBaseIE._parse_aweme_video_app",
"yt_dlp/extractor/tiktok.py:TikTokBaseIE._parse_aweme_video_web"
] | [] | 2 | 495 |
|
yt-dlp/yt-dlp | yt-dlp__yt-dlp-11644 | 4b5eec0aaa7c02627f27a386591b735b90e681a8 | diff --git a/yt_dlp/extractor/dacast.py b/yt_dlp/extractor/dacast.py
index 4e81aa4a7bca..537352e5f78b 100644
--- a/yt_dlp/extractor/dacast.py
+++ b/yt_dlp/extractor/dacast.py
@@ -1,3 +1,4 @@
+import functools
import hashlib
import re
import time
@@ -51,6 +52,15 @@ class DacastVODIE(DacastBaseIE):
'thumbnail': 'https://universe-files.dacast.com/26137208-5858-65c1-5e9a-9d6b6bd2b6c2',
},
'params': {'skip_download': 'm3u8'},
+ }, { # /uspaes/ in hls_url
+ 'url': 'https://iframe.dacast.com/vod/f9823fc6-faba-b98f-0d00-4a7b50a58c5b/348c5c84-b6af-4859-bb9d-1d01009c795b',
+ 'info_dict': {
+ 'id': '348c5c84-b6af-4859-bb9d-1d01009c795b',
+ 'ext': 'mp4',
+ 'title': 'pl1-edyta-rubas-211124.mp4',
+ 'uploader_id': 'f9823fc6-faba-b98f-0d00-4a7b50a58c5b',
+ 'thumbnail': 'https://universe-files.dacast.com/4d0bd042-a536-752d-fc34-ad2fa44bbcbb.png',
+ },
}]
_WEBPAGE_TESTS = [{
'url': 'https://www.dacast.com/support/knowledgebase/how-can-i-embed-a-video-on-my-website/',
@@ -74,6 +84,15 @@ class DacastVODIE(DacastBaseIE):
'params': {'skip_download': 'm3u8'},
}]
+ @functools.cached_property
+ def _usp_signing_secret(self):
+ player_js = self._download_webpage(
+ 'https://player.dacast.com/js/player.js', None, 'Downloading player JS')
+ # Rotates every so often, but hardcode a fallback in case of JS change/breakage before rotation
+ return self._search_regex(
+ r'\bUSP_SIGNING_SECRET\s*=\s*(["\'])(?P<secret>(?:(?!\1).)+)', player_js,
+ 'usp signing secret', group='secret', fatal=False) or 'odnInCGqhvtyRTtIiddxtuRtawYYICZP'
+
def _real_extract(self, url):
user_id, video_id = self._match_valid_url(url).group('user_id', 'id')
query = {'contentId': f'{user_id}-vod-{video_id}', 'provider': 'universe'}
@@ -94,10 +113,10 @@ def _real_extract(self, url):
if 'DRM_EXT' in hls_url:
self.report_drm(video_id)
elif '/uspaes/' in hls_url:
- # From https://player.dacast.com/js/player.js
+ # Ref: https://player.dacast.com/js/player.js
ts = int(time.time())
signature = hashlib.sha1(
- f'{10413792000 - ts}{ts}YfaKtquEEpDeusCKbvYszIEZnWmBcSvw').digest().hex()
+ f'{10413792000 - ts}{ts}{self._usp_signing_secret}'.encode()).digest().hex()
hls_aes['uri'] = f'https://keys.dacast.com/uspaes/{video_id}.key?s={signature}&ts={ts}'
for retry in self.RetryManager():
| DacastVOD - ERROR: Strings must be encoded before hashing
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
Poland
### Provide a description that is worded well enough to be understood
Playing the link https://iframe.dacast.com/vod/f9823fc6-faba-b98f-0d00-4a7b50a58c5b/348c5c84-b6af-4859-bb9d-1d01009c795b in a web browser works fine, but yt-dlp throws:
ERROR: Strings must be encoded before hashing
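For context, the error itself is standard Python `hashlib` behaviour rather than anything Dacast-specific: digests operate on bytes, not str. A minimal reproduction (the constant and string layout follow the extractor code in the traceback below):
```python
import hashlib
import time

ts = int(time.time())
payload = f'{10413792000 - ts}{ts}'
# hashlib.sha1(payload) raises "TypeError: Strings must be encoded before hashing"
signature = hashlib.sha1(payload.encode()).digest().hex()  # encoding first works
print(signature)
```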
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', 'https://iframe.dacast.com/vod/f9823fc6-faba-b98f-0d00-4a7b50a58c5b/348c5c84-b6af-4859-bb9d-1d01009c795b']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [7ea278792]
[debug] Python 3.12.7 (CPython x86_64 64bit) - Linux-6.11.9-arch1-1-x86_64-with-glibc2.40 (OpenSSL 3.4.0 22 Oct 2024, glibc 2.40)
[debug] exe versions: ffmpeg 7.1 (setts), ffprobe 7.1, rtmpdump 2.4
[debug] Optional libraries: certifi-2024.08.30, requests-2.32.3, sqlite3-3.46.1, urllib3-1.26.20
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: [email protected] from yt-dlp/yt-dlp
yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp)
[DacastVOD] Extracting URL: https://iframe.dacast.com/vod/f9823fc6-faba-b98f-0d00-4a7b50a58c5b/348c5c84-b6af-4859-bb9d-1d01009c795b
[DacastVOD] 348c5c84-b6af-4859-bb9d-1d01009c795b: Downloading JSON metadata
[DacastVOD] 348c5c84-b6af-4859-bb9d-1d01009c795b: Downloading access JSON
ERROR: Strings must be encoded before hashing
Traceback (most recent call last):
File "/usr/lib/python3.12/site-packages/yt_dlp/YoutubeDL.py", line 1624, in wrapper
return func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/site-packages/yt_dlp/YoutubeDL.py", line 1759, in __extract_info
ie_result = ie.extract(url)
^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/site-packages/yt_dlp/extractor/common.py", line 742, in extract
ie_result = self._real_extract(url)
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/site-packages/yt_dlp/extractor/dacast.py", line 99, in _real_extract
signature = hashlib.sha1(
^^^^^^^^^^^^^
TypeError: Strings must be encoded before hashing
```
| Looks like a combination of a coding mistake plus an outdated key (so it would've needed a fix even without the mistake):
```diff
diff --git a/yt_dlp/extractor/dacast.py b/yt_dlp/extractor/dacast.py
index 4e81aa4a7..537352e5f 100644
--- a/yt_dlp/extractor/dacast.py
+++ b/yt_dlp/extractor/dacast.py
@@ -1,3 +1,4 @@
+import functools
import hashlib
import re
import time
@@ -51,6 +52,15 @@ class DacastVODIE(DacastBaseIE):
'thumbnail': 'https://universe-files.dacast.com/26137208-5858-65c1-5e9a-9d6b6bd2b6c2',
},
'params': {'skip_download': 'm3u8'},
+ }, { # /uspaes/ in hls_url
+ 'url': 'https://iframe.dacast.com/vod/f9823fc6-faba-b98f-0d00-4a7b50a58c5b/348c5c84-b6af-4859-bb9d-1d01009c795b',
+ 'info_dict': {
+ 'id': '348c5c84-b6af-4859-bb9d-1d01009c795b',
+ 'ext': 'mp4',
+ 'title': 'pl1-edyta-rubas-211124.mp4',
+ 'uploader_id': 'f9823fc6-faba-b98f-0d00-4a7b50a58c5b',
+ 'thumbnail': 'https://universe-files.dacast.com/4d0bd042-a536-752d-fc34-ad2fa44bbcbb.png',
+ },
}]
_WEBPAGE_TESTS = [{
'url': 'https://www.dacast.com/support/knowledgebase/how-can-i-embed-a-video-on-my-website/',
@@ -74,6 +84,15 @@ class DacastVODIE(DacastBaseIE):
'params': {'skip_download': 'm3u8'},
}]
+ @functools.cached_property
+ def _usp_signing_secret(self):
+ player_js = self._download_webpage(
+ 'https://player.dacast.com/js/player.js', None, 'Downloading player JS')
+ # Rotates every so often, but hardcode a fallback in case of JS change/breakage before rotation
+ return self._search_regex(
+ r'\bUSP_SIGNING_SECRET\s*=\s*(["\'])(?P<secret>(?:(?!\1).)+)', player_js,
+ 'usp signing secret', group='secret', fatal=False) or 'odnInCGqhvtyRTtIiddxtuRtawYYICZP'
+
def _real_extract(self, url):
user_id, video_id = self._match_valid_url(url).group('user_id', 'id')
query = {'contentId': f'{user_id}-vod-{video_id}', 'provider': 'universe'}
@@ -94,10 +113,10 @@ def _real_extract(self, url):
if 'DRM_EXT' in hls_url:
self.report_drm(video_id)
elif '/uspaes/' in hls_url:
- # From https://player.dacast.com/js/player.js
+ # Ref: https://player.dacast.com/js/player.js
ts = int(time.time())
signature = hashlib.sha1(
- f'{10413792000 - ts}{ts}YfaKtquEEpDeusCKbvYszIEZnWmBcSvw').digest().hex()
+ f'{10413792000 - ts}{ts}{self._usp_signing_secret}'.encode()).digest().hex()
hls_aes['uri'] = f'https://keys.dacast.com/uspaes/{video_id}.key?s={signature}&ts={ts}'
for retry in self.RetryManager():
```
| 1,732,592,344,000 | null | Bug Report | ["yt_dlp/extractor/dacast.py:DacastVODIE._real_extract"] | ["yt_dlp/extractor/dacast.py:DacastVODIE._usp_signing_secret"] | 1 | 496
|
yt-dlp/yt-dlp | yt-dlp__yt-dlp-11636 | 4b5eec0aaa7c02627f27a386591b735b90e681a8 | diff --git a/yt_dlp/extractor/dropbox.py b/yt_dlp/extractor/dropbox.py
index c122096230be..2bfeebc7cbba 100644
--- a/yt_dlp/extractor/dropbox.py
+++ b/yt_dlp/extractor/dropbox.py
@@ -48,32 +48,30 @@ def _real_extract(self, url):
webpage = self._download_webpage(url, video_id)
fn = urllib.parse.unquote(url_basename(url))
title = os.path.splitext(fn)[0]
- password = self.get_param('videopassword')
+ content_id = None
for part in self._yield_decoded_parts(webpage):
if '/sm/password' in part:
- webpage = self._download_webpage(
- update_url('https://www.dropbox.com/sm/password', query=part.partition('?')[2]), video_id)
+ content_id = self._search_regex(r'content_id=([\w.+=/-]+)', part, 'content ID')
break
- if (self._og_search_title(webpage, default=None) == 'Dropbox - Password Required'
- or 'Enter the password for this link' in webpage):
- if password:
- response = self._download_json(
- 'https://www.dropbox.com/sm/auth', video_id, 'POSTing video password',
- headers={'content-type': 'application/x-www-form-urlencoded; charset=UTF-8'},
- data=urlencode_postdata({
- 'is_xhr': 'true',
- 't': self._get_cookies('https://www.dropbox.com')['t'].value,
- 'content_id': self._search_regex(r'content_id=([\w.+=/-]+)["\']', webpage, 'content id'),
- 'password': password,
- 'url': url,
- }))
-
- if response.get('status') != 'authed':
- raise ExtractorError('Invalid password', expected=True)
- elif not self._get_cookies('https://dropbox.com').get('sm_auth'):
+ if content_id:
+ password = self.get_param('videopassword')
+ if not password:
raise ExtractorError('Password protected video, use --video-password <password>', expected=True)
+
+ response = self._download_json(
+ 'https://www.dropbox.com/sm/auth', video_id, 'POSTing video password',
+ data=urlencode_postdata({
+ 'is_xhr': 'true',
+ 't': self._get_cookies('https://www.dropbox.com')['t'].value,
+ 'content_id': content_id,
+ 'password': password,
+ 'url': update_url(url, scheme='', netloc=''),
+ }))
+ if response.get('status') != 'authed':
+ raise ExtractorError('Invalid password', expected=True)
+
webpage = self._download_webpage(url, video_id)
formats, subtitles = [], {}
| Dropbox "No video formats found!" Error for password protected videos
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
United States
### Provide a description that is worded well enough to be understood
Downloading Dropbox videos with passwords was working fine up to a month ago, but lately it only returns a "No video formats found!" error with the same command line that previously worked (i.e. `yt-dlp --video-password "PASSWORD" "URL"`).
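For reference, the same invocation through the Python API behaves identically (a minimal sketch; `videopassword` is the embedded-use equivalent of `--video-password`):
```python
import yt_dlp

# 'videopassword' corresponds to the --video-password CLI option
opts = {'videopassword': 'PASSWORD'}
with yt_dlp.YoutubeDL(opts) as ydl:
    ydl.download(['URL'])  # same Dropbox share URL as above
```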
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', '--video-password', 'PRIVATE', 'https://www.dropbox.com/scl/fi/mo9nwwjtwgajfsog9ysdp/BTS-Episodes-Series-11-Reaction.mp4?rlkey=v633fmz95bwn0kqtia777nxrm&st=khvzkp6s&dl=0']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp-nightly-builds [4b5eec0aa] (win_x86_exe)
[debug] Python 3.10.11 (CPython AMD64 32bit) - Windows-10-10.0.19042-SP0 (OpenSSL 1.1.1t 7 Feb 2023)
[debug] exe versions: ffmpeg 2024-04-01-git-7bf85d2d3a-essentials_build-www.gyan.dev (setts), ffprobe 2024-04-01-git-7bf85d2d3a-essentials_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, mutagen-1.47.0, requests-2.32.3, sqlite3-3.40.1, urllib3-2.2.3, websockets-14.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
Latest version: [email protected] from yt-dlp/yt-dlp-nightly-builds
yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp-nightly-builds)
[Dropbox] Extracting URL: https://www.dropbox.com/scl/fi/mo9nwwjtwgajfsog9ysdp/BTS-Episodes-Series-11-Reaction.mp4?rlkey=v633fmz95bwn0kqtia777nxrm&st=khvzkp6s&dl=0
[Dropbox] mo9nwwjtwgajfsog9ysdp: Downloading webpage
[Dropbox] mo9nwwjtwgajfsog9ysdp: Downloading webpage
ERROR: [Dropbox] mo9nwwjtwgajfsog9ysdp: No video formats found!; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
Traceback (most recent call last):
File "yt_dlp\YoutubeDL.py", line 1624, in wrapper
File "yt_dlp\YoutubeDL.py", line 1780, in __extract_info
File "yt_dlp\YoutubeDL.py", line 1839, in process_ie_result
File "yt_dlp\YoutubeDL.py", line 2846, in process_video_result
File "yt_dlp\YoutubeDL.py", line 1121, in raise_no_formats
yt_dlp.utils.ExtractorError: [Dropbox] mo9nwwjtwgajfsog9ysdp: No video formats found!; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
```
| 1,732,577,785,000 | null | Bug Report | ["yt_dlp/extractor/dropbox.py:DropboxIE._real_extract"] | [] | 1 | 497
||
yt-dlp/yt-dlp | yt-dlp__yt-dlp-11624 | fe70f20aedf528fdee332131bc9b6710e54e6f10 | diff --git a/yt_dlp/extractor/chaturbate.py b/yt_dlp/extractor/chaturbate.py
index a40b7d39c7f4..d031d3985e33 100644
--- a/yt_dlp/extractor/chaturbate.py
+++ b/yt_dlp/extractor/chaturbate.py
@@ -59,17 +59,16 @@ def _extract_from_api(self, video_id, tld):
'Accept': 'application/json',
}, fatal=False, impersonate=True) or {}
- status = response.get('room_status')
- if status != 'public':
+ m3u8_url = response.get('url')
+ if not m3u8_url:
+ status = response.get('room_status')
if error := self._ERROR_MAP.get(status):
raise ExtractorError(error, expected=True)
- self.report_warning('Falling back to webpage extraction')
+ if status == 'public':
+ self.raise_geo_restricted()
+ self.report_warning(f'Got status "{status}" from API; falling back to webpage extraction')
return None
- m3u8_url = response.get('url')
- if not m3u8_url:
- self.raise_geo_restricted()
-
return {
'id': video_id,
'title': video_id,
| [chaturbate] Support downloading non-public rooms again
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm requesting a site-specific feature
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
_No response_
### Example URLs
Unable to provide any
### Provide a description that is worded well enough to be understood
Recent changes have dropped support for downloading any room with a status other than "public", even if you have access to that room, for instance via browser login and the `--cookies-from-browser` option. This has worked before, and should work again; the access path in question is sketched below.
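A minimal sketch of that access path through the Python API (`cookiesfrombrowser` is the embedded-use equivalent of `--cookies-from-browser`; the URL is the one from the verbose output below):
```python
import yt_dlp

# ('firefox',) here mirrors --cookies-from-browser firefox on the CLI
opts = {'cookiesfrombrowser': ('firefox',)}
with yt_dlp.YoutubeDL(opts) as ydl:
    ydl.download(['https://chaturbate.com/xiawa_xo/'])
```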
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', 'https://chaturbate.com/xiawa_xo/']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp-master-builds [fe70f20ae] (win_exe)
[debug] Python 3.10.11 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 1.1.1t 7 Feb 2023)
[debug] exe versions: ffmpeg 7.1-full_build-www.gyan.dev (setts), ffprobe 7.1-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.40.1, urllib3-2.2.3, websockets-14.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-master-builds/releases/latest
Latest version: [email protected] from yt-dlp/yt-dlp-master-builds
yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp-master-builds)
[Chaturbate] Extracting URL: https://chaturbate.com/xiawa_xo/
[Chaturbate] xiawa_xo: Downloading JSON metadata
ERROR: [Chaturbate] xiawa_xo: Hidden session in progress
File "yt_dlp\extractor\common.py", line 742, in extract
File "yt_dlp\extractor\chaturbate.py", line 154, in _real_extract
File "yt_dlp\extractor\chaturbate.py", line 65, in _extract_from_api
```
| 1,732,485,081,000 | null | Feature Request | [
"yt_dlp/extractor/chaturbate.py:ChaturbateIE._extract_from_api"
] | [] | 1 | 498 |
||
yt-dlp/yt-dlp | yt-dlp__yt-dlp-11615 | e0f1ae813b36e783e2348ba2a1566e12f5cd8f6e | diff --git a/yt_dlp/extractor/youtube.py b/yt_dlp/extractor/youtube.py
index a02a2428ab05..7a9133466d9b 100644
--- a/yt_dlp/extractor/youtube.py
+++ b/yt_dlp/extractor/youtube.py
@@ -4986,6 +4986,10 @@ def _grid_entries(self, grid_renderer):
for item in grid_renderer['items']:
if not isinstance(item, dict):
continue
+ if lockup_view_model := traverse_obj(item, ('lockupViewModel', {dict})):
+ if entry := self._extract_lockup_view_model(lockup_view_model):
+ yield entry
+ continue
renderer = self._extract_basic_item_renderer(item)
if not isinstance(renderer, dict):
continue
@@ -5084,10 +5088,30 @@ def _playlist_entries(self, video_list_renderer):
continue
yield self._extract_video(renderer)
+ def _extract_lockup_view_model(self, view_model):
+ content_id = view_model.get('contentId')
+ if not content_id:
+ return
+ content_type = view_model.get('contentType')
+ if content_type not in ('LOCKUP_CONTENT_TYPE_PLAYLIST', 'LOCKUP_CONTENT_TYPE_PODCAST'):
+ self.report_warning(
+ f'Unsupported lockup view model content type "{content_type}"{bug_reports_message()}', only_once=True)
+ return
+ return self.url_result(
+ f'https://www.youtube.com/playlist?list={content_id}', ie=YoutubeTabIE, video_id=content_id,
+ title=traverse_obj(view_model, (
+ 'metadata', 'lockupMetadataViewModel', 'title', 'content', {str})),
+ thumbnails=self._extract_thumbnails(view_model, (
+ 'contentImage', 'collectionThumbnailViewModel', 'primaryThumbnail', 'thumbnailViewModel', 'image'), final_key='sources'))
+
def _rich_entries(self, rich_grid_renderer):
+ if lockup_view_model := traverse_obj(rich_grid_renderer, ('content', 'lockupViewModel', {dict})):
+ if entry := self._extract_lockup_view_model(lockup_view_model):
+ yield entry
+ return
renderer = traverse_obj(
rich_grid_renderer,
- ('content', ('videoRenderer', 'reelItemRenderer', 'playlistRenderer', 'shortsLockupViewModel', 'lockupViewModel'), any)) or {}
+ ('content', ('videoRenderer', 'reelItemRenderer', 'playlistRenderer', 'shortsLockupViewModel'), any)) or {}
video_id = renderer.get('videoId')
if video_id:
yield self._extract_video(renderer)
@@ -5114,18 +5138,6 @@ def _rich_entries(self, rich_grid_renderer):
})),
thumbnails=self._extract_thumbnails(renderer, 'thumbnail', final_key='sources'))
return
- # lockupViewModel extraction
- content_id = renderer.get('contentId')
- if content_id and renderer.get('contentType') == 'LOCKUP_CONTENT_TYPE_PODCAST':
- yield self.url_result(
- f'https://www.youtube.com/playlist?list={content_id}',
- ie=YoutubeTabIE, video_id=content_id,
- **traverse_obj(renderer, {
- 'title': ('metadata', 'lockupMetadataViewModel', 'title', 'content', {str}),
- }),
- thumbnails=self._extract_thumbnails(renderer, (
- 'contentImage', 'collectionThumbnailViewModel', 'primaryThumbnail', 'thumbnailViewModel', 'image'), final_key='sources'))
- return
def _video_entry(self, video_renderer):
video_id = video_renderer.get('videoId')
@@ -5794,7 +5806,7 @@ class YoutubeTabIE(YoutubeTabBaseInfoExtractor):
'info_dict': {
'id': 'UCYO_jab_esuFRV4b17AJtAw',
'title': '3Blue1Brown - Playlists',
- 'description': 'md5:4d1da95432004b7ba840ebc895b6b4c9',
+ 'description': 'md5:602e3789e6a0cb7d9d352186b720e395',
'channel_url': 'https://www.youtube.com/channel/UCYO_jab_esuFRV4b17AJtAw',
'channel': '3Blue1Brown',
'channel_id': 'UCYO_jab_esuFRV4b17AJtAw',
| [youtube:tab] Tab/playlist extraction intermittently yielding 0 items
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
Switzerland
### Provide a description that is worded well enough to be understood
I'm using yt-dlp in a script of mine and I realised that one of my tests was flaky. In particular, listing playlists for a given YouTube channel would fail about 33% of the time, but nothing in my code seemed amiss.
Searching through the issues to see if I was doing something wrong or whether someone else was having similar trouble, I found https://github.com/yt-dlp/yt-dlp/issues/11511 (not related to my issue per se), so I tried the exact command suggested there to see if I could replicate my issue, and indeed, I'm having the same trouble with the CLI.
More than half the time, running the command `yt-dlp -vU --print "%(id)s|%(title)s|%(playlist_index)s" --flat-playlist "https://www.youtube.com/@LibraryoftheUntold/playlists"` works fine. But occasionally, it silently fails with no output or warning. It just doesn't print anything. When using the Python library directly, the issue manifests as an empty list in the `entries` key, whilst all other fields seem to be populated as usual; a minimal reproduction is sketched below.
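For reference, a minimal reproduction through the Python API (`'extract_flat': True` mirrors `--flat-playlist` on the CLI):
```python
import yt_dlp

URL = 'https://www.youtube.com/@LibraryoftheUntold/playlists'

# 'extract_flat': True mirrors --flat-playlist on the CLI
with yt_dlp.YoutubeDL({'extract_flat': True}) as ydl:
    info = ydl.extract_info(URL, download=False)

entries = list(info.get('entries') or [])
# Intermittently this comes back empty even though the other fields look normal
print(len(entries), [entry.get('id') for entry in entries])
```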
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
#### Expected Behaviour
> Most of the time, this is the output.
```shell
❯ yt-dlp -vU --print "%(id)s|%(title)s|%(playlist_index)s" --flat-playlist "https://www.youtube.com/@LibraryoftheUntold/playlists"
[debug] Command-line config: ['-vU', '--print', '%(id)s|%(title)s|%(playlist_index)s', '--flat-playlist', 'https://www.youtube.com/@LibraryoftheUntold/playlists']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [197d0b03b] (pip)
[debug] Python 3.13.0 (CPython arm64 64bit) - macOS-15.1-arm64-arm-64bit-Mach-O (OpenSSL 3.4.0 22 Oct 2024)
[debug] exe versions: ffmpeg 7.1 (setts), ffprobe 7.1
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, mutagen-1.47.0, requests-2.32.3, sqlite3-3.47.0, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets
[debug] Loaded 1838 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: [email protected] from yt-dlp/yt-dlp
yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp)
[youtube:tab] Extracting URL: https://www.youtube.com/@LibraryoftheUntold/playlists
[youtube:tab] @LibraryoftheUntold/playlists: Downloading webpage
[debug] [youtube:tab] Selected tab: 'playlists' (playlists), Requested tab: 'playlists'
[download] Downloading playlist: Library of the Untold - Playlists
[youtube:tab] Playlist Library of the Untold - Playlists: Downloading 9 items of 9
[debug] The information of all playlist entries will be held in memory
[download] Downloading item 1 of 9
PL_39VJI5VnWsEY21lkFbMtTte1FecamGO|Paranormal / Strange Happenings|1
[download] Downloading item 2 of 9
PL_39VJI5VnWtOERqIiMX_Yt3_wEwJ8J-O|Podcasts with Incredible Minds|2
[download] Downloading item 3 of 9
PL_39VJI5VnWslKfsegFKYnslK3aUMwfRQ|Zen|3
[download] Downloading item 4 of 9
PL_39VJI5VnWsdRQcmUh3tNuyrdo0RbkbN|Alchemical Transmutation|4
[download] Downloading item 5 of 9
PL_39VJI5VnWvcHp6c1aO1fq3ir-1-lwMK|Gnostic Thought|5
[download] Downloading item 6 of 9
PL_39VJI5VnWvbkUkqZa38hPoJtQEclDAW|Unexplained / Great Mysteries|6
[download] Downloading item 7 of 9
PL_39VJI5VnWsT05y2pHAmjJsTh-E1qS30|Quick Summaries|7
[download] Downloading item 8 of 9
PL_39VJI5VnWvNy-mRKjkhq_hxTzaWa0RT|Book of Wonderful Suffering -by J. W. Phipps|8
[download] Downloading item 9 of 9
PL_39VJI5VnWuC9OnKk86lBa_IjRHQWqHw|ALL Documentaries|9
[download] Finished downloading playlist: Library of the Untold - Playlists
```
#### Occasional wrong behaviour
> Sometimes, this happens. I haven't found a clear pattern when it does.
```shell
❯ yt-dlp -vU --print "%(id)s|%(title)s|%(playlist_index)s" --flat-playlist "https://www.youtube.com/@LibraryoftheUntold/playlists"
[debug] Command-line config: ['-vU', '--print', '%(id)s|%(title)s|%(playlist_index)s', '--flat-playlist', 'https://www.youtube.com/@LibraryoftheUntold/playlists']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [197d0b03b] (pip)
[debug] Python 3.13.0 (CPython arm64 64bit) - macOS-15.1-arm64-arm-64bit-Mach-O (OpenSSL 3.4.0 22 Oct 2024)
[debug] exe versions: ffmpeg 7.1 (setts), ffprobe 7.1
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, mutagen-1.47.0, requests-2.32.3, sqlite3-3.47.0, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets
[debug] Loaded 1838 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: [email protected] from yt-dlp/yt-dlp
yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp)
[youtube:tab] Extracting URL: https://www.youtube.com/@LibraryoftheUntold/playlists
[youtube:tab] @LibraryoftheUntold/playlists: Downloading webpage
[debug] [youtube:tab] Selected tab: 'playlists' (playlists), Requested tab: 'playlists'
[download] Downloading playlist: Library of the Untold - Playlists
[youtube:tab] Playlist Library of the Untold - Playlists: Downloading 0 items
[debug] The information of all playlist entries will be held in memory
[download] Finished downloading playlist: Library of the Untold - Playlists
```
| My first thought is that either the YouTube API is actually returning no data for some reason (which seems unlikely, but possible), or yt-dlp is silently failing somewhere and the error is being ignored.
YT is rolling out changes; with the new response, yt-dlp's extractor is looking in the wrong place for entry and continuation data. Probably related to #11130
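For anyone debugging this, playlist items in the new response arrive wrapped in a `lockupViewModel`. A rough sketch of the shape, reconstructed from the traversal paths in the patch above (values are illustrative, taken from the expected output in the logs):
```python
# Key paths follow the patch above; values are illustrative only
lockup_view_model = {
    'contentId': 'PL_39VJI5VnWsEY21lkFbMtTte1FecamGO',
    'contentType': 'LOCKUP_CONTENT_TYPE_PLAYLIST',
    'metadata': {
        'lockupMetadataViewModel': {
            'title': {'content': 'Paranormal / Strange Happenings'},
        },
    },
    'contentImage': {
        'collectionThumbnailViewModel': {
            'primaryThumbnail': {
                'thumbnailViewModel': {
                    'image': {'sources': [{'url': '...'}]},
                },
            },
        },
    },
}
```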
"yt_dlp/extractor/youtube.py:YoutubeTabBaseInfoExtractor._grid_entries",
"yt_dlp/extractor/youtube.py:YoutubeTabBaseInfoExtractor._rich_entries"
] | [
"yt_dlp/extractor/youtube.py:YoutubeTabBaseInfoExtractor._extract_lockup_view_model"
] | 2 | 499 |
|
yt-dlp/yt-dlp | yt-dlp__yt-dlp-11596 | f9197295388b44ee0a8992cb00f361c7ef42acdb | diff --git a/yt_dlp/extractor/stripchat.py b/yt_dlp/extractor/stripchat.py
index 31c8afbc6268..84846042f38f 100644
--- a/yt_dlp/extractor/stripchat.py
+++ b/yt_dlp/extractor/stripchat.py
@@ -28,24 +28,21 @@ class StripchatIE(InfoExtractor):
def _real_extract(self, url):
video_id = self._match_id(url)
webpage = self._download_webpage(url, video_id, headers=self.geo_verification_headers())
+ data = self._search_json(
+ r'<script\b[^>]*>\s*window\.__PRELOADED_STATE__\s*=',
+ webpage, 'data', video_id, transform_source=lowercase_escape)
- data = self._parse_json(
- self._search_regex(
- r'<script\b[^>]*>\s*window\.__PRELOADED_STATE__\s*=(?P<value>.*?)<\/script>',
- webpage, 'data', default='{}', group='value'),
- video_id, transform_source=lowercase_escape, fatal=False)
- if not data:
- raise ExtractorError('Unable to find configuration for stream.')
-
- if traverse_obj(data, ('viewCam', 'show'), expected_type=dict):
- raise ExtractorError('Model is in private show', expected=True)
- elif not traverse_obj(data, ('viewCam', 'model', 'isLive'), expected_type=bool):
+ if traverse_obj(data, ('viewCam', 'show', {dict})):
+ raise ExtractorError('Model is in a private show', expected=True)
+ if not traverse_obj(data, ('viewCam', 'model', 'isLive', {bool})):
raise UserNotLive(video_id=video_id)
- model_id = traverse_obj(data, ('viewCam', 'model', 'id'), expected_type=int)
+ model_id = data['viewCam']['model']['id']
formats = []
- for host in traverse_obj(data, ('config', 'data', (
+ # HLS hosts are currently found in .configV3.static.features.hlsFallback.fallbackDomains[]
+ # The rest of the path is for backwards compatibility and to guard against A/B testing
+ for host in traverse_obj(data, ((('config', 'data'), ('configV3', 'static')), (
(('features', 'featuresV2'), 'hlsFallback', 'fallbackDomains', ...), 'hlsStreamHost'))):
formats = self._extract_m3u8_formats(
f'https://edge-hls.{host}/hls/{model_id}/master/{model_id}_auto.m3u8',
@@ -53,7 +50,7 @@ def _real_extract(self, url):
if formats:
break
if not formats:
- self.raise_no_formats('No active streams found', expected=True)
+ self.raise_no_formats('Unable to extract stream host', video_id=video_id)
return {
'id': video_id,
| stripchat extractor not working: "No active stream found"
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
Israel
### Provide a description that is worded well enough to be understood
When trying to download a stream from stripchat.com, yt-dlp gives back "No active streams found" even though the stream is live at the same time. If I paste the m3u8 URL of the same stream, yt-dlp works and downloads it.
Side note:
I tried to switch my yt-dlp to nightly but can't, because I installed it from Homebrew, and running `yt-dlp --update-to nightly` gives back:
"ERROR: You installed yt-dlp from a manual build or with a package manager; Use that to update"
with no documentation on how exactly to do that.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', 'https://stripchat.com/MagicLilu']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [7ea278792]
[debug] Python 3.13.0 (CPython arm64 64bit) - macOS-14.6.1-arm64-arm-64bit-Mach-O (OpenSSL 3.4.0 22 Oct 2024)
[debug] exe versions: ffmpeg 7.1 (setts), ffprobe 7.1
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, mutagen-1.47.0, requests-2.32.3, sqlite3-3.47.0, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: [email protected] from yt-dlp/yt-dlp
yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp)
[Stripchat] Extracting URL: https://stripchat.com/MagicLilu
[Stripchat] MagicLilu: Downloading webpage
ERROR: [Stripchat] MagicLilu: No active streams found
File "/opt/homebrew/Cellar/yt-dlp/HEAD-f919729/libexec/lib/python3.13/site-packages/yt_dlp/extractor/common.py", line 742, in extract
ie_result = self._real_extract(url)
File "/opt/homebrew/Cellar/yt-dlp/HEAD-f919729/libexec/lib/python3.13/site-packages/yt_dlp/extractor/stripchat.py", line 56, in _real_extract
self.raise_no_formats('No active streams found', expected=True)
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/yt-dlp/HEAD-f919729/libexec/lib/python3.13/site-packages/yt_dlp/extractor/common.py", line 1276, in raise_no_formats
raise ExtractorError(msg, expected=expected, video_id=video_id)
```
| Looks like they're using .live as a host now
https://edge-hls.doppiocdn.com/hls/164812713/master/164812713_auto.m3u8 doesn't work, but https://edge-hls.doppiocdn.live/hls/164812713/master/164812713_auto.m3u8 does, so the Stripchat extractor needs `.live` as a fallback, I think (a quick check is sketched below). Also, is there a way to support xHamsterLive too? It uses the exact same manifests but doesn't work because of the domain name.
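A quick standalone check of the two hosts (a minimal sketch; the domains and model ID are taken from the URLs above):
```python
import urllib.request

model_id = 164812713
for domain in ('doppiocdn.com', 'doppiocdn.live'):
    url = f'https://edge-hls.{domain}/hls/{model_id}/master/{model_id}_auto.m3u8'
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            print(domain, resp.status)
    except Exception as exc:  # the .com host currently errors out
        print(domain, exc)
```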
"yt_dlp/extractor/stripchat.py:StripchatIE._real_extract"
] | [] | 1 | 500 |
|
yt-dlp/yt-dlp | yt-dlp__yt-dlp-11555 | f2a4983df7a64c4e93b56f79dbd16a781bd90206 | diff --git a/yt_dlp/extractor/chaturbate.py b/yt_dlp/extractor/chaturbate.py
index 864d61f9c2b8..aa70f26a1bcb 100644
--- a/yt_dlp/extractor/chaturbate.py
+++ b/yt_dlp/extractor/chaturbate.py
@@ -5,6 +5,7 @@
ExtractorError,
lowercase_escape,
url_or_none,
+ urlencode_postdata,
)
@@ -40,14 +41,48 @@ class ChaturbateIE(InfoExtractor):
'only_matching': True,
}]
- _ROOM_OFFLINE = 'Room is currently offline'
+ _ERROR_MAP = {
+ 'offline': 'Room is currently offline',
+ 'private': 'Room is currently in a private show',
+ 'away': 'Performer is currently away',
+ 'password protected': 'Room is password protected',
+ 'hidden': 'Hidden session in progress',
+ }
- def _real_extract(self, url):
- video_id, tld = self._match_valid_url(url).group('id', 'tld')
+ def _extract_from_api(self, video_id, tld):
+ response = self._download_json(
+ f'https://chaturbate.{tld}/get_edge_hls_url_ajax/', video_id,
+ data=urlencode_postdata({'room_slug': video_id}),
+ headers={
+ **self.geo_verification_headers(),
+ 'X-Requested-With': 'XMLHttpRequest',
+ 'Accept': 'application/json',
+ }, fatal=False, impersonate=True) or {}
+
+ status = response.get('room_status')
+ if status != 'public':
+ if error := self._ERROR_MAP.get(status):
+ raise ExtractorError(error, expected=True)
+ self.report_warning('Falling back to webpage extraction')
+ return None
+
+ m3u8_url = response.get('url')
+ if not m3u8_url:
+ self.raise_geo_restricted()
+
+ return {
+ 'id': video_id,
+ 'title': video_id,
+ 'thumbnail': f'https://roomimg.stream.highwebmedia.com/ri/{video_id}.jpg',
+ 'is_live': True,
+ 'age_limit': 18,
+ 'formats': self._extract_m3u8_formats(m3u8_url, video_id, ext='mp4', live=True),
+ }
+ def _extract_from_webpage(self, video_id, tld):
webpage = self._download_webpage(
f'https://chaturbate.{tld}/{video_id}/', video_id,
- headers=self.geo_verification_headers())
+ headers=self.geo_verification_headers(), impersonate=True)
found_m3u8_urls = []
@@ -85,8 +120,8 @@ def _real_extract(self, url):
webpage, 'error', group='error', default=None)
if not error:
if any(p in webpage for p in (
- self._ROOM_OFFLINE, 'offline_tipping', 'tip_offline')):
- error = self._ROOM_OFFLINE
+ self._ERROR_MAP['offline'], 'offline_tipping', 'tip_offline')):
+ error = self._ERROR_MAP['offline']
if error:
raise ExtractorError(error, expected=True)
raise ExtractorError('Unable to find stream URL')
@@ -113,3 +148,7 @@ def _real_extract(self, url):
'is_live': True,
'formats': formats,
}
+
+ def _real_extract(self, url):
+ video_id, tld = self._match_valid_url(url).group('id', 'tld')
+ return self._extract_from_api(video_id, tld) or self._extract_from_webpage(video_id, tld)
| [Chaturbate] Consider using the API
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm requesting a site-specific feature
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
_No response_
### Example URLs
https://chaturbate.com/kira_censori/
### Provide a description that is worded well enough to be understood
Currently yt-dlp loads the entire page to find the m3u8 stream URL, but another way is to use the API:
```sh
$ curl -X POST "https://chaturbate.com/get_edge_hls_url_ajax/" -H "X-Requested-With: XMLHttpRequest" -d "room_slug=kira_censori"
```
```json
{
  "success": true,
  "url": "https://edge17-hel.live.mmcdn.com/live-hls/amlst:kira_censori-sd-203fe4e99b463f0b5013d75b7f491286d7f8cbdad109cef79db409bfc80e33d0_trns_h264/playlist.m3u8",
  "room_status": "public",
  "hidden_message": ""
}
```
This endpoint provides the same m3u8 stream URL that is embedded in HTML (specifically in `window.initialRoomDossier`). The advantage is that this is ~500 times smaller in size compared to downloading the entire HTML page and simplifies error handling.
Here is a rough sequence of actions:
```
if "success":
- true:
if "room_status":
- "public":
if "url":
- [m3u8 stream url]
- "": [room is geo-blocked]
- something else: [room is private or offline]
- false: [room doesn't exist]
```
All possible `room_status` values can be found [here](https://devportal.cb.dev/wiki/api/$room#roomstatus-string). Not sure what we need to tell the user in non-`public` cases; the above is just an example.
Maybe someday I'll do a PR if I have time. What do you think about that?
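To make it concrete, a minimal sketch of what the extractor-side logic could look like (the endpoint, form data and decision tree come from the examples above; the helpers are the usual `InfoExtractor` methods, and the error messages are placeholders):
```python
from yt_dlp.utils import ExtractorError, urlencode_postdata

def _extract_from_api(self, video_id):
    # Same endpoint and form data as the curl example above
    response = self._download_json(
        'https://chaturbate.com/get_edge_hls_url_ajax/', video_id,
        data=urlencode_postdata({'room_slug': video_id}),
        headers={'X-Requested-With': 'XMLHttpRequest'})
    if not response.get('success'):
        raise ExtractorError('Room does not exist', expected=True)
    if response.get('room_status') != 'public':
        raise ExtractorError('Room is currently private or offline', expected=True)
    if not response.get('url'):
        self.raise_geo_restricted()  # empty "url" means the room is geo-blocked
    return self._extract_m3u8_formats(response['url'], video_id, ext='mp4', live=True)
```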
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
_No response_
| 1,731,692,715,000 | null | Feature Request | ["yt_dlp/extractor/chaturbate.py:ChaturbateIE._real_extract"] | ["yt_dlp/extractor/chaturbate.py:ChaturbateIE._extract_from_api", "yt_dlp/extractor/chaturbate.py:ChaturbateIE._extract_from_webpage"] | 1 | 501
||
yt-dlp/yt-dlp | yt-dlp__yt-dlp-11542 | f2a4983df7a64c4e93b56f79dbd16a781bd90206 | diff --git a/yt_dlp/extractor/spankbang.py b/yt_dlp/extractor/spankbang.py
index 6805a72deb7b..05f0bb1468ed 100644
--- a/yt_dlp/extractor/spankbang.py
+++ b/yt_dlp/extractor/spankbang.py
@@ -71,9 +71,11 @@ class SpankBangIE(InfoExtractor):
def _real_extract(self, url):
mobj = self._match_valid_url(url)
video_id = mobj.group('id') or mobj.group('id_2')
+ country = self.get_param('geo_bypass_country') or 'US'
+ self._set_cookie('.spankbang.com', 'country', country.upper())
webpage = self._download_webpage(
url.replace(f'/{video_id}/embed', f'/{video_id}/video'),
- video_id, headers={'Cookie': 'country=US'})
+ video_id, impersonate=True)
if re.search(r'<[^>]+\b(?:id|class)=["\']video_removed', webpage):
raise ExtractorError(
| spankbang - 403 Forbidden errors
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that a **supported** site is broken
- [X] I've verified that I'm running yt-dlp version **2023.03.04** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
USA
### Provide a description that is worded well enough to be understood
All video URLs from SpankBang are returning 403 Forbidden errors. I have confirmed that they load and play in the browser just fine. Verbose output is provided. My yt-dlp version is completely up to date.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
> yt-dlp -vU https://spankbang.com/6c6z5/video/fan+appreciation+event+english+sub
[debug] Command-line config: ['-vU', 'https://spankbang.com/6c6z5/video/fan+appreciation+event+english+sub']
[debug] Encodings: locale UTF-8, fs utf-8, out utf-8, err utf-8, pref UTF-8
[debug] yt-dlp version 2022.02.04 [c1653e9ef] (zip)
[debug] Python version 3.8.10 (CPython 64bit) - Linux-4.4.0-19041-Microsoft-x86_64-with-glibc2.29
[debug] exe versions: ffmpeg 4.2.4, ffprobe 4.2.4
[debug] Optional libraries: Cryptodome, secretstorage, mutagen, sqlite, websockets
[debug] Proxy map: {}
Latest version: 2023.03.04, Current version: 2022.02.04
Current Build Hash a16fe3b3bd474d562c4b8645579b209377b967d58d4edffe6e31dc8de81d7283
Updating to version 2023.03.04 ...
ERROR: Unable to write to /usr/local/bin/yt-dlp; Try running as administrator
[debug] [SpankBang] Extracting URL: https://spankbang.com/6c6z5/video/fan+appreciation+event+english+sub
[SpankBang] 6c6z5: Downloading webpage
ERROR: [SpankBang] 6c6z5: Unable to download webpage: HTTP Error 403: Forbidden (caused by <HTTPError 403: 'Forbidden'>); please report this issue on https://github.com/yt-dlp/yt-dlp , filling out the "Broken site" issue template properly. Confirm you are on the latest version using -U (caused by <HTTPError 403: 'Forbidden'>); please report this issue on https://github.com/yt-dlp/yt-dlp , filling out the "Broken site" issue template properly. Confirm you are on the latest version using -U
File "/usr/local/bin/yt-dlp/yt_dlp/extractor/common.py", line 730, in _request_webpage
return self._downloader.urlopen(url_or_request)
File "/usr/local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 3558, in urlopen return self._opener.open(req, timeout=self._socket_timeout) File "/usr/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 569, in error
return self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 649, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
```
| > My yt-dlp version is completely up to date.
> [debug] yt-dlp version **2022.02.04** [c1653e9ef] (zip)
> Latest version: **2023.03.04**, Current version: **2022.02.04**
> Updating to version 2023.03.04 ...
> **ERROR: Unable to write to /usr/local/bin/yt-dlp; Try running as administrator**
I see in the logs that it shows my `yt-dlp` version as 2022.02.04, but when I try to update the Python-installed copy, it says that I'm up to date already.
And when I try to update via `yt-dlp -U`:
```
Available version: [email protected], Current version: [email protected]
Current Build Hash: 91cad9f121c1f6f0a81b747415c46ecba0ff331ed38cc6433040b4ac7b6e15ca
yt-dlp is up to date ([email protected])
```
```
> yt-dlp --version
2023.03.04
```
The log in the OP most definitely is **not** from version 2023.03.04. Are you sure you're not running two different versions? The first log looks like it's being run in WSL?
> The log in the OP most definitely is **not** from version 2023.03.04. Are you sure you're not running two different versions? The first log looks like it's being run in WSL?
Yep, looks like I was. I ran `pip3 uninstall yt-dlp` but the command was still accessible. So looks like I had two installations going. Removing the python version got everything working. Thanks, feel free to close this issue.
Hi @bashonly, I'm using the master version and still get a 403 too:
```bash
[debug] Command-line config: ['-v', '--proxy', 'http://127.0.0.1:1080', 'https://spankbang.com/81sy6/video/japanese']
[debug] Encodings: locale cp936, fs utf-8, pref cp936, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] [392389b7d]
[debug] Lazy loading extractors is disabled
[debug] Python 3.8.10 (CPython AMD64 32bit) - Windows-10-10.0.22621-SP0 (OpenSSL 1.1.1k 25 Mar 2021)
[debug] exe versions: ffmpeg 4.2.1 (fdk), ffprobe 4.2.1
[debug] Optional libraries: Cryptodome-3.17, brotli-1.0.9, certifi-2020.12.05, mutagen-1.46.0, sqlite3-2.6.0, websockets-10.4
[debug] Proxy map: {'http': 'http://127.0.0.1:1080', 'https': 'http://127.0.0.1:1080'}
[debug] Loaded 1791 extractors
[SpankBang] Extracting URL: https://spankbang.com/81sy6/video/japanese
[SpankBang] 81sy6: Downloading webpage
ERROR: [SpankBang] 81sy6: Unable to download webpage: HTTP Error 403: Forbidden (caused by <HTTPError 403: 'Forbidden'>); please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\site-packages\yt_dlp-2023.3.4-py3.8.egg\yt_dlp\extractor\common.py", line 694, in extract
ie_result = self._real_extract(url)
File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\site-packages\yt_dlp-2023.3.4-py3.8.egg\yt_dlp\extractor\spankbang.py", line 74, in _real_extract
webpage = self._download_webpage(
File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\site-packages\yt_dlp-2023.3.4-py3.8.egg\yt_dlp\extractor\common.py", line 1100, in _download_webpage
return self.__download_webpage(url_or_request, video_id, note, errnote, None, fatal, *args, **kwargs)
File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\site-packages\yt_dlp-2023.3.4-py3.8.egg\yt_dlp\extractor\common.py", line 1051, in download_content
res = getattr(self, download_handle.__name__)(url_or_request, video_id, **kwargs)
File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\site-packages\yt_dlp-2023.3.4-py3.8.egg\yt_dlp\extractor\common.py", line 885, in _download_webpage_handle
urlh = self._request_webpage(url_or_request, video_id, note, errnote, fatal, data=data, headers=headers, query=query, expected_status=expected_status)
File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\site-packages\yt_dlp-2023.3.4-py3.8.egg\yt_dlp\extractor\common.py", line 842, in _request_webpage
raise ExtractorError(errmsg, cause=err)
File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\site-packages\yt_dlp-2023.3.4-py3.8.egg\yt_dlp\extractor\common.py", line 824, in _request_webpage
return self._downloader.urlopen(self._create_request(url_or_request, data, headers, query))
File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\site-packages\yt_dlp-2023.3.4-py3.8.egg\yt_dlp\YoutubeDL.py", line 3745, in urlopen
return self._opener.open(req, timeout=self._socket_timeout)
File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\urllib\request.py", line 531, in open
response = meth(req, response)
File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\urllib\request.py", line 640, in http_response
response = self.parent.error(
File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\urllib\request.py", line 569, in error
return self._call_chain(*args)
File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\urllib\request.py", line 502, in _call_chain
result = func(*args)
File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\urllib\request.py", line 649, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 403: Forbidden
```
but version 2022.07.18 on my WSL succeeds:
```bash
[debug] Command-line config: ['-v', '--proxy', 'http://127.0.0.1:1080', 'https://spankbang.com/81sy6/video/japanese']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out UTF-8, error UTF-8, screen UTF-8
[debug] yt-dlp version 2022.07.18 [135f05ef6]
[debug] Lazy loading extractors is disabled
[debug] Python 3.7.5 (CPython 64bit) - Linux-4.4.0-22621-Microsoft-x86_64-with-Ubuntu-18.04-bionic (glibc 2.26)
[debug] Checking exe version: ffmpeg -bsfs
[debug] Checking exe version: ffprobe -bsfs
[debug] exe versions: ffmpeg 3.4.8, ffprobe 3.4.8
[debug] Optional libraries: Cryptodome-3.15.0, brotli-1.0.9, certifi-2020.12.05, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3
[debug] Proxy map: {'http': 'http://127.0.0.1:1080', 'https': 'http://127.0.0.1:1080'}
[debug] [SpankBang] Extracting URL: https://spankbang.com/81sy6/video/japanese
[SpankBang] 81sy6: Downloading webpage
[SpankBang] 81sy6: Downloading stream JSON
[SpankBang] 81sy6: Downloading m3u8 information
[SpankBang] 81sy6: Downloading m3u8 information
[SpankBang] 81sy6: Downloading m3u8 information
[SpankBang] 81sy6: Downloading m3u8 information
[SpankBang] 81sy6: Downloading m3u8 information
[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), acodec, filesize, fs_approx, tbr, vbr, abr, asr, proto, vext, aext, hasaud, source, id
[debug] Default format spec: bestvideo*+bestaudio/best
[info] 81sy6: Downloading 1 format(s): hls-2564-1
[debug] Invoking hlsnative downloader on "https://hls-uranus.sb-cd.com/hls/1/3/13521102-1080p.mp4/index-v1-a1.m3u8?_tid=13521102&d=1&m=43&secure=nxKExkSSNg5q0juEWzONGA,1680835039"
[hlsnative] Downloading m3u8 manifest
[hlsnative] Total fragments: 810
[download] Destination: Japanese [81sy6].mp4
[download] 0.5% of ~662.91MiB at 151.59KiB/s ETA 27:28 (frag 5/810)
```
I can't reproduce the 403. Maybe it's due to a change in network/proxy code?
```
$ yt-dlp --ignore-config -vF "https://spankbang.com/81sy6/video/japanese"
[debug] Command-line config: ['--ignore-config', '-vF', 'https://spankbang.com/81sy6/video/japanese']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] [392389b7d]
[debug] Lazy loading extractors is disabled
[debug] Python 3.10.10 (CPython x86_64 64bit) - Linux-6.2.8-arch1-1-x86_64-with-glibc2.37 (OpenSSL 3.0.8 7 Feb 2023, glibc 2.37)
[debug] exe versions: ffmpeg 6.0 (setts), ffprobe 6.0
[debug] Optional libraries: Cryptodome-3.12.0, brotlicffi-1.0.9.2, certifi-2022.12.07, mutagen-1.46.0, sqlite3-2.6.0, websockets-10.4
[debug] Proxy map: {}
[debug] Loaded 1791 extractors
[SpankBang] Extracting URL: https://spankbang.com/81sy6/video/japanese
[SpankBang] 81sy6: Downloading webpage
[SpankBang] 81sy6: Downloading stream JSON
[SpankBang] 81sy6: Downloading m3u8 information
[SpankBang] 81sy6: Downloading m3u8 information
[SpankBang] 81sy6: Downloading m3u8 information
[SpankBang] 81sy6: Downloading m3u8 information
[SpankBang] 81sy6: Downloading m3u8 information
[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, filesize, fs_approx, tbr, vbr, abr, asr, proto, vext, aext, hasaud, source, id
[info] Available formats for 81sy6:
ID EXT RESOLUTION FPS │ FILESIZE TBR PROTO │ VCODEC VBR ACODEC ABR
────────────────────────────────────────────────────────────────────────────────────────
240p mp4 240p │ https │ unknown unknown
hls-379-0 mp4 426x240 25 │ ~224.74MiB 380k m3u8 │ avc1.42c01e 380k mp4a.40.2 0k
hls-379-1 mp4 426x240 25 │ ~224.74MiB 380k m3u8 │ avc1.42c01e 380k mp4a.40.2 0k
480p mp4 480p │ https │ unknown unknown
hls-1090-0 mp4 852x480 25 │ ~645.52MiB 1091k m3u8 │ avc1.4d401f 1091k mp4a.40.2 0k
hls-1090-1 mp4 852x480 25 │ ~645.52MiB 1091k m3u8 │ avc1.4d401f 1091k mp4a.40.2 0k
720p mp4 720p │ https │ unknown unknown
hls-1996-0 mp4 1280x720 25 │ ~ 1.15GiB 1996k m3u8 │ avc1.640020 1996k mp4a.40.2 0k
hls-1996-1 mp4 1280x720 25 │ ~ 1.15GiB 1996k m3u8 │ avc1.640020 1996k mp4a.40.2 0k
1080p mp4 1080p │ https │ unknown unknown
hls-2564-0 mp4 1920x1080 25 │ ~ 1.48GiB 2565k m3u8 │ avc1.64002a 2565k mp4a.40.2 0k
hls-2564-1 mp4 1920x1080 25 │ ~ 1.48GiB 2565k m3u8 │ avc1.64002a 2565k mp4a.40.2 0k
```
Termux also gets a `403`
```
yt-dlp --ignore-config -vF https://spankbang.com/782eu/video/band+girl+part+2+1
[debug] Command-line config: ['--ignore-config', '-vF', 'https://spankbang.com/782eu/video/band+girl+part+2+1']
[debug] Encodings: locale utf-8, fs utf-8, pref utf-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] [392389b7d] (pip)
[debug] Python 3.11.2 (CPython aarch64 64bit) - Linux-4.14.309-classified+-aarch64-with-libc (OpenSSL 3.1.0 14 Mar 2023, libc)
[debug] exe versions: none
[debug] Optional libraries: Cryptodome-3.17, brotli-1.0.9, certifi-2022.12.07, mutagen-1.46.0, sqlite3-2.6.0, websockets-10.4
[debug] Proxy map: {}
[debug] Loaded 1786 extractors
[SpankBang] Extracting URL: https://spankbang.com/782eu/video/band+girl+part+2+1
[SpankBang] 782eu: Downloading webpage
ERROR: [SpankBang] 782eu: Unable to download webpage: HTTP Error 403: Forbidden (caused by <HTTPError 403: 'Forbidden'>); please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
File "/data/data/com.termux/files/usr/lib/python3.11/site-packages/yt_dlp/extractor/common.py", line 694, in extract
ie_result = self._real_extract(url)
^^^^^^^^^^^^^^^^^^^^^^^
File "/data/data/com.termux/files/usr/lib/python3.11/site-packages/yt_dlp/extractor/spankbang.py", line 74, in _real_extract
webpage = self._download_webpage(
^^^^^^^^^^^^^^^^^^^^^^^
File "/data/data/com.termux/files/usr/lib/python3.11/site-packages/yt_dlp/extractor/common.py", line 1097, in _download_webpage
return self.__download_webpage(url_or_request, video_id, note, errnote, None, fatal, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/data/com.termux/files/usr/lib/python3.11/site-packages/yt_dlp/extractor/common.py", line 1048, in download_content
res = getattr(self, download_handle.__name__)(url_or_request, video_id, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/data/com.termux/files/usr/lib/python3.11/site-packages/yt_dlp/extractor/common.py", line 882, in _download_webpage_handle
urlh = self._request_webpage(url_or_request, video_id, note, errnote, fatal, data=data, headers=headers, query=query, expected_status=expected_status)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/data/com.termux/files/usr/lib/python3.11/site-packages/yt_dlp/extractor/common.py", line 839, in _request_webpage
raise ExtractorError(errmsg, cause=err)
File "/data/data/com.termux/files/usr/lib/python3.11/site-packages/yt_dlp/extractor/common.py", line 821, in _request_webpage
return self._downloader.urlopen(self._create_request(url_or_request, data, headers, query))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/data/com.termux/files/usr/lib/python3.11/site-packages/yt_dlp/YoutubeDL.py", line 3742, in urlopen
return self._opener.open(req, timeout=self._socket_timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/data/com.termux/files/usr/lib/python3.11/urllib/request.py", line 525, in open
response = meth(req, response)
^^^^^^^^^^^^^^^^^^^
File "/data/data/com.termux/files/usr/lib/python3.11/urllib/request.py", line 634, in http_response
response = self.parent.error(
^^^^^^^^^^^^^^^^^^
File "/data/data/com.termux/files/usr/lib/python3.11/urllib/request.py", line 563, in error
return self._call_chain(*args)
^^^^^^^^^^^^^^^^^^^^^^^
File "/data/data/com.termux/files/usr/lib/python3.11/urllib/request.py", line 496, in _call_chain
result = func(*args)
^^^^^^^^^^^
File "/data/data/com.termux/files/usr/lib/python3.11/urllib/request.py", line 643, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 403: Forbidden
```
yt-dlp [email protected] isn't showing 403 for me with `81sy6`, but yt-dl (essentially identical extractor code) is getting 403 on the webpage itself, as is is _wget_, not showing any page content in the response. UA 'Mozilla/5.0' may break through CloudFlare: it works with yt-dl and _wget_ now, though not when I first tried.
Hi @bashonly, the proxy works on my WSL with version 2022.07.18. The WSL instance is on the same computer, so both use the same proxy.
```bash
[debug] Command-line config: ['-v', '--proxy', 'http://127.0.0.1:1080', 'https://spankbang.com/81sy6/video/japanese']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out UTF-8, error UTF-8, screen UTF-8
[debug] yt-dlp version 2022.07.18 [135f05ef6]
[debug] Lazy loading extractors is disabled
[debug] Python 3.7.5 (CPython 64bit) - Linux-4.4.0-22621-Microsoft-x86_64-with-Ubuntu-18.04-bionic (glibc 2.26)
[debug] Checking exe version: ffmpeg -bsfs
[debug] Checking exe version: ffprobe -bsfs
[debug] exe versions: ffmpeg 3.4.8, ffprobe 3.4.8
[debug] Optional libraries: Cryptodome-3.15.0, brotli-1.0.9, certifi-2020.12.05, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3
[debug] Proxy map: {'http': 'http://127.0.0.1:1080', 'https': 'http://127.0.0.1:1080'}
[debug] [SpankBang] Extracting URL: https://spankbang.com/81sy6/video/japanese
[SpankBang] 81sy6: Downloading webpage
[SpankBang] 81sy6: Downloading stream JSON
[SpankBang] 81sy6: Downloading m3u8 information
[SpankBang] 81sy6: Downloading m3u8 information
[SpankBang] 81sy6: Downloading m3u8 information
[SpankBang] 81sy6: Downloading m3u8 information
[SpankBang] 81sy6: Downloading m3u8 information
[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), acodec, filesize, fs_approx, tbr, vbr, abr, asr, proto, vext, aext, hasaud, source, id
[debug] Default format spec: bestvideo*+bestaudio/best
[info] 81sy6: Downloading 1 format(s): hls-2564-1
[debug] Invoking hlsnative downloader on "https://hls-uranus.sb-cd.com/hls/1/3/13521102-1080p.mp4/index-v1-a1.m3u8?_tid=13521102&d=1&m=43&secure=nxKExkSSNg5q0juEWzONGA,1680835039"
[hlsnative] Downloading m3u8 manifest
[hlsnative] Total fragments: 810
[download] Destination: Japanese [81sy6].mp4
[download] 0.5% of ~662.91MiB at 151.59KiB/s ETA 27:28 (frag 5/810)
```
@dirkf if I set the UA to 'Mozilla/5.0', I get this output
```bash
[debug] Command-line config: ['-v', '--proxy', '127.0.0.1:1080', '--add-header', 'User-Agent:Mozilla/5.0', 'http://127.0.0.1:1080', 'https://spankbang.com/81sy6/video/japanese']
[debug] Encodings: locale cp936, fs utf-8, pref cp936, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] [392389b7d]
[debug] Lazy loading extractors is disabled
[debug] Python 3.8.10 (CPython AMD64 32bit) - Windows-10-10.0.22621-SP0 (OpenSSL 1.1.1k 25 Mar 2021)
[debug] exe versions: ffmpeg 4.2.1 (fdk), ffprobe 4.2.1
[debug] Optional libraries: Cryptodome-3.17, brotli-1.0.9, certifi-2020.12.05, mutagen-1.46.0, sqlite3-2.6.0, websockets-10.4
[debug] Proxy map: {'http': '127.0.0.1:1080', 'https': '127.0.0.1:1080'}
[debug] Loaded 1791 extractors
[generic] Extracting URL: http://127.0.0.1:1080
[generic] 127.0.0: Downloading webpage
ERROR: [generic] None: Unable to download webpage: HTTP Error 400: Invalid header received from client (caused by <HTTPError 400: 'Invalid header received from client'>); please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\site-packages\yt_dlp-2023.3.4-py3.8.egg\yt_dlp\extractor\common.py", line 694, in extract
ie_result = self._real_extract(url)
File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\site-packages\yt_dlp-2023.3.4-py3.8.egg\yt_dlp\extractor\generic.py", line 2385, in _real_extract
full_response = self._request_webpage(url, video_id, headers={
File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\site-packages\yt_dlp-2023.3.4-py3.8.egg\yt_dlp\extractor\common.py", line 842, in _request_webpage
raise ExtractorError(errmsg, cause=err)
File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\site-packages\yt_dlp-2023.3.4-py3.8.egg\yt_dlp\extractor\common.py", line 824, in _request_webpage
return self._downloader.urlopen(self._create_request(url_or_request, data, headers, query))
File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\site-packages\yt_dlp-2023.3.4-py3.8.egg\yt_dlp\YoutubeDL.py", line 3745, in urlopen
return self._opener.open(req, timeout=self._socket_timeout)
File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\urllib\request.py", line 531, in open
response = meth(req, response)
File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\urllib\request.py", line 640, in http_response
response = self.parent.error(
File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\urllib\request.py", line 569, in error
return self._call_chain(*args)
File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\urllib\request.py", line 502, in _call_chain
result = func(*args)
File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\urllib\request.py", line 649, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 400: Invalid header received from client
[SpankBang] Extracting URL: https://spankbang.com/81sy6/video/japanese
[SpankBang] 81sy6: Downloading webpage
ERROR: [SpankBang] 81sy6: Unable to download webpage: HTTP Error 403: Forbidden (caused by <HTTPError 403: 'Forbidden'>); please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\site-packages\yt_dlp-2023.3.4-py3.8.egg\yt_dlp\extractor\common.py", line 694, in extract
ie_result = self._real_extract(url)
File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\site-packages\yt_dlp-2023.3.4-py3.8.egg\yt_dlp\extractor\spankbang.py", line 74, in _real_extract
webpage = self._download_webpage(
File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\site-packages\yt_dlp-2023.3.4-py3.8.egg\yt_dlp\extractor\common.py", line 1100, in _download_webpage
return self.__download_webpage(url_or_request, video_id, note, errnote, None, fatal, *args, **kwargs)
File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\site-packages\yt_dlp-2023.3.4-py3.8.egg\yt_dlp\extractor\common.py", line 1051, in download_content
res = getattr(self, download_handle.__name__)(url_or_request, video_id, **kwargs)
File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\site-packages\yt_dlp-2023.3.4-py3.8.egg\yt_dlp\extractor\common.py", line 885, in _download_webpage_handle
urlh = self._request_webpage(url_or_request, video_id, note, errnote, fatal, data=data, headers=headers, query=query, expected_status=expected_status)
File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\site-packages\yt_dlp-2023.3.4-py3.8.egg\yt_dlp\extractor\common.py", line 842, in _request_webpage
raise ExtractorError(errmsg, cause=err)
File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\site-packages\yt_dlp-2023.3.4-py3.8.egg\yt_dlp\extractor\common.py", line 824, in _request_webpage
return self._downloader.urlopen(self._create_request(url_or_request, data, headers, query))
File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\site-packages\yt_dlp-2023.3.4-py3.8.egg\yt_dlp\YoutubeDL.py", line 3745, in urlopen
return self._opener.open(req, timeout=self._socket_timeout)
File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\urllib\request.py", line 531, in open
response = meth(req, response)
File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\urllib\request.py", line 640, in http_response
response = self.parent.error(
File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\urllib\request.py", line 569, in error
return self._call_chain(*args)
File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\urllib\request.py", line 502, in _call_chain
result = func(*args)
File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\urllib\request.py", line 649, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 403: Forbidden
```
workaround: try adding `--legacy-server-connect` to your command
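For example, with one of the URLs from this thread: `yt-dlp --legacy-server-connect 'https://spankbang.com/81sy6/video/japanese'`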
I was able to fix this by switching to a different proxy/vpn
In my case, `'Cookie': 'country=US'` in `_real_extract` of `extractor/spankbang.py` needs to be manually replaced with the Cookie header. Note that setting the user-agent will cause a 403 error, even after replacing `'Cookie': 'country=US'`. Additionally, the cookie must be added to the `self._download_json()` method. Also, the cookie needs to be renewed after it expires.
Details [in this comment](https://github.com/yt-dlp/yt-dlp/issues/6545#issuecomment-1609300876).
> In my case, `'Cookie': 'country=US'` in `_real_extract` of `extractor/spankbang.py` needs to be manually replaced with the Cookie header. Note that setting the user-agent will cause a 403 error, even after replacing `'Cookie': 'country=US'`. Additionally, the cookie must be added to the `self._download_json()` method. Also, the cookie needs to be renewed after it expires.
can you go into greater detail please?
> > In my case, `'Cookie': 'country=US'` in `_real_extract` of `extractor/spankbang.py` needs to be manually replaced with the Cookie header. Note that setting the user-agent will cause a 403 error, even after replacing `'Cookie': 'country=US'`. Additionally, the cookie must be added to the `self._download_json()` method. Also, the cookie needs to be renewed after it expires.
>
> can you go into greater detail please?
Simply right-click on the video web page -> "Inspect" -> "Network" tab -> click the upper-left "Clear" button -> Ctrl+R to reload the page -> click the current page URL item (normally the first item, or the one with a blue icon on the left side) -> right panel -> "Headers" tab -> "Request Headers" section. Copy the **Cookie** value (and copy the **User-Agent** value for later) and paste it into the **4 Lines to edit** below in the file `extractor/spankbang.py` (you can find the parent folder of `extractor/spankbang.py` with the command `python3 -c "import yt_dlp; print(yt_dlp.__path__[0])"`). Save the edited file and rerun yt-dlp; there should be no more 403 errors.
**4 Lines to edit:**
[1]
Add this line `MY_COOKIE = ...` above the line `class SpankBangIE(InfoExtractor):`, with your copied cookie value. In the future, this is the only line you need to edit when renewing the cookie, e.g.:
```
MY_COOKIE = 'paste your copied cookie value, surrounded with single quotes. No extra space'
class SpankBangIE(InfoExtractor):
```
[2]
`url, playlist_id, headers={'Cookie': 'country=US; mobile=on'})`
edit it to:
`url, playlist_id, headers={'Cookie': MY_COOKIE})`
and:
[3]
`video_id, headers={'Cookie': 'country=US'})`
edit it to:
`video_id, headers={'Cookie': MY_COOKIE})`
[4]
Then the cookie also needs to be added under `self._download_json`, e.g.:
```
}), headers={
'Referer': url,
'X-Requested-With': 'XMLHttpRequest',
})
```
edit it to:
```
}), headers={
'Cookie': MY_COOKIE,
'Referer': url,
'X-Requested-With': 'XMLHttpRequest',
})
```
Note that **the indentation before the lines** must be typed as spaces, not tabs, and the total indentation must match the original.
Note that the cookie needs to be renewed after it expires or the IP changes. You may need to clear and reload the page if the first cookie doesn't work. Make sure you copy the complete cookie value. Updating yt-dlp may overwrite this code, requiring you to redo the edits.
You also need to add your latest copied **User-Agent** value (obtained with the same steps as the **Cookie** value; it probably changes whenever the web browser updates) to your command, e.g. `--add-headers 'User-Agent:paste your copied user-agent value, surrounded with single quotes'`:
`yt-dlp --add-headers 'User-Agent:Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36' ...`
Note that the user agent must come from the **same web browser** as the cookie above.
[UPDATE 2024]: if you encounter `HTTP Error 502: Bad Gateway`, `extractor/spankbang.py` also needs to be edited to disable .m3u8 and allow only .mp4:
```
ext = determine_ext(f_url)
''' # Add this line to start disabling m3u8
if format_id.startswith('m3u8') or ext == 'm3u8':
formats.extend(self._extract_m3u8_formats(
f_url, video_id, 'mp4', entry_protocol='m3u8_native',
m3u8_id='hls', fatal=False))
elif format_id.startswith('mpd') or ext == 'mpd':
formats.extend(self._extract_mpd_formats(
f_url, video_id, mpd_id='dash', fatal=False))
elif ext == 'mp4' or f.get('width') or f.get('height'):
''' # Add this line to end the disabled block
if ext == 'mp4' or f.get('width') or f.get('height'):  # newly added
```
Tried the above and still getting 403 forbidden. Verified formatting and complete cookie value.
The extractor needs to be fixed so that it's not hardcoding a cookie header into the request. It should check if the user has passed the necessary cookies (via `--cookies` or `--cookies-from-browser`), and if not, then set cookie(s) to the cookiejar before the request(s)
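A minimal sketch of that approach, assuming yt-dlp's existing `_get_cookies`/`_set_cookie` extractor helpers (the rest of the extractor is elided; this is not the final fix):
```python
# Hedged sketch: only inject the default 'country' cookie when the user
# hasn't already supplied one (cookies passed via --cookies or
# --cookies-from-browser are loaded into the same cookiejar first).
def _real_extract(self, url):
    video_id = self._match_id(url)
    if 'country' not in self._get_cookies('https://spankbang.com/'):
        self._set_cookie('.spankbang.com', 'country', 'US')
    webpage = self._download_webpage(
        url.replace(f'/{video_id}/embed', f'/{video_id}/video'), video_id)
    ...
```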
> The extractor needs to be fixed so that it's not hardcoding a cookie header into the request. It should check if the user has passed the necessary cookies (via `--cookies` or `--cookies-from-browser`), and if not, then set cookie(s) to the cookiejar before the request(s)
I found one of your previous suggestions of adding `--legacy-server-connect` to my config file, and it seems to be working in my testing so far.
> adding --legacy-server-connect to my config file
don't do this. only use this option when needed
> > adding --legacy-server-connect to my config file
>
> don't do this. only use this option when needed
Good call. I created a separate config and batch file to use that option when needed.
Confirmed. Still happening on latest. Not happening on any other site.
> In my case, `'Cookie': 'country=US'` in `_real_extract` of `extractor/spankbang.py` needs to be manually replaced with the Cookie header. Note that setting the user-agent will cause a 403 error, even after replacing `'Cookie': 'country=US'`. Additionally, the cookie must be added to the `self._download_json()` method. Also, the cookie needs to be renewed after it expires.
This works, but the cookie expires really fast. It needs to be renewed after about 10-20 downloads.
Not sure if it's possible to add this import to the extractor, but using this package bypasses Cloudflare's 403 page and returns the real page source:
`pip install cloudscraper`
```python
import cloudscraper
url = "https://spankbang.com/5icow/video/skylarvox08"
scraper = cloudscraper.create_scraper()
content = scraper.get(url).content
print(content) # bytes
# or
print(content.decode('utf-8'))
```
#7595, once completed, should fix this
Has anyone got a workaround? I've tried almost all the answers. I'm using the latest version of yt-dlp with Python 3.11.4; YouTube works fine, but SpankBang does not, and like OP I get "8iidg: Unable to download webpage: HTTP Error 403: Forbidden (caused by <HTTPError 403: 'Forbidden'>); please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U"
One workaround I found is working perfectly.
Just change the URL from https://spankbang.com/5icow/video/skylarvox08 to https://spankbang.party/5icow/video/skylarvox08
Changing the domain from spankbang.com to spankbang.party worked perfectly for all URLs I tested.
> One workaround I found is working perfectly.
>
> Just change the URL from https://spankbang.com/5icow/video/skylarvox08 to https://spankbang.party/5icow/video/skylarvox08
>
> Changing the domain from spankbang.com to spankbang.party worked perfectly for all URLs I tested.
This will only allow 720p downloads
@cosify The above URL has 4K resolution but maybe it's a yt-dlp issue that it only gets up to 720p.
You can check by getting the HTML content of the URL and searching for **var stream_data = {**
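A quick hedged sketch of that manual check using only the standard library (the URL is from earlier in the thread, and a Cloudflare block may still return a 403 here):
```python
import urllib.request

# Fetch the page HTML and look for the stream_data marker mentioned above.
req = urllib.request.Request(
    'https://spankbang.party/5icow/video/skylarvox08',
    headers={'User-Agent': 'Mozilla/5.0'})
html = urllib.request.urlopen(req).read().decode('utf-8', 'replace')
print('var stream_data = {' in html)
```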
Isn't the HTML5 format `4` found by the generic extractor 4320x2160 (actually 2x2160x2160 since this video is 3D)?
If the SB extractor is tweaked to recognise `.party` and to use that root domain for its `stream JSON` retrieval:
```console
$ yt-dlp -v -F 'https://spankbang.party/5icow/video/skylarvox08'
[debug] Command-line config: ['-v', '-F', 'https://spankbang.party/5icow/video/skylarvox08']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] [812cdfa06] (source)
[debug] Lazy loading extractors is disabled
[debug] Git HEAD: de4cf77ec
[debug] Python 3.9.16 (CPython i686 32bit) - Linux-4.4.0-210-generic-i686-with-glibc2.23 (OpenSSL 1.1.1v 1 Aug 2023, glibc 2.23)
[debug] exe versions: ffmpeg 4.3, ffprobe 4.3
[debug] Optional libraries: Cryptodome-3.11.0, certifi-2019.11.28, secretstorage-3.2.0, sqlite3-2.6.0
[debug] Proxy map: {}
[debug] Loaded 1851 extractors
[SpankBang] Extracting URL: https://spankbang.party/5icow/video/skylarvox08
[SpankBang] 5icow: Downloading webpage
[SpankBang] 5icow: Downloading stream JSON
[SpankBang] 5icow: Downloading m3u8 information
[SpankBang] 5icow: Downloading m3u8 information
[SpankBang] 5icow: Downloading m3u8 information
[SpankBang] 5icow: Downloading m3u8 information
[SpankBang] 5icow: Downloading m3u8 information
[SpankBang] 5icow: Downloading m3u8 information
[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id
[info] Available formats for 5icow:
ID EXT RESOLUTION FPS │ FILESIZE TBR PROTO │ VCODEC ACODEC
──────────────────────────────────────────────────────────────────────────────
240p mp4 240p │ https │ unknown unknown
hls-231-0 mp4 480x240 30 │ ~ 60.42MiB 231k m3u8 │ avc1.42c01e mp4a.40.2
hls-231-1 mp4 480x240 30 │ ~ 60.42MiB 231k m3u8 │ avc1.42c01e mp4a.40.2
480p mp4 480p │ https │ unknown unknown
hls-1201-0 mp4 960x480 30 │ ~314.06MiB 1202k m3u8 │ avc1.4d401f mp4a.40.2
hls-1201-1 mp4 960x480 30 │ ~314.06MiB 1202k m3u8 │ avc1.4d401f mp4a.40.2
720p mp4 720p │ https │ unknown unknown
hls-2172-0 mp4 1440x720 30 │ ~567.77MiB 2172k m3u8 │ avc1.640020 mp4a.40.2
hls-2172-1 mp4 1440x720 30 │ ~567.77MiB 2172k m3u8 │ avc1.640020 mp4a.40.2
1080p mp4 1080p │ https │ unknown unknown
hls-3390-0 mp4 2160x1080 30 │ ~886.07MiB 3390k m3u8 │ avc1.64002a mp4a.40.2
hls-3390-1 mp4 2160x1080 30 │ ~886.07MiB 3390k m3u8 │ avc1.64002a mp4a.40.2
4k mp4 2160p │ https │ unknown unknown
hls-5543-0 mp4 4320x2160 30 │ ~ 1.41GiB 5543k m3u8 │ avc1.640034 mp4a.40.2
hls-5543-1 mp4 4320x2160 30 │ ~ 1.41GiB 5543k m3u8 │ avc1.640034 mp4a.40.2
$
```
@dirkf any plans to submit a PR? I would assume that it would be within scope to automatically use spankbang.party's server, unless that's an unofficial mirror, which I highly doubt
Seems to work with the .party server, but it doesn't work with playlist links.
I've noticed yt-dlp using TLSv1_2 while the web browser uses TLSv1_3.
You can **temporarily** add TLSv1_3 in `yt_dlp/networking/_helper.py`: _(Note that this affects all websites, so you should revert these changes if other websites stop working.)_
```
#context.minimum_version = ssl.TLSVersion.TLSv1_2
context.minimum_version = ssl.TLSVersion.TLSv1_3
```
Similar issue [#25437](https://github.com/ytdl-org/youtube-dl/issues/25437)
I've got the same issue. Logs:
```
yt-dlp.exe -v https://spankbang.com/7ihal/video/perfect+body+strip+tease+3+seductive+cam+show+dance+shows+off+everything+she+s+got
[debug] Command-line config: ['-v', '--proxy', 'socks5://127.0.0.1:9999', 'https://spankbang.com/7ihal/video/perfect+body+strip+tease+3+seductive+cam+show+dance+shows+off+everything+she+s+got']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] [088add956] (win_exe)
[debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.22621-SP0 (OpenSSL 1.1.1k 25 Mar 2021)
[debug] exe versions: none
[debug] Optional libraries: Cryptodome-3.19.0, brotli-1.1.0, certifi-2023.07.22, mutagen-1.47.0, sqlite3-3.35.5, websockets-11.0.3
[debug] Proxy map: {'all': 'socks5://127.0.0.1:9999'}
[debug] Loaded 1886 extractors
[SpankBang] Extracting URL: https://spankbang.com/7ihal/video/perfect+body+strip+tease+3+seductive+cam+show+dance+shows+off+everything+she+s+got
[SpankBang] 7ihal: Downloading webpage
ERROR: [SpankBang] 7ihal: Unable to download webpage: HTTP Error 403: Forbidden (caused by <HTTPError 403: Forbidden>); please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
File "yt_dlp\extractor\common.py", line 715, in extract
File "yt_dlp\extractor\spankbang.py", line 74, in _real_extract
File "yt_dlp\extractor\common.py", line 1118, in _download_webpage
File "yt_dlp\extractor\common.py", line 1069, in download_content
File "yt_dlp\extractor\common.py", line 903, in _download_webpage_handle
File "yt_dlp\extractor\common.py", line 860, in _request_webpage
File "yt_dlp\networking\_urllib.py", line 410, in _send
File "urllib\request.py", line 531, in open
File "urllib\request.py", line 640, in http_response
File "urllib\request.py", line 569, in error
File "urllib\request.py", line 502, in _call_chain
File "urllib\request.py", line 649, in http_error_default
urllib.error.HTTPError: HTTP Error 403: Forbidden
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "yt_dlp\YoutubeDL.py", line 4051, in urlopen
File "yt_dlp\networking\common.py", line 114, in send
File "yt_dlp\networking\_helper.py", line 204, in wrapper
File "yt_dlp\networking\common.py", line 325, in send
File "yt_dlp\networking\_urllib.py", line 415, in _send
yt_dlp.networking.exceptions.HTTPError: HTTP Error 403: Forbidden
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "yt_dlp\extractor\common.py", line 847, in _request_webpage
File "yt_dlp\YoutubeDL.py", line 4070, in urlopen
yt_dlp.networking.exceptions._CompatHTTPError: HTTP Error 403: Forbidden
```
> I've noticed yt-dlp using TLSv1_2 while the web browser uses TLSv1_3.
>
> You can **temporarily** add TLSv1_3 in `yt_dlp/networking/_helper.py` (note that this affects all requests, of course):
>
> ```
>
> #context.minimum_version = ssl.TLSVersion.TLSv1_2
> context.minimum_version = ssl.TLSVersion.TLSv1_3
> ```
>
> Similar issue [#25437](https://github.com/ytdl-org/youtube-dl/issues/25437)
This worked for me. I think this module (`_helper.py`) is too global a place to change that value for the whole project, but it would be nice if specific extractors/downloaders could set that value to work around this issue.
> I would assume that it would be within scope to automatically use spankbang.party's server, unless that's an unofficial mirror, which I highly doubt
Actually, I'm not sure; spankbang.party looks kinda dodgy. Is it possible that it's an unofficial mirror?
The .party and .com domains are both registered via NameCheap from `Capital Region` (Reykjavik, apparently) with the same authoritative name servers in ns.cloudflare.com. I wouldn't worry.
> The .party and .com domains are both registered via NameCheap from Capital Region (Reykjavik, apparently)
That doesn't necessarily mean anything; that's just NameCheap's WHOIS privacy thing.
TRY => https://github.com/0xUndetectable/Spankbang_scraper/releases/tag/v0.1
I found a weird discrepancy: my Windows 10 desktop on the newer version gets the 403 error as expected in this thread. I tried bringing in my browser cookies as well as the `--legacy-server-connect` option, and it didn't change from a 403 error.
```
PS E:\> yt-dlp --version
2023.12.30
PS E:\> yt-dlp --verbose -S "res:1080" --add-metadata --no-check-certificates -N 4 --cookies cookies.txt https://spankbang.com/3o9ie/playlist/swaglord
[debug] Command-line config: ['--verbose', '-S', 'res:1080', '--add-metadata', '--no-check-certificates', '-N', '4', '--cookies', 'cookies.txt', 'https://spankbang.com/3o9ie/playlist/swaglord']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [f10589e34] (win_exe)
[debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1k 25 Mar 2021)
[debug] exe versions: ffmpeg 6.1.1-essentials_build-www.gyan.dev (setts), ffprobe 6.1.1-essentials_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.19.1, brotli-1.1.0, certifi-2023.11.17, mutagen-1.47.0, requests-2.31.0, sqlite3-3.35.5, urllib3-2.1.0, websockets-12.0
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets
[debug] Loaded 1798 extractors
[SpankBangPlaylist] Extracting URL: https://spankbang.com/3o9ie/playlist/swaglord
[SpankBangPlaylist] 3o9ie: Downloading webpage
ERROR: [SpankBangPlaylist] 3o9ie: Unable to download webpage: HTTP Error 403: Forbidden (caused by <HTTPError 403: Forbidden>); please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
File "yt_dlp\extractor\common.py", line 718, in extract
File "yt_dlp\extractor\spankbang.py", line 181, in _real_extract
File "yt_dlp\extractor\common.py", line 1121, in _download_webpage
File "yt_dlp\extractor\common.py", line 1072, in download_content
File "yt_dlp\extractor\common.py", line 906, in _download_webpage_handle
File "yt_dlp\extractor\common.py", line 863, in _request_webpage
File "yt_dlp\YoutubeDL.py", line 4082, in urlopen
File "yt_dlp\networking\common.py", line 114, in send
File "yt_dlp\networking\_helper.py", line 204, in wrapper
File "yt_dlp\networking\common.py", line 325, in send
File "yt_dlp\networking\_requests.py", line 343, in _send
yt_dlp.networking.exceptions.HTTPError: HTTP Error 403: Forbidden
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "yt_dlp\extractor\common.py", line 850, in _request_webpage
File "yt_dlp\YoutubeDL.py", line 4114, in urlopen
yt_dlp.networking.exceptions._CompatHTTPError: HTTP Error 403: Forbidden
PS E:\> yt-dlp --verbose -S "res:1080" --add-metadata --no-check-certificates -N 4 --legacy-server-connect https://spankbang.com/3o9ie/playlist/swaglord
[debug] Command-line config: ['--verbose', '-S', 'res:1080', '--add-metadata', '--no-check-certificates', '-N', '4', '--legacy-server-connect', 'https://spankbang.com/3o9ie/playlist/swaglord']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [f10589e34] (win_exe)
[debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1k 25 Mar 2021)
[debug] exe versions: ffmpeg 6.1.1-essentials_build-www.gyan.dev (setts), ffprobe 6.1.1-essentials_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.19.1, brotli-1.1.0, certifi-2023.11.17, mutagen-1.47.0, requests-2.31.0, sqlite3-3.35.5, urllib3-2.1.0, websockets-12.0
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets
[debug] Loaded 1798 extractors
[SpankBangPlaylist] Extracting URL: https://spankbang.com/3o9ie/playlist/swaglord
[SpankBangPlaylist] 3o9ie: Downloading webpage
ERROR: [SpankBangPlaylist] 3o9ie: Unable to download webpage: HTTP Error 403: Forbidden (caused by <HTTPError 403: Forbidden>); please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
File "yt_dlp\extractor\common.py", line 718, in extract
File "yt_dlp\extractor\spankbang.py", line 181, in _real_extract
File "yt_dlp\extractor\common.py", line 1121, in _download_webpage
File "yt_dlp\extractor\common.py", line 1072, in download_content
File "yt_dlp\extractor\common.py", line 906, in _download_webpage_handle
File "yt_dlp\extractor\common.py", line 863, in _request_webpage
File "yt_dlp\YoutubeDL.py", line 4082, in urlopen
File "yt_dlp\networking\common.py", line 114, in send
File "yt_dlp\networking\_helper.py", line 204, in wrapper
File "yt_dlp\networking\common.py", line 325, in send
File "yt_dlp\networking\_requests.py", line 343, in _send
yt_dlp.networking.exceptions.HTTPError: HTTP Error 403: Forbidden
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "yt_dlp\extractor\common.py", line 850, in _request_webpage
File "yt_dlp\YoutubeDL.py", line 4114, in urlopen
yt_dlp.networking.exceptions._CompatHTTPError: HTTP Error 403: Forbidden
```
However, weirdly, the older version on my CentOS 7 server was able to download the playlist and continue without issue using the `--legacy-server-connect` option. Below is a partial log from when I was downloading the 4th video in the playlist.
```
$ yt-dlp --version
2022.07.18
$ yt-dlp --verbose -S "res:1080" --add-metadata --no-check-certificates -N 4 --legacy-server-connect https://spankbang.com/3o9ie/playlist/swaglord
[debug] Command-line config: ['--verbose', '-S', 'res:1080', '--add-metadata', '--no-check-certificates', '-N', '4', '--legacy-server-connect', 'https://spankbang.com/3o9ie/playlist/swaglord']
DeprecationWarning: Support for Python version 3.6 has been deprecated. See https://github.com/yt-dlp/yt-dlp/issues/3764 for more details.
You will no longer receive updates on this version! Please update to Python 3.7 or above
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out UTF-8, error UTF-8, screen UTF-8
[debug] yt-dlp version 2022.07.18 [135f05e]
[debug] Python 3.6.8 (CPython 64bit) - Linux-3.10.0-1160.99.1.el7.x86_64-x86_64-with-centos-7.9.2009-Core (glibc 2.3.4)
[debug] Checking exe version: ffmpeg -bsfs
[debug] Checking exe version: ffprobe -bsfs
[debug] exe versions: ffmpeg 2.8.15 (fdk,needs_adtstoasc), ffprobe 2.8.15, rtmpdump 2.4
[debug] Optional libraries: Cryptodome-3.17, brotli-1.0.9, certifi-2021.05.30, mutagen-1.45.1, sqlite3-2.6.0, websockets-9.1
[debug] Proxy map: {}
[debug] [SpankBangPlaylist] Extracting URL: https://spankbang.com/3o9ie/playlist/swaglord
[SpankBangPlaylist] 3o9ie: Downloading webpage
WARNING: [SpankBangPlaylist] unable to extract playlist title; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
[download] Downloading playlist: 3o9ie
[SpankBangPlaylist] Playlist 3o9ie: Downloading 75 videos of 75
[download] Downloading video 1 of 75
[debug] [SpankBang] Extracting URL: https://spankbang.com/3o9ie-jvumw7/playlist/swaglord
[SpankBang] jvumw7: Downloading webpage
[SpankBang] jvumw7: Downloading stream JSON
[SpankBang] jvumw7: Downloading m3u8 information
[SpankBang] jvumw7: Downloading m3u8 information
[SpankBang] jvumw7: Downloading m3u8 information
[debug] Sort order given by user: res:1080
[debug] Formats sorted by: hasvid, ie_pref, res:1080(1080.0), lang, quality, fps, hdr:12(7), vcodec:vp9.2(10), acodec, filesize, fs_approx, tbr, vbr, abr, asr, proto, vext, aext, hasaud, source, id
[debug] Default format spec: bestvideo*+bestaudio/best
[info] jvumw7: Downloading 1 format(s): hls-869-1
[debug] Invoking hlsnative downloader on "https://hls-uranus.sb-cd.com/hls/5/0/5006742-480p.mp4/index-v1-a1.m3u8?_tid=5006742&d=6&m=6&secure=8mD5I3Hy9Vepse69EaBHoA,1708574985"
[hlsnative] Downloading m3u8 manifest
[hlsnative] Total fragments: 111
[download] Destination: Cam-Whore with puffy Nipples [jvumw7].mp4
WARNING: The download speed shown is only of one thread. This is a known issue and patches are welcome
[download] 100% of 70.00MiB in 00:19
[debug] ffprobe command line: ffprobe -hide_banner -show_format -show_streams -print_format json 'file:Cam-Whore with puffy Nipples [jvumw7].mp4'
[FixupM3u8] Fixing MPEG-TS in MP4 container of "Cam-Whore with puffy Nipples [jvumw7].mp4"
[debug] ffmpeg command line: ffmpeg -y -loglevel repeat+info -i 'file:Cam-Whore with puffy Nipples [jvumw7].mp4' -map 0 -dn -ignore_unknown -c copy -f mp4 -bsf:a aac_adtstoasc -movflags +faststart 'file:Cam-Whore with puffy Nipples [jvumw7].temp.mp4'
[Metadata] Adding metadata to "Cam-Whore with puffy Nipples [jvumw7].mp4"
[debug] ffmpeg command line: ffmpeg -y -loglevel repeat+info -i 'file:Cam-Whore with puffy Nipples [jvumw7].mp4' -map 0 -dn -ignore_unknown -c copy -write_id3v1 1 -metadata 'title=Cam-Whore with puffy Nipples' -metadata date=20190214 -metadata 'description= Girl Teasing and toying' -metadata 'synopsis= Girl Teasing and toying' -metadata purl=https://spankbang.com/3o9ie-jvumw7/playlist/swaglord -metadata comment=https://spankbang.com/3o9ie-jvumw7/playlist/swaglord -metadata artist=maxi-moll -movflags +faststart 'file:Cam-Whore with puffy Nipples [jvumw7].temp.mp4'
[download] Downloading video 2 of 75
[debug] [SpankBang] Extracting URL: https://spankbang.com/3o9ie-jvumw7/playlist/swaglord
[SpankBang] jvumw7: Downloading webpage
[SpankBang] jvumw7: Downloading stream JSON
[SpankBang] jvumw7: Downloading m3u8 information
[SpankBang] jvumw7: Downloading m3u8 information
[SpankBang] jvumw7: Downloading m3u8 information
[debug] Sort order given by user: res:1080
[debug] Formats sorted by: hasvid, ie_pref, res:1080(1080.0), lang, quality, fps, hdr:12(7), vcodec:vp9.2(10), acodec, filesize, fs_approx, tbr, vbr, abr, asr, proto, vext, aext, hasaud, source, id
[debug] Default format spec: bestvideo*+bestaudio/best
[info] jvumw7: Downloading 1 format(s): hls-869-1
[debug] Invoking hlsnative downloader on "https://hls-uranus.sb-cd.com/hls/5/0/5006742-480p.mp4/index-v1-a1.m3u8?_tid=5006742&d=6&m=6&secure=8mD5I3Hy9Vepse69EaBHoA,1708574985"
[download] Cam-Whore with puffy Nipples [jvumw7].mp4 has already been downloaded
[download] 100% of 68.08MiB
[Metadata] Adding metadata to "Cam-Whore with puffy Nipples [jvumw7].mp4"
[debug] ffmpeg command line: ffmpeg -y -loglevel repeat+info -i 'file:Cam-Whore with puffy Nipples [jvumw7].mp4' -map 0 -dn -ignore_unknown -c copy -write_id3v1 1 -metadata 'title=Cam-Whore with puffy Nipples' -metadata date=20190214 -metadata 'description= Girl Teasing and toying' -metadata 'synopsis= Girl Teasing and toying' -metadata purl=https://spankbang.com/3o9ie-jvumw7/playlist/swaglord -metadata comment=https://spankbang.com/3o9ie-jvumw7/playlist/swaglord -metadata artist=maxi-moll -movflags +faststart 'file:Cam-Whore with puffy Nipples [jvumw7].temp.mp4'
[download] Downloading video 3 of 75
[debug] [SpankBang] Extracting URL: https://spankbang.com/3o9ie-jvumw7/playlist/swaglord
[SpankBang] jvumw7: Downloading webpage
[SpankBang] jvumw7: Downloading stream JSON
[SpankBang] jvumw7: Downloading m3u8 information
[SpankBang] jvumw7: Downloading m3u8 information
[SpankBang] jvumw7: Downloading m3u8 information
[debug] Sort order given by user: res:1080
[debug] Formats sorted by: hasvid, ie_pref, res:1080(1080.0), lang, quality, fps, hdr:12(7), vcodec:vp9.2(10), acodec, filesize, fs_approx, tbr, vbr, abr, asr, proto, vext, aext, hasaud, source, id
[debug] Default format spec: bestvideo*+bestaudio/best
[info] jvumw7: Downloading 1 format(s): hls-869-1
[debug] Invoking hlsnative downloader on "https://hls-uranus.sb-cd.com/hls/5/0/5006742-480p.mp4/index-v1-a1.m3u8?_tid=5006742&d=6&m=6&secure=8mD5I3Hy9Vepse69EaBHoA,1708574985"
[download] Cam-Whore with puffy Nipples [jvumw7].mp4 has already been downloaded
[download] 100% of 68.08MiB
[Metadata] Adding metadata to "Cam-Whore with puffy Nipples [jvumw7].mp4"
[debug] ffmpeg command line: ffmpeg -y -loglevel repeat+info -i 'file:Cam-Whore with puffy Nipples [jvumw7].mp4' -map 0 -dn -ignore_unknown -c copy -write_id3v1 1 -metadata 'title=Cam-Whore with puffy Nipples' -metadata date=20190214 -metadata 'description= Girl Teasing and toying' -metadata 'synopsis= Girl Teasing and toying' -metadata purl=https://spankbang.com/3o9ie-jvumw7/playlist/swaglord -metadata comment=https://spankbang.com/3o9ie-jvumw7/playlist/swaglord -metadata artist=maxi-moll -movflags +faststart 'file:Cam-Whore with puffy Nipples [jvumw7].temp.mp4'
[download] Downloading video 4 of 75
[debug] [SpankBang] Extracting URL: https://spankbang.com/3o9ie-f51l7z/playlist/swaglord
[SpankBang] f51l7z: Downloading webpage
[SpankBang] f51l7z: Downloading stream JSON
[SpankBang] f51l7z: Downloading m3u8 information
[SpankBang] f51l7z: Downloading m3u8 information
[SpankBang] f51l7z: Downloading m3u8 information
[debug] Sort order given by user: res:1080
[debug] Formats sorted by: hasvid, ie_pref, res:1080(1080.0), lang, quality, fps, hdr:12(7), vcodec:vp9.2(10), acodec, filesize, fs_approx, tbr, vbr, abr, asr, proto, vext, aext, hasaud, source, id
[debug] Default format spec: bestvideo*+bestaudio/best
[info] f51l7z: Downloading 1 format(s): hls-746-1
[debug] Invoking hlsnative downloader on "https://hls-uranus.sb-cd.com/hls/1/3/13570233-480p.mp4/index-v1-a1.m3u8?_tid=13570233&d=1&m=44&secure=ihJPwH8nkrSViJeMRAfLVg,1708561358"
[hlsnative] Downloading m3u8 manifest
[hlsnative] Total fragments: 161
[download] Destination: Verababy mirror [f51l7z].mp4
WARNING: The download speed shown is only of one thread. This is a known issue and patches are welcome
[download] 100% of 87.84MiB in 02:59
[debug] ffprobe command line: ffprobe -hide_banner -show_format -show_streams -print_format json 'file:Verababy mirror [f51l7z].mp4'
[FixupM3u8] Fixing MPEG-TS in MP4 container of "Verababy mirror [f51l7z].mp4"
[debug] ffmpeg command line: ffmpeg -y -loglevel repeat+info -i 'file:Verababy mirror [f51l7z].mp4' -map 0 -dn -ignore_unknown -c copy -f mp4 -bsf:a aac_adtstoasc -movflags +faststart 'file:Verababy mirror [f51l7z].temp.mp4'
[Metadata] Adding metadata to "Verababy mirror [f51l7z].mp4"
[debug] ffmpeg command line: ffmpeg -y -loglevel repeat+info -i 'file:Verababy mirror [f51l7z].mp4' -map 0 -dn -ignore_unknown -c copy -write_id3v1 1 -metadata 'title=Verababy mirror' -metadata date=20230412 -metadata 'description=Watch Verababy mirror on SpankBang now! - Anal, Solo Masturbation, Solo Porn - SpankBang ' -metadata 'synopsis=Watch Verababy mirror on SpankBang now! - Anal, Solo Masturbation, Solo Porn - SpankBang ' -metadata purl=https://spankbang.com/3o9ie-f51l7z/playlist/swaglord -metadata comment=https://spankbang.com/3o9ie-f51l7z/playlist/swaglord -metadata artist=zenuasyter -movflags +faststart 'file:Verababy mirror [f51l7z].temp.mp4'
```
> I've noticed yt-dlp using TLSv1_2 while the web browser uses TLSv1_3.
>
> You can **temporarily** add TLSv1_3 in `yt_dlp/networking/_helper.py` (note that this affects all requests, of course):
>
> ```
>
> #context.minimum_version = ssl.TLSVersion.TLSv1_2
> context.minimum_version = ssl.TLSVersion.TLSv1_3
> ```
>
> Similar issue [#25437](https://github.com/ytdl-org/youtube-dl/issues/25437)
How do I find this file?
I installed yt-dlp via pip.
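(For what it's worth, a command mentioned earlier in the thread prints the install location: `python3 -c "import yt_dlp; print(yt_dlp.__path__[0])"`; `networking/_helper.py` should be under that folder.)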
> I found a weird discrepancy: my Windows 10 desktop on the newer version gets the 403 error as expected in this thread. I tried bringing in my browser cookies as well as the `--legacy-server-connect` option, and it didn't change from a 403 error.
Piggybacking on this, I noticed that Ubuntu can run the command without issue.
Windows
```
yt-dlp -vU --legacy-server-connect https://spankbang.com/8if5y/video/fucking+my+class+slut+hmv
[debug] Command-line config: ['-vU', '--legacy-server-connect', 'https://spankbang.com/8if5y/video/fucking+my+class+slut+hmv']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [615a84447] (win_exe)
[debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1k 25 Mar 2021)
[debug] exe versions: ffmpeg 6.1.1-full_build-www.gyan.dev (setts), ffprobe 6.1.1-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.02.02, mutagen-1.47.0, requests-2.31.0, sqlite3-3.35.5, urllib3-2.2.1, websockets-12.0
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets
[debug] Loaded 1803 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: [email protected] from yt-dlp/yt-dlp
yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp)
[SpankBang] Extracting URL: https://spankbang.com/8if5y/video/fucking+my+class+slut+hmv
[SpankBang] 8if5y: Downloading webpage
ERROR: [SpankBang] 8if5y: Unable to download webpage: HTTP Error 403: Forbidden (caused by <HTTPError 403: Forbidden>)
File "yt_dlp\extractor\common.py", line 732, in extract
File "yt_dlp\extractor\spankbang.py", line 74, in _real_extract
File "yt_dlp\extractor\common.py", line 1135, in _download_webpage
File "yt_dlp\extractor\common.py", line 1086, in download_content
File "yt_dlp\extractor\common.py", line 920, in _download_webpage_handle
File "yt_dlp\extractor\common.py", line 877, in _request_webpage
File "yt_dlp\extractor\common.py", line 864, in _request_webpage
File "yt_dlp\YoutubeDL.py", line 4101, in urlopen
File "yt_dlp\networking\common.py", line 115, in send
File "yt_dlp\networking\_helper.py", line 204, in wrapper
File "yt_dlp\networking\common.py", line 326, in send
File "yt_dlp\networking\_requests.py", line 351, in _send
yt_dlp.networking.exceptions.HTTPError: HTTP Error 403: Forbidden
```
Ubuntu 20.04.6 LTS
```
mnt/Data$ yt-dlp -vU --legacy-server-connect https://spankbang.com/8if5y/video/fucking+my+class+slut+hmv
[debug] Command-line config: ['-vU', '--legacy-server-connect', 'https://spankbang.com/8if5y/video/fucking+my+class+slut+hmv']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [615a84447] (zip)
[debug] Python 3.8.10 (CPython x86_64 64bit) - Linux-5.15.0-101-generic-x86_64-with-glibc2.29 (OpenSSL 1.1.1f 31 Mar 2020, glibc 2.31)
[debug] exe versions: ffmpeg 4.2.7, ffprobe 4.2.7
[debug] Optional libraries: Cryptodome-3.6.1, brotli-1.0.7, certifi-2022.12.07, mutagen-1.44.0, requests-2.22.0, secretstorage-2.3.1, sqlite3-3.31.1, urllib3-1.25.8, websockets-10.4
[debug] Proxy map: {}
[debug] Request Handlers: urllib
[debug] Loaded 1803 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: [email protected] from yt-dlp/yt-dlp
yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp)
[SpankBang] Extracting URL: https://spankbang.com/8if5y/video/fucking+my+class+slut+hmv
[SpankBang] 8if5y: Downloading webpage
[SpankBang] 8if5y: Downloading stream JSON
[SpankBang] 8if5y: Downloading m3u8 information
WARNING: [SpankBang] Failed to download m3u8 information: The read operation timed out
[SpankBang] 8if5y: Downloading m3u8 information
[SpankBang] 8if5y: Downloading m3u8 information
[SpankBang] 8if5y: Downloading m3u8 information
[SpankBang] 8if5y: Downloading m3u8 information
[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id
[debug] Default format spec: bestvideo*+bestaudio/best
[info] 8if5y: Downloading 1 format(s): hls-5860
[debug] Invoking hlsnative downloader on "https://hls-uranus.sb-cd.com/hls/1/4/14296390-1080p.mp4/index-v1-a1.m3u8?_tid=14296390&d=1&m=41&secure=DWsnC7bmvE7mpz-NqIR9UA,1711309529"
```
Not sure if this points to a Windows-specific bug or not.
Not Windows-specific; affects multiple Linux clients here (Debian, mostly, but also embedded Linux systems like NAS).
FWIW, it's trivial to patch the spankbang extractor to recognize party URLs; this allows it to download from this mirror with full metadata. PR created.
yt_dlp/extractor/spankbang.py
``` diff
20c20
< (?:[^/]+\.)?spankbang\.com/
---
> (?:[^/]+\.)?spankbang\.(?:com|party)/
114c114,115
<
---
> stream_domain = re.search(r'https?://(?:[^/]+\.)?(spankbang\.(?:com|party))/', url).group(1)
> stream_url = 'https://' + stream_domain + '/api/videos/stream'
116c117
< 'https://spankbang.com/api/videos/stream', video_id,
---
> stream_url, video_id,
166c167
< _VALID_URL = r'https?://(?:[^/]+\.)?spankbang\.com/(?P<id>[\da-z]+)/playlist/(?P<display_id>[^/]+)'
---
> _VALID_URL = r'https?://(?:[^/]+\.)?spankbang\.(?:com|party)/(?P<id>[\da-z]+)/playlist/(?P<display_id>[^/]+)'
```
It would have been just the two matches, but the metadata request is currently hardcoded to use 'spankbang.com'; this modifies it to grab the domain from the request URL. Bit of ugly regex'ing there; feel free to modify to better suit project idioms.
`--impersonate Edge:Windows`
This seems to work for me.
> `--impersonate Edge:Windows` This seems to work for me.
It works, but it's really slow to download. Any tips?
> > `--impersonate Edge:Windows` This seems to work for me.
>
> It works, but it's really slow to download. Any tips?
How slow? The page download takes a little longer, but the actual file download once the video link is identified is just as fast. IMO I'd rather have consistent good connections and results than a page download 1 second faster.
I'm getting these errors on the latest version too and I'm not able to use the impersonate workaround, as I'm running this on my Synology NAS.
```
yt-dlp --list-impersonate-targets
[info] Available impersonate targets
Client OS Source
---------------------------------------
Chrome - curl_cffi (not available)
Edge - curl_cffi (not available)
Safari - curl_cffi (not available)
```
> I'm getting these errors on the latest version too and I'm not able to use the impersonate workaround, as I'm running this on my Synology NAS.
>
> ```
> yt-dlp --list-impersonate-targets
> [info] Available impersonate targets
> Client OS Source
> ---------------------------------------
> Chrome - curl_cffi (not available)
> Edge - curl_cffi (not available)
> Safari - curl_cffi (not available)
> ```
I was able to use `yt-dlp --legacy-server-connect --verbose -S "res:1080" --add-metadata --no-check-certificates -N 4 --impersonate edge` without specifying the OS, so give that one a try.
> > I'm getting these errors on the latest version too and I'm not able to use the impersonate workaround, as I'm running this on my Synology NAS.
> > ```
> > yt-dlp --list-impersonate-targets
> > [info] Available impersonate targets
> > Client OS Source
> > ---------------------------------------
> > Chrome - curl_cffi (not available)
> > Edge - curl_cffi (not available)
> > Safari - curl_cffi (not available)
> > ```
>
> I was able to use `yt-dlp --legacy-server-connect --verbose -S "res:1080" --add-metadata --no-check-certificates -N 4 --impersonate edge` without specifying the OS, so give that one a try.
Can you download from SpankBang normally? Or can anyone else? If yes, can you share your settings and configuration?
> > > I'm getting these errors on the latest version too and I'm not able to use the impersonate workaround, as I'm running this on my Synology NAS.
> > > ```
> > > yt-dlp --list-impersonate-targets
> > > [info] Available impersonate targets
> > > Client OS Source
> > > ---------------------------------------
> > > Chrome - curl_cffi (not available)
> > > Edge - curl_cffi (not available)
> > > Safari - curl_cffi (not available)
> > > ```
> >
> >
> > I was able to use `yt-dlp --legacy-server-connect --verbose -S "res:1080" --add-metadata --no-check-certificates -N 4 --impersonate edge` without specifying the OS, so give that one a try.
>
> Can you download from SpankBang normally? Or can anyone else? If yes, can you share your settings and configuration?
Just check the impersonation section in the README; from there, you just need to install using `pip install "yt-dlp[default,curl-cffi]"`, and then the impersonation targets will be available.
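For reference, a hedged sketch of the same thing from the Python API, assuming the `curl_cffi` extra is installed as above and that the YoutubeDL option key is `impersonate` (the URL is one from earlier in the thread):
```python
from yt_dlp import YoutubeDL
from yt_dlp.networking.impersonate import ImpersonateTarget

# 'edge:windows' mirrors the CLI's --impersonate Edge:Windows target
opts = {'impersonate': ImpersonateTarget.from_str('edge:windows')}
with YoutubeDL(opts) as ydl:
    ydl.download(['https://spankbang.com/5icow/video/skylarvox08'])
```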
When I use `--impersonate Edge:Windows` it makes some progress, but all downloads die at a very small percentage of completion. The highest I've reached is about 27%. Is anyone else running into this?
the patch for this should be as simple as:
```diff
diff --git a/yt_dlp/extractor/spankbang.py b/yt_dlp/extractor/spankbang.py
index 6805a72de..05f0bb146 100644
--- a/yt_dlp/extractor/spankbang.py
+++ b/yt_dlp/extractor/spankbang.py
@@ -71,9 +71,11 @@ class SpankBangIE(InfoExtractor):
def _real_extract(self, url):
mobj = self._match_valid_url(url)
video_id = mobj.group('id') or mobj.group('id_2')
+ country = self.get_param('geo_bypass_country') or 'US'
+ self._set_cookie('.spankbang.com', 'country', country.upper())
webpage = self._download_webpage(
url.replace(f'/{video_id}/embed', f'/{video_id}/video'),
- video_id, headers={'Cookie': 'country=US'})
+ video_id, impersonate=True)
if re.search(r'<[^>]+\b(?:id|class)=["\']video_removed', webpage):
raise ExtractorError(
```
someone just needs to PR it
> the patch for this should be as simple as:
>
> ```diff
> diff --git a/yt_dlp/extractor/spankbang.py b/yt_dlp/extractor/spankbang.py
> index 6805a72de..05f0bb146 100644
> --- a/yt_dlp/extractor/spankbang.py
> +++ b/yt_dlp/extractor/spankbang.py
> @@ -71,9 +71,11 @@ class SpankBangIE(InfoExtractor):
> def _real_extract(self, url):
> mobj = self._match_valid_url(url)
> video_id = mobj.group('id') or mobj.group('id_2')
> + country = self.get_param('geo_bypass_country') or 'US'
> + self._set_cookie('.spankbang.com', 'country', country.upper())
> webpage = self._download_webpage(
> url.replace(f'/{video_id}/embed', f'/{video_id}/video'),
> - video_id, headers={'Cookie': 'country=US'})
> + video_id, impersonate=True)
>
> if re.search(r'<[^>]+\b(?:id|class)=["\']video_removed', webpage):
> raise ExtractorError(
> ```
>
> someone just needs to PR it
This sounds promising. Needs someone with the source code ready to recompile and test this change when opening the PR. I'm not currently set up for that, but will do it at some point if nobody is ready to jump in. | 1,731,609,440,000 | null | Bug Report | [
"yt_dlp/extractor/spankbang.py:SpankBangIE._real_extract"
] | [] | 1 | 503 |
|
yt-dlp/yt-dlp | yt-dlp__yt-dlp-11534 | f9d98509a898737c12977b2e2117277bada2c196 | diff --git a/yt_dlp/extractor/ctvnews.py b/yt_dlp/extractor/ctvnews.py
index 08d76d303b04..c3ddcdbee4ba 100644
--- a/yt_dlp/extractor/ctvnews.py
+++ b/yt_dlp/extractor/ctvnews.py
@@ -1,11 +1,24 @@
+import json
import re
+import urllib.parse
from .common import InfoExtractor
-from ..utils import orderedSet
+from .ninecninemedia import NineCNineMediaIE
+from ..utils import extract_attributes, orderedSet
+from ..utils.traversal import find_element, traverse_obj
class CTVNewsIE(InfoExtractor):
- _VALID_URL = r'https?://(?:.+?\.)?ctvnews\.ca/(?:video\?(?:clip|playlist|bin)Id=|.*?)(?P<id>[0-9.]+)(?:$|[#?&])'
+ _BASE_REGEX = r'https?://(?:[^.]+\.)?ctvnews\.ca/'
+ _VIDEO_ID_RE = r'(?P<id>\d{5,})'
+ _PLAYLIST_ID_RE = r'(?P<id>\d\.\d{5,})'
+ _VALID_URL = [
+ rf'{_BASE_REGEX}video/c{_VIDEO_ID_RE}',
+ rf'{_BASE_REGEX}video(?:-gallery)?/?\?clipId={_VIDEO_ID_RE}',
+ rf'{_BASE_REGEX}video/?\?(?:playlist|bin)Id={_PLAYLIST_ID_RE}',
+ rf'{_BASE_REGEX}(?!video/)[^?#]*?{_PLAYLIST_ID_RE}/?(?:$|[?#])',
+ rf'{_BASE_REGEX}(?!video/)[^?#]+\?binId={_PLAYLIST_ID_RE}',
+ ]
_TESTS = [{
'url': 'http://www.ctvnews.ca/video?clipId=901995',
'md5': 'b608f466c7fa24b9666c6439d766ab7e',
@@ -17,13 +30,32 @@ class CTVNewsIE(InfoExtractor):
'timestamp': 1467286284,
'upload_date': '20160630',
'categories': [],
+ 'season_number': 0,
+ 'season': 'Season 0',
'tags': [],
- 'season_id': 57981,
+ 'series': 'CTV News National | Archive | Stories 2',
+ 'season_id': '57981',
+ 'thumbnail': r're:https?://.*\.jpg$',
'duration': 764.631,
- 'series': 'CTV News National story',
- 'thumbnail': r're:^https?://.*\.jpg$',
- 'season': 'Season 0',
+ },
+ }, {
+ 'url': 'https://barrie.ctvnews.ca/video/c3030933-here_s-what_s-making-news-for-nov--15?binId=1272429',
+ 'md5': '8b8c2b33c5c1803e3c26bc74ff8694d5',
+ 'info_dict': {
+ 'id': '3030933',
+ 'ext': 'flv',
+ 'title': 'Here’s what’s making news for Nov. 15',
+ 'description': 'Here are the top stories we’re working on for CTV News at 11 for Nov. 15',
+ 'thumbnail': 'http://images2.9c9media.com/image_asset/2021_2_22_a602e68e-1514-410e-a67a-e1f7cccbacab_png_2000x1125.jpg',
+ 'season_id': '58104',
'season_number': 0,
+ 'tags': [],
+ 'season': 'Season 0',
+ 'categories': [],
+ 'series': 'CTV News Barrie',
+ 'upload_date': '20241116',
+ 'duration': 42.943,
+ 'timestamp': 1731722452,
},
}, {
'url': 'http://www.ctvnews.ca/video?playlistId=1.2966224',
@@ -46,6 +78,65 @@ class CTVNewsIE(InfoExtractor):
'id': '1.5736957',
},
'playlist_mincount': 6,
+ }, {
+ 'url': 'https://www.ctvnews.ca/business/respondents-to-bank-of-canada-questionnaire-largely-oppose-creating-a-digital-loonie-1.6665797',
+ 'md5': '24bc4b88cdc17d8c3fc01dfc228ab72c',
+ 'info_dict': {
+ 'id': '2695026',
+ 'ext': 'flv',
+ 'season_id': '89852',
+ 'series': 'From CTV News Channel',
+ 'description': 'md5:796a985a23cacc7e1e2fafefd94afd0a',
+ 'season': '2023',
+ 'title': 'Bank of Canada asks public about digital currency',
+ 'categories': [],
+ 'tags': [],
+ 'upload_date': '20230526',
+ 'season_number': 2023,
+ 'thumbnail': 'http://images2.9c9media.com/image_asset/2019_3_28_35f5afc3-10f6-4d92-b194-8b9a86f55c6a_png_1920x1080.jpg',
+ 'timestamp': 1685105157,
+ 'duration': 253.553,
+ },
+ }, {
+ 'url': 'https://stox.ctvnews.ca/video-gallery?clipId=582589',
+ 'md5': '135cc592df607d29dddc931f1b756ae2',
+ 'info_dict': {
+ 'id': '582589',
+ 'ext': 'flv',
+ 'categories': [],
+ 'timestamp': 1427906183,
+ 'season_number': 0,
+ 'duration': 125.559,
+ 'thumbnail': 'http://images2.9c9media.com/image_asset/2019_3_28_35f5afc3-10f6-4d92-b194-8b9a86f55c6a_png_1920x1080.jpg',
+ 'series': 'CTV News Stox',
+ 'description': 'CTV original footage of the rise and fall of the Berlin Wall.',
+ 'title': 'Berlin Wall',
+ 'season_id': '63817',
+ 'season': 'Season 0',
+ 'tags': [],
+ 'upload_date': '20150401',
+ },
+ }, {
+ 'url': 'https://ottawa.ctvnews.ca/features/regional-contact/regional-contact-archive?binId=1.1164587#3023759',
+ 'md5': 'a14c0603557decc6531260791c23cc5e',
+ 'info_dict': {
+ 'id': '3023759',
+ 'ext': 'flv',
+ 'season_number': 2024,
+ 'timestamp': 1731798000,
+ 'season': '2024',
+ 'episode': 'Episode 125',
+ 'description': 'CTV News Ottawa at Six',
+ 'duration': 2712.076,
+ 'episode_number': 125,
+ 'upload_date': '20241116',
+ 'title': 'CTV News Ottawa at Six for Saturday, November 16, 2024',
+ 'thumbnail': 'http://images2.9c9media.com/image_asset/2019_3_28_35f5afc3-10f6-4d92-b194-8b9a86f55c6a_png_1920x1080.jpg',
+ 'categories': [],
+ 'tags': [],
+ 'series': 'CTV News Ottawa at Six',
+ 'season_id': '92667',
+ },
}, {
'url': 'http://www.ctvnews.ca/1.810401',
'only_matching': True,
@@ -57,29 +148,35 @@ class CTVNewsIE(InfoExtractor):
'only_matching': True,
}]
+ def _ninecninemedia_url_result(self, clip_id):
+ return self.url_result(f'9c9media:ctvnews_web:{clip_id}', NineCNineMediaIE, clip_id)
+
def _real_extract(self, url):
page_id = self._match_id(url)
- def ninecninemedia_url_result(clip_id):
- return {
- '_type': 'url_transparent',
- 'id': clip_id,
- 'url': f'9c9media:ctvnews_web:{clip_id}',
- 'ie_key': 'NineCNineMedia',
- }
+ if mobj := re.fullmatch(self._VIDEO_ID_RE, urllib.parse.urlparse(url).fragment):
+ page_id = mobj.group('id')
+
+ if re.fullmatch(self._VIDEO_ID_RE, page_id):
+ return self._ninecninemedia_url_result(page_id)
+
+ webpage = self._download_webpage(f'https://www.ctvnews.ca/{page_id}', page_id, query={
+ 'ot': 'example.AjaxPageLayout.ot',
+ 'maxItemsPerPage': 1000000,
+ })
+ entries = [self._ninecninemedia_url_result(clip_id)
+ for clip_id in orderedSet(re.findall(r'clip\.id\s*=\s*(\d+);', webpage))]
+ if not entries:
+ webpage = self._download_webpage(url, page_id)
+ if 'getAuthStates("' in webpage:
+ entries = [self._ninecninemedia_url_result(clip_id) for clip_id in
+ self._search_regex(r'getAuthStates\("([\d+,]+)"', webpage, 'clip ids').split(',')]
+ else:
+ entries = [
+ self._ninecninemedia_url_result(clip_id) for clip_id in
+ traverse_obj(webpage, (
+ {find_element(tag='jasper-player-container', html=True)},
+ {extract_attributes}, 'axis-ids', {json.loads}, ..., 'axisId'))
+ ]
- if page_id.isdigit():
- return ninecninemedia_url_result(page_id)
- else:
- webpage = self._download_webpage(f'http://www.ctvnews.ca/{page_id}', page_id, query={
- 'ot': 'example.AjaxPageLayout.ot',
- 'maxItemsPerPage': 1000000,
- })
- entries = [ninecninemedia_url_result(clip_id) for clip_id in orderedSet(
- re.findall(r'clip\.id\s*=\s*(\d+);', webpage))]
- if not entries:
- webpage = self._download_webpage(url, page_id)
- if 'getAuthStates("' in webpage:
- entries = [ninecninemedia_url_result(clip_id) for clip_id in
- self._search_regex(r'getAuthStates\("([\d+,]+)"', webpage, 'clip ids').split(',')]
- return self.playlist_result(entries, page_id)
+ return self.playlist_result(entries, page_id)
| [CTVNews] Does not find video on page
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
Canada
### Provide a description that is worded well enough to be understood
URL: https://www.ctvnews.ca/business/respondents-to-bank-of-canada-questionnaire-largely-oppose-creating-a-digital-loonie-1.6665797
Downloads a playlist but finds zero items (videos).
No actual error, just no resulting video.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
>yt-dlp.py -vU "https://www.ctvnews.ca/business/respondents-to-bank-of-canada-questionnaire-largely-oppose-creating-a-digital-loonie-1.6665797 "
[debug] Command-line config: ['-vU', 'https://www.ctvnews.ca/business/respondents-to-bank-of-canada-questionnaire-largely-oppose-creating-a-digital-loonie-1.6665797 ']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp-nightly-builds [6a9c7a2b5] (zip)
[debug] Python 3.10.5 (CPython AMD64 64bit) - Windows-10-10.0.14393-SP0 (OpenSSL 1.1.1n 15 Mar 2022)
[debug] exe versions: ffmpeg 6.1-full_build-www.gyan.dev (setts), ffprobe 4.3.2-2021-02-02-full_build-www.gyan.dev, rtmpdump 2.4
[debug] Optional libraries: sqlite3-3.37.2
[debug] Proxy map: {}
[debug] Request Handlers: urllib
[debug] Loaded 1792 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
Latest version: [email protected] from yt-dlp/yt-dlp-nightly-builds
yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp-nightly-builds)
[CTVNews] Extracting URL: https://www.ctvnews.ca/business/respondents-to-bank-of-canada-questionnaire-largely-oppose-creating-a-digital-loonie-1.6665797
[CTVNews] 1.6665797: Downloading webpage
[CTVNews] 1.6665797: Downloading webpage
[download] Downloading playlist: 1.6665797
[CTVNews] Playlist 1.6665797: Downloading 0 items
[download] Finished downloading playlist: 1.6665797
```
| This patch gets the problem video.
```diff
--- old/yt_dlp/extractor/ctvnews.py
+++ new/yt_dlp/extractor/ctvnews.py
if 'getAuthStates("' in webpage:
entries = [ninecninemedia_url_result(clip_id) for clip_id in
self._search_regex(r'getAuthStates\("([\d+,]+)"', webpage, 'clip ids').split(',')]
+ else:
+ entries = [ninecninemedia_url_result(clip_id) for clip_id in orderedSet(
+ re.findall(r'axisId":"(\d+)', webpage))]
return self.playlist_result(entries, page_id)
```
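(For context, `orderedSet` from `yt_dlp.utils` just de-duplicates while keeping first-seen order; a quick sketch of what it does here, assuming a page that repeats an `axisId`:)
```python
import re
from yt_dlp.utils import orderedSet
# hypothetical page snippet with one repeated ID
webpage = '"axisId":"123" ... "axisId":"456" ... "axisId":"123"'
clip_ids = re.findall(r'axisId":"(\d+)', webpage)
assert clip_ids == ['123', '456', '123']       # findall keeps duplicates
assert orderedSet(clip_ids) == ['123', '456']  # de-duplicated, order preserved
```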
It's not clear that `orderedSet()` is necessary: it's used at line 63 of the original but not at line 68 of the new code in the diff above. | 1,731,543,182,000 | null | Bug Report | [
"yt_dlp/extractor/ctvnews.py:CTVNewsIE._real_extract"
] | [
"yt_dlp/extractor/ctvnews.py:CTVNewsIE._ninecninemedia_url_result"
] | 1 | 504 |
|
yt-dlp/yt-dlp | yt-dlp__yt-dlp-11530 | f2a4983df7a64c4e93b56f79dbd16a781bd90206 | diff --git a/yt_dlp/extractor/patreon.py b/yt_dlp/extractor/patreon.py
index 4d668cd37dc0..6bdeaf15710d 100644
--- a/yt_dlp/extractor/patreon.py
+++ b/yt_dlp/extractor/patreon.py
@@ -16,10 +16,10 @@
parse_iso8601,
smuggle_url,
str_or_none,
- traverse_obj,
url_or_none,
urljoin,
)
+from ..utils.traversal import traverse_obj, value
class PatreonBaseIE(InfoExtractor):
@@ -252,6 +252,27 @@ class PatreonIE(PatreonBaseIE):
'thumbnail': r're:^https?://.+',
},
'skip': 'Patron-only content',
+ }, {
+ # Contains a comment reply in the 'included' section
+ 'url': 'https://www.patreon.com/posts/114721679',
+ 'info_dict': {
+ 'id': '114721679',
+ 'ext': 'mp4',
+ 'upload_date': '20241025',
+ 'uploader': 'Japanalysis',
+ 'like_count': int,
+ 'thumbnail': r're:^https?://.+',
+ 'comment_count': int,
+ 'title': 'Karasawa Part 2',
+ 'description': 'Part 2 of this video https://www.youtube.com/watch?v=Azms2-VTASk',
+ 'uploader_url': 'https://www.patreon.com/japanalysis',
+ 'uploader_id': '80504268',
+ 'channel_url': 'https://www.patreon.com/japanalysis',
+ 'channel_follower_count': int,
+ 'timestamp': 1729897015,
+ 'channel_id': '9346307',
+ },
+ 'params': {'getcomments': True},
}]
_RETURN_TYPE = 'video'
@@ -404,26 +425,24 @@ def _get_comments(self, post_id):
f'posts/{post_id}/comments', post_id, query=params, note=f'Downloading comments page {page}')
cursor = None
- for comment in traverse_obj(response, (('data', ('included', lambda _, v: v['type'] == 'comment')), ...)):
+ for comment in traverse_obj(response, (('data', 'included'), lambda _, v: v['type'] == 'comment' and v['id'])):
count += 1
- comment_id = comment.get('id')
- attributes = comment.get('attributes') or {}
- if comment_id is None:
- continue
author_id = traverse_obj(comment, ('relationships', 'commenter', 'data', 'id'))
- author_info = traverse_obj(
- response, ('included', lambda _, v: v['id'] == author_id and v['type'] == 'user', 'attributes'),
- get_all=False, expected_type=dict, default={})
yield {
- 'id': comment_id,
- 'text': attributes.get('body'),
- 'timestamp': parse_iso8601(attributes.get('created')),
- 'parent': traverse_obj(comment, ('relationships', 'parent', 'data', 'id'), default='root'),
- 'author_is_uploader': attributes.get('is_by_creator'),
+ **traverse_obj(comment, {
+ 'id': ('id', {str_or_none}),
+ 'text': ('attributes', 'body', {str}),
+ 'timestamp': ('attributes', 'created', {parse_iso8601}),
+ 'parent': ('relationships', 'parent', 'data', ('id', {value('root')}), {str}, any),
+ 'author_is_uploader': ('attributes', 'is_by_creator', {bool}),
+ }),
+ **traverse_obj(response, (
+ 'included', lambda _, v: v['id'] == author_id and v['type'] == 'user', 'attributes', {
+ 'author': ('full_name', {str}),
+ 'author_thumbnail': ('image_url', {url_or_none}),
+ }), get_all=False),
'author_id': author_id,
- 'author': author_info.get('full_name'),
- 'author_thumbnail': author_info.get('image_url'),
}
if count < traverse_obj(response, ('meta', 'count')):
| Patreon: --write-comments is broken
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
_No response_
### Provide a description that is worded well enough to be understood
Downloading comments from Patreon videos is broken.
Note: I didn't run the update_version script, but I built yt-dlp from the current master, be3579aaf0c3b71a0a3195e1955415d5e4d6b3d8
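From a quick look at the API response, the crash seems to happen because the 'included' array mixes comment objects with other entries, so something that isn't a dict reaches `comment.get('id')`. A minimal sketch of a traversal that keeps only comment dicts (the response shape below is made up):
```python
from yt_dlp.utils.traversal import traverse_obj
# hypothetical response: 'included' mixes users, comments and a raw string
response = {
    'data': [{'type': 'comment', 'id': '1', 'attributes': {'body': 'hi'}}],
    'included': [
        {'type': 'user', 'id': '9'},
        {'type': 'comment', 'id': '2', 'attributes': {'body': 'a reply'}},
        'not-a-dict',
    ],
}
# entries that raise inside the filter (e.g. the raw string) are simply dropped
comments = traverse_obj(
    response, (('data', 'included'), lambda _, v: v['type'] == 'comment' and v['id']))
assert [c['id'] for c in comments] == ['1', '2']
```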
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', 'https://www.patreon.com/posts/114721679', '--write-comments']
[debug] User config "/home/mateon/.config/yt-dlp/config": ['--compat-options=no-certifi']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [197d0b03b]
[debug] Lazy loading extractors is disabled
[debug] Compatibility options: no-certifi
[debug] Python 3.12.7 (CPython x86_64 64bit) - Linux-6.6.53-x86_64-with-glibc2.40 (OpenSSL 3.3.2 3 Sep 2024, glibc 2.40)
[debug] exe versions: ffmpeg 7.1 (setts), ffprobe 7.1, rtmpdump 2.4
[debug] Optional libraries: Cryptodome-3.20.0, brotlicffi-1.1.0.0, certifi-2024.08.30, curl_cffi-0.7.2 (unsupported), mutagen-1.47.0, requests-2.32.3, secretstorage-3.3.3, sqlite3-3.46.1, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets
[debug] Loaded 1839 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: [email protected] from yt-dlp/yt-dlp
yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp)
[patreon] Extracting URL: https://www.patreon.com/posts/114721679
[patreon] 114721679: Downloading API JSON
[patreon] 114721679: Downloading m3u8 information
[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec, channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id
[patreon] 114721679: Downloading comments page 1
ERROR: 'str' object has no attribute 'get'
Traceback (most recent call last):
File "/nix/store/y3q4p7gjj7kz37gkiyw17vanglji0kry-yt-dlp-be3579aaf0c3b71a0a3195e1955415d5e4d6b3d8/lib/python3.12/site-packages/yt_dlp/YoutubeDL.py", line 1625, in wrapper
return func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/y3q4p7gjj7kz37gkiyw17vanglji0kry-yt-dlp-be3579aaf0c3b71a0a3195e1955415d5e4d6b3d8/lib/python3.12/site-packages/yt_dlp/YoutubeDL.py", line 1781, in __extract_info
return self.process_ie_result(ie_result, download, extra_info)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/y3q4p7gjj7kz37gkiyw17vanglji0kry-yt-dlp-be3579aaf0c3b71a0a3195e1955415d5e4d6b3d8/lib/python3.12/site-packages/yt_dlp/YoutubeDL.py", line 1840, in process_ie_result
ie_result = self.process_video_result(ie_result, download=download)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/y3q4p7gjj7kz37gkiyw17vanglji0kry-yt-dlp-be3579aaf0c3b71a0a3195e1955415d5e4d6b3d8/lib/python3.12/site-packages/yt_dlp/YoutubeDL.py", line 2926, in process_video_result
self.post_extract(info_dict)
File "/nix/store/y3q4p7gjj7kz37gkiyw17vanglji0kry-yt-dlp-be3579aaf0c3b71a0a3195e1955415d5e4d6b3d8/lib/python3.12/site-packages/yt_dlp/YoutubeDL.py", line 3694, in post_extract
actual_post_extract(info_dict or {})
File "/nix/store/y3q4p7gjj7kz37gkiyw17vanglji0kry-yt-dlp-be3579aaf0c3b71a0a3195e1955415d5e4d6b3d8/lib/python3.12/site-packages/yt_dlp/YoutubeDL.py", line 3692, in actual_post_extract
info_dict.update(post_extractor())
^^^^^^^^^^^^^^^^
File "/nix/store/y3q4p7gjj7kz37gkiyw17vanglji0kry-yt-dlp-be3579aaf0c3b71a0a3195e1955415d5e4d6b3d8/lib/python3.12/site-packages/yt_dlp/extractor/common.py", line 3734, in extractor
comments.append(next(generator))
^^^^^^^^^^^^^^^
File "/nix/store/y3q4p7gjj7kz37gkiyw17vanglji0kry-yt-dlp-be3579aaf0c3b71a0a3195e1955415d5e4d6b3d8/lib/python3.12/site-packages/yt_dlp/extractor/patreon.py", line 409, in _get_comments
comment_id = comment.get('id')
^^^^^^^^^^^
AttributeError: 'str' object has no attribute 'get'
```
| 1,731,521,166,000 | null | Bug Report | [
"yt_dlp/extractor/patreon.py:PatreonIE._get_comments"
] | [] | 1 | 505 |
||
yt-dlp/yt-dlp | yt-dlp__yt-dlp-11527 | a9f85670d03ab993dc589f21a9ffffcad61392d5 | diff --git a/yt_dlp/extractor/archiveorg.py b/yt_dlp/extractor/archiveorg.py
index f5a55efc4ff1..2849d9fd5b0d 100644
--- a/yt_dlp/extractor/archiveorg.py
+++ b/yt_dlp/extractor/archiveorg.py
@@ -205,6 +205,26 @@ class ArchiveOrgIE(InfoExtractor):
},
},
],
+ }, {
+ # The reviewbody is None for one of the reviews; just need to extract data without crashing
+ 'url': 'https://archive.org/details/gd95-04-02.sbd.11622.sbeok.shnf/gd95-04-02d1t04.shn',
+ 'info_dict': {
+ 'id': 'gd95-04-02.sbd.11622.sbeok.shnf/gd95-04-02d1t04.shn',
+ 'ext': 'mp3',
+ 'title': 'Stuck Inside of Mobile with the Memphis Blues Again',
+ 'creators': ['Grateful Dead'],
+ 'duration': 338.31,
+ 'track': 'Stuck Inside of Mobile with the Memphis Blues Again',
+ 'description': 'md5:764348a470b986f1217ffd38d6ac7b72',
+ 'display_id': 'gd95-04-02d1t04.shn',
+ 'location': 'Pyramid Arena',
+ 'uploader': '[email protected]',
+ 'album': '1995-04-02 - Pyramid Arena',
+ 'upload_date': '20040519',
+ 'track_number': 4,
+ 'release_date': '19950402',
+ 'timestamp': 1084927901,
+ },
}]
@staticmethod
@@ -335,7 +355,7 @@ def _real_extract(self, url):
info['comments'].append({
'id': review.get('review_id'),
'author': review.get('reviewer'),
- 'text': str_or_none(review.get('reviewtitle'), '') + '\n\n' + review.get('reviewbody'),
+ 'text': join_nonempty('reviewtitle', 'reviewbody', from_dict=review, delim='\n\n'),
'timestamp': unified_timestamp(review.get('createdate')),
'parent': 'root'})
| [archive.org] ERROR: can only concatenate str (not "NoneType") to str - sporadic, only on certain URLs
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
United States
### Provide a description that is worded well enough to be understood
I'm getting the error shown in the title (and in the verbose output below) when attempting to download certain archive.org URLs, but not others.
Downloading https://archive.org/details/gd95-04-02.sbd.11622.sbeok.shnf fails with the provided log output, while downloading, say, https://archive.org/details/gd1995-04-02.nak300.holtz.91056.flac16 succeeds.
However, I was able to "fix" the bug by editing my local installation of `yt-dlp`. Apparently, in some cases, the "reviewbody" attribute can be missing from the review, which causes a `TypeError` when the string concatenation is attempted. Forcing the body to an empty string in these cases was enough to bypass the crash and allow the download to proceed.
```diff
diff --git a/yt_dlp/extractor/archiveorg.py b/yt_dlp/extractor/archiveorg.py
index f5a55efc4..2869e5233 100644
--- a/yt_dlp/extractor/archiveorg.py
+++ b/yt_dlp/extractor/archiveorg.py
@@ -335,7 +335,7 @@ def _real_extract(self, url):
info['comments'].append({
'id': review.get('review_id'),
'author': review.get('reviewer'),
- 'text': str_or_none(review.get('reviewtitle'), '') + '\n\n' + review.get('reviewbody'),
+ 'text': str_or_none(review.get('reviewtitle'), '') + '\n\n' + str_or_none(review.get('reviewbody'), ''),
'timestamp': unified_timestamp(review.get('createdate')),
'parent': 'root'})
```
I put "fix" in quotes, though, because I'm not familiar enough with the yt-dlp codebase as a whole to know whether this "fix" doesn't cause its own problems; I'm decent at Python and I figured I might as well take a stab at patching over the "obvious" problem, and it did work in my case.
However, it might well be the case that this hack breaks other components of the archive.org extractor - for example, some sort of other advanced functionality that I'm not familiar with (that my simple download request didn't invoke), which depends on a correctly-parsed review body in order to do its job.
That said, I can certainly file that PR if a maintainer indicates that the change wouldn't have unintended consequences.
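For completeness, here's a minimal repro of the failure mode outside the extractor, using a made-up review dict:
```python
from yt_dlp.utils import str_or_none
review = {'reviewtitle': 'Great show'}  # hypothetical review with no 'reviewbody'
# current code: str + None -> TypeError: can only concatenate str (not "NoneType") to str
# text = str_or_none(review.get('reviewtitle'), '') + '\n\n' + review.get('reviewbody')
# with the one-line change above, the missing body is coerced to '' instead
text = str_or_none(review.get('reviewtitle'), '') + '\n\n' + str_or_none(review.get('reviewbody'), '')
assert text == 'Great show\n\n'
```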
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', '--playlist-items', '4:5', 'https://archive.org/details/gd95-04-02.sbd.11622.sbeok.shnf']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [197d0b03b]
[debug] Python 3.12.7 (CPython x86_64 64bit) - Linux-6.11.6-arch1-1-x86_64-with-glibc2.40 (OpenSSL 3.4.0 22 Oct 2024, glibc 2.40)
[debug] exe versions: ffmpeg 7.1 (setts), ffprobe 7.1, rtmpdump 2.4
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.08.30, mutagen-1.47.0, requests-2.32.3, sqlite3-3.46.1, urllib3-1.26.20, websockets-12.0
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests
[debug] Loaded 1838 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: [email protected] from yt-dlp/yt-dlp
yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp)
[archive.org] Extracting URL: https://archive.org/details/gd95-04-02.sbd.11622.sbeok.shnf
[archive.org] gd95-04-02.sbd.11622.sbeok.shnf: Downloading webpage
[archive.org] gd95-04-02.sbd.11622.sbeok.shnf: Downloading JSON metadata
ERROR: can only concatenate str (not "NoneType") to str
Traceback (most recent call last):
File "/usr/lib/python3.12/site-packages/yt_dlp/YoutubeDL.py", line 1625, in wrapper
return func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/site-packages/yt_dlp/YoutubeDL.py", line 1760, in __extract_info
ie_result = ie.extract(url)
^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/site-packages/yt_dlp/extractor/common.py", line 742, in extract
ie_result = self._real_extract(url)
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/site-packages/yt_dlp/extractor/archiveorg.py", line 338, in _real_extract
'text': str_or_none(review.get('reviewtitle'), '') + '\n\n' + review.get('reviewbody'),
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~
TypeError: can only concatenate str (not "NoneType") to str
```
| ideally, `join_nonempty` would've been used here
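(`join_nonempty` skips empty/`None` parts, so a missing body just drops out without a crash or a dangling delimiter; a quick sketch:)
```python
from yt_dlp.utils import join_nonempty
review = {'reviewtitle': 'Great show', 'reviewbody': None}  # body missing, as in the report
assert join_nonempty('reviewtitle', 'reviewbody', from_dict=review, delim='\n\n') == 'Great show'
```
i.e. something like: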
```diff
diff --git a/yt_dlp/extractor/archiveorg.py b/yt_dlp/extractor/archiveorg.py
index f5a55efc4..52fd02acc 100644
--- a/yt_dlp/extractor/archiveorg.py
+++ b/yt_dlp/extractor/archiveorg.py
@@ -335,7 +335,7 @@ def _real_extract(self, url):
info['comments'].append({
'id': review.get('review_id'),
'author': review.get('reviewer'),
- 'text': str_or_none(review.get('reviewtitle'), '') + '\n\n' + review.get('reviewbody'),
+ 'text': join_nonempty('reviewtitle', 'reviewbody', from_dict=review, delim='\n\n'),
'timestamp': unified_timestamp(review.get('createdate')),
'parent': 'root'})
``` | 1,731,452,423,000 | null | Bug Report | [
"yt_dlp/extractor/archiveorg.py:ArchiveOrgIE._real_extract"
] | [] | 1 | 506 |
|
yt-dlp/yt-dlp | yt-dlp__yt-dlp-11513 | a9f85670d03ab993dc589f21a9ffffcad61392d5 | diff --git a/yt_dlp/extractor/facebook.py b/yt_dlp/extractor/facebook.py
index 2bcb5a8411f1..91e2f3489cea 100644
--- a/yt_dlp/extractor/facebook.py
+++ b/yt_dlp/extractor/facebook.py
@@ -563,13 +563,13 @@ def extract_from_jsmods_instances(js_data):
return extract_video_data(try_get(
js_data, lambda x: x['jsmods']['instances'], list) or [])
- def extract_dash_manifest(video, formats):
+ def extract_dash_manifest(vid_data, formats, mpd_url=None):
dash_manifest = traverse_obj(
- video, 'dash_manifest', 'playlist', 'dash_manifest_xml_string', expected_type=str)
+ vid_data, 'dash_manifest', 'playlist', 'dash_manifest_xml_string', 'manifest_xml', expected_type=str)
if dash_manifest:
formats.extend(self._parse_mpd_formats(
compat_etree_fromstring(urllib.parse.unquote_plus(dash_manifest)),
- mpd_url=url_or_none(video.get('dash_manifest_url'))))
+ mpd_url=url_or_none(video.get('dash_manifest_url')) or mpd_url))
def process_formats(info):
# Downloads with browser's User-Agent are rate limited. Working around
@@ -619,9 +619,12 @@ def parse_graphql_video(video):
video = video['creation_story']
video['owner'] = traverse_obj(video, ('short_form_video_context', 'video_owner'))
video.update(reel_info)
- fmt_data = traverse_obj(video, ('videoDeliveryLegacyFields', {dict})) or video
+
formats = []
q = qualities(['sd', 'hd'])
+
+ # Legacy formats extraction
+ fmt_data = traverse_obj(video, ('videoDeliveryLegacyFields', {dict})) or video
for key, format_id in (('playable_url', 'sd'), ('playable_url_quality_hd', 'hd'),
('playable_url_dash', ''), ('browser_native_hd_url', 'hd'),
('browser_native_sd_url', 'sd')):
@@ -629,7 +632,7 @@ def parse_graphql_video(video):
if not playable_url:
continue
if determine_ext(playable_url) == 'mpd':
- formats.extend(self._extract_mpd_formats(playable_url, video_id))
+ formats.extend(self._extract_mpd_formats(playable_url, video_id, fatal=False))
else:
formats.append({
'format_id': format_id,
@@ -638,6 +641,28 @@ def parse_graphql_video(video):
'url': playable_url,
})
extract_dash_manifest(fmt_data, formats)
+
+ # New videoDeliveryResponse formats extraction
+ fmt_data = traverse_obj(video, ('videoDeliveryResponseFragment', 'videoDeliveryResponseResult'))
+ mpd_urls = traverse_obj(fmt_data, ('dash_manifest_urls', ..., 'manifest_url', {url_or_none}))
+ dash_manifests = traverse_obj(fmt_data, ('dash_manifests', lambda _, v: v['manifest_xml']))
+ for idx, dash_manifest in enumerate(dash_manifests):
+ extract_dash_manifest(dash_manifest, formats, mpd_url=traverse_obj(mpd_urls, idx))
+ if not dash_manifests:
+ # Only extract from MPD URLs if the manifests are not already provided
+ for mpd_url in mpd_urls:
+ formats.extend(self._extract_mpd_formats(mpd_url, video_id, fatal=False))
+ for prog_fmt in traverse_obj(fmt_data, ('progressive_urls', lambda _, v: v['progressive_url'])):
+ format_id = traverse_obj(prog_fmt, ('metadata', 'quality', {str.lower}))
+ formats.append({
+ 'format_id': format_id,
+ # sd, hd formats w/o resolution info should be deprioritized below DASH
+ 'quality': q(format_id) - 3,
+ 'url': prog_fmt['progressive_url'],
+ })
+ for m3u8_url in traverse_obj(fmt_data, ('hls_playlist_urls', ..., 'hls_playlist_url', {url_or_none})):
+ formats.extend(self._extract_m3u8_formats(m3u8_url, video_id, 'mp4', fatal=False, m3u8_id='hls'))
+
if not formats:
# Do not append false positive entry w/o any formats
return
| [facebook] ERROR: No video formats found (on >= 2024.11.04)
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
US
### Provide a description that is worded well enough to be understood
Please note this is not related to #11337. I had the problem reported in that issue, and it went away for about a week or so after I updated to the nightly build; this new problem has only appeared within the past week.
I've duplicated this on Windows and Mac, and it appears as though it's specifically related to private group videos. I've tested `--cookies-from-browser` for `chrome`, `firefox`, and `safari`, all with the same results.
If needed, I can invite any developers to the group for troubleshooting; the videos are SFW (youth hockey).
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['--cookies-from-browser', 'safari', 'https://www.facebook.com/1358150084/videos/7350931248365050/', '-vU']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp-nightly-builds [f13df591d] (pip)
[debug] Python 3.12.2 (CPython arm64 64bit) - macOS-14.6.1-arm64-arm-64bit (OpenSSL 3.3.2 3 Sep 2024)
[debug] exe versions: ffmpeg 7.1 (setts), ffprobe 7.1, rtmpdump 2.4
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, mutagen-1.47.0, requests-2.32.3, sqlite3-3.43.2, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Trying secondary cookie location
[debug] skipping 4 bytes (unknown page header field): b'\x00\x00\x00\x00'
[Cookies] Loading cookie 0/ 2[debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00'
[debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00'
[debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00'
[... several hundred near-identical "[Cookies] Loading cookie" / "[debug] skipping ... bytes" lines trimmed ...]
[debug] skipping 87 bytes (footer): b'\x00\x02\xe4R\x07\x17 \x05\x00\x00\x00Kbplist00\xd1\x01\x02_\x10\x18NSHTTPCookieAcceptPolicy\x10\x02\x08\x0b&\x00\x00\x00\x00\x00\x00\x01\x01\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00('
Extracted 82 cookies from safari
[debug] Request Handlers: urllib, requests, websockets
[debug] Loaded 1839 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
Latest version: [email protected] from yt-dlp/yt-dlp-nightly-builds
yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp-nightly-builds)
[facebook] Extracting URL: https://www.facebook.com/1358150084/videos/7350931248365050/
[facebook] 7350931248365050: Downloading webpage
ERROR: [facebook] 7350931248365050: No video formats found!; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
Traceback (most recent call last):
File "/Users/justine/.pyenv/versions/3.12.2/lib/python3.12/site-packages/yt_dlp/YoutubeDL.py", line 1625, in wrapper
return func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/justine/.pyenv/versions/3.12.2/lib/python3.12/site-packages/yt_dlp/YoutubeDL.py", line 1781, in __extract_info
return self.process_ie_result(ie_result, download, extra_info)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/justine/.pyenv/versions/3.12.2/lib/python3.12/site-packages/yt_dlp/YoutubeDL.py", line 1840, in process_ie_result
ie_result = self.process_video_result(ie_result, download=download)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/justine/.pyenv/versions/3.12.2/lib/python3.12/site-packages/yt_dlp/YoutubeDL.py", line 2846, in process_video_result
self.raise_no_formats(info_dict)
File "/Users/justine/.pyenv/versions/3.12.2/lib/python3.12/site-packages/yt_dlp/YoutubeDL.py", line 1122, in raise_no_formats
raise ExtractorError(msg, video_id=info['id'], ie=info['extractor'],
yt_dlp.utils.ExtractorError: [facebook] 7350931248365050: No video formats found!; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
```
| The debug output during cookies extraction is a bit concerning; are you sure the facebook cookies are being successfully extracted/passed? Have you tried with `--cookies` instead?
I didn't, but here's the debug output pulling cookies from Chrome; it gives the same end result without all the cookie-parsing output:
```
[debug] Command-line config: ['--cookies-from-browser', 'chrome', 'https://www.facebook.com/1358150084/videos/7350931248365050/', '-vU']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp-nightly-builds [f13df591d] (pip)
[debug] Python 3.12.2 (CPython arm64 64bit) - macOS-14.6.1-arm64-arm-64bit (OpenSSL 3.3.2 3 Sep 2024)
[debug] exe versions: ffmpeg 7.1 (setts), ffprobe 7.1, rtmpdump 2.4
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, mutagen-1.47.0, requests-2.32.3, sqlite3-3.43.2, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
Extracting cookies from chrome
[debug] Extracting cookies from: "/Users/justine/Library/Application Support/Google/Chrome/Default/Cookies"
[debug] using find-generic-password to obtain password from OSX keychain
Extracted 308 cookies from chrome
[debug] cookie version breakdown: {'v10': 314, 'other': 0, 'unencrypted': 0}
[debug] Request Handlers: urllib, requests, websockets
[debug] Loaded 1839 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
[debug] Downloading _update_spec from https://github.com/yt-dlp/yt-dlp-nightly-builds/releases/latest/download/_update_spec
Current version: [email protected] from yt-dlp/yt-dlp-nightly-builds
Latest version: [email protected] from yt-dlp/yt-dlp-nightly-builds
ERROR: You installed yt-dlp with pip or using the wheel from PyPi; Use that to update
[facebook] Extracting URL: https://www.facebook.com/1358150084/videos/7350931248365050/
[facebook] 7350931248365050: Downloading webpage
ERROR: [facebook] 7350931248365050: No video formats found!; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
Traceback (most recent call last):
File "/Users/justine/.pyenv/versions/3.12.2/lib/python3.12/site-packages/yt_dlp/YoutubeDL.py", line 1625, in wrapper
return func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/justine/.pyenv/versions/3.12.2/lib/python3.12/site-packages/yt_dlp/YoutubeDL.py", line 1781, in __extract_info
return self.process_ie_result(ie_result, download, extra_info)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/justine/.pyenv/versions/3.12.2/lib/python3.12/site-packages/yt_dlp/YoutubeDL.py", line 1840, in process_ie_result
ie_result = self.process_video_result(ie_result, download=download)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/justine/.pyenv/versions/3.12.2/lib/python3.12/site-packages/yt_dlp/YoutubeDL.py", line 2846, in process_video_result
self.raise_no_formats(info_dict)
File "/Users/justine/.pyenv/versions/3.12.2/lib/python3.12/site-packages/yt_dlp/YoutubeDL.py", line 1122, in raise_no_formats
raise ExtractorError(msg, video_id=info['id'], ie=info['extractor'],
yt_dlp.utils.ExtractorError: [facebook] 7350931248365050: No video formats found!; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
```
If you want to add `--write-pages` to your command (with `--cookies-from-browser chrome`) and send me the resulting `.dump` file(s), I could take a look at what can be done (if they tell me anything at all). I'd advise not to post them publicly, since they could contain personal information (e.g. your FB username / ID / display name). You could send them to me over [discord](https://discord.gg/H5MNcFW63r) (same username) or via email: `bashonly ( a t) proton mail [d o t] com`
Sent to your proton mail.
I have a similar but maybe different issue. Same kind of response but with a publicly available video.
https://www.facebook.com/watch/?v=1085099419908696&rdid=tfjgd4h6VuK74V0w
[1085099419908696-560p-סוף שבוע טוב וגם מצחיק קצת 🤣🤣🤣🤣🤣🤣🤣🤣🤣🤣 | By שלמה טל.info.json](https://github.com/user-attachments/files/17709270/1085099419908696-560p-.By.info.json)
[fb1085099419908696.http.txt](https://github.com/user-attachments/files/17709274/fb1085099419908696.http.txt)
[1085099419908696_https_-_www.facebook.com_100037646286664_videos_1085099419908696_rdid=tfjgd4h6VuK74V0w.dump.html.txt](https://github.com/user-attachments/files/17709277/1085099419908696_https_-_www.facebook.com_100037646286664_videos_1085099419908696_rdid.tfjgd4h6VuK74V0w.dump.html.txt)
and this patch gets the job done:
```patch
diff --git a/yt_dlp/extractor/facebook.py b/yt_dlp/extractor/facebook.py
index 2bcb5a841..c4fa88c05 100644
--- a/yt_dlp/extractor/facebook.py
+++ b/yt_dlp/extractor/facebook.py
@@ -566,6 +566,10 @@ def extract_from_jsmods_instances(js_data):
def extract_dash_manifest(video, formats):
dash_manifest = traverse_obj(
video, 'dash_manifest', 'playlist', 'dash_manifest_xml_string', expected_type=str)
+ if not dash_manifest:
+ videoDeliveryResponseFragment = (
+ 'videoDeliveryResponseFragment', 'videoDeliveryResponseResult', 'dash_manifests', 0, 'manifest_xml')
+ dash_manifest = traverse_obj(video, videoDeliveryResponseFragment, expected_type=str)
if dash_manifest:
formats.extend(self._parse_mpd_formats(
compat_etree_fromstring(urllib.parse.unquote_plus(dash_manifest)),
```
Is it PR-worthy, or too kludgy? Give me some feedback and I'll submit it.
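For anyone unfamiliar with how that fallback resolves, here is a self-contained sketch of the lookup. The `traverse_obj` below is a tiny stand-in handling only the dict-key and list-index steps this example needs (the real `yt_dlp.utils.traverse_obj` is far more general), and the payload is hypothetical, mirroring only the field names from the patch:
```python
def traverse_obj(obj, path, expected_type=None):
    # Walk nested dicts/lists key by key; give up quietly on any miss.
    for key in path:
        try:
            obj = obj[key]
        except (KeyError, IndexError, TypeError):
            return None
    return obj if expected_type is None or isinstance(obj, expected_type) else None

# Hypothetical API payload shaped like the fields named in the patch.
video = {
    'videoDeliveryResponseFragment': {
        'videoDeliveryResponseResult': {
            'dash_manifests': [{'manifest_xml': '<MPD>...</MPD>'}],
        },
    },
}

path = ('videoDeliveryResponseFragment', 'videoDeliveryResponseResult',
        'dash_manifests', 0, 'manifest_xml')
print(traverse_obj(video, path, expected_type=str))  # -> <MPD>...</MPD>
```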
I manually applied the fix from @refack above to my installation and can confirm my videos are downloading now | 1,731,379,665,000 | null | Bug Report | [
"yt_dlp/extractor/facebook.py:FacebookIE._extract_from_url"
] | [] | 1 | 507 |
|
yt-dlp/yt-dlp | yt-dlp__yt-dlp-11478 | be3579aaf0c3b71a0a3195e1955415d5e4d6b3d8 | diff --git a/yt_dlp/extractor/cloudflarestream.py b/yt_dlp/extractor/cloudflarestream.py
index 8a409461a8bc..9e9e89a801fa 100644
--- a/yt_dlp/extractor/cloudflarestream.py
+++ b/yt_dlp/extractor/cloudflarestream.py
@@ -8,7 +8,7 @@ class CloudflareStreamIE(InfoExtractor):
_DOMAIN_RE = r'(?:cloudflarestream\.com|(?:videodelivery|bytehighway)\.net)'
_EMBED_RE = rf'(?:embed\.|{_SUBDOMAIN_RE}){_DOMAIN_RE}/embed/[^/?#]+\.js\?(?:[^#]+&)?video='
_ID_RE = r'[\da-f]{32}|eyJ[\w-]+\.[\w-]+\.[\w-]+'
- _VALID_URL = rf'https?://(?:{_SUBDOMAIN_RE}{_DOMAIN_RE}/|{_EMBED_RE})(?P<id>{_ID_RE})'
+ _VALID_URL = rf'https?://(?:{_SUBDOMAIN_RE}(?P<domain>{_DOMAIN_RE})/|{_EMBED_RE})(?P<id>{_ID_RE})'
_EMBED_REGEX = [
rf'<script[^>]+\bsrc=(["\'])(?P<url>(?:https?:)?//{_EMBED_RE}(?:{_ID_RE})(?:(?!\1).)*)\1',
rf'<iframe[^>]+\bsrc=["\'](?P<url>https?://{_SUBDOMAIN_RE}{_DOMAIN_RE}/[\da-f]{{32}})',
@@ -19,7 +19,7 @@ class CloudflareStreamIE(InfoExtractor):
'id': '31c9291ab41fac05471db4e73aa11717',
'ext': 'mp4',
'title': '31c9291ab41fac05471db4e73aa11717',
- 'thumbnail': 'https://videodelivery.net/31c9291ab41fac05471db4e73aa11717/thumbnails/thumbnail.jpg',
+ 'thumbnail': 'https://cloudflarestream.com/31c9291ab41fac05471db4e73aa11717/thumbnails/thumbnail.jpg',
},
'params': {
'skip_download': 'm3u8',
@@ -30,7 +30,7 @@ class CloudflareStreamIE(InfoExtractor):
'id': '0e8e040aec776862e1d632a699edf59e',
'ext': 'mp4',
'title': '0e8e040aec776862e1d632a699edf59e',
- 'thumbnail': 'https://videodelivery.net/0e8e040aec776862e1d632a699edf59e/thumbnails/thumbnail.jpg',
+ 'thumbnail': 'https://cloudflarestream.com/0e8e040aec776862e1d632a699edf59e/thumbnails/thumbnail.jpg',
},
}, {
'url': 'https://watch.cloudflarestream.com/9df17203414fd1db3e3ed74abbe936c1',
@@ -54,7 +54,7 @@ class CloudflareStreamIE(InfoExtractor):
'id': 'eaef9dea5159cf968be84241b5cedfe7',
'ext': 'mp4',
'title': 'eaef9dea5159cf968be84241b5cedfe7',
- 'thumbnail': 'https://videodelivery.net/eaef9dea5159cf968be84241b5cedfe7/thumbnails/thumbnail.jpg',
+ 'thumbnail': 'https://cloudflarestream.com/eaef9dea5159cf968be84241b5cedfe7/thumbnails/thumbnail.jpg',
},
'params': {
'skip_download': 'm3u8',
@@ -62,8 +62,9 @@ class CloudflareStreamIE(InfoExtractor):
}]
def _real_extract(self, url):
- video_id = self._match_id(url)
- domain = 'bytehighway.net' if 'bytehighway.net/' in url else 'videodelivery.net'
+ video_id, domain = self._match_valid_url(url).group('id', 'domain')
+ if domain != 'bytehighway.net':
+ domain = 'cloudflarestream.com'
base_url = f'https://{domain}/{video_id}/'
if '.' in video_id:
video_id = self._parse_json(base64.urlsafe_b64decode(
| CloudFlareStream "No video formats found!"
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
Spain
### Used URL
https://watch.cloudflarestream.com/e8766bfd7771e53b5dc26c76b650d91a/
### Provide a description that is worded well enough to be understood
Can't download a public CloudFlareStream video.
I get CERTIFICATE_VERIFY_FAILED warnings and then an error stating "No video formats found!"
Expected result: A video download from the provided link.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', 'https://watch.cloudflarestream.com/e8766bfd7771e53b5dc26c76b650d91a/']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [197d0b03b] (win_exe)
[debug] Python 3.10.11 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 1.1.1t 7 Feb 2023)
[debug] exe versions: ffmpeg 5.1.2-essentials_build-www.gyan.dev (setts), ffprobe 5.1.2-essentials_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.40.1, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1838 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: [email protected] from yt-dlp/yt-dlp
yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp)
[CloudflareStream] Extracting URL: https://watch.cloudflarestream.com/e8766bfd7771e53b5dc26c76b650d91a/
[CloudflareStream] e8766bfd7771e53b5dc26c76b650d91a: Downloading m3u8 information
WARNING: [CloudflareStream] Failed to download m3u8 information: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1007)
[CloudflareStream] e8766bfd7771e53b5dc26c76b650d91a: Downloading MPD manifest
WARNING: [CloudflareStream] Failed to download MPD manifest: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1007)
ERROR: [CloudflareStream] e8766bfd7771e53b5dc26c76b650d91a: No video formats found!; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
Traceback (most recent call last):
File "yt_dlp\YoutubeDL.py", line 1625, in wrapper
File "yt_dlp\YoutubeDL.py", line 1781, in __extract_info
File "yt_dlp\YoutubeDL.py", line 1840, in process_ie_result
File "yt_dlp\YoutubeDL.py", line 2846, in process_video_result
File "yt_dlp\YoutubeDL.py", line 1122, in raise_no_formats
yt_dlp.utils.ExtractorError: [CloudflareStream] e8766bfd7771e53b5dc26c76b650d91a: No video formats found!; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
```
| I'm still hitting this issue too
I am able to manually download the video.mpd file with
https://videodelivery.net/eaef9dea5159cf968be84241b5cedfe7/manifest/video.mpd
So I'm not sure what's going wrong; maybe the extractor is malforming the URL?
When running the command with `--no-check-certificate`, I get a 404 error when trying to fetch the m3u8 and mpd files:
```shell
[debug] Command-line config: ['-vU', 'https://watch.cloudflarestream.com/e8766bfd7771e53b5dc26c76b650d91a/', '--no-check-certificate']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [197d0b03b] (win_exe)
[debug] Python 3.10.11 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 1.1.1t 7 Feb 2023)
[debug] exe versions: ffmpeg 5.1.2-essentials_build-www.gyan.dev (setts), ffprobe 5.1.2-essentials_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.40.1, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1838 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: [email protected] from yt-dlp/yt-dlp
yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp)
[CloudflareStream] Extracting URL: https://watch.cloudflarestream.com/e8766bfd7771e53b5dc26c76b650d91a/
[CloudflareStream] e8766bfd7771e53b5dc26c76b650d91a: Downloading m3u8 information
WARNING: [CloudflareStream] Failed to download m3u8 information: HTTP Error 404: Not Found
[CloudflareStream] e8766bfd7771e53b5dc26c76b650d91a: Downloading MPD manifest
WARNING: [CloudflareStream] Failed to download MPD manifest: HTTP Error 404: Not Found
ERROR: [CloudflareStream] e8766bfd7771e53b5dc26c76b650d91a: No video formats found!; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
Traceback (most recent call last):
File "yt_dlp\YoutubeDL.py", line 1625, in wrapper
File "yt_dlp\YoutubeDL.py", line 1781, in __extract_info
File "yt_dlp\YoutubeDL.py", line 1840, in process_ie_result
File "yt_dlp\YoutubeDL.py", line 2846, in process_video_result
File "yt_dlp\YoutubeDL.py", line 1122, in raise_no_formats
yt_dlp.utils.ExtractorError: [CloudflareStream] e8766bfd7771e53b5dc26c76b650d91a: No video formats found!; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
```
Playing around with Postman, GET requests to videodelivery.net don't go through, but they do for cloudflarestream.com:
GET https://videodelivery.net/eaef9dea5159cf968be84241b5cedfe7/manifest/video.mpd
```
<html>
<body>Object not found</body>
</html>
```
GET https://cloudflarestream.com/eaef9dea5159cf968be84241b5cedfe7/manifest/video.mpd
```
<?xml version="1.0" encoding="UTF-8"?>
<MPD xmlns="urn:mpeg:dash:schema:mpd:2011" profiles="urn:mpeg:dash:profile:isoff-live:2011" type="static" mediaPresentationDuration="PT38S" minBufferTime="PT8S">
<Period id="0">
<AdaptationSet id="800962650" mimeType="audio/mp4" segmentAlignment="true" lang="original">
<Representation id="449134446" audioSamplingRate="44100" bandwidth="142554" codecs="mp4a.40.2">
<AudioChannelConfiguration schemeIdUri="urn:mpeg:dash:23003:3:audio_channel_configuration:2011" value="1"></AudioChannelConfiguration>
<SegmentTemplate duration="172695" initialization="../../eaef9dea5159cf968be84241b5cedfe7/audio/142/init.mp4?p=eyJ0eXBlIjoiaW5pdCIsInZpZGVvSUQiOiJlYWVmOWRlYTUxNTljZjk2OGJlODQyNDFiNWNlZGZlNyIsIm93bmVySUQiOjY2MzQxNTgsImNyZWF0b3JJRCI6InVwcmlkZSIsInVzZVZPRE9URkUiOmZhbHNlLCJmcm9tTWV6emFuaW5lIjpmYWxzZSwic3RvcmFnZVByb3ZpZGVyIjoyLCJ0cmFjayI6ImFjZjUxZDAwYTlkNmNiODNmNGNhNzI1ZDZiOTM2MjI3IiwicmVuZGl0aW9uIjoiNDQ5MTM0NDQ2IiwibXV4aW5nIjoiNDk5ODMzNjE1In0&s=L3jChcK8L8Kgwo0zwrTDhU_DpncjwpzCqMO0esOCC8O5wonCvRTCohMOQsOQwpBMAg" media="../../eaef9dea5159cf968be84241b5cedfe7/audio/142/seg_$Number$.mp4?p=eyJ0eXBlIjoic2VnbWVudCIsInZpZGVvSUQiOiJlYWVmOWRlYTUxNTljZjk2OGJlODQyNDFiNWNlZGZlNyIsIm93bmVySUQiOjY2MzQxNTgsImNyZWF0b3JJRCI6InVwcmlkZSIsInNlZ21lbnREdXJhdGlvblNlY3MiOjMuOTE1OTgzMTU1MTY2ODI4NSwidXNlVk9ET1RGRSI6ZmFsc2UsImZyb21NZXp6YW5pbmUiOmZhbHNlLCJzdG9yYWdlUHJvdmlkZXIiOjIsInRyYWNrIjoiYWNmNTFkMDBhOWQ2Y2I4M2Y0Y2E3MjVkNmI5MzYyMjciLCJyZW5kaXRpb24iOiI0NDkxMzQ0NDYiLCJtdXhpbmciOiI0OTk4MzM2MTUifQ&s=XW0iRcO7w4zCvsO5wqbCrxo0TMO3w5bDgcKhb8Oaw7dtXMKFScKfwpHCt8OBwpJaOcOv" startNumber="1" timescale="44100"></SegmentTemplate>
</Representation>
</AdaptationSet>
<AdaptationSet id="386409604" mimeType="video/mp4" segmentAlignment="true" >
<Representation id="449134449" bandwidth="405430" codecs="avc1.42c015" frameRate="30/1" height="240" width="426">
<SegmentTemplate duration="120000" initialization="../../eaef9dea5159cf968be84241b5cedfe7/video/240/init.mp4?p=eyJ0eXBlIjoiaW5pdCIsInZpZGVvSUQiOiJlYWVmOWRlYTUxNTljZjk2OGJlODQyNDFiNWNlZGZlNyIsIm93bmVySUQiOjY2MzQxNTgsImNyZWF0b3JJRCI6InVwcmlkZSIsInVzZVZPRE9URkUiOmZhbHNlLCJmcm9tTWV6emFuaW5lIjpmYWxzZSwic3RvcmFnZVByb3ZpZGVyIjoyLCJ0cmFjayI6IjI5ODBlNzIyOTQ5NjkzOGQ4OGQyODEwZmRjNWM5ZWQ5IiwicmVuZGl0aW9uIjoiNDQ5MTM0NDQ5IiwibXV4aW5nIjoiNDk5ODMzNjE4In0&s=w6XCjsKzw7fCmsKKwo_DoAvDq0LCniYHa39Tw6JEw6BUwojDiENvDMO9wqw7ccOwwpM" media="../../eaef9dea5159cf968be84241b5cedfe7/video/240/seg_$Number$.mp4?p=eyJ0eXBlIjoic2VnbWVudCIsInZpZGVvSUQiOiJlYWVmOWRlYTUxNTljZjk2OGJlODQyNDFiNWNlZGZlNyIsIm93bmVySUQiOjY2MzQxNTgsImNyZWF0b3JJRCI6InVwcmlkZSIsInNlZ21lbnREdXJhdGlvblNlY3MiOjQsInVzZVZPRE9URkUiOmZhbHNlLCJmcm9tTWV6emFuaW5lIjpmYWxzZSwic3RvcmFnZVByb3ZpZGVyIjoyLCJ0cmFjayI6IjI5ODBlNzIyOTQ5NjkzOGQ4OGQyODEwZmRjNWM5ZWQ5IiwicmVuZGl0aW9uIjoiNDQ5MTM0NDQ5IiwibXV4aW5nIjoiNDk5ODMzNjE4In0&s=bcOaIsOHPV_Cu8KiwqppCzkvw70Uwp8XHTbDqztcY8KVfkg6wqIHS8Ktw54" startNumber="1" timescale="30000"></SegmentTemplate>
</Representation>
<Representation id="449134457" bandwidth="680674" codecs="avc1.4d401e" frameRate="30/1" height="360" width="640">
<SegmentTemplate duration="120000" initialization="../../eaef9dea5159cf968be84241b5cedfe7/video/360/init.mp4?p=eyJ0eXBlIjoiaW5pdCIsInZpZGVvSUQiOiJlYWVmOWRlYTUxNTljZjk2OGJlODQyNDFiNWNlZGZlNyIsIm93bmVySUQiOjY2MzQxNTgsImNyZWF0b3JJRCI6InVwcmlkZSIsInVzZVZPRE9URkUiOmZhbHNlLCJmcm9tTWV6emFuaW5lIjpmYWxzZSwic3RvcmFnZVByb3ZpZGVyIjoyLCJ0cmFjayI6IjI5ODBlNzIyOTQ5NjkzOGQ4OGQyODEwZmRjNWM5ZWQ5IiwicmVuZGl0aW9uIjoiNDQ5MTM0NDU3IiwibXV4aW5nIjoiNDk5ODMzNjI2In0&s=w5_DpDHCncOVwpVtw4HDnMOOwqfDl8Oyw6PDrMO1w5Y5PmlSJnUxFyHCq8KHwpbDqMO0w4M" media="../../eaef9dea5159cf968be84241b5cedfe7/video/360/seg_$Number$.mp4?p=eyJ0eXBlIjoic2VnbWVudCIsInZpZGVvSUQiOiJlYWVmOWRlYTUxNTljZjk2OGJlODQyNDFiNWNlZGZlNyIsIm93bmVySUQiOjY2MzQxNTgsImNyZWF0b3JJRCI6InVwcmlkZSIsInNlZ21lbnREdXJhdGlvblNlY3MiOjQsInVzZVZPRE9URkUiOmZhbHNlLCJmcm9tTWV6emFuaW5lIjpmYWxzZSwic3RvcmFnZVByb3ZpZGVyIjoyLCJ0cmFjayI6IjI5ODBlNzIyOTQ5NjkzOGQ4OGQyODEwZmRjNWM5ZWQ5IiwicmVuZGl0aW9uIjoiNDQ5MTM0NDU3IiwibXV4aW5nIjoiNDk5ODMzNjI2In0&s=wp07RcKIN8ORB8Oww5fCkB02AMO2dEEgwqpRAhVcZMOywqbCnxhuwqDCqHrDrg" startNumber="1" timescale="30000"></SegmentTemplate>
</Representation>
<Representation id="449134468" bandwidth="1113178" codecs="avc1.4d401f" frameRate="30/1" height="480" width="854">
<SegmentTemplate duration="120000" initialization="../../eaef9dea5159cf968be84241b5cedfe7/video/480/init.mp4?p=eyJ0eXBlIjoiaW5pdCIsInZpZGVvSUQiOiJlYWVmOWRlYTUxNTljZjk2OGJlODQyNDFiNWNlZGZlNyIsIm93bmVySUQiOjY2MzQxNTgsImNyZWF0b3JJRCI6InVwcmlkZSIsInVzZVZPRE9URkUiOmZhbHNlLCJmcm9tTWV6emFuaW5lIjpmYWxzZSwic3RvcmFnZVByb3ZpZGVyIjoyLCJ0cmFjayI6IjI5ODBlNzIyOTQ5NjkzOGQ4OGQyODEwZmRjNWM5ZWQ5IiwicmVuZGl0aW9uIjoiNDQ5MTM0NDY4IiwibXV4aW5nIjoiNDk5ODMzNjM3In0&s=w4LCtMKUw5InVcKkRcKJw6dNw55_USATD8KCw5zDi0rCpMOyWjXDqsKAwqDCnQzDrMKd" media="../../eaef9dea5159cf968be84241b5cedfe7/video/480/seg_$Number$.mp4?p=eyJ0eXBlIjoic2VnbWVudCIsInZpZGVvSUQiOiJlYWVmOWRlYTUxNTljZjk2OGJlODQyNDFiNWNlZGZlNyIsIm93bmVySUQiOjY2MzQxNTgsImNyZWF0b3JJRCI6InVwcmlkZSIsInNlZ21lbnREdXJhdGlvblNlY3MiOjQsInVzZVZPRE9URkUiOmZhbHNlLCJmcm9tTWV6emFuaW5lIjpmYWxzZSwic3RvcmFnZVByb3ZpZGVyIjoyLCJ0cmFjayI6IjI5ODBlNzIyOTQ5NjkzOGQ4OGQyODEwZmRjNWM5ZWQ5IiwicmVuZGl0aW9uIjoiNDQ5MTM0NDY4IiwibXV4aW5nIjoiNDk5ODMzNjM3In0&s=SBvCiTjDkMKfw7YTcsOeJcOhw6ltbEtMVXTCv8KGBBTCkFwUKsOyVFl2Bg" startNumber="1" timescale="30000"></SegmentTemplate>
</Representation>
<Representation id="449134513" bandwidth="2380128" codecs="avc1.4d401f" frameRate="30/1" height="720" width="1280">
<SegmentTemplate duration="120000" initialization="../../eaef9dea5159cf968be84241b5cedfe7/video/720/init.mp4?p=eyJ0eXBlIjoiaW5pdCIsInZpZGVvSUQiOiJlYWVmOWRlYTUxNTljZjk2OGJlODQyNDFiNWNlZGZlNyIsIm93bmVySUQiOjY2MzQxNTgsImNyZWF0b3JJRCI6InVwcmlkZSIsInVzZVZPRE9URkUiOmZhbHNlLCJmcm9tTWV6emFuaW5lIjpmYWxzZSwic3RvcmFnZVByb3ZpZGVyIjoyLCJ0cmFjayI6IjI5ODBlNzIyOTQ5NjkzOGQ4OGQyODEwZmRjNWM5ZWQ5IiwicmVuZGl0aW9uIjoiNDQ5MTM0NTEzIiwibXV4aW5nIjoiNDk5ODMzNjgyIn0&s=XcOPGjHClhTDn8KcBMKXw5HCo8ONIcKuQMKqbSp8wpLCnMOZXMODw6zDqMKew4_CoEpu" media="../../eaef9dea5159cf968be84241b5cedfe7/video/720/seg_$Number$.mp4?p=eyJ0eXBlIjoic2VnbWVudCIsInZpZGVvSUQiOiJlYWVmOWRlYTUxNTljZjk2OGJlODQyNDFiNWNlZGZlNyIsIm93bmVySUQiOjY2MzQxNTgsImNyZWF0b3JJRCI6InVwcmlkZSIsInNlZ21lbnREdXJhdGlvblNlY3MiOjQsInVzZVZPRE9URkUiOmZhbHNlLCJmcm9tTWV6emFuaW5lIjpmYWxzZSwic3RvcmFnZVByb3ZpZGVyIjoyLCJ0cmFjayI6IjI5ODBlNzIyOTQ5NjkzOGQ4OGQyODEwZmRjNWM5ZWQ5IiwicmVuZGl0aW9uIjoiNDQ5MTM0NTEzIiwibXV4aW5nIjoiNDk5ODMzNjgyIn0&s=SMOOEMOcw5HDq8KTw67CqUMhYzVUGWobQXVvK8OYLMKdw4TDk8KMw5_DnV5ow48" startNumber="1" timescale="30000"></SegmentTemplate>
</Representation>
</AdaptationSet>
</Period>
</MPD>
``` | 1,731,071,079,000 | null | Bug Report | [
"yt_dlp/extractor/cloudflarestream.py:CloudflareStreamIE._real_extract"
] | [] | 1 | 509 |
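For reference, a minimal sketch of the domain handling the CloudflareStream fix above introduces: capture the base domain in the URL regex and normalize everything except `bytehighway.net` to `cloudflarestream.com`, since `videodelivery.net` now 404s on manifests. The regex is simplified from the extractor's `_VALID_URL`, and the second URL is illustrative:
```python
import re

# Capture the base domain alongside the 32-hex-digit video id.
VALID_URL = re.compile(
    r'https?://(?:[\w-]+\.)?(?P<domain>cloudflarestream\.com|(?:videodelivery|bytehighway)\.net)'
    r'/(?P<id>[\da-f]{32})')

def manifest_base(url):
    m = VALID_URL.match(url)
    video_id, domain = m.group('id'), m.group('domain')
    if domain != 'bytehighway.net':
        # videodelivery.net no longer serves manifests, so normalize to
        # cloudflarestream.com, exactly as the patch does.
        domain = 'cloudflarestream.com'
    return f'https://{domain}/{video_id}/'

print(manifest_base('https://watch.cloudflarestream.com/e8766bfd7771e53b5dc26c76b650d91a'))
# https://cloudflarestream.com/e8766bfd7771e53b5dc26c76b650d91a/
print(manifest_base('https://embed.videodelivery.net/eaef9dea5159cf968be84241b5cedfe7'))
# https://cloudflarestream.com/eaef9dea5159cf968be84241b5cedfe7/
```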
|
yt-dlp/yt-dlp | yt-dlp__yt-dlp-11472 | 282e19db827f0951c783ac946429f662bcf2200c | diff --git a/yt_dlp/extractor/adobepass.py b/yt_dlp/extractor/adobepass.py
index 7cc15ec7b6f2..f1b87792713f 100644
--- a/yt_dlp/extractor/adobepass.py
+++ b/yt_dlp/extractor/adobepass.py
@@ -1362,7 +1362,7 @@ class AdobePassIE(InfoExtractor): # XXX: Conventionally, base classes should en
def _download_webpage_handle(self, *args, **kwargs):
headers = self.geo_verification_headers()
- headers.update(kwargs.get('headers', {}))
+ headers.update(kwargs.get('headers') or {})
kwargs['headers'] = headers
return super()._download_webpage_handle(
*args, **kwargs)
| [NBC]/[adobepass] ERROR: 'NoneType' object is not iterable
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
United States
### Provide a description that is worded well enough to be understood
Unable to pull Law and Order SVU or any NBC shows. Getting ERROR: 'NoneType' object is not iterable. I also tried cookies-from-browser, but the application still falls back to using the ap-mso credentials.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['https://www.nbc.com/law-and-order-special-victims-unit/video/economics-of-shame/9000392650', '--ap-mso', 'Verizon', '--ap-username', 'PRIVATE', '--ap-password', 'PRIVATE', '-vU']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [197d0b03b] (win_exe)
[debug] Python 3.10.11 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 1.1.1t 7 Feb 2023)
[debug] exe versions: ffmpeg N-93302-g147ef1d947, ffprobe N-93302-g147ef1d947
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.40.1, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1838 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: [email protected] from yt-dlp/yt-dlp
yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp)
[NBC] Extracting URL: https://www.nbc.com/law-and-order-special-victims-unit/video/economics-of-shame/9000392650
[NBC] 9000392650: Downloading JSON metadata
[NBC] 9000392650: Downloading JSON metadata
ERROR: 'NoneType' object is not iterable
Traceback (most recent call last):
File "yt_dlp\YoutubeDL.py", line 1625, in wrapper
File "yt_dlp\YoutubeDL.py", line 1760, in __extract_info
File "yt_dlp\extractor\common.py", line 742, in extract
File "yt_dlp\extractor\nbc.py", line 212, in _real_extract
File "yt_dlp\extractor\adobepass.py", line 1449, in _extract_mvpd_auth
File "yt_dlp\extractor\adobepass.py", line 1365, in _download_webpage_handle
TypeError: 'NoneType' object is not iterable
```
| Regression introduced in dcfeea4dd5e5686821350baa6c7767a011944867
This should be the fix:
```diff
diff --git a/yt_dlp/extractor/adobepass.py b/yt_dlp/extractor/adobepass.py
index 7cc15ec7b..f1b877927 100644
--- a/yt_dlp/extractor/adobepass.py
+++ b/yt_dlp/extractor/adobepass.py
@@ -1362,7 +1362,7 @@ class AdobePassIE(InfoExtractor): # XXX: Conventionally, base classes should en
def _download_webpage_handle(self, *args, **kwargs):
headers = self.geo_verification_headers()
- headers.update(kwargs.get('headers', {}))
+ headers.update(kwargs.get('headers') or {})
kwargs['headers'] = headers
return super()._download_webpage_handle(
*args, **kwargs)
```
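The one-line change works because `dict.get(key, default)` only falls back to the default when the key is absent, not when the stored value is `None`. A quick self-contained illustration (the header is a made-up stand-in for the geo headers):
```python
kwargs = {'headers': None}  # a caller explicitly passed headers=None

headers = {'X-Forwarded-For': '1.2.3.4'}
try:
    headers.update(kwargs.get('headers', {}))  # default unused: key exists
except TypeError as e:
    print(e)  # 'NoneType' object is not iterable

headers.update(kwargs.get('headers') or {})    # the patched form: no error
print(headers)  # {'X-Forwarded-For': '1.2.3.4'}
```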
| 1,730,927,829,000 | null | Bug Report | [
"yt_dlp/extractor/adobepass.py:AdobePassIE._download_webpage_handle"
] | [] | 1 | 510 |
|
yt-dlp/yt-dlp | yt-dlp__yt-dlp-11466 | 282e19db827f0951c783ac946429f662bcf2200c | diff --git a/yt_dlp/extractor/goplay.py b/yt_dlp/extractor/goplay.py
index dfe5afe63514..32300f75c2f5 100644
--- a/yt_dlp/extractor/goplay.py
+++ b/yt_dlp/extractor/goplay.py
@@ -5,56 +5,63 @@
import hmac
import json
import os
+import re
+import urllib.parse
from .common import InfoExtractor
from ..utils import (
ExtractorError,
+ int_or_none,
+ js_to_json,
+ remove_end,
traverse_obj,
- unescapeHTML,
)
class GoPlayIE(InfoExtractor):
- _VALID_URL = r'https?://(www\.)?goplay\.be/video/([^/]+/[^/]+/|)(?P<display_id>[^/#]+)'
+ _VALID_URL = r'https?://(www\.)?goplay\.be/video/([^/?#]+/[^/?#]+/|)(?P<id>[^/#]+)'
_NETRC_MACHINE = 'goplay'
_TESTS = [{
- 'url': 'https://www.goplay.be/video/de-container-cup/de-container-cup-s3/de-container-cup-s3-aflevering-2#autoplay',
+ 'url': 'https://www.goplay.be/video/de-slimste-mens-ter-wereld/de-slimste-mens-ter-wereld-s22/de-slimste-mens-ter-wereld-s22-aflevering-1',
'info_dict': {
- 'id': '9c4214b8-e55d-4e4b-a446-f015f6c6f811',
+ 'id': '2baa4560-87a0-421b-bffc-359914e3c387',
'ext': 'mp4',
- 'title': 'S3 - Aflevering 2',
- 'series': 'De Container Cup',
- 'season': 'Season 3',
- 'season_number': 3,
- 'episode': 'Episode 2',
- 'episode_number': 2,
+ 'title': 'S22 - Aflevering 1',
+ 'description': r're:In aflevering 1 nemen Daan Alferink, Tess Elst en Xander De Rycke .{66}',
+ 'series': 'De Slimste Mens ter Wereld',
+ 'episode': 'Episode 1',
+ 'season_number': 22,
+ 'episode_number': 1,
+ 'season': 'Season 22',
},
+ 'params': {'skip_download': True},
'skip': 'This video is only available for registered users',
}, {
- 'url': 'https://www.goplay.be/video/a-family-for-thr-holidays-s1-aflevering-1#autoplay',
+ 'url': 'https://www.goplay.be/video/1917',
'info_dict': {
- 'id': '74e3ed07-748c-49e4-85a0-393a93337dbf',
+ 'id': '40cac41d-8d29-4ef5-aa11-75047b9f0907',
'ext': 'mp4',
- 'title': 'A Family for the Holidays',
+ 'title': '1917',
+ 'description': r're:Op het hoogtepunt van de Eerste Wereldoorlog krijgen twee jonge .{94}',
},
+ 'params': {'skip_download': True},
'skip': 'This video is only available for registered users',
}, {
'url': 'https://www.goplay.be/video/de-mol/de-mol-s11/de-mol-s11-aflevering-1#autoplay',
'info_dict': {
- 'id': '03eb8f2f-153e-41cb-9805-0d3a29dab656',
+ 'id': 'ecb79672-92b9-4cd9-a0d7-e2f0250681ee',
'ext': 'mp4',
'title': 'S11 - Aflevering 1',
+ 'description': r're:Tien kandidaten beginnen aan hun verovering van Amerika en ontmoeten .{102}',
'episode': 'Episode 1',
'series': 'De Mol',
'season_number': 11,
'episode_number': 1,
'season': 'Season 11',
},
- 'params': {
- 'skip_download': True,
- },
+ 'params': {'skip_download': True},
'skip': 'This video is only available for registered users',
}]
@@ -69,27 +76,42 @@ def _real_initialize(self):
if not self._id_token:
raise self.raise_login_required(method='password')
+ def _find_json(self, s):
+ return self._search_json(
+ r'\w+\s*:\s*', s, 'next js data', None, contains_pattern=r'\[(?s:.+)\]', default=None)
+
def _real_extract(self, url):
- url, display_id = self._match_valid_url(url).group(0, 'display_id')
+ display_id = self._match_id(url)
webpage = self._download_webpage(url, display_id)
- video_data_json = self._html_search_regex(r'<div\s+data-hero="([^"]+)"', webpage, 'video_data')
- video_data = self._parse_json(unescapeHTML(video_data_json), display_id).get('data')
-
- movie = video_data.get('movie')
- if movie:
- video_id = movie['videoUuid']
- info_dict = {
- 'title': movie.get('title'),
- }
- else:
- episode = traverse_obj(video_data, ('playlists', ..., 'episodes', lambda _, v: v['pageInfo']['url'] == url), get_all=False)
- video_id = episode['videoUuid']
- info_dict = {
- 'title': episode.get('episodeTitle'),
- 'series': traverse_obj(episode, ('program', 'title')),
- 'season_number': episode.get('seasonNumber'),
- 'episode_number': episode.get('episodeNumber'),
- }
+
+ nextjs_data = traverse_obj(
+ re.findall(r'<script[^>]*>\s*self\.__next_f\.push\(\s*(\[.+?\])\s*\);?\s*</script>', webpage),
+ (..., {js_to_json}, {json.loads}, ..., {self._find_json}, ...))
+ meta = traverse_obj(nextjs_data, (
+ ..., lambda _, v: v['meta']['path'] == urllib.parse.urlparse(url).path, 'meta', any))
+
+ video_id = meta['uuid']
+ info_dict = traverse_obj(meta, {
+ 'title': ('title', {str}),
+ 'description': ('description', {str.strip}),
+ })
+
+ if traverse_obj(meta, ('program', 'subtype')) != 'movie':
+ for season_data in traverse_obj(nextjs_data, (..., 'children', ..., 'playlists', ...)):
+ episode_data = traverse_obj(
+ season_data, ('videos', lambda _, v: v['videoId'] == video_id, any))
+ if not episode_data:
+ continue
+
+ episode_title = traverse_obj(
+ episode_data, 'contextualTitle', 'episodeTitle', expected_type=str)
+ info_dict.update({
+ 'title': episode_title or info_dict.get('title'),
+ 'series': remove_end(info_dict.get('title'), f' - {episode_title}'),
+ 'season_number': traverse_obj(season_data, ('season', {int_or_none})),
+ 'episode_number': traverse_obj(episode_data, ('episodeNumber', {int_or_none})),
+ })
+ break
api = self._download_json(
f'https://api.goplay.be/web/v1/videos/long-form/{video_id}',
| [GoPlay] ERROR: [GoPlay] Unable to extract video_data
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
Belgium
### Provide a description that is worded well enough to be understood
I cannot download a video from Goplay.be. ERROR: [GoPlay] Unable to extract video_data
Thank you in advance.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', 'https://www.goplay.be/video/nonkels/nonkels-s2/nonkels-2-s2-aflevering-4', '--username', 'PRIVATE', '--password', 'PRIVATE']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp-master-builds [41be32e78] (win_exe)
[debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1k 25 Mar 2021)
[debug] exe versions: none
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.07.04, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.35.5, urllib3-2.2.2, websockets-13.0
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1831 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-master-builds/releases/latest
Latest version: [email protected] from yt-dlp/yt-dlp-master-builds
yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp-master-builds)
[GoPlay] Logging in
[GoPlay] Authenticating username
[GoPlay] Authenticating password
[GoPlay] Extracting URL: https://www.goplay.be/video/nonkels/nonkels-s2/nonkels-2-s2-aflevering-4
[GoPlay] nonkels-2-s2-aflevering-4: Downloading webpage
ERROR: [GoPlay] Unable to extract video_data; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
File "yt_dlp\extractor\common.py", line 740, in extract
File "yt_dlp\extractor\goplay.py", line 75, in _real_extract
File "yt_dlp\extractor\common.py", line 1369, in _html_search_regex
File "yt_dlp\extractor\common.py", line 1333, in _search_regex
```
| Do **not** download the above spam links, they are malware
> Do not download these spam links, they are malware
The response was too quick, so I didn't download them. Thx :-)
Maybe some more information:
The video has a lot of advertising at the beginning and in the middle.
> The video is protected by DRM
Then it won't be downloadable anyways. Does downloading non-DRM videos from this site still work?
> > The video is protected by DRM
>
> Then it won't be downloadable anyways. Does downloading non-DRM videos from this site still work?
After some research, the video is actually NOT protected by DRM. So the error must be from something else. I tried 2 other videos and got the same error: "ERROR: [GoPlay] Unable to extract video_data"... | 1,730,837,991,000 | null | Bug Report | [
"yt_dlp/extractor/goplay.py:GoPlayIE._real_extract"
] | [
"yt_dlp/extractor/goplay.py:GoPlayIE._find_json"
] | 1 | 511 |
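For reference, the rewritten extractor in the patch above no longer reads a `data-hero` attribute; it collects the Next.js flight data pushed through `self.__next_f.push(...)`. A self-contained sketch of that first extraction step against a trimmed mock page (real pages carry many such script tags; the patch then runs each payload through `js_to_json` and `json.loads` before locating the `meta` object whose `path` matches the page URL):
```python
import re

# Trimmed mock of a GoPlay page; the payload shape is illustrative.
webpage = r'''
<script>self.__next_f.push([1, "{\"meta\": {\"uuid\": \"2baa4560\"}}"]);</script>
'''

# Same regex the patch uses to pull each pushed payload out of the HTML.
payloads = re.findall(
    r'<script[^>]*>\s*self\.__next_f\.push\(\s*(\[.+?\])\s*\);?\s*</script>',
    webpage)
print(payloads)  # one JS-array payload per tag, still JSON-escaped
```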
|
gaogaotiantian/viztracer | gaogaotiantian__viztracer-528 | 2ed22b5b16dc232f966235a6a89fa678515a50a4 | diff --git a/src/viztracer/main.py b/src/viztracer/main.py
index 7cbf972c..eb996124 100644
--- a/src/viztracer/main.py
+++ b/src/viztracer/main.py
@@ -676,7 +676,7 @@ def exit_routine(self) -> None:
self.save()
if self.options.open: # pragma: no cover
import subprocess
- subprocess.run(["vizviewer", "--once", os.path.abspath(self.ofile)])
+ subprocess.run([sys.executable, "-m", "viztracer.viewer", "--once", os.path.abspath(self.ofile)])
def main():
| Cannot import name 'viewer_main' from 'viztracer' in 1.0.0
### Phenomenon:
I've been using viztracer through the viztracer plugin in vscode, but after upgrading to 1.0.0, viztracer doesn't work.
### Error message:
```powershell
C:\ProgramData\anaconda3\python.exe -m viztracer --ignore_frozen --open --log_print --quiet -u -- c:\...\something.py
Traceback (most recent call last):
File "C:\ProgramData\anaconda3\Scripts\vizviewer-script.py", line 6, in <module>
from viztracer import viewer_main
ImportError: cannot import name 'viewer_main' from 'viztracer' (C:\Users\29267\AppData\Roaming\Python\Python311\site-packages\viztracer\__init__.py)
```
### What I tried:
1. downgraded to 0.17.1 : **works fine**
2. upgraded to 1.0.0 : **bugs still there**
| You have multiple versions of viztracer. The `vizviewer` that viztracer tried to use is a different version: `viztracer` is from conda, but it seems `vizviewer` resolved to the version from your system Python.
But this is still partially my fault: `viztracer` should always use the same version of `vizviewer`. For now you can either upgrade both versions, or not use the `--open` option. Just run vizviewer in the same environment as viztracer, which should work. | 1,733,202,811,000 | null | Bug Report | [
"src/viztracer/main.py:VizUI.exit_routine"
] | [] | 1 | 512 |
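The fix above works by launching the viewer as a module of the interpreter that is already running, rather than whatever `vizviewer` script `$PATH` happens to resolve first. A minimal demonstration of the pattern, using a stdlib module so it runs anywhere:
```python
import subprocess
import sys

# `sys.executable -m <module>` pins the child process to the parent's
# interpreter and site-packages; a console script found on $PATH can
# silently resolve to a different install.
subprocess.run([sys.executable, "-m", "json.tool"], input=b'{"ok": true}')
```
The patch applies the same pattern with `viztracer.viewer`, which pins the viewer to the exact environment (and version) that produced the trace.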
|
locustio/locust | locustio__locust-2976 | a8510a466dd358a5d2956079cf10f25dc9beb380 | diff --git a/locust/runners.py b/locust/runners.py
index 9552d519c7..a4165cfa40 100644
--- a/locust/runners.py
+++ b/locust/runners.py
@@ -1025,7 +1025,9 @@ def client_listener(self) -> NoReturn:
# if abs(time() - msg.data["time"]) > 5.0:
# warnings.warn("The worker node's clock seem to be out of sync. For the statistics to be correct the different locust servers need to have synchronized clocks.")
elif msg.type == "locustfile":
- if msg.data["version"][0:4] == __version__[0:4]:
+ if not msg.data["version"]:
+ logger.error("A very old worker version requested locustfile. This probably won't work.")
+ elif msg.data["version"][0:4] == __version__[0:4]:
logger.debug(
f"A worker ({msg.node_id}) running a different patch version ({msg.data['version']}) connected, master version is {__version__}"
)
| master crash with different version worker
### Prerequisites
- [X] I am using [the latest version of Locust](https://github.com/locustio/locust/releases/)
- [X] I am reporting a bug, not asking a question
### Description
I ran distributed Locust with the master node on version 2.32.2 and some worker nodes on version 2.25.0 (the python3.8 default version).
The master node crashed with the following message:
```
➜ load-test locust -f locust.py --master
[2024-11-09 14:41:07,519] nasa33/INFO/locust.main: Starting Locust 2.32.2
[2024-11-09 14:41:07,524] nasa33/INFO/locust.main: Starting web interface at http://0.0.0.0:8089
Traceback (most recent call last):
File "src/gevent/greenlet.py", line 906, in gevent._gevent_cgreenlet.Greenlet.run
File "/home/uniform64/.local/lib/python3.10/site-packages/locust/runners.py", line 1030, in client_listener
if msg.data["version"][0:4] == __version__[0:4]:
TypeError: 'NoneType' object is not subscriptable
2024-11-09T06:41:13Z <Greenlet at 0x7f254a980cc0: <bound method MasterRunner.client_listener of <locust.runners.MasterRunner object at 0x7f254a963100>>> failed with TypeError
[2024-11-09 14:41:13,652] nasa33/CRITICAL/locust.runners: Unhandled exception in greenlet: <Greenlet at 0x7f254a980cc0: <bound method MasterRunner.client_listener of <locust.runners.MasterRunner object at 0x7f254a963100>>>
Traceback (most recent call last):
File "src/gevent/greenlet.py", line 906, in gevent._gevent_cgreenlet.Greenlet.run
File "/home/uniform64/.local/lib/python3.10/site-packages/locust/runners.py", line 1030, in client_listener
if msg.data["version"][0:4] == __version__[0:4]:
TypeError: 'NoneType' object is not subscriptable
```
when I used the following command on the worker node:
```
~/.local/bin/locust -f - --worker --master-host 172.16.0.33 --processes -1
```
### Command line
locust -f locust.py --master
### Locustfile contents
```python3
import random
import string

from locust import HttpUser, task


def generate_random_string(length):
    return "".join(random.choices(string.ascii_lowercase + string.digits, k=length))


def generate_random_bytes(length):
    return random.randbytes(length)


class SimpleClient(HttpUser):
    @task
    def upload(self):
        # Randomly generate an index (string) and a payload (bytes).
        index = generate_random_string(random.randint(10, 20))
        data = generate_random_bytes(random.randint(100, 200))
        self.client.post("/upload", headers={"Index": index}, data=data)
```
### Python version
3.10
### Locust version
2.32.2
### Operating system
ubuntu22.04
| 1,731,139,675,000 | null | Bug Report | [
"locust/runners.py:MasterRunner.client_listener"
] | [] | 1 | 513 |
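A self-contained sketch of the guard the patch above adds: treat a missing or empty version as a very old worker instead of slicing it, since slicing `None` is exactly what crashed the master. The version strings here are illustrative:
```python
__version__ = "2.32.2"  # the master's version

def check_worker_version(version):
    if not version:
        # The unguarded code did version[0:4] and crashed on None.
        print("A very old worker version requested locustfile. "
              "This probably won't work.")
    elif version[0:4] == __version__[0:4]:
        print(f"Worker {version} only differs in patch version; compatible.")
    else:
        print(f"Worker {version} differs from master {__version__}.")

check_worker_version(None)      # logged instead of crashing
check_worker_version("2.32.0")
check_worker_version("2.25.0")
```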
||
ranaroussi/yfinance | ranaroussi__yfinance-2173 | 3ac85397cbaee4b28baea8e900e1de6e7b2fbe52 | diff --git a/yfinance/base.py b/yfinance/base.py
index 81733ba9..c3150759 100644
--- a/yfinance/base.py
+++ b/yfinance/base.py
@@ -30,7 +30,7 @@
import pandas as pd
import requests
-from . import utils, cache, Search
+from . import utils, cache
from .data import YfData
from .exceptions import YFEarningsDateMissing
from .scrapers.analysis import Analysis
@@ -534,19 +534,45 @@ def get_isin(self, proxy=None) -> Optional[str]:
self._isin = data.split(search_str)[1].split('"')[0].split('|')[0]
return self._isin
- def get_news(self, proxy=None) -> list:
+ def get_news(self, count=10, tab="news", proxy=None) -> list:
+ """Allowed options for tab: "news", "all", "press releases"""
if self._news:
return self._news
- search = Search(
- query=self.ticker,
- news_count=10,
- session=self.session,
- proxy=proxy,
- raise_errors=True
- )
- self._news = search.news
+ logger = utils.get_yf_logger()
+
+ tab_queryrefs = {
+ "all": "newsAll",
+ "news": "latestNews",
+ "press releases": "pressRelease",
+ }
+
+ query_ref = tab_queryrefs.get(tab.lower())
+ if not query_ref:
+ raise ValueError(f"Invalid tab name '{tab}'. Choose from: {', '.join(tab_queryrefs.keys())}")
+
+ url = f"{_ROOT_URL_}/xhr/ncp?queryRef={query_ref}&serviceKey=ncp_fin"
+ payload = {
+ "serviceConfig": {
+ "snippetCount": count,
+ "s": [self.ticker]
+ }
+ }
+
+ data = self._data.post(url, body=payload, proxy=proxy)
+ if data is None or "Will be right back" in data.text:
+ raise RuntimeError("*** YAHOO! FINANCE IS CURRENTLY DOWN! ***\n"
+ "Our engineers are working quickly to resolve "
+ "the issue. Thank you for your patience.")
+ try:
+ data = data.json()
+ except _json.JSONDecodeError:
+ logger.error(f"{self.ticker}: Failed to retrieve the news and received faulty response instead.")
+ data = {}
+
+ news = data.get("data", {}).get("tickerStream", {}).get("stream", [])
+ self._news = [article for article in news if not article.get('ad', [])]
return self._news
@utils.log_indent_decorator
| Any way to get more news?
`ticker.news` seems to return 8 to 10 news articles.
However, Yahoo Finance can offer many more than 8 to 10 news articles per ticker: https://finance.yahoo.com/quote/MSFT/news/ (keep scrolling down).
Is there a way to get more than 8 to 10 news articles with yfinance?
| Someone began working on a solution but abandoned it: #1949 | 1,733,699,514,000 | null | Feature Request | [
"yfinance/base.py:TickerBase.get_news"
] | [] | 1 | 514 |
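For reference, a sketch of the request the reworked `get_news` builds: the tab name maps to Yahoo's `queryRef`, and the requested count and ticker go in the POST body. Endpoint and field names follow the patch (`_ROOT_URL_` is yfinance's finance.yahoo.com constant); no network call is made here:
```python
import json

ROOT_URL = "https://finance.yahoo.com"
TAB_QUERYREFS = {
    "all": "newsAll",
    "news": "latestNews",
    "press releases": "pressRelease",
}

def build_news_request(ticker, count=10, tab="news"):
    query_ref = TAB_QUERYREFS.get(tab.lower())
    if not query_ref:
        raise ValueError(
            f"Invalid tab name {tab!r}. Choose from: {', '.join(TAB_QUERYREFS)}")
    url = f"{ROOT_URL}/xhr/ncp?queryRef={query_ref}&serviceKey=ncp_fin"
    payload = {"serviceConfig": {"snippetCount": count, "s": [ticker]}}
    return url, payload

url, payload = build_news_request("MSFT", count=50, tab="all")
print(url)
print(json.dumps(payload))
```
The extractor then reads `data.tickerStream.stream` from the JSON response and filters out entries flagged as ads.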
|
ranaroussi/yfinance | ranaroussi__yfinance-2122 | f05f99c2b8101576911b35cbd3129afb04fb140d | diff --git a/yfinance/utils.py b/yfinance/utils.py
index 0968f9d1..ebc8b99a 100644
--- a/yfinance/utils.py
+++ b/yfinance/utils.py
@@ -613,7 +613,7 @@ def fix_Yahoo_returning_live_separate(quotes, interval, tz_exchange, repair=Fals
# - exception is volume, *slightly* greater on final row (and matches website)
if dt1.date() == dt2.date():
# Last two rows are on same day. Drop second-to-last row
- quotes = quotes.drop(quotes.index[n - 2])
+ quotes = _pd.concat([quotes.iloc[:-2], quotes.iloc[-1:]])
else:
if interval == "1wk":
last_rows_same_interval = dt1.year == dt2.year and dt1.week == dt2.week
| 0.2.42 and onwards fails to pull most recent trading days data for ASX stocks
### Describe bug
Pulling stock data using versions 0.2.42 and onwards fails to pull the last trading day's data for ASX stocks. This could be related to timezones, but the issue doesn't exist in 0.2.41.
### Simple code that reproduces your problem
`stock_data_daily = yf.download('CSL.AX', period='1y', interval='1d')`
### Debug log
DEBUG Entering download()
DEBUG:yfinance:Entering download()
DEBUG Disabling multithreading because DEBUG logging enabled
DEBUG:yfinance: Disabling multithreading because DEBUG logging enabled
DEBUG Entering history()
DEBUG:yfinance: Entering history()
DEBUG Entering history()
DEBUG:yfinance: Entering history()
DEBUG CSL.AX: Yahoo GET parameters: {'range': '1y', 'interval': '1d', 'includePrePost': False, 'events': 'div,splits,capitalGains'}
DEBUG:yfinance: CSL.AX: Yahoo GET parameters: {'range': '1y', 'interval': '1d', 'includePrePost': False, 'events': 'div,splits,capitalGains'}
DEBUG Entering get()
DEBUG:yfinance: Entering get()
DEBUG url=https://query2.finance.yahoo.com/v8/finance/chart/CSL.AX
DEBUG:yfinance: url=https://query2.finance.yahoo.com/v8/finance/chart/CSL.AX
DEBUG params={'range': '1y', 'interval': '1d', 'includePrePost': False, 'events': 'div,splits,capitalGains'}
DEBUG:yfinance: params={'range': '1y', 'interval': '1d', 'includePrePost': False, 'events': 'div,splits,capitalGains'}
DEBUG Entering _get_cookie_and_crumb()
DEBUG:yfinance: Entering _get_cookie_and_crumb()
DEBUG cookie_mode = 'basic'
DEBUG:yfinance: cookie_mode = 'basic'
DEBUG Entering _get_cookie_and_crumb_basic()
DEBUG:yfinance: Entering _get_cookie_and_crumb_basic()
DEBUG reusing cookie
DEBUG:yfinance: reusing cookie
DEBUG reusing crumb
DEBUG:yfinance: reusing crumb
DEBUG Exiting _get_cookie_and_crumb_basic()
DEBUG:yfinance: Exiting _get_cookie_and_crumb_basic()
DEBUG Exiting _get_cookie_and_crumb()
DEBUG:yfinance: Exiting _get_cookie_and_crumb()
DEBUG response code=200
DEBUG:yfinance: response code=200
DEBUG Exiting get()
DEBUG:yfinance: Exiting get()
DEBUG CSL.AX: yfinance received OHLC data: 2023-11-07 23:00:00 -> 2024-11-08 05:10:12
DEBUG:yfinance: CSL.AX: yfinance received OHLC data: 2023-11-07 23:00:00 -> 2024-11-08 05:10:12
DEBUG CSL.AX: OHLC after cleaning: 2023-11-08 10:00:00+11:00 -> 2024-11-08 16:10:12+11:00
DEBUG:yfinance: CSL.AX: OHLC after cleaning: 2023-11-08 10:00:00+11:00 -> 2024-11-08 16:10:12+11:00
DEBUG CSL.AX: OHLC after combining events: 2023-11-08 00:00:00+11:00 -> 2024-11-08 00:00:00+11:00
DEBUG:yfinance: CSL.AX: OHLC after combining events: 2023-11-08 00:00:00+11:00 -> 2024-11-08 00:00:00+11:00
DEBUG CSL.AX: yfinance returning OHLC: 2023-11-08 00:00:00+11:00 -> 2024-11-07 00:00:00+11:00
DEBUG:yfinance: CSL.AX: yfinance returning OHLC: 2023-11-08 00:00:00+11:00 -> 2024-11-07 00:00:00+11:00
DEBUG Exiting history()
DEBUG:yfinance: Exiting history()
DEBUG Exiting history()
DEBUG:yfinance: Exiting history()
DEBUG Exiting download()
DEBUG:yfinance:Exiting download()
### Bad data proof
_No response_
### `yfinance` version
>= 0.2.42
### Python version
_No response_
### Operating system
_No response_
| 1,731,237,392,000 | null | Bug Report | [
"yfinance/utils.py:fix_Yahoo_returning_live_separate"
] | [] | 1 | 515 |
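A self-contained illustration of why that one-line change matters: `DataFrame.drop` is label-based and removes every row carrying the given timestamp, so when the normalized daily index holds duplicate dates (as with Yahoo's separately returned "live" row), both copies of the final day vanish, which is consistent with the debug log above ending at 2024-11-07. Positional slicing removes exactly one row. Prices are made up:
```python
import pandas as pd

# Two rows sharing the same normalized daily timestamp.
idx = pd.to_datetime(["2024-11-06", "2024-11-07", "2024-11-08", "2024-11-08"])
quotes = pd.DataFrame({"Close": [1.0, 2.0, 3.0, 3.1]}, index=idx)
n = len(quotes)

# Label-based drop removes *every* row stamped 2024-11-08.
print(len(quotes.drop(quotes.index[n - 2])))                 # 2
# Positional slicing removes only the second-to-last row.
print(len(pd.concat([quotes.iloc[:-2], quotes.iloc[-1:]])))  # 3
```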
||
scipy/scipy | scipy__scipy-22106 | 15d6284e5a0f3333394ca4498eb56bce14a6245b | diff --git a/scipy/sparse/_construct.py b/scipy/sparse/_construct.py
index 0326c9963f0b..f483976badb7 100644
--- a/scipy/sparse/_construct.py
+++ b/scipy/sparse/_construct.py
@@ -349,7 +349,7 @@ def eye_array(m, n=None, *, k=0, dtype=float, format=None):
Parameters
----------
- m : int or tuple of ints
+ m : int
Number of rows requested.
n : int, optional
Number of columns. Default: `m`.
| DOC: sparse: `sparse.eye_array` does not accept `tuple[int, int]` as the docs say that it should
### Describe your issue.
`scipy.sparse.eye_array` does not accept `m: tuple[int, int]` as the docs suggest is should:
https://github.com/scipy/scipy/blob/964f0bb6701dc17b51b842382ced0fa2ee318377/scipy/sparse/_construct.py#L350-L353
This is the case with at least `1.14.1` and `1.15.0rc1`
### Reproducing Code Example
```python
from scipy.sparse import eye_array
eye_array((1, 1))
```
### Error message
```shell
Traceback (most recent call last):
File "<python-input-2>", line 1, in <module>
eye_array((1, 1))
~~~~~~~~~^^^^^^^^
File "/home/joren/.pyenv/versions/3.13.1/lib/python3.13/site-packages/scipy/sparse/_construct.py", line 377, in eye_array
return _eye(m, n, k, dtype, format)
File "/home/joren/.pyenv/versions/3.13.1/lib/python3.13/site-packages/scipy/sparse/_construct.py", line 394, in _eye
m, n = int(m), int(n)
~~~^^^
TypeError: int() argument must be a string, a bytes-like object or a real number, not 'tuple'
```
### SciPy/NumPy/Python version and system information
```shell
1.15.0rc1 2.2.0 sys.version_info(major=3, minor=13, micro=1, releaselevel='final', serial=0)
Build Dependencies:
blas:
detection method: pkgconfig
found: true
include directory: /opt/_internal/cpython-3.13.0/lib/python3.13/site-packages/scipy_openblas32/include
lib directory: /opt/_internal/cpython-3.13.0/lib/python3.13/site-packages/scipy_openblas32/lib
name: scipy-openblas
openblas configuration: OpenBLAS 0.3.28 DYNAMIC_ARCH NO_AFFINITY Haswell MAX_THREADS=64
pc file directory: /project
version: 0.3.28
lapack:
detection method: pkgconfig
found: true
include directory: /opt/_internal/cpython-3.13.0/lib/python3.13/site-packages/scipy_openblas32/include
lib directory: /opt/_internal/cpython-3.13.0/lib/python3.13/site-packages/scipy_openblas32/lib
name: scipy-openblas
openblas configuration: OpenBLAS 0.3.28 DYNAMIC_ARCH NO_AFFINITY Haswell MAX_THREADS=64
pc file directory: /project
version: 0.3.28
pybind11:
detection method: config-tool
include directory: unknown
name: pybind11
version: 2.13.6
Compilers:
c:
commands: cc
linker: ld.bfd
name: gcc
version: 10.2.1
c++:
commands: c++
linker: ld.bfd
name: gcc
version: 10.2.1
cython:
commands: cython
linker: cython
name: cython
version: 3.0.11
fortran:
commands: gfortran
linker: ld.bfd
name: gcc
version: 10.2.1
pythran:
include directory: ../../tmp/pip-build-env-fa6gfmf0/overlay/lib/python3.13/site-packages/pythran
version: 0.17.0
Machine Information:
build:
cpu: x86_64
endian: little
family: x86_64
system: linux
cross-compiled: false
host:
cpu: x86_64
endian: little
family: x86_64
system: linux
Python Information:
path: /opt/python/cp313-cp313/bin/python
version: '3.13'
```
| Thank you for pointing this out!!
We should be using the [array_api specification](https://data-apis.org/array-api/latest/API_specification) for the [`eye` function](https://data-apis.org/array-api/latest/API_specification/generated/array_api.eye.html). That should also align us with the numpy interface. The function name is not `eye`, but we intend to rename it after the spmatrix functions are deprecated and removed.
Luckily it looks like the code does the right thing. The docs are not what we want. So we need to change the docs by removing ` or tuple of ints`.
And it would be good to backport the change to the maintenance branch (I believe it is far too late to update the 1.14.1 docs).
> Thank you for pointing this out!! We should be using the [array_api specification](https://data-apis.org/array-api/latest/API_specification) for the [`eye` function](https://data-apis.org/array-api/latest/API_specification/generated/array_api.eye.html). That should also align us with the numpy interface. The function name is not `eye`, but we intend to rename it after the spmatrix functions are deprecated and removed.
>
> Luckily it looks like the code does the right thing. The docs are not what we want. So we need to change the docs by removing ` or tuple of ints`.
>
> And it would be good to backport the change to the maintenance branch (I believe it is far too late to update the 1.14.1 docs).
I like that; it's easier to annotate that way :) | 1,734,439,741,000 | null | Bug Report | [
"scipy/sparse/_construct.py:eye_array"
] | [] | 1 | 516 |
|
scipy/scipy | scipy__scipy-22103 | caa7e2ab245a808a1c55a20fb5d5b49daf8bad93 | diff --git a/scipy/stats/_stats_py.py b/scipy/stats/_stats_py.py
index de7be104289b..71ae19acabc2 100644
--- a/scipy/stats/_stats_py.py
+++ b/scipy/stats/_stats_py.py
@@ -4298,7 +4298,7 @@ def pearsonr(x, y, *, alternative='two-sided', method=None, axis=0):
Axis along which to perform the calculation. Default is 0.
If None, ravel both arrays before performing the calculation.
- .. versionadded:: 1.13.0
+ .. versionadded:: 1.14.0
alternative : {'two-sided', 'greater', 'less'}, optional
Defines the alternative hypothesis. Default is 'two-sided'.
The following options are available:
| DOC: stats.pearsonr: incorrect `versionadded` for `axis` param
### Issue with current documentation:
Regarding the documentation of the function scipy.stats.pearsonr: there is a typo in the version reference. The axis option is not in v1.13.0; it first appeared in v1.14.0.
### Idea or request for content:
Correct the version reference in the docstring. 1.13.0 --> 1.14.0
### Additional context (e.g. screenshots, GIFs)
```
def pearsonr(x, y, *, alternative='two-sided', method=None, axis=0):
r"""
Pearson correlation coefficient and p-value for testing non-correlation.
The Pearson correlation coefficient [1]_ measures the linear relationship
between two datasets. Like other correlation
coefficients, this one varies between -1 and +1 with 0 implying no
correlation. Correlations of -1 or +1 imply an exact linear relationship.
Positive correlations imply that as x increases, so does y. Negative
correlations imply that as x increases, y decreases.
This function also performs a test of the null hypothesis that the
distributions underlying the samples are uncorrelated and normally
distributed. (See Kowalski [3]_
for a discussion of the effects of non-normality of the input on the
distribution of the correlation coefficient.)
The p-value roughly indicates the probability of an uncorrelated system
producing datasets that have a Pearson correlation at least as extreme
as the one computed from these datasets.
Parameters
----------
x : array_like
Input array.
y : array_like
Input array.
axis : int or None, default
Axis along which to perform the calculation. Default is 0.
If None, ravel both arrays before performing the calculation.
.. versionadded:: 1.13.0
```
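A short sketch of the feature whose version is in question (this assumes SciPy >= 1.14, where vectorized input for `pearsonr` was added):
```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.standard_normal((3, 10))
y = rng.standard_normal((3, 10))

# One correlation per row; this axis argument is what the versionadded note documents
res = stats.pearsonr(x, y, axis=-1)
print(res.statistic.shape)  # (3,)
```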
| Thanks @biopzhang, agreed that this is a typo. Would you like to submit a PR to fix this? | 1,734,406,832,000 | null | Bug Report | [
"scipy/stats/_stats_py.py:pearsonr"
] | [] | 1 | 517 |
|
scipy/scipy | scipy__scipy-22052 | 7f03fbaf30c400ff4bb14020f7f284ec2703c4d1 | diff --git a/scipy/sparse/linalg/_dsolve/linsolve.py b/scipy/sparse/linalg/_dsolve/linsolve.py
index d1ab77883163..560cb75bbf99 100644
--- a/scipy/sparse/linalg/_dsolve/linsolve.py
+++ b/scipy/sparse/linalg/_dsolve/linsolve.py
@@ -371,6 +371,10 @@ def splu(A, permc_spec=None, diag_pivot_thresh=None,
Notes
-----
+ When a real array is factorized and the returned SuperLU object's ``solve()`` method
+ is used with complex arguments an error is generated. Instead, cast the initial
+ array to complex and then factorize.
+
This function uses the SuperLU library.
References
@@ -468,6 +472,10 @@ def spilu(A, drop_tol=None, fill_factor=None, drop_rule=None, permc_spec=None,
Notes
-----
+ When a real array is factorized and the returned SuperLU object's ``solve()`` method
+ is used with complex arguments an error is generated. Instead, cast the initial
+ array to complex and then factorize.
+
To improve the better approximation to the inverse, you may need to
increase `fill_factor` AND decrease `drop_tol`.
| sparse LU decomposition does not solve with complex right-hand side
The `solve` method of the sparse LU-decomposition `splu` or `spilu` throws a `TypeError` if called with a `numpy.array` of type `numpy.complex`. I am actually using `spilu` for preconditioning a gmres-solver required to perform a linear solve in a non-symmetric generalized eigenvalue problem. The eigenvectors are complex and hence the right-hand side for the linear solve can be complex.
### Reproducing code example:
```
import numpy as np
from scipy.sparse import csr_matrix
import scipy.sparse.linalg as sp_sparse_la
A = csr_matrix([[2.,-1.],[-1.,2.]])
n = A.shape[0]
v_real = np.random.randn(n)
v_cmplx = np.random.randn(n) + 1.0J * np.random.randn(n)
luA = sp_sparse_la.splu(A)
x_real = luA.solve(v_real)
x_cmplx = luA.solve(v_cmplx)
```
### Error message:
```
Traceback (most recent call last):
File "dump.py", line 20, in <module>
x_cmplx = luA.solve(v_cmplx)
TypeError: Cannot cast array data from dtype('complex128') to dtype('float64') according to the rule 'safe'
```
### Scipy/Numpy/Python version information:
```
('1.0.0', '1.13.3', sys.version_info(major=2, minor=7, micro=12, releaselevel='final', serial=0))
```
| if you cast your A matrix as complex, then it works in both cases. So probably when the LHS is real it selects a real-typed solver and complains.
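A minimal sketch of that workaround (cast before factorizing):
```python
import numpy as np
from scipy.sparse import csr_matrix
import scipy.sparse.linalg as sp_sparse_la

A = csr_matrix([[2., -1.], [-1., 2.]], dtype=np.complex128)  # factorize as complex
luA = sp_sparse_la.splu(A)
b = np.random.randn(2) + 1.0j * np.random.randn(2)
x = luA.solve(b)  # now succeeds for a complex right-hand side
```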
Thank you, you are right. Maybe some comments regarding this issue should be added in the documentation.
Good first issue, depending on familiarity with the math.
Hi I'm working on it I'll try to do it by the end of next week
A note such as the following in the docstring of `splu` and `spilu` would close this issue
````When a real array is factorized and the returned SuperLU object's ``solve()`` method is used with complex arguments, an error is generated. Instead, cast the initial matrix to complex and then factorize.````
Hi @j-bowhay, thanks for the comment, I am a first-time contributor to scipy, I would like to start from this issue
We don't assign issues to specific people but please feel free to have a go | 1,733,917,709,000 | null | Bug Report | [
"scipy/sparse/linalg/_dsolve/linsolve.py:splu",
"scipy/sparse/linalg/_dsolve/linsolve.py:spilu"
] | [] | 2 | 518 |
|
DS4SD/docling | DS4SD__docling-528 | c830b92b2e043ea63d216f65b3f9d88d2a8c33f7 | diff --git a/docling/backend/msword_backend.py b/docling/backend/msword_backend.py
index 05508712..bab956a7 100644
--- a/docling/backend/msword_backend.py
+++ b/docling/backend/msword_backend.py
@@ -133,7 +133,6 @@ def get_level(self) -> int:
def walk_linear(self, body, docx_obj, doc) -> DoclingDocument:
for element in body:
tag_name = etree.QName(element).localname
-
# Check for Inline Images (blip elements)
namespaces = {
"a": "http://schemas.openxmlformats.org/drawingml/2006/main",
@@ -153,6 +152,7 @@ def walk_linear(self, body, docx_obj, doc) -> DoclingDocument:
self.handle_pictures(element, docx_obj, drawing_blip, doc)
# Check for Text
elif tag_name in ["p"]:
+ # "tcPr", "sectPr"
self.handle_text_elements(element, docx_obj, doc)
else:
_log.debug(f"Ignoring element in DOCX with tag: {tag_name}")
@@ -219,7 +219,6 @@ def handle_text_elements(self, element, docx_obj, doc):
if paragraph.text is None:
return
text = paragraph.text.strip()
- # if len(text)==0 # keep empty paragraphs, they seperate adjacent lists!
# Common styles for bullet and numbered lists.
# "List Bullet", "List Number", "List Paragraph"
@@ -291,9 +290,7 @@ def handle_text_elements(self, element, docx_obj, doc):
def add_header(self, element, docx_obj, doc, curr_name, curr_level, text: str):
level = self.get_level()
if isinstance(curr_level, int):
-
if curr_level > level:
-
# add invisible group
for i in range(level, curr_level):
self.parents[i] = doc.add_group(
@@ -301,9 +298,7 @@ def add_header(self, element, docx_obj, doc, curr_name, curr_level, text: str):
label=GroupLabel.SECTION,
name=f"header-{i}",
)
-
elif curr_level < level:
-
# remove the tail
for key, val in self.parents.items():
if key >= curr_level:
@@ -314,7 +309,6 @@ def add_header(self, element, docx_obj, doc, curr_name, curr_level, text: str):
text=text,
level=curr_level,
)
-
else:
self.parents[self.level] = doc.add_heading(
parent=self.parents[self.level - 1],
@@ -346,7 +340,7 @@ def add_listitem(
label=GroupLabel.LIST, name="list", parent=self.parents[level - 1]
)
- # TODO: Set marker and enumerated arguments if this is an enumeration element.
+ # Set marker and enumerated arguments if this is an enumeration element.
self.listIter += 1
if is_numbered:
enum_marker = str(self.listIter) + "."
@@ -365,8 +359,8 @@ def add_listitem(
self.level_at_new_list + self.prev_indent() + 1,
self.level_at_new_list + ilevel + 1,
):
- # TODO: determine if this is an unordered list or an ordered list.
- # Set GroupLabel.ORDERED_LIST when it fits.
+ # Determine if this is an unordered list or an ordered list.
+ # Set GroupLabel.ORDERED_LIST when it fits.
self.listIter = 0
if is_numbered:
self.parents[i] = doc.add_group(
@@ -467,6 +461,19 @@ def get_rowspan(cell):
row_span = get_rowspan(cell)
col_span = get_colspan(cell)
+ cell_text = cell.text
+ # In case cell doesn't return text via docx library:
+ if len(cell_text) == 0:
+ cell_xml = cell._element
+
+ texts = [""]
+ for elem in cell_xml.iter():
+ if elem.tag.endswith("t"): # <w:t> tags that contain text
+ if elem.text:
+ texts.append(elem.text)
+ # Join the collected text
+ cell_text = " ".join(texts).strip()
+
# Find the next available column in the grid
while table_grid[row_idx][col_idx] is not None:
col_idx += 1
@@ -477,15 +484,15 @@ def get_rowspan(cell):
table_grid[row_idx + i][col_idx + j] = ""
cell = TableCell(
- text=cell.text,
+ text=cell_text,
row_span=row_span,
col_span=col_span,
start_row_offset_idx=row_idx,
end_row_offset_idx=row_idx + row_span,
start_col_offset_idx=col_idx,
end_col_offset_idx=col_idx + col_span,
- col_header=False, # col_header,
- row_header=False, # ((not col_header) and html_cell.name=='th')
+ col_header=False,
+ row_header=False,
)
data.table_cells.append(cell)
| What is the meaning of `missing-text`?
### Question
When exporting docx documents as text, I always seem to get some `missing-text` in the output. I was not able to find this string in the project repository, `python-docx`, or documentation.
Snippet:
```py
doc_converter = DocumentConverter(allowed_formats=[InputFormat.DOCX])
conv_res = doc_converter.convert(input_doc_path)
print(conv_res.document.export_to_text())
```
Output:
```py
<missing-text>
<missing-text>
<missing-text>
<missing-text>
<missing-text>
<missing-text>
<missing-text>
<missing-text>
<missing-text>
<missing-text>
<missing-text>
<missing-text>
<missing-text>
<missing-text>
<missing-text>
```
Documents:
- Complete failure, all text is "missing-text": [doc.docx](https://github.com/user-attachments/files/17983955/doc.docx)
- Partial failure, only some of the text is "missing-text": [doc2.docx](https://github.com/user-attachments/files/17983962/doc2.docx)
Both documents are public.
What causes `missing-text`? What should be my mental model for it when processing documents?
Thanks!
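A diagnostic sketch for narrowing this down (assumes the docling-core document API, where `iterate_items` yields each item with its level):
```python
from docling.document_converter import DocumentConverter

doc = DocumentConverter().convert("doc.docx").document
for item, _level in doc.iterate_items():
    text = getattr(item, "text", None)
    if text is not None and not text.strip():
        # Items with empty text are candidates for what the exporter shows as <missing-text>
        print(type(item).__name__, item.self_ref)
```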
| @Belval, thanks for sharing with sample documents, I will check this! | 1,733,475,107,000 | null | Bug Report | [
"docling/backend/msword_backend.py:MsWordDocumentBackend.handle_tables"
] | [] | 1 | 519 |
|
DS4SD/docling | DS4SD__docling-472 | cc46c938b66b2d24f601acc9646782dc83326e1f | diff --git a/docling/models/tesseract_ocr_cli_model.py b/docling/models/tesseract_ocr_cli_model.py
index 9a50eee0..a6b2f7fb 100644
--- a/docling/models/tesseract_ocr_cli_model.py
+++ b/docling/models/tesseract_ocr_cli_model.py
@@ -1,3 +1,4 @@
+import csv
import io
import logging
import tempfile
@@ -95,7 +96,7 @@ def _run_tesseract(self, ifilename: str):
# _log.info(decoded_data)
# Read the TSV file generated by Tesseract
- df = pd.read_csv(io.StringIO(decoded_data), sep="\t")
+ df = pd.read_csv(io.StringIO(decoded_data), quoting=csv.QUOTE_NONE, sep="\t")
# Display the dataframe (optional)
# _log.info("df: ", df.head())
| pandas.errors.ParserError: Error tokenizing data. C error: EOF inside string starting at row 656
### Bug
Trying to convert a particular PDF I get the following error; the same options work on other PDFs.
**Seems related to `pandas.read_csv()` on the TSV output of Tesseract.**
```
Encountered an error during conversion of document b137be2685712845d8afee55fe6327d2901290f9a852a25b3f7b19010df64e10:
Traceback (most recent call last):
File ".../docling/pipeline/base_pipeline.py", line 149, in _build_document
for p in pipeline_pages: # Must exhaust!
^^^^^^^^^^^^^^
File ".../docling/pipeline/base_pipeline.py", line 116, in _apply_on_pages
yield from page_batch
File ".../docling/models/page_assemble_model.py", line 59, in __call__
for page in page_batch:
^^^^^^^^^^
File ".../docling/models/table_structure_model.py", line 93, in __call__
for page in page_batch:
^^^^^^^^^^
File ".../docling/models/layout_model.py", line 281, in __call__
for page in page_batch:
^^^^^^^^^^
File ".../docling/models/tesseract_ocr_cli_model.py", line 140, in __call__
df = self._run_tesseract(fname)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../docling/models/tesseract_ocr_cli_model.py", line 98, in _run_tesseract
df = pd.read_csv(io.StringIO(decoded_data), sep="\t")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../pandas/io/parsers/readers.py", line 1026, in read_csv
return _read(filepath_or_buffer, kwds)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../pandas/io/parsers/readers.py", line 626, in _read
return parser.read(nrows)
^^^^^^^^^^^^^^^^^^
File ".../pandas/io/parsers/readers.py", line 1923, in read
) = self._engine.read( # type: ignore[attr-defined]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../pandas/io/parsers/c_parser_wrapper.py", line 234, in read
chunks = self._reader.read_low_memory(nrows)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "parsers.pyx", line 838, in pandas._libs.parsers.TextReader.read_low_memory
File "parsers.pyx", line 905, in pandas._libs.parsers.TextReader._read_rows
File "parsers.pyx", line 874, in pandas._libs.parsers.TextReader._tokenize_rows
File "parsers.pyx", line 891, in pandas._libs.parsers.TextReader._check_tokenize_status
File "parsers.pyx", line 2061, in pandas._libs.parsers.raise_parser_error
pandas.errors.ParserError: Error tokenizing data. C error: EOF inside string starting at row 656
```
### Steps to reproduce
```
ocr_options = TesseractCliOcrOptions(force_full_page_ocr=True)
pipeline_options = PdfPipelineOptions()
pipeline_options.do_ocr = True
pipeline_options.do_table_structure = True
pipeline_options.table_structure_options.do_cell_matching = True
pipeline_options.ocr_options = ocr_options
converter = DocumentConverter(
format_options={
InputFormat.PDF: PdfFormatOption(
pipeline_options=pipeline_options,
)
}
)
conv_res = converter.convert(Path(my_pdf_path))
```
### Docling version
```
Docling version: 2.5.2
Docling Core version: 2.4.0
Docling IBM Models version: 2.0.3
Docling Parse version: 2.0.4
```
### Python version
`Python 3.12.7`
| 1,732,897,993,000 | null | Bug Report | [
"docling/models/tesseract_ocr_cli_model.py:TesseractOcrCliModel._run_tesseract"
] | [] | 1 | 520 |
||
DS4SD/docling | DS4SD__docling-442 | 6666d9ec070650df35a8b156643a78c32dcfefb5 | diff --git a/docling/backend/msword_backend.py b/docling/backend/msword_backend.py
index 496bdb7b..05508712 100644
--- a/docling/backend/msword_backend.py
+++ b/docling/backend/msword_backend.py
@@ -507,18 +507,19 @@ def get_docx_image(element, drawing_blip):
image_data = get_docx_image(element, drawing_blip)
image_bytes = BytesIO(image_data)
+ level = self.get_level()
# Open the BytesIO object with PIL to create an Image
try:
pil_image = Image.open(image_bytes)
doc.add_picture(
- parent=self.parents[self.level],
+ parent=self.parents[level - 1],
image=ImageRef.from_pil(image=pil_image, dpi=72),
caption=None,
)
except (UnidentifiedImageError, OSError) as e:
_log.warning("Warning: image cannot be loaded by Pillow")
doc.add_picture(
- parent=self.parents[self.level],
+ parent=self.parents[level - 1],
caption=None,
)
return
| Image location in Word Document is wrong
### Bug
The image placeholder in parsed docx documents is misplaced. An incorrect parent index is used, resulting in a wrong location for images in downstream export formats like markdown.
### Steps to reproduce
Parsing a simple .docx with docling
[image_within_text.docx](https://github.com/user-attachments/files/17919742/image_within_text.docx)
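A quick way to observe the misplacement (a sketch; docling-core renders pictures as an `<!-- image -->` placeholder in the default markdown export):
```python
from docling.document_converter import DocumentConverter

doc = DocumentConverter().convert("image_within_text.docx").document
# The placeholder should appear between the surrounding paragraphs,
# at the in-text position of the picture
print(doc.export_to_markdown())
```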
### Docling version
Docling version: 2.7.0
Docling Core version: 2.4.1
Docling IBM Models version: 2.0.6
Docling Parse version: 2.1.2
### Python version
3.12.4
| 1,732,630,531,000 | null | Bug Report | [
"docling/backend/msword_backend.py:MsWordDocumentBackend.handle_pictures"
] | [] | 1 | 521 |
||
DS4SD/docling | DS4SD__docling-322 | 2c0c439a4417d87aa712964acadb8618ea96ee65 | diff --git a/docling/models/ds_glm_model.py b/docling/models/ds_glm_model.py
index e63bad3a..0a066bfa 100644
--- a/docling/models/ds_glm_model.py
+++ b/docling/models/ds_glm_model.py
@@ -43,7 +43,8 @@ class GlmModel:
def __init__(self, options: GlmOptions):
self.options = options
- load_pretrained_nlp_models()
+ if self.options.model_names != "":
+ load_pretrained_nlp_models()
self.model = init_nlp_model(model_names=self.options.model_names)
def _to_legacy_document(self, conv_res) -> DsDocument:
| Unable to run.
### Bug
<!-- Describe the buggy behavior you have observed. -->
PS C:\Users\genco> & C:/ProgramData/anaconda3/envs/docling/python.exe c:/Users/genco/OneDrive/Documents/marker_new/docling_convertor_testing.py
Fetching 9 files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 9/9 [00:00<?, ?it/s]
Traceback (most recent call last):
File "c:\Users\genco\OneDrive\Documents\marker_new\docling_convertor_testing.py", line 5, in <module>
result = converter.convert(source)
File "C:\ProgramData\anaconda3\envs\docling\lib\site-packages\pydantic\validate_call_decorator.py", line 60, in wrapper_function
return validate_call_wrapper(*args, **kwargs)
File "C:\ProgramData\anaconda3\envs\docling\lib\site-packages\pydantic\_internal\_validate_call.py", line 96, in __call__
res = self.__pydantic_validator__.validate_python(pydantic_core.ArgsKwargs(args, kwargs))
File "C:\ProgramData\anaconda3\envs\docling\lib\site-packages\docling\document_converter.py", line 161, in convert
return next(all_res)
File "C:\ProgramData\anaconda3\envs\docling\lib\site-packages\docling\document_converter.py", line 180, in convert_all
for conv_res in conv_res_iter:
File "C:\ProgramData\anaconda3\envs\docling\lib\site-packages\docling\document_converter.py", line 211, in _convert
for item in map(
File "C:\ProgramData\anaconda3\envs\docling\lib\site-packages\docling\document_converter.py", line 255, in _process_document
conv_res = self._execute_pipeline(in_doc, raises_on_error=raises_on_error)
File "C:\ProgramData\anaconda3\envs\docling\lib\site-packages\docling\document_converter.py", line 263, in _execute_pipeline
pipeline = self._get_pipeline(in_doc.format)
File "C:\ProgramData\anaconda3\envs\docling\lib\site-packages\docling\document_converter.py", line 244, in _get_pipeline
self.initialized_pipelines[pipeline_class] = pipeline_class(
File "C:\ProgramData\anaconda3\envs\docling\lib\site-packages\docling\pipeline\standard_pdf_pipeline.py", line 54, in __init__
self.glm_model = GlmModel(options=GlmOptions())
File "C:\ProgramData\anaconda3\envs\docling\lib\site-packages\docling\models\ds_glm_model.py", line 46, in __init__
load_pretrained_nlp_models()
File "C:\ProgramData\anaconda3\envs\docling\lib\site-packages\deepsearch_glm\utils\load_pretrained_models.py", line 120, in load_pretrained_nlp_models
done, data = download_items(downloads)
File "C:\ProgramData\anaconda3\envs\docling\lib\site-packages\deepsearch_glm\utils\load_pretrained_models.py", line 50, in download_items
with target.open("wb") as fw:
File "C:\ProgramData\anaconda3\envs\docling\lib\pathlib.py", line 1119, in open
return self._accessor.open(self, mode, buffering, encoding, errors,
PermissionError: [Errno 13] Permission denied: 'C:\\ProgramData\\anaconda3\\envs\\docling\\lib\\site-packages\\deepsearch_glm\\resources\\models\\crf\\part-of-speech\\crf_pos_model_en.bin'
```
### Steps to reproduce
run code:
```python
from docling.document_converter import DocumentConverter

source = "https://arxiv.org/pdf/2408.09869"  # PDF path or URL
converter = DocumentConverter()
result = converter.convert(source)
print(result.document.export_to_markdown())  # output: "### Docling Technical Report[...]"
```
### Docling version
latest version.
### Python version
3.10.15
| @ashunaveed Can you please tell us the exact version. There should be no need to download `crf_pos_model_en.bin`.
Please run,
```
docling --version
```
We suspect that you have by chance an older version, but we want to be 100% sure.
I'm trying to run Docling on a server without internet connection so I have downloaded the layout model and tableformer from Hugging Face and then I try to run with custom artifact path as per your documentation:
```
pipeline_options = PdfPipelineOptions(artifacts_path=artifacts_path)
doc_converter = DocumentConverter(
format_options={
InputFormat.PDF: PdfFormatOption(pipeline_options=pipeline_options)
}
)
```
But I get an error similar to the OP (though for me the problem is timeout due to connection error).
I have tried with these versions:
Docling version: 2.5.1
Docling Core version: 2.3.2
Docling IBM Models version: 2.0.3
Docling Parse version: 2.0.3
and an older version:
Docling version: 2.3.1
Docling Core version: 2.3.1
Docling IBM Models version: 2.0.3
Docling Parse version: 2.0.2
And it tries to download the glm files in both versions.
I'm mostly curious to understand if the GLM files are needed as your answer above indicates that, at least crf_pos_model_en.bin, shouldn't be needed at all. | 1,731,480,648,000 | null | Bug Report | [
"docling/models/ds_glm_model.py:GlmModel.__init__"
] | [] | 1 | 523 |
|
DS4SD/docling | DS4SD__docling-307 | 1239ade2750349d13d4e865d88449b232bbad944 | diff --git a/docling/backend/mspowerpoint_backend.py b/docling/backend/mspowerpoint_backend.py
index cbec761c..b71cd859 100644
--- a/docling/backend/mspowerpoint_backend.py
+++ b/docling/backend/mspowerpoint_backend.py
@@ -358,41 +358,36 @@ def walk_linear(self, pptx_obj, doc) -> DoclingDocument:
size = Size(width=slide_width, height=slide_height)
parent_page = doc.add_page(page_no=slide_ind + 1, size=size)
- # parent_page = doc.add_page(page_no=slide_ind, size=size, hash=hash)
-
- # Loop through each shape in the slide
- for shape in slide.shapes:
+ def handle_shapes(shape, parent_slide, slide_ind, doc):
+ handle_groups(shape, parent_slide, slide_ind, doc)
if shape.has_table:
# Handle Tables
self.handle_tables(shape, parent_slide, slide_ind, doc)
-
if shape.shape_type == MSO_SHAPE_TYPE.PICTURE:
- # Handle Tables
+ # Handle Pictures
self.handle_pictures(shape, parent_slide, slide_ind, doc)
-
# If shape doesn't have any text, move on to the next shape
if not hasattr(shape, "text"):
- continue
+ return
if shape.text is None:
- continue
+ return
if len(shape.text.strip()) == 0:
- continue
+ return
if not shape.has_text_frame:
- _log.warn("Warning: shape has text but not text_frame")
- continue
-
- # if shape.is_placeholder:
- # Handle Titles (Headers) and Subtitles
- # Check if the shape is a placeholder (titles are placeholders)
- # self.handle_title(shape, parent_slide, slide_ind, doc)
- # self.handle_text_elements(shape, parent_slide, slide_ind, doc)
- # else:
-
+ _log.warning("Warning: shape has text but not text_frame")
+ return
# Handle other text elements, including lists (bullet lists, numbered lists)
self.handle_text_elements(shape, parent_slide, slide_ind, doc)
+ return
+
+ def handle_groups(shape, parent_slide, slide_ind, doc):
+ if shape.shape_type == MSO_SHAPE_TYPE.GROUP:
+ for groupedshape in shape.shapes:
+ handle_shapes(groupedshape, parent_slide, slide_ind, doc)
- # figures...
- # doc.add_figure(data=BaseFigureData(), parent=self.parents[self.level], caption=None)
+ # Loop through each shape in the slide
+ for shape in slide.shapes:
+ handle_shapes(shape, parent_slide, slide_ind, doc)
return doc
| In a specific PowerPoint, an issue with missing text occurred during parsing.
### Bug
<!-- In a specific PowerPoint, an issue with missing text occurred during parsing. -->
...
[specific PowerPoint]
[powerpoint_sample.pptx](https://github.com/user-attachments/files/17694015/powerpoint_sample.pptx)
...
### Python version
docling 2.4.0
Python version: 3.12.7
...
| @Crespo522 I'm working on the fix, in short - we need to handle grouped elements correctly. | 1,731,333,112,000 | null | Bug Report | [
"docling/backend/mspowerpoint_backend.py:MsPowerpointDocumentBackend.walk_linear"
] | [] | 1 | 524 |
|
DS4SD/docling | DS4SD__docling-302 | 97f214efddcf66f0734a95c17c08936f6111d113 | diff --git a/docling/backend/html_backend.py b/docling/backend/html_backend.py
index 7d14c2eb..9cd1e29b 100644
--- a/docling/backend/html_backend.py
+++ b/docling/backend/html_backend.py
@@ -120,6 +120,8 @@ def analyse_element(self, element, idx, doc):
self.handle_header(element, idx, doc)
elif element.name in ["p"]:
self.handle_paragraph(element, idx, doc)
+ elif element.name in ["pre"]:
+ self.handle_code(element, idx, doc)
elif element.name in ["ul", "ol"]:
self.handle_list(element, idx, doc)
elif element.name in ["li"]:
@@ -205,6 +207,16 @@ def handle_header(self, element, idx, doc):
level=hlevel,
)
+ def handle_code(self, element, idx, doc):
+ """Handles monospace code snippets (pre)."""
+ if element.text is None:
+ return
+ text = element.text.strip()
+ label = DocItemLabel.CODE
+ if len(text) == 0:
+ return
+ doc.add_text(parent=self.parents[self.level], label=label, text=text)
+
def handle_paragraph(self, element, idx, doc):
"""Handles paragraph tags (p)."""
if element.text is None:
| Unable to extract code block in HTML page
When I try to extract the content of a webpage using `docling`, I find that it cannot extract **code blocks** from the page.
# Reproduce steps
HTML URL: https://requests.readthedocs.io/en/latest/user/quickstart/
```python
from docling.document_converter import DocumentConverter
converter = DocumentConverter()
result = converter.convert('https://requests.readthedocs.io/en/latest/user/quickstart/')
print(result.document.export_to_markdown())
```
The code blocks in the following picture cannot be extracted in the result markdown:

The result markdown of this part in the above picture is:
```markdown
## Make a Request¶
Making a request with Requests is very simple.
Begin by importing the Requests module:
Now, let’s try to get a webpage. For this example, let’s get GitHub’s public
timeline:
Now, we have a Response object called r. We can
get all the information we need from this object.
Requests’ simple API means that all forms of HTTP request are as obvious. For
example, this is how you make an HTTP POST request:
Nice, right? What about the other HTTP request types: PUT, DELETE, HEAD and
OPTIONS? These are all just as simple:
That’s all well and good, but it’s also only the start of what Requests can
do.
```
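The extraction idea behind the fix, shown standalone — `<pre>` elements carry the snippet text verbatim, so a backend can map each one to a code item (BeautifulSoup used for illustration):
```python
from bs4 import BeautifulSoup

html = """<p>Begin by importing the Requests module:</p>
<pre>import requests
r = requests.get('https://api.github.com/events')</pre>"""

soup = BeautifulSoup(html, "html.parser")
for pre in soup.find_all("pre"):
    print(pre.get_text().strip())
```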
| 1,731,328,071,000 | null | Bug Report | [
"docling/backend/html_backend.py:HTMLDocumentBackend.analyse_element"
] | [
"docling/backend/html_backend.py:HTMLDocumentBackend.handle_code"
] | 1 | 525 |
||
certbot/certbot | certbot__certbot-10043 | 0e225dcba293441e7b8d420c9a210480f8c707d8 | diff --git a/tools/finish_release.py b/tools/finish_release.py
index 958d7672bc..56b92d2a1d 100755
--- a/tools/finish_release.py
+++ b/tools/finish_release.py
@@ -111,7 +111,7 @@ def get_snap_revisions(snap, channel, version):
print('Getting revision numbers for', snap, version)
cmd = ['snapcraft', 'status', snap]
process = subprocess.run(cmd, check=True, stdout=subprocess.PIPE, universal_newlines=True)
- pattern = f'^\s+{channel}\s+{version}\s+(\d+)\s*'
+ pattern = f'^\\s+{channel}\\s+{version}\\s+(\\d+)\\s*'
revisions = re.findall(pattern, process.stdout, re.MULTILINE)
assert len(revisions) == SNAP_ARCH_COUNT, f'Unexpected number of snaps found for {channel} {snap} {version} (expected {SNAP_ARCH_COUNT}, found {len(revisions)})'
return revisions
| Fix regex in finish_release.py
```
(venv) certbot [3.0.0] » python3 tools/finish_release.py
certbot/tools/finish_release.py:114: SyntaxWarning: invalid escape sequence '\s'
pattern = f'^\s+{channel}\s+{version}\s+(\d+)\s*'
certbot/tools/finish_release.py:114: SyntaxWarning: invalid escape sequence '\s'
pattern = f'^\s+{channel}\s+{version}\s+(\d+)\s*'
certbot/tools/finish_release.py:114: SyntaxWarning: invalid escape sequence '\s'
pattern = f'^\s+{channel}\s+{version}\s+(\d+)\s*'
```
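The merged fix doubles the backslashes inside the f-string; an equivalent alternative, sketched here, is a raw f-string, which keeps the regex readable and avoids the Python 3.12 `SyntaxWarning`:
```python
import re

channel, version = "stable", "3.0.0"
pattern = rf'^\s+{channel}\s+{version}\s+(\d+)\s*'
print(re.findall(pattern, "  stable 3.0.0 1234 ", re.MULTILINE))  # ['1234']
```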
| 1,730,849,552,000 | null | Bug Report | [
"tools/finish_release.py:get_snap_revisions"
] | [] | 1 | 526 |
||
vitalik/django-ninja | vitalik__django-ninja-1349 | 97ef2914a7fffd058a311394a25af1fe489df722 | diff --git a/ninja/responses.py b/ninja/responses.py
index babd366e..6a0fd4ca 100644
--- a/ninja/responses.py
+++ b/ninja/responses.py
@@ -1,10 +1,11 @@
from enum import Enum
-from ipaddress import IPv4Address, IPv6Address
+from ipaddress import IPv4Address, IPv4Network, IPv6Address, IPv6Network
from typing import Any, FrozenSet
from django.core.serializers.json import DjangoJSONEncoder
from django.http import JsonResponse
from pydantic import BaseModel
+from pydantic_core import Url
__all__ = [
"NinjaJSONEncoder",
@@ -21,7 +22,9 @@ class NinjaJSONEncoder(DjangoJSONEncoder):
def default(self, o: Any) -> Any:
if isinstance(o, BaseModel):
return o.model_dump()
- if isinstance(o, (IPv4Address, IPv6Address)):
+ if isinstance(o, Url):
+ return str(o)
+ if isinstance(o, (IPv4Address, IPv4Network, IPv6Address, IPv6Network)):
return str(o)
if isinstance(o, Enum):
return str(o)
| [BUG] Object of type Url is not JSON serializable
**Describe the bug**
django-ninja = "^1.3.0"
Using `HttpUrl` (or, I suspect, any *Url class) for a schema used in a response results in a JSON serialization error. This is the same type of issue as #717.
```pytb
Traceback (most recent call last):
File "/home/adys/.cache/pypoetry/virtualenvs/fabrile-backend-fklT6p4J-py3.10/lib/python3.10/site-packages/ninja/operation.py", line 121, in run
return self._result_to_response(request, result, temporal_response)
File "/home/adys/.cache/pypoetry/virtualenvs/fabrile-backend-fklT6p4J-py3.10/lib/python3.10/site-packages/ninja/operation.py", line 278, in _result_to_response
return self.api.create_response(
File "/home/adys/.cache/pypoetry/virtualenvs/fabrile-backend-fklT6p4J-py3.10/lib/python3.10/site-packages/ninja/main.py", line 453, in create_response
content = self.renderer.render(request, data, response_status=status)
File "/home/adys/.cache/pypoetry/virtualenvs/fabrile-backend-fklT6p4J-py3.10/lib/python3.10/site-packages/ninja/renderers.py", line 25, in render
return json.dumps(data, cls=self.encoder_class, **self.json_dumps_params)
File "/usr/lib/python3.10/json/__init__.py", line 238, in dumps
**kw).encode(obj)
File "/usr/lib/python3.10/json/encoder.py", line 199, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/usr/lib/python3.10/json/encoder.py", line 257, in iterencode
return _iterencode(o, 0)
File "/home/adys/.cache/pypoetry/virtualenvs/fabrile-backend-fklT6p4J-py3.10/lib/python3.10/site-packages/ninja/responses.py", line 28, in default
return super().default(o)
File "/home/adys/.cache/pypoetry/virtualenvs/fabrile-backend-fklT6p4J-py3.10/lib/python3.10/site-packages/django/core/serializers/json.py", line 106, in default
return super().default(o)
File "/usr/lib/python3.10/json/encoder.py", line 179, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type Url is not JSON serializable
```
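A stand-alone sketch of the underlying behavior (pydantic v2): `model_dump()` in python mode keeps `Url` objects, which the stock JSON encoder cannot serialize.
```python
import json

from pydantic import BaseModel, HttpUrl

class Item(BaseModel):
    link: HttpUrl

data = Item(link="https://example.com").model_dump()
print(type(data["link"]))             # <class 'pydantic_core._pydantic_core.Url'>
print(json.dumps(data, default=str))  # coercing unknown types to str works around it
```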
| 1,733,135,333,000 | null | Bug Report | [
"ninja/responses.py:NinjaJSONEncoder.default"
] | [] | 1 | 527 |
||
pandas-dev/pandas | pandas-dev__pandas-60577 | b0192c70610a9db593968374ea60d189daaaccc7 | diff --git a/pandas/io/sql.py b/pandas/io/sql.py
index 3c0c5cc64c24c..5652d7fab0c7c 100644
--- a/pandas/io/sql.py
+++ b/pandas/io/sql.py
@@ -241,7 +241,7 @@ def read_sql_table( # pyright: ignore[reportOverlappingOverload]
schema=...,
index_col: str | list[str] | None = ...,
coerce_float=...,
- parse_dates: list[str] | dict[str, str] | None = ...,
+ parse_dates: list[str] | dict[str, str] | dict[str, dict[str, Any]] | None = ...,
columns: list[str] | None = ...,
chunksize: None = ...,
dtype_backend: DtypeBackend | lib.NoDefault = ...,
@@ -255,7 +255,7 @@ def read_sql_table(
schema=...,
index_col: str | list[str] | None = ...,
coerce_float=...,
- parse_dates: list[str] | dict[str, str] | None = ...,
+ parse_dates: list[str] | dict[str, str] | dict[str, dict[str, Any]] | None = ...,
columns: list[str] | None = ...,
chunksize: int = ...,
dtype_backend: DtypeBackend | lib.NoDefault = ...,
@@ -268,7 +268,7 @@ def read_sql_table(
schema: str | None = None,
index_col: str | list[str] | None = None,
coerce_float: bool = True,
- parse_dates: list[str] | dict[str, str] | None = None,
+ parse_dates: list[str] | dict[str, str] | dict[str, dict[str, Any]] | None = None,
columns: list[str] | None = None,
chunksize: int | None = None,
dtype_backend: DtypeBackend | lib.NoDefault = lib.no_default,
@@ -372,7 +372,7 @@ def read_sql_query( # pyright: ignore[reportOverlappingOverload]
index_col: str | list[str] | None = ...,
coerce_float=...,
params: list[Any] | Mapping[str, Any] | None = ...,
- parse_dates: list[str] | dict[str, str] | None = ...,
+ parse_dates: list[str] | dict[str, str] | dict[str, dict[str, Any]] | None = ...,
chunksize: None = ...,
dtype: DtypeArg | None = ...,
dtype_backend: DtypeBackend | lib.NoDefault = ...,
@@ -386,7 +386,7 @@ def read_sql_query(
index_col: str | list[str] | None = ...,
coerce_float=...,
params: list[Any] | Mapping[str, Any] | None = ...,
- parse_dates: list[str] | dict[str, str] | None = ...,
+ parse_dates: list[str] | dict[str, str] | dict[str, dict[str, Any]] | None = ...,
chunksize: int = ...,
dtype: DtypeArg | None = ...,
dtype_backend: DtypeBackend | lib.NoDefault = ...,
@@ -399,7 +399,7 @@ def read_sql_query(
index_col: str | list[str] | None = None,
coerce_float: bool = True,
params: list[Any] | Mapping[str, Any] | None = None,
- parse_dates: list[str] | dict[str, str] | None = None,
+ parse_dates: list[str] | dict[str, str] | dict[str, dict[str, Any]] | None = None,
chunksize: int | None = None,
dtype: DtypeArg | None = None,
dtype_backend: DtypeBackend | lib.NoDefault = lib.no_default,
| BUG: Type Annotation Inconsistency in read_sql_* Functions
### Pandas version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [X] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
import sqlite3
date_params = {"date_col": {"utc": True}}
with sqlite3.connect("blah") as con:
# Fails type check.
df = pd.read_sql_query("SELECT * FROM tablename", con, parse_dates=date_params)
print(df)
```
### Issue Description
The pandas type annotation for the `parse_dates` argument in `read_sql_table()` and `read_sql_query()` is overly restrictive. It incorrectly causes type checkers to complain when the `parse_dates` argument is used to pass keyword arguments to `to_datetime()`, as documented [here](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_sql_query.html).
To solve this problem, the annotated type for `parse_dates` just needs to be switched from `parse_dates: list[str] | dict[str, str] | None` to `list[str] | dict[str, str] | dict[str, dict[str, Any]] | None`.
This problem is not always visible because the corresponding `pandas-stubs` package already does this. The inconsistency appears, however, in type checkers when additional stubs are not available or configured.
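For reference, the three shapes of `parse_dates` that the pandas documentation describes (the last one is what the example above uses):
```python
parse_dates = ["date_col"]                 # list of column names
parse_dates = {"date_col": "%Y-%m-%d"}     # column -> strftime format string
parse_dates = {"date_col": {"utc": True}}  # column -> keyword args for to_datetime()
```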
To illustrate, take the provided (valid) example and run `pyright` on it (with no arguments). It will output the following.
```
(bug_venv)$ pyright example.py
/home/user/Code/pandas_bug/example.py
/home/user/Code/pandas_bug/example.py:8:10 - error: No overloads for "read_sql_query" match the provided arguments (reportCallIssue)
/home/user/Code/pandas_bug/example.py:8:72 - error: Argument of type "dict[str, dict[str, bool]]" cannot be assigned to parameter "parse_dates" of type "list[str] |dict[str, str] | None" in function "read_sql_query"
Type "dict[str, dict[str, bool]]" is not assignable to type "list[str] | dict[str, str] | None"
"dict[str, dict[str, bool]]" is not assignable to "list[str]"
"dict[str, dict[str, bool]]" is not assignable to "dict[str, str]"
Type parameter "_VT@dict" is invariant, but "dict[str, bool]" is not the same as "str"
Consider switching from "dict" to "Mapping" which is covariant in the value type
"dict[str, dict[str, bool]]" is not assignable to "None" (reportArgumentType)
2 errors, 0 warnings, 0 informations
```
I am more than happy to submit a pull request for this if desired, but thought it best to put in this issue first in case I am missing something.
### Expected Behavior
```python
import pandas as pd
import sqlite3

date_params = {"date_col": {"utc": True}}

with sqlite3.connect("blah") as con:
    # Type checks correctly
    df = pd.read_sql_query("SELECT * FROM tablename", con, parse_dates=date_params)
    print(df)
```
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.12.6
python-bits : 64
OS : Linux
OS-release : 6.11.2-arch1-1
Version : #1 SMP PREEMPT_DYNAMIC Fri, 04 Oct 2024 21:51:11 +0000
machine : x86_64
processor :
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.3
numpy : 2.1.2
pytz : 2024.2
dateutil : 2.9.0.post0
pip : 24.2
Cython : None
sphinx : None
IPython : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : None
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : None
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2024.2
qtpy : None
pyqt5 : None
</details>
| Thanks for the report!
> This problem is not always visible because the corresponding `pandas-stubs` already does this. The inconsistency appears however in some type checkers when additional stubs are not available or configured though.
It seems to me this is not appropriate. PEP 561 makes this quite clear I think:
> Package maintainers who wish to support type checking of their code MUST add a marker file named py.typed to their package supporting typing.
Since pandas does not have a `py.typed` file, its type-hints should not be considered public. I only mention this to say that I think pandas should not be obligated to spend unnecessary effort in order to support third parties that use its internal type-hints.
Of course, in cases where the change would benefit pandas internal typing (as is the case here I believe), PRs are welcome! | 1,734,286,166,000 | null | Bug Report | [
"pandas/io/sql.py:read_sql_table",
"pandas/io/sql.py:read_sql_query"
] | [] | 2 | 528 |
|
pandas-dev/pandas | pandas-dev__pandas-60543 | 659eecf22a2e4c4a8f023c655a75a7135614a409 | diff --git a/pandas/core/dtypes/common.py b/pandas/core/dtypes/common.py
index 6fa21d9410187..b0c8ec1ffc083 100644
--- a/pandas/core/dtypes/common.py
+++ b/pandas/core/dtypes/common.py
@@ -430,7 +430,7 @@ def is_period_dtype(arr_or_dtype) -> bool:
Check whether an array-like or dtype is of the Period dtype.
.. deprecated:: 2.2.0
- Use isinstance(dtype, pd.Period) instead.
+ Use isinstance(dtype, pd.PeriodDtype) instead.
Parameters
----------
| DOC: Incorrect deprecation example for `is_period_dtype`
### Pandas version checks
- [X] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.api.types.is_period_dtype.html#pandas.api.types.is_period_dtype
### Documentation problem
Suggests the user use `isinstance(dtype, pd.Period)` instead, when they really need to use `isinstance(dtype, pd.PeriodDtype)`.
### Suggested fix for documentation
Update message to reference correct class
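The corrected replacement check, as a quick sketch:
```python
import pandas as pd

ser = pd.Series(pd.period_range("2024-01", periods=3, freq="M"))
print(isinstance(ser.dtype, pd.PeriodDtype))  # True -- the check the docstring should name
```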
| 1,733,944,385,000 | null | Bug Report | [
"pandas/core/dtypes/common.py:is_period_dtype"
] | [] | 1 | 529 |
||
pandas-dev/pandas | pandas-dev__pandas-60526 | 8a286fa16f3160e939b192cbe8e218992a84e6fc | diff --git a/pandas/core/computation/expressions.py b/pandas/core/computation/expressions.py
index e2acd9a2c97c2..a2c3a706ae29c 100644
--- a/pandas/core/computation/expressions.py
+++ b/pandas/core/computation/expressions.py
@@ -65,23 +65,23 @@ def set_numexpr_threads(n=None) -> None:
ne.set_num_threads(n)
-def _evaluate_standard(op, op_str, a, b):
+def _evaluate_standard(op, op_str, left_op, right_op):
"""
Standard evaluation.
"""
if _TEST_MODE:
_store_test_result(False)
- return op(a, b)
+ return op(left_op, right_op)
-def _can_use_numexpr(op, op_str, a, b, dtype_check) -> bool:
- """return a boolean if we WILL be using numexpr"""
+def _can_use_numexpr(op, op_str, left_op, right_op, dtype_check) -> bool:
+ """return left_op boolean if we WILL be using numexpr"""
if op_str is not None:
# required min elements (otherwise we are adding overhead)
- if a.size > _MIN_ELEMENTS:
+ if left_op.size > _MIN_ELEMENTS:
# check for dtype compatibility
dtypes: set[str] = set()
- for o in [a, b]:
+ for o in [left_op, right_op]:
# ndarray and Series Case
if hasattr(o, "dtype"):
dtypes |= {o.dtype.name}
@@ -93,22 +93,22 @@ def _can_use_numexpr(op, op_str, a, b, dtype_check) -> bool:
return False
-def _evaluate_numexpr(op, op_str, a, b):
+def _evaluate_numexpr(op, op_str, left_op, right_op):
result = None
- if _can_use_numexpr(op, op_str, a, b, "evaluate"):
+ if _can_use_numexpr(op, op_str, left_op, right_op, "evaluate"):
is_reversed = op.__name__.strip("_").startswith("r")
if is_reversed:
# we were originally called by a reversed op method
- a, b = b, a
+ left_op, right_op = right_op, left_op
- a_value = a
- b_value = b
+ left_value = left_op
+ right_value = right_op
try:
result = ne.evaluate(
- f"a_value {op_str} b_value",
- local_dict={"a_value": a_value, "b_value": b_value},
+ f"left_value {op_str} right_value",
+ local_dict={"left_value": left_value, "right_value": right_op},
casting="safe",
)
except TypeError:
@@ -116,20 +116,20 @@ def _evaluate_numexpr(op, op_str, a, b):
# (https://github.com/pydata/numexpr/issues/379)
pass
except NotImplementedError:
- if _bool_arith_fallback(op_str, a, b):
+ if _bool_arith_fallback(op_str, left_op, right_op):
pass
else:
raise
if is_reversed:
# reverse order to original for fallback
- a, b = b, a
+ left_op, right_op = right_op, left_op
if _TEST_MODE:
_store_test_result(result is not None)
if result is None:
- result = _evaluate_standard(op, op_str, a, b)
+ result = _evaluate_standard(op, op_str, left_op, right_op)
return result
@@ -170,24 +170,24 @@ def _evaluate_numexpr(op, op_str, a, b):
}
-def _where_standard(cond, a, b):
+def _where_standard(cond, left_op, right_op):
# Caller is responsible for extracting ndarray if necessary
- return np.where(cond, a, b)
+ return np.where(cond, left_op, right_op)
-def _where_numexpr(cond, a, b):
+def _where_numexpr(cond, left_op, right_op):
# Caller is responsible for extracting ndarray if necessary
result = None
- if _can_use_numexpr(None, "where", a, b, "where"):
+ if _can_use_numexpr(None, "where", left_op, right_op, "where"):
result = ne.evaluate(
"where(cond_value, a_value, b_value)",
- local_dict={"cond_value": cond, "a_value": a, "b_value": b},
+ local_dict={"cond_value": cond, "a_value": left_op, "b_value": right_op},
casting="safe",
)
if result is None:
- result = _where_standard(cond, a, b)
+ result = _where_standard(cond, left_op, right_op)
return result
@@ -206,13 +206,13 @@ def _has_bool_dtype(x):
_BOOL_OP_UNSUPPORTED = {"+": "|", "*": "&", "-": "^"}
-def _bool_arith_fallback(op_str, a, b) -> bool:
+def _bool_arith_fallback(op_str, left_op, right_op) -> bool:
"""
Check if we should fallback to the python `_evaluate_standard` in case
of an unsupported operation by numexpr, which is the case for some
boolean ops.
"""
- if _has_bool_dtype(a) and _has_bool_dtype(b):
+ if _has_bool_dtype(left_op) and _has_bool_dtype(right_op):
if op_str in _BOOL_OP_UNSUPPORTED:
warnings.warn(
f"evaluating in Python space because the {op_str!r} "
@@ -224,15 +224,15 @@ def _bool_arith_fallback(op_str, a, b) -> bool:
return False
-def evaluate(op, a, b, use_numexpr: bool = True):
+def evaluate(op, left_op, right_op, use_numexpr: bool = True):
"""
- Evaluate and return the expression of the op on a and b.
+ Evaluate and return the expression of the op on left_op and right_op.
Parameters
----------
op : the actual operand
- a : left operand
- b : right operand
+ left_op : left operand
+ right_op : right operand
use_numexpr : bool, default True
Whether to try to use numexpr.
"""
@@ -240,24 +240,24 @@ def evaluate(op, a, b, use_numexpr: bool = True):
if op_str is not None:
if use_numexpr:
# error: "None" not callable
- return _evaluate(op, op_str, a, b) # type: ignore[misc]
- return _evaluate_standard(op, op_str, a, b)
+ return _evaluate(op, op_str, left_op, right_op) # type: ignore[misc]
+ return _evaluate_standard(op, op_str, left_op, right_op)
-def where(cond, a, b, use_numexpr: bool = True):
+def where(cond, left_op, right_op, use_numexpr: bool = True):
"""
- Evaluate the where condition cond on a and b.
+ Evaluate the where condition cond on left_op and right_op.
Parameters
----------
cond : np.ndarray[bool]
- a : return if cond is True
- b : return if cond is False
+ left_op : return if cond is True
+ right_op : return if cond is False
use_numexpr : bool, default True
Whether to try to use numexpr.
"""
assert _where is not None
- return _where(cond, a, b) if use_numexpr else _where_standard(cond, a, b)
+ return _where(cond, left_op, right_op) if use_numexpr else _where_standard(cond, left_op, right_op)
def set_test_mode(v: bool = True) -> None:
| DOC: Update variables a and b to names consistent with comment documentation
### Pandas version checks
- [X] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
https://github.com/pandas-dev/pandas/blob/main/pandas/core/computation/expressions.py
### Documentation problem
Lines 234 and 235 explain what a and b are in detail (left and right operands), but there are many of those same variables earlier in the file, making it harder to understand what they represent.
### Suggested fix for documentation
Assuming a and b represent the left and right operands throughout each function, rename them to left_op and right_op throughout all functions to have more descriptive variable names.
| 1,733,658,054,000 | null | Feature Request | [
"pandas/core/computation/expressions.py:_evaluate_standard",
"pandas/core/computation/expressions.py:_can_use_numexpr",
"pandas/core/computation/expressions.py:_evaluate_numexpr",
"pandas/core/computation/expressions.py:_where_standard",
"pandas/core/computation/expressions.py:_where_numexpr",
"pandas/core/computation/expressions.py:_bool_arith_fallback",
"pandas/core/computation/expressions.py:evaluate",
"pandas/core/computation/expressions.py:where"
] | [] | 8 | 530 |
||
pandas-dev/pandas | pandas-dev__pandas-60518 | 8a286fa16f3160e939b192cbe8e218992a84e6fc | diff --git a/pandas/core/computation/pytables.py b/pandas/core/computation/pytables.py
index fe7e27f537b01..4a75acce46632 100644
--- a/pandas/core/computation/pytables.py
+++ b/pandas/core/computation/pytables.py
@@ -205,7 +205,7 @@ def generate(self, v) -> str:
val = v.tostring(self.encoding)
return f"({self.lhs} {self.op} {val})"
- def convert_value(self, v) -> TermValue:
+ def convert_value(self, conv_val) -> TermValue:
"""
convert the expression that is in the term to something that is
accepted by pytables
@@ -219,44 +219,44 @@ def stringify(value):
kind = ensure_decoded(self.kind)
meta = ensure_decoded(self.meta)
if kind == "datetime" or (kind and kind.startswith("datetime64")):
- if isinstance(v, (int, float)):
- v = stringify(v)
- v = ensure_decoded(v)
- v = Timestamp(v).as_unit("ns")
- if v.tz is not None:
- v = v.tz_convert("UTC")
- return TermValue(v, v._value, kind)
+ if isinstance(conv_val, (int, float)):
+ conv_val = stringify(conv_val)
+ conv_val = ensure_decoded(conv_val)
+ conv_val = Timestamp(conv_val).as_unit("ns")
+ if conv_val.tz is not None:
+ conv_val = conv_val.tz_convert("UTC")
+ return TermValue(conv_val, conv_val._value, kind)
elif kind in ("timedelta64", "timedelta"):
- if isinstance(v, str):
- v = Timedelta(v)
+ if isinstance(conv_val, str):
+ conv_val = Timedelta(conv_val)
else:
- v = Timedelta(v, unit="s")
- v = v.as_unit("ns")._value
- return TermValue(int(v), v, kind)
+ conv_val = Timedelta(conv_val, unit="s")
+ conv_val = conv_val.as_unit("ns")._value
+ return TermValue(int(conv_val), conv_val, kind)
elif meta == "category":
metadata = extract_array(self.metadata, extract_numpy=True)
result: npt.NDArray[np.intp] | np.intp | int
- if v not in metadata:
+ if conv_val not in metadata:
result = -1
else:
- result = metadata.searchsorted(v, side="left")
+ result = metadata.searchsorted(conv_val, side="left")
return TermValue(result, result, "integer")
elif kind == "integer":
try:
- v_dec = Decimal(v)
+ v_dec = Decimal(conv_val)
except InvalidOperation:
# GH 54186
# convert v to float to raise float's ValueError
- float(v)
+ float(conv_val)
else:
- v = int(v_dec.to_integral_exact(rounding="ROUND_HALF_EVEN"))
- return TermValue(v, v, kind)
+ conv_val = int(v_dec.to_integral_exact(rounding="ROUND_HALF_EVEN"))
+ return TermValue(conv_val, conv_val, kind)
elif kind == "float":
- v = float(v)
- return TermValue(v, v, kind)
+ conv_val = float(conv_val)
+ return TermValue(conv_val, conv_val, kind)
elif kind == "bool":
- if isinstance(v, str):
- v = v.strip().lower() not in [
+ if isinstance(conv_val, str):
+ conv_val = conv_val.strip().lower() not in [
"false",
"f",
"no",
@@ -268,13 +268,13 @@ def stringify(value):
"",
]
else:
- v = bool(v)
- return TermValue(v, v, kind)
- elif isinstance(v, str):
+ conv_val = bool(conv_val)
+ return TermValue(conv_val, conv_val, kind)
+ elif isinstance(conv_val, str):
# string quoting
- return TermValue(v, stringify(v), "string")
+ return TermValue(conv_val, stringify(conv_val), "string")
else:
- raise TypeError(f"Cannot compare {v} of type {type(v)} to {kind} column")
+ raise TypeError(f"Cannot compare {conv_val} of type {type(conv_val)} to {kind} column")
def convert_values(self) -> None:
pass
| DOC: Convert v to conv_val in function for pytables.py
### Pandas version checks
- [X] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
pandas\pandas\core\computation\pytables.py
### Documentation problem
There are many instances of the bare variable name `v` in this function; a more descriptive name would clarify what is being converted throughout.
### Suggested fix for documentation
Rename `v` to `conv_val`.
| 1,733,558,382,000 | null | Feature Request | [
"pandas/core/computation/pytables.py:BinOp.convert_value"
] | [] | 1 | 531 |
||
pandas-dev/pandas | pandas-dev__pandas-60512 | 659eecf22a2e4c4a8f023c655a75a7135614a409 | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index d1aa20501b060..de7fb3682fb4f 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -665,7 +665,7 @@ def size(self) -> int:
See Also
--------
- ndarray.size : Number of elements in the array.
+ numpy.ndarray.size : Number of elements in the array.
Examples
--------
| DOC: methods in see also section in the pandas.DataFrame.size are not hyperlinks
### Pandas version checks
- [X] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.size.html
### Documentation problem
In the See Also section, `ndarray.size` is listed, but it is not rendered as a hyperlink, so the reader cannot navigate to it with ease and has to look it up instead.
### Suggested fix for documentation
Add numpy.ndarray.size in the docstring.
| take | 1,733,537,109,000 | null | Bug Report | [
"pandas/core/generic.py:NDFrame.size"
] | [] | 1 | 532 |
|
pandas-dev/pandas | pandas-dev__pandas-60461 | a4fc97e92ed938260728e3f6c2b92df5ffb57b7f | diff --git a/pandas/core/dtypes/cast.py b/pandas/core/dtypes/cast.py
index 137a49c4487f6..02b9291da9b31 100644
--- a/pandas/core/dtypes/cast.py
+++ b/pandas/core/dtypes/cast.py
@@ -87,8 +87,8 @@
if TYPE_CHECKING:
from collections.abc import (
+ Collection,
Sequence,
- Sized,
)
from pandas._typing import (
@@ -1581,7 +1581,7 @@ def _maybe_box_and_unbox_datetimelike(value: Scalar, dtype: DtypeObj):
return _maybe_unbox_datetimelike(value, dtype)
-def construct_1d_object_array_from_listlike(values: Sized) -> np.ndarray:
+def construct_1d_object_array_from_listlike(values: Collection) -> np.ndarray:
"""
Transform any list-like object in a 1-dimensional numpy array of object
dtype.
@@ -1599,11 +1599,9 @@ def construct_1d_object_array_from_listlike(values: Sized) -> np.ndarray:
-------
1-dimensional numpy array of dtype object
"""
- # numpy will try to interpret nested lists as further dimensions, hence
- # making a 1D array that contains list-likes is a bit tricky:
- result = np.empty(len(values), dtype="object")
- result[:] = values
- return result
+ # numpy will try to interpret nested lists as further dimensions in np.array(),
+ # hence explicitly making a 1D array using np.fromiter
+ return np.fromiter(values, dtype="object", count=len(values))
def maybe_cast_to_integer_array(arr: list | np.ndarray, dtype: np.dtype) -> np.ndarray:
| PERF: Melt 2x slower when future.infer_string option enabled
### Pandas version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this issue exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [X] I have confirmed this issue exists on the main branch of pandas.
### Reproducible Example
```
import pandas as pd
import numpy as np
# This configuration option makes this code slow
pd.options.future.infer_string = True
# Define dimensions
n_rows = 10000
n_cols = 10000
# Generate random IDs for the rows
ids = [f"string_id_{i}" for i in range(1, n_rows + 1)]
# Generate a random sparse matrix with 10% non-NaN values
data = np.random.choice([np.nan, 1], size=(n_rows, n_cols), p=[0.9, 0.1])
# Create a DataFrame from the sparse matrix and add the 'Id' column
df = pd.DataFrame(data, columns=[f"column_name_{i}" for i in range(1, n_cols + 1)])
df.insert(0, 'Id', ids)
# Melt the DataFrame
df_melted = df.melt(id_vars=['Id'], var_name='Column', value_name='Value')
# Display the first few rows of the melted DataFrame
df_melted.head()
```
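For context, the patch above replaces the slow two-step fill (`np.empty` followed by `result[:] = values`) with a single `np.fromiter` call. A minimal sketch of that construction (assuming NumPy >= 1.23, where `np.fromiter` supports object dtype):

```python
import numpy as np

# np.array() would infer a 2-D shape from nested lists, so the 1-D object
# array is built from an iterator with a fixed length instead.
values = [[1, 2], [3, 4], [5, 6]]
out = np.fromiter(values, dtype=object, count=len(values))
assert out.shape == (3,) and out[0] == [1, 2]
```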
### Installed Versions
```
INSTALLED VERSIONS
------------------
commit : d9cdd2ee5a58015ef6f4d15c7226110c9aab8140
python : 3.12.5.final.0
python-bits : 64
OS : Darwin
OS-release : 23.6.0
Version : Darwin Kernel Version 23.6.0: Mon Jul 29 21:14:30 PDT 2024; root:xnu-10063.141.2~1/RELEASE_ARM64_T6000
machine : arm64
processor : arm
byteorder : little
LC_ALL : None
LANG : pl_PL.UTF-8
LOCALE : pl_PL.UTF-8
pandas : 2.2.2
numpy : 2.1.0
pytz : 2024.1
dateutil : 2.9.0.post0
setuptools : 73.0.1
pip : 24.1.2
Cython : None
pytest : 8.3.2
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 3.1.4
IPython : 8.26.0
pandas_datareader : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.12.3
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : 2024.6.1
gcsfs : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : 17.0.0
pyreadstat : None
python-calamine : None
pyxlsb : None
s3fs : 2024.6.1
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
zstandard : None
tzdata : 2024.1
qtpy : None
pyqt5 : None
```
### Prior Performance
This code with `pd.options.future.infer_string = False` runs in:
`5.23 s ± 1.35 s per loop (mean ± std. dev. of 7 runs, 1 loop each)`
Memory consumption is around 14 GB.
Enabling `pd.options.future.infer_string = True` makes it 2 times slower:
`10.6 s ± 40.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)`
Also memory consumption is bigger with peak around 25GB.
| @maver1ck Thanks for the report!
On main (and on my laptop), I see:
```
In [20]: pd.options.future.infer_string = False
In [21]: df = pd.DataFrame(data, columns=[f"column_name_{i}" for i in range(1, n_cols + 1)])
In [22]: df.insert(0, 'Id', ids)
In [23]: %timeit df_melted = df.melt(id_vars=['Id'], var_name='Column', value_name='Value')
6.25 s ± 944 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [24]: pd.options.future.infer_string = True
In [25]: df = pd.DataFrame(data, columns=[f"column_name_{i}" for i in range(1, n_cols + 1)])
In [26]: df.insert(0, 'Id', ids)
In [27]: %timeit df.melt(id_vars=['Id'], var_name='Column', value_name='Value')
3.55 s ± 169 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
```
So for me it is actually two times faster (didn't check memory usage though)
And testing with release pandas 2.2.2, I indeed see that it is slower with `pd.options.future.infer_string = True`. So it seems we have fixed something in the meantime.
The same problem exists in Pandas 2.2.3.
So my understanding is that this will be fixed in 3.0?
@jorisvandenbossche is that correct? | 1,733,057,561,000 | null | Performance Issue | [
"pandas/core/dtypes/cast.py:construct_1d_object_array_from_listlike"
] | [] | 1 | 533 |
|
pandas-dev/pandas | pandas-dev__pandas-60457 | 844b3191bd45b95cbaae341048bf7f367f086f2f | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index a6be17a654aa7..3a48cc8a66076 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -3878,6 +3878,14 @@ def to_csv(
>>> import os # doctest: +SKIP
>>> os.makedirs("folder/subfolder", exist_ok=True) # doctest: +SKIP
>>> df.to_csv("folder/subfolder/out.csv") # doctest: +SKIP
+
+ Format floats to two decimal places:
+
+ >>> df.to_csv("out1.csv", float_format="%.2f") # doctest: +SKIP
+
+ Format floats using scientific notation:
+
+ >>> df.to_csv("out2.csv", float_format="{{:.2e}}".format) # doctest: +SKIP
"""
df = self if isinstance(self, ABCDataFrame) else self.to_frame()
| DOC: Add examples for float_format in to_csv documentation
### Pandas version checks
- [X] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_csv.html
### Documentation problem
The float_format parameter in to_csv is explained but lacks examples. Users might struggle to understand how to apply this parameter effectively without concrete examples in the documentation.
### Suggested fix for documentation
I suggest adding examples for float_format to make the documentation more beginner-friendly. Examples could include:
```
# Format floats to two decimal places
df.to_csv("example1.csv", float_format="%.2f")
# Use scientific notation
df.to_csv("example2.csv", float_format="{:.2e}".format)
```
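A runnable check of the two suggested formats (assumed toy data; `to_csv` with no path returns the CSV text):

```python
import pandas as pd

df = pd.DataFrame({"x": [1.23456, 9876.5]})
print(df.to_csv(float_format="%.2f"))           # -> 1.23 and 9876.50
print(df.to_csv(float_format="{:.2e}".format))  # -> 1.23e+00 and 9.88e+03
```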
| take | 1,733,028,703,000 | null | Feature Request | [
"pandas/core/generic.py:NDFrame.to_csv"
] | [] | 1 | 534 |
|
pandas-dev/pandas | pandas-dev__pandas-60415 | 98f7e4deeff26a5ef993ee27104387a1a6e0d3d3 | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 039bdf9c36ee7..a6be17a654aa7 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -838,7 +838,7 @@ def pop(self, item: Hashable) -> Series | Any:
return result
@final
- def squeeze(self, axis: Axis | None = None):
+ def squeeze(self, axis: Axis | None = None) -> Scalar | Series | DataFrame:
"""
Squeeze 1 dimensional axis objects into scalars.
| DOC: Missing type hint for squeeze method
### Pandas version checks
- [X] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
https://github.com/pandas-dev/pandas/blob/main/pandas/core/generic.py
### Documentation problem
The squeeze method is missing a type hint.
### Suggested fix for documentation
Adding a type hint to the squeeze method to be consistent with the rest of the code.
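A small illustration (assumed data) of why the annotation needs the union `Scalar | Series | DataFrame`:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})
print(type(df.squeeze()))                 # DataFrame: nothing to squeeze
print(type(df[["a"]].squeeze()))          # Series: the single column is squeezed
print(type(df.iloc[[0], [0]].squeeze()))  # scalar for a 1x1 frame
```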
| Can confirm, specifically this line: https://github.com/pandas-dev/pandas/blob/1c986d6213904fd7d9acc5622dc91d029d3f1218/pandas/core/generic.py#L841 | 1,732,555,390,000 | null | Feature Request | [
"pandas/core/generic.py:NDFrame.squeeze"
] | [] | 1 | 535 |
|
pandas-dev/pandas | pandas-dev__pandas-60398 | e62fcb15a70dfb6f4c408cf801f83b216578335b | diff --git a/pandas/core/series.py b/pandas/core/series.py
index 35b576da87ed7..4fa8b86fa4c16 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -567,7 +567,7 @@ def __arrow_c_stream__(self, requested_schema=None):
Export the pandas Series as an Arrow C stream PyCapsule.
This relies on pyarrow to convert the pandas Series to the Arrow
- format (and follows the default behaviour of ``pyarrow.Array.from_pandas``
+ format (and follows the default behavior of ``pyarrow.Array.from_pandas``
in its handling of the index, i.e. to ignore it).
This conversion is not necessarily zero-copy.
@@ -2226,7 +2226,7 @@ def drop_duplicates(
5 hippo
Name: animal, dtype: object
- With the 'keep' parameter, the selection behaviour of duplicated values
+ With the 'keep' parameter, the selection behavior of duplicated values
can be changed. The value 'first' keeps the first occurrence for each
set of duplicated entries. The default value of keep is 'first'.
@@ -3451,7 +3451,7 @@ def sort_values(
4 5.0
dtype: float64
- Sort values ascending order (default behaviour)
+ Sort values ascending order (default behavior)
>>> s.sort_values(ascending=True)
1 1.0
@@ -4098,7 +4098,7 @@ def swaplevel(
In the following example, we will swap the levels of the indices.
Here, we will swap the levels column-wise, but levels can be swapped row-wise
- in a similar manner. Note that column-wise is the default behaviour.
+ in a similar manner. Note that column-wise is the default behavior.
By not supplying any arguments for i and j, we swap the last and second to
last indices.
| DOC: Fix docstring typo
### Pandas version checks
- [X] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
https://github.com/pandas-dev/pandas/blob/main/pandas/core/series.py
### Documentation problem
The docstring for the __arrow_c_stream__ method in the Series class uses the word "behaviour".
### Suggested fix for documentation
Suggested to rewrite as "behavior", which is the American English spelling, to maintain consistency with the rest of the Pandas codebase.
| take | 1,732,301,626,000 | null | Bug Report | [
"pandas/core/series.py:Series.__arrow_c_stream__",
"pandas/core/series.py:Series.drop_duplicates",
"pandas/core/series.py:Series.sort_values",
"pandas/core/series.py:Series.swaplevel"
] | [] | 4 | 536 |
|
pandas-dev/pandas | pandas-dev__pandas-60310 | 61f800d7b69efa632c5f93b4be4b1e4154c698d7 | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index b35e2c8497fb7..34eb198b4b4da 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -2115,8 +2115,8 @@ def from_records(
"""
Convert structured or record ndarray to DataFrame.
- Creates a DataFrame object from a structured ndarray, sequence of
- tuples or dicts, or DataFrame.
+ Creates a DataFrame object from a structured ndarray, or sequence of
+ tuples or dicts.
Parameters
----------
| DOC: Dataframe.from_records should not say that passing in a DataFrame for data is allowed
### Pandas version checks
- [X] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
https://pandas.pydata.org/docs/dev/reference/api/pandas.DataFrame.from_records.html#pandas.DataFrame.from_records
### Documentation problem
The first text in the docstring says (emphasis at the end is mine)
> Convert structured or record ndarray to DataFrame.
>
> Creates a DataFrame object from a structured ndarray, sequence of
> tuples or dicts, or **DataFrame**.
However, starting in 2.1.0, passing in a DataFrame has been deprecated. In 2.1.0 it would raise a FutureWarning; in main it will raise a TypeError.
The documentation between 2.1.0 and main appears to have been updated to remove text in the Parameters section of the docstring that still said a DataFrame could be passed in for data, but the text in the initial section of the docstring was not.
### Suggested fix for documentation
Change the initial docstring text to be:
> Convert structured or record ndarray to DataFrame.
>
> Creates a DataFrame object from a structured ndarray or sequence of
> tuples or dicts.
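For reference, a sketch of what remains supported (a structured ndarray) versus what now raises, per the deprecation described above:

```python
import numpy as np
import pandas as pd

arr = np.array([(1, "a"), (2, "b")], dtype=[("x", "i4"), ("y", "O")])
print(pd.DataFrame.from_records(arr))  # structured ndarray: still supported
# pd.DataFrame.from_records(pd.DataFrame({"x": [1]}))  # TypeError on main
```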
| Thanks for the report, PRs to fix are welcome!
take | 1,731,578,353,000 | null | Bug Report | [
"pandas/core/frame.py:DataFrame.from_records"
] | [] | 1 | 537 |
|
pandas-dev/pandas | pandas-dev__pandas-60277 | 4fcee0e431135bf6fa97440d4d7e17a96630fe6e | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 35014674565ff..3a83a3997f881 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -2211,8 +2211,9 @@ def to_excel(
via the options ``io.excel.xlsx.writer`` or
``io.excel.xlsm.writer``.
- merge_cells : bool, default True
- Write MultiIndex and Hierarchical Rows as merged cells.
+ merge_cells : bool or 'columns', default False
+ If True, write MultiIndex index and columns as merged cells.
+ If 'columns', merge MultiIndex column cells only.
{encoding_parameter}
inf_rep : str, default 'inf'
Representation for infinity (there is no native representation for
| DOC: Document merge_cells="columns" in to_excel
https://pandas.pydata.org/docs/dev/reference/api/pandas.DataFrame.to_excel.html
The `merge_cells` argument can also take `"columns"` due to #35384. This should be added to the docstring.
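A minimal usage sketch (assumed output file name; an Excel writer such as openpyxl must be installed):

```python
import pandas as pd

cols = pd.MultiIndex.from_product([["A"], ["x", "y"]])
df = pd.DataFrame([[1, 2]], columns=cols)
# 'columns' merges only the MultiIndex column header cells, not index cells.
df.to_excel("out.xlsx", merge_cells="columns")
```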
| take | 1,731,306,243,000 | null | Feature Request | [
"pandas/core/generic.py:NDFrame.to_excel"
] | [] | 1 | 538 |
|
pandas-dev/pandas | pandas-dev__pandas-60247 | 5f23aced2f97f2ed481deda4eaeeb049d6c7debe | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 7c2cc5d33a5db..56031f20faa16 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -7668,8 +7668,12 @@ def interpolate(
* 'linear': Ignore the index and treat the values as equally
spaced. This is the only method supported on MultiIndexes.
* 'time': Works on daily and higher resolution data to interpolate
- given length of interval.
- * 'index', 'values': use the actual numerical values of the index.
+ given length of interval. This interpolates values based on
+ time interval between observations.
+ * 'index': The interpolation uses the numerical values
+ of the DataFrame's index to linearly calculate missing values.
+ * 'values': Interpolation based on the numerical values
+ in the DataFrame, treating them as equally spaced along the index.
* 'nearest', 'zero', 'slinear', 'quadratic', 'cubic',
'barycentric', 'polynomial': Passed to
`scipy.interpolate.interp1d`, whereas 'spline' is passed to
| DOC: Improve documentation df.interpolate() for methods ‘time’, ‘index’ and ‘values’
### Pandas version checks
- [X] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.interpolate.html
### Documentation problem
It is not possible to understand what exactly the method `interpolate` does from reading the documentation. See e.g. this SE post for more details
https://stackoverflow.com/questions/65511992/pandas-interpolation-type-when-method-index
### Suggested fix for documentation
Rewrite the docstring and documentation page for the method.
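A small contrast (assumed series with a non-uniform index) between two of the methods whose descriptions need clarifying:

```python
import numpy as np
import pandas as pd

s = pd.Series([0.0, np.nan, 10.0], index=[0, 1, 10])
print(s.interpolate(method="linear").iloc[1])  # 5.0: index ignored, equal spacing
print(s.interpolate(method="index").iloc[1])   # 1.0: uses the index values 0, 1, 10
```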
| Thanks for the report, agreed this could use clarification. PRs to improve are welcome!
take | 1,731,082,540,000 | null | Feature Request | [
"pandas/core/generic.py:NDFrame.interpolate"
] | [] | 1 | 539 |
|
pandas-dev/pandas | pandas-dev__pandas-60187 | dbeeb1f05bca199b3c1aed979e6ae72074a82243 | diff --git a/pandas/core/series.py b/pandas/core/series.py
index fe2bb0b5aa5c3..d83d9715878f8 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -2482,6 +2482,7 @@ def round(self, decimals: int = 0, *args, **kwargs) -> Series:
--------
numpy.around : Round values of an np.array.
DataFrame.round : Round values of a DataFrame.
+ Series.dt.round : Round values of data to the specified freq.
Notes
-----
| DOC: Distinguish between Series.round and Series.dt.round
### Pandas version checks
- [X] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.round.html#pandas.Series.round
### Documentation problem
When using Series.round, it does not work on date data.
### Suggested fix for documentation
Adding Series.dt.round in the "See also" section would make it more convenient for users to find the relevant documentation.
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.dt.round.html
| I think it's worth changing, can I take it?
take | 1,730,742,670,000 | null | Feature Request | [
"pandas/core/series.py:Series.round"
] | [] | 1 | 540 |
|
huggingface/accelerate | huggingface__accelerate-3279 | cb8b7c637a8588668c52bd306f9b2828f69d9585 | diff --git a/src/accelerate/utils/modeling.py b/src/accelerate/utils/modeling.py
index 5f88e54e3c9..806f930acaa 100644
--- a/src/accelerate/utils/modeling.py
+++ b/src/accelerate/utils/modeling.py
@@ -1101,6 +1101,7 @@ def _init_infer_auto_device_map(
special_dtypes: Optional[Dict[str, Union[str, torch.device]]] = None,
) -> Tuple[
List[Union[int, str]],
+ Dict[Union[int, str], Union[int, str]],
List[Union[int, str]],
List[int],
Dict[str, int],
@@ -1147,6 +1148,7 @@ def _init_infer_auto_device_map(
return (
devices,
+ max_memory,
main_devices,
gpus,
module_sizes,
@@ -1356,6 +1358,7 @@ def infer_auto_device_map(
# Initialize the variables
(
devices,
+ max_memory,
main_devices,
gpus,
module_sizes,
| Calling infer_auto_device_map() with max_memory=None throws an error in version 1.2.0
### System Info
```Shell
accelerate==1.2.0
```
### Reproduction
Bug is from this commit:
https://github.com/huggingface/accelerate/commit/d7b1b368e9f484a18636a71600566b757d5cf87e
`max_memory` initialization was moved into `_init_infer_auto_device_map`, which does not return the `max_memory` value.
So if max_memory=None is passed to `infer_auto_device_map` (the default value), then it will still be None at line 1415:
https://github.com/huggingface/accelerate/blob/cb8b7c637a8588668c52bd306f9b2828f69d9585/src/accelerate/utils/modeling.py#L1415
Leading to error: TypeError: 'NoneType' object is not subscriptable
### Expected behavior
max_memory=None when passed to `infer_auto_device_map` does not throw an error.
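A minimal reproduction sketch (assumed toy model; the failure occurs only on the affected 1.2.0 release):

```python
from torch import nn
from accelerate import infer_auto_device_map

model = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 2))
# On accelerate 1.2.0 this raised TypeError: 'NoneType' object is not
# subscriptable, because the initialized max_memory was never returned.
device_map = infer_auto_device_map(model)  # max_memory=None by default
```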
| @Nech-C
Sorry for the oversight. I will fix it ASAP. Thanks for pointing it out! | 1,733,630,086,000 | null | Bug Report | [
"src/accelerate/utils/modeling.py:_init_infer_auto_device_map",
"src/accelerate/utils/modeling.py:infer_auto_device_map"
] | [] | 2 | 541 |
|
huggingface/accelerate | huggingface__accelerate-3261 | 29be4788629b772a3b722076e433b5b3b5c85da3 | diff --git a/examples/by_feature/megatron_lm_gpt_pretraining.py b/examples/by_feature/megatron_lm_gpt_pretraining.py
index 18488ec41e2..c9d4787ed83 100644
--- a/examples/by_feature/megatron_lm_gpt_pretraining.py
+++ b/examples/by_feature/megatron_lm_gpt_pretraining.py
@@ -252,7 +252,7 @@ def main():
if args.with_tracking:
accelerator_log_kwargs["log_with"] = args.report_to
- accelerator_log_kwargs["logging_dir"] = args.output_dir
+ accelerator_log_kwargs["project_dir"] = args.output_dir
accelerator = Accelerator(gradient_accumulation_steps=args.gradient_accumulation_steps, **accelerator_log_kwargs)
| [BUG] Accelerator.__init__() got an unexpected keyword argument 'logging_dir'
### System Info
```Shell
accelerate version: main
python version: 3.11
torch version: 2.4
numpy version: 1.26.4
```
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)
- [ ] My own task or dataset (give details below)
### Reproduction
When I run the accelerate/examples/megatron_lm_gpt_pretraining.py file:
accelerate launch --config_file megatron_gpt_pretraining.py \
--config_name "gpt2-large" \
--tokenizer_name "gpt2-large" \
--dataset_name wikitext \
--dataset_config_name wikitext-2-raw-v1 \
--block_size 1024 \
--learning_rate 5e-5 \
--per_device_train_batch_size 24 \
--per_device_eval_batch_size 24 \
--num_train_epochs 5 \
--with_tracking \
--report_to "wandb" \
--output_dir "awesome_model"
### Expected behavior
Normal training. However, I found that on line 255 of megatron_lm_gpt_pretraining.py, the undefined parameter 'logging_dir' is passed to the __init__ method of the Accelerator class.
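For reference, a minimal sketch of the corrected keyword (names taken from the patch and the command above):

```python
from accelerate import Accelerator

# Accelerator has no `logging_dir` argument; the tracking output directory
# is passed as `project_dir`.
accelerator_log_kwargs = {"log_with": "wandb", "project_dir": "awesome_model"}
accelerator = Accelerator(gradient_accumulation_steps=8, **accelerator_log_kwargs)
```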
| Thanks for pointing this out. I think it should be `project_dir` instead. Are you interested in submitting a PR to fix this?
For clarity, the file is at `https://github.com/huggingface/accelerate/blob/main/examples/by_feature/megatron_lm_gpt_pretraining.py` :)
of course | 1,732,582,927,000 | null | Bug Report | [
"examples/by_feature/megatron_lm_gpt_pretraining.py:main"
] | [] | 1 | 542 |
|
huggingface/trl | huggingface__trl-2433 | 9ff79a65e3d1c28b7ee8bc0912b2fbdceb3dbeec | diff --git a/trl/trainer/rloo_trainer.py b/trl/trainer/rloo_trainer.py
index 106426073f..f2e3eb9674 100644
--- a/trl/trainer/rloo_trainer.py
+++ b/trl/trainer/rloo_trainer.py
@@ -279,7 +279,7 @@ def repeat_generator():
# trainer state initialization
self.state.global_step = 0
self.state.episode = 0
- self.state.max_steps = args.num_total_batches * args.num_mini_batches
+ self.state.max_steps = (args.num_total_batches * args.num_mini_batches) // 2
self.state.num_train_epochs = args.total_episodes / self.train_dataset_len
# Compute absolute values for logging, eval, and save if given as ratio
if args.logging_steps is not None:
| RLOO Trainer Stopping After 1 Epoch
### System Info
- Platform: Linux-3.10.0-693.11.6.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.9.5
- PyTorch version: 2.4.0
- CUDA device(s): not available
- Transformers version: 4.46.2
- Accelerate version: 1.1.1
- Accelerate config: not found
- Datasets version: 3.1.0
- HF Hub version: 0.26.2
- TRL version: 0.13.0.dev0
- bitsandbytes version: not installed
- DeepSpeed version: 0.15.4
- Diffusers version: not installed
- Liger-Kernel version: not installed
- LLM-Blender version: not installed
- OpenAI version: 1.54.4
- PEFT version: not installed
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
While reproducing RLOO in a multi-GPU setup with the official [script](https://huggingface.co/docs/trl/en/rloo_trainer#benchmark-experiments), training consistently halts midway, regardless of whether it's set for 1,000 or 1 million episodes. An example wandb [run](https://wandb.ai/omerveyselcagatan/huggingface/runs/zdftqdx5?nw=nwuseromerveyselcagatan) ended at 1954 steps, whereas it should be 3908.
### Expected behavior
Training should have run for 3908 steps; there appears to be a step miscalculation.
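For a worked check against the run above: ending at 1954 out of an expected 3908 is exactly a factor of two (3908 / 2 = 1954), which is consistent with the patch halving `state.max_steps` to `(args.num_total_batches * args.num_mini_batches) // 2`.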
### Checklist
- [X] I have checked that my issue isn't already filed (see [open issues](https://github.com/huggingface/trl/issues?q=is%3Aissue))
- [X] I have included my system information
- [X] Any code provided is minimal, complete, and reproducible ([more on MREs](https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/creating-and-highlighting-code-blocks))
- [X] Any code provided is properly formatted in code blocks, (no screenshot, [more on code blocks](https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/creating-and-highlighting-code-blocks))
- [X] Any traceback provided is complete
| 1,733,253,459,000 | null | Bug Report | [
"trl/trainer/rloo_trainer.py:RLOOTrainer.train"
] | [] | 1 | 543 |
||
huggingface/trl | huggingface__trl-2417 | 9c5388b69e0842f76edc46a2ff9d0b51e1db4337 | diff --git a/trl/trainer/online_dpo_trainer.py b/trl/trainer/online_dpo_trainer.py
index 7830d3fe64..56edd22be5 100644
--- a/trl/trainer/online_dpo_trainer.py
+++ b/trl/trainer/online_dpo_trainer.py
@@ -284,7 +284,10 @@ def __init__(
self.reward_model = prepare_deepspeed(
self.reward_model, args.per_device_train_batch_size, args.fp16, args.bf16
)
- self.ref_model = prepare_deepspeed(self.ref_model, args.per_device_train_batch_size, args.fp16, args.bf16)
+ if self.ref_model is not None:
+ self.ref_model = prepare_deepspeed(
+ self.ref_model, args.per_device_train_batch_size, args.fp16, args.bf16
+ )
else:
if self.ref_model is not None:
self.ref_model = self.ref_model.to(self.accelerator.device)
| Online DPO Meets Error When Using Deepspeed for Speed Up.
### System Info
!pip install git+https://github.com/huggingface/trl.git
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
!ACCELERATE_LOG_LEVEL=info accelerate launch --config_file multi_gpu.yaml \
online_dpo.py \
--model_name_or_path mistralai/Mistral-7B-v0.1 \
--reward_model_path Ray2333/GRM-Llama3.2-3B-rewardmodel-ft \
--dataset_name nvidia/HelpSteer2 \
--learning_rate 5.0e-6 \
--output_dir pythia-1b-tldr-online-dpo \
--per_device_train_batch_size 16 \
--gradient_accumulation_steps 8 \
--warmup_ratio 0.1 \
--missing_eos_penalty 1.0 \
--use_peft
Traceback (most recent call last):
File "/home/ec2-user/SageMaker/Zhichao/UNA_online/UNA_peft/una_peft.py", line 356, in <module>
[2024-11-28 16:59:10,071] [INFO] [config.py:999:print] DeepSpeedEngine configuration:
trainer = OnlineDPOTrainer(
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/transformers/utils/deprecation.py", line 165, in wrapped_func
return func(*args, **kwargs)
Traceback (most recent call last):
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/trl/trainer/online_dpo_trainer.py", line 286, in __init__
File "/home/ec2-user/SageMaker/Zhichao/UNA_online/UNA_peft/una_peft.py", line 356, in <module>
self.ref_model = prepare_deepspeed(self.ref_model, args.per_device_train_batch_size, args.fp16, args.bf16)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/trl/trainer/utils.py", line 1212, in prepare_deepspeed
trainer = OnlineDPOTrainer(
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/transformers/utils/deprecation.py", line 165, in wrapped_func
return func(*args, **kwargs)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/trl/trainer/online_dpo_trainer.py", line 286, in __init__
model, *_ = deepspeed.initialize(model=model, config=config_kwargs)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/deepspeed/__init__.py", line 139, in initialize
assert model is not None, "deepspeed.initialize requires a model"
AssertionErrorself.ref_model = prepare_deepspeed(self.ref_model, args.per_device_train_batch_size, args.fp16, args.bf16):
deepspeed.initialize requires a model File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/trl/trainer/utils.py", line 1212, in prepare_deepspeed
### Expected behavior
It should be able to run.
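A tiny runnable sketch of the guard the patch adds (hypothetical helper; in the real code `prepare_deepspeed` ends in `deepspeed.initialize`, which asserts that the model is not `None`):

```python
def maybe_prepare(ref_model, prepare):
    # With `--use_peft` there is no separate reference model (it is None),
    # so DeepSpeed preparation must be skipped in that case.
    return None if ref_model is None else prepare(ref_model)

assert maybe_prepare(None, lambda m: m) is None      # PEFT case: no crash
assert maybe_prepare("model", lambda m: m) == "model"
```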
### Checklist
- [X] I have checked that my issue isn't already filed (see [open issues](https://github.com/huggingface/trl/issues?q=is%3Aissue))
- [X] I have included my system information
- [X] Any code provided is minimal, complete, and reproducible ([more on MREs](https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/creating-and-highlighting-code-blocks))
- [X] Any code provided is properly formatted in code blocks, (no screenshot, [more on code blocks](https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/creating-and-highlighting-code-blocks))
- [X] Any traceback provided is complete
| Sorry, I used "deepspeed_zero2.yaml", so the command should be:
!ACCELERATE_LOG_LEVEL=info accelerate launch --config_file deepspeed_zero2.yaml
online_dpo.py
--model_name_or_path mistralai/Mistral-7B-v0.1
--reward_model_path Ray2333/GRM-Llama3.2-3B-rewardmodel-ft
--dataset_name nvidia/HelpSteer2
--learning_rate 5.0e-6
--output_dir pythia-1b-tldr-online-dpo
--per_device_train_batch_size 16
--gradient_accumulation_steps 8
--warmup_ratio 0.1
--missing_eos_penalty 1.0
--use_peft
Thanks for reporting. Please share your system info (`trl env`)
/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/transformers/utils/hub.py:128: FutureWarning: Using `TRANSFORMERS_CACHE` is deprecated and will be removed in v5 of Transformers. Use `HF_HOME` instead.
warnings.warn(
Copy-paste the following information when reporting an issue:
- Platform: Linux-5.10.228-219.884.amzn2.x86_64-x86_64-with-glibc2.26
- Python version: 3.10.14
- PyTorch version: 2.2.2
- CUDA device(s): NVIDIA A100-SXM4-80GB, NVIDIA A100-SXM4-80GB, NVIDIA A100-SXM4-80GB, NVIDIA A100-SXM4-80GB, NVIDIA A100-SXM4-80GB, NVIDIA A100-SXM4-80GB, NVIDIA A100-SXM4-80GB, NVIDIA A100-SXM4-80GB
- Transformers version: 4.46.3
- Accelerate version: 0.34.2
- Accelerate config: not found
- Datasets version: 3.1.0
- HF Hub version: 0.26.2
- TRL version: 0.13.0.dev0
- bitsandbytes version: 0.44.1
- DeepSpeed version: 0.16.0
- Diffusers version: not installed
- Liger-Kernel version: not installed
- LLM-Blender version: not installed
- OpenAI version: not installed
- PEFT version: 0.13.2 | 1,732,904,159,000 | null | Bug Report | [
"trl/trainer/online_dpo_trainer.py:OnlineDPOTrainer.__init__"
] | [] | 1 | 544 |
|
huggingface/trl | huggingface__trl-2332 | 74e20cbbbcbac7ac8d426df09eda5f310c637def | diff --git a/trl/trainer/dpo_trainer.py b/trl/trainer/dpo_trainer.py
index b563cab2f5..0c9883387a 100644
--- a/trl/trainer/dpo_trainer.py
+++ b/trl/trainer/dpo_trainer.py
@@ -1086,10 +1086,10 @@ def concatenated_forward(self, model: nn.Module, batch: Dict[str, Union[List, to
# Get the first column idx that is all zeros and remove every column after that
empty_cols = torch.sum(attention_mask, dim=0) == 0
- first_empty_col = torch.nonzero(empty_cols)[0].item() if empty_cols.any() else attention_mask.size(1) + 1
- input_ids = input_ids[:, : first_empty_col - 1]
- attention_mask = attention_mask[:, : first_empty_col - 1]
- loss_mask = loss_mask[:, : first_empty_col - 1]
+ first_empty_col = torch.nonzero(empty_cols)[0].item() if empty_cols.any() else attention_mask.size(1)
+ input_ids = input_ids[:, : first_empty_col]
+ attention_mask = attention_mask[:, : first_empty_col]
+ loss_mask = loss_mask[:, : first_empty_col]
# Truncate right
if self.args.max_length is not None:
| Wrong tensor index for roll and truncate in DPOTrainer fn concatenated_forward( ).
### System Info
It is a tensor index error.
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
```python
# Get the first column idx that is all zeros and remove every column after that
empty_cols = torch.sum(attention_mask, dim=0) == 0
first_empty_col = torch.nonzero(empty_cols)[0].item() if empty_cols.any() else attention_mask.size(1) + 1
input_ids = input_ids[:, : first_empty_col - 1]
attention_mask = attention_mask[:, : first_empty_col - 1]
loss_mask = loss_mask[:, : first_empty_col - 1]
```
### Expected behavior
_torch.nonzero_ returns the indices (starting from 0) of non-zero elements, so there is no need to subtract 1 from _first_empty_col_.
The correct code should be:
```python
empty_cols = torch.sum(attention_mask, dim=0) == 0
first_empty_col = torch.nonzero(empty_cols)[0].item() if empty_cols.any() else attention_mask.size(1)
input_ids = input_ids[:, : first_empty_col]
attention_mask = attention_mask[:, : first_empty_col]
loss_mask = loss_mask[:, : first_empty_col]
```
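A runnable check of the off-by-one on a tiny mask:

```python
import torch

# torch.nonzero already returns the 0-based position of the first all-zero
# column, so slicing with [:, :first_empty_col] keeps exactly the non-empty
# columns; the extra -1 dropped a valid column.
attention_mask = torch.tensor([[1, 1, 0, 0],
                               [1, 0, 0, 0]])
empty_cols = attention_mask.sum(dim=0) == 0
first_empty_col = torch.nonzero(empty_cols)[0].item()  # 2
assert attention_mask[:, :first_empty_col].shape[1] == 2
```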
| Good catch! Thanks! Do you mind opening a PR to fix that? | 1,730,897,529,000 | null | Bug Report | [
"trl/trainer/dpo_trainer.py:DPOTrainer.concatenated_forward"
] | [] | 1 | 545 |
|
huggingface/trl | huggingface__trl-2325 | 74e20cbbbcbac7ac8d426df09eda5f310c637def | diff --git a/trl/trainer/rloo_trainer.py b/trl/trainer/rloo_trainer.py
index 7bbd39264d..e33899f5d9 100644
--- a/trl/trainer/rloo_trainer.py
+++ b/trl/trainer/rloo_trainer.py
@@ -263,7 +263,6 @@ def repeat_generator():
approxkl_stats = torch.zeros(stats_shape, device=device)
pg_clipfrac_stats = torch.zeros(stats_shape, device=device)
pg_loss_stats = torch.zeros(stats_shape, device=device)
- vf_loss_stats = torch.zeros(stats_shape, device=device)
vf_clipfrac_stats = torch.zeros(stats_shape, device=device)
entropy_stats = torch.zeros(stats_shape, device=device)
ratio_stats = torch.zeros(stats_shape, device=device)
@@ -441,7 +440,6 @@ def repeat_generator():
ratio_stats[ppo_epoch_idx, minibatch_idx, gradient_accumulation_idx] = new_ratio.mean()
gradient_accumulation_idx += 1
minibatch_idx += 1
- self.state.global_step += 1
# del everything and empty cache
# fmt: off
del (
@@ -467,7 +465,6 @@ def repeat_generator():
metrics["policy/approxkl_avg"] = self.accelerator.gather(approxkl_stats).mean().item()
metrics["policy/clipfrac_avg"] = self.accelerator.gather(pg_clipfrac_stats).mean().item()
metrics["loss/policy_avg"] = self.accelerator.gather(pg_loss_stats).mean().item()
- metrics["loss/value_avg"] = self.accelerator.gather(vf_loss_stats).mean().item()
metrics["val/clipfrac_avg"] = self.accelerator.gather(vf_clipfrac_stats).mean().item()
metrics["policy/entropy_avg"] = self.accelerator.gather(entropy_stats).mean().item()
metrics["val/ratio"] = self.accelerator.gather(ratio_stats).mean().item()
@@ -475,12 +472,12 @@ def repeat_generator():
metrics["val/num_eos_tokens"] = (responses == processing_class.eos_token_id).sum().item()
metrics["lr"] = self.lr_scheduler.get_last_lr()[0]
metrics["episode"] = self.state.episode
- self.state.epoch = self.state.episode / self.train_dataset_len # used by self.log
- self.state.global_step += 1
+ self.state.epoch = self.state.episode / (args.rloo_k * self.train_dataset_len) # used by self.log
self.log(metrics)
del kl, mean_kl, mean_entropy, scores
self.lr_scheduler.step()
+ self.state.global_step += 1
self.control = self.callback_handler.on_step_end(args, self.state, self.control)
if self.control.should_save:
self._save_checkpoint(model, trial=None)
| Several problems in RLOOTrainer
### System Info
main
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
1. metrics["loss/value_avg"] = self.accelerator.gather(vf_loss_stats).mean().item()
This metric is useless since we don't use a value function in RLOO.
2. self.state.epoch = self.state.episode / self.train_dataset_len # used by self.log
This miscounts epochs, since the true epoch count is `self.state.episode / (args.rloo_k * self.train_dataset_len)`: every instruction is repeated `args.rloo_k` times.
3. `self.state.global_step += 1` is executed in multiple places,
which will cause the saving process to go wrong.
### Expected behavior
These issues to be fixed.
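A worked check of point 2, with assumed numbers:

```python
# With rloo_k = 2 and train_dataset_len = 1000, one pass over the data
# consumes rloo_k * len(dataset) = 2000 episodes, so after 2000 episodes
# the epoch counter should read 1.0, not 2.0.
episode, rloo_k, n = 2000, 2, 1000
assert episode / (rloo_k * n) == 1.0
```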
| 1,730,747,016,000 | null | Bug Report | [
"trl/trainer/rloo_trainer.py:RLOOTrainer.train"
] | [] | 1 | 546 |
||
sympy/sympy | sympy__sympy-27301 | a7719e719c0b43ec1dbb964b01b57c4f3783be8d | diff --git a/sympy/plotting/plot.py b/sympy/plotting/plot.py
index 63da0440dabb..50029392a1ac 100644
--- a/sympy/plotting/plot.py
+++ b/sympy/plotting/plot.py
@@ -301,8 +301,8 @@ def plot(*args, show=True, **kwargs):
:external:meth:`~matplotlib.axes.Axes.fill_between` method.
adaptive : bool, optional
- The default value is set to ``True``. Set adaptive to ``False``
- and specify ``n`` if uniform sampling is required.
+ The default value for the ``adaptive`` parameter is now ``False``.
+ To enable adaptive sampling, set ``adaptive=True`` and specify ``n`` if uniform sampling is required.
The plotting uses an adaptive algorithm which samples
recursively to accurately plot. The adaptive algorithm uses a
@@ -377,14 +377,14 @@ def plot(*args, show=True, **kwargs):
[0]: cartesian line: x**2 for x over (-6.0, 6.0)
[1]: cartesian line: x for x over (-5.0, 5.0)
- No adaptive sampling.
+ No adaptive sampling by default. If adaptive sampling is required, set ``adaptive=True``.
.. plot::
:context: close-figs
:format: doctest
:include-source: True
- >>> plot(x**2, adaptive=False, n=400)
+ >>> plot(x**2, adaptive=True, n=400)
Plot object containing:
[0]: cartesian line: x**2 for x over (-10.0, 10.0)
| DOC: outdated information about adaptive sampling in plot() function
I have recently learned (https://github.com/mgeier/python-audio/issues/4) that SymPy doesn't use adaptive sampling by default anymore.
Therefore, this documentation is outdated:
https://github.com/sympy/sympy/blob/a7719e719c0b43ec1dbb964b01b57c4f3783be8d/sympy/plotting/plot.py#L304-L305
https://github.com/sympy/sympy/blob/a7719e719c0b43ec1dbb964b01b57c4f3783be8d/sympy/plotting/plot.py#L380-L389
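A minimal usage sketch reflecting the new default (adaptive sampling is opt-in; in recent SymPy the adaptive algorithm also needs the optional `adaptive` package installed):

```python
from sympy import symbols, plot

x = symbols("x")
plot(x**2, show=False)                 # uniform sampling by default
plot(x**2, adaptive=True, show=False)  # opt in to adaptive sampling
```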
| 1,732,293,434,000 | null | Bug Report | [
"sympy/plotting/plot.py:plot"
] | [] | 1 | 547 |
||
SYSTRAN/faster-whisper | SYSTRAN__faster-whisper-1198 | b568faec40eef1fee88f8aeb27ac3f9d6e006ba4 | diff --git a/faster_whisper/vad.py b/faster_whisper/vad.py
index 9605931c..1f7d2057 100644
--- a/faster_whisper/vad.py
+++ b/faster_whisper/vad.py
@@ -260,8 +260,9 @@ def __init__(self, encoder_path, decoder_path):
) from e
opts = onnxruntime.SessionOptions()
- opts.inter_op_num_threads = 0
- opts.intra_op_num_threads = 0
+ opts.inter_op_num_threads = 1
+ opts.intra_op_num_threads = 1
+ opts.enable_cpu_mem_arena = False
opts.log_severity_level = 4
self.encoder_session = onnxruntime.InferenceSession(
@@ -301,7 +302,16 @@ def __call__(
batched_audio = batched_audio.reshape(-1, num_samples + context_size_samples)
- encoder_output = self.encoder_session.run(None, {"input": batched_audio})[0]
+ encoder_batch_size = 10000
+ num_segments = batched_audio.shape[0]
+ encoder_outputs = []
+ for i in range(0, num_segments, encoder_batch_size):
+ encoder_output = self.encoder_session.run(
+ None, {"input": batched_audio[i : i + encoder_batch_size]}
+ )[0]
+ encoder_outputs.append(encoder_output)
+
+ encoder_output = np.concatenate(encoder_outputs, axis=0)
encoder_output = encoder_output.reshape(batch_size, -1, 128)
decoder_outputs = []
| OOM when using VAD
Hi, does somebody else experience issues with memory consumption when transcribing audio files containing a lot of speech (~ 4 hours long)? I am running the latest version of faster-whisper in a Kubernetes pod on a g4dn AWS instance. The server has 4 cores, 1 GPU, and 16GB RAM, but the pod is limited to 2 cores. The base image is `pytorch/pytorch:2.5.1-cuda12.4-cudnn9-runtime` and as per [this pinned issue](https://github.com/SYSTRAN/faster-whisper/issues/1086) the installed versions should be compatible:
- python 3.11
- torch 2.5.1+cu124
- ctranslate2 4.5.0
- cuda 12.4
- cudnn 9.1.0.7
The process gets killed during the transcription phase when VAD is enabled. I tried the solution [described here](https://github.com/snakers4/silero-vad/issues/356), but it doesn't help. See the logs attached. Does anyone have an idea what could be the cause of the OOM?
[libraries.txt](https://github.com/user-attachments/files/18039471/libraries.txt)
[logs on sigkill.txt](https://github.com/user-attachments/files/18039459/logs.on.sigkill.txt)
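For context on the eventual fix: a generic sketch of the chunked-inference pattern the patch applies to the VAD encoder (names assumed; `session` would be an `onnxruntime.InferenceSession`):

```python
import numpy as np

def run_encoder_chunked(session, batched_audio, chunk_size=10000):
    # Run fixed-size batches instead of one huge batch so the runtime's
    # peak allocation stays bounded.
    outputs = []
    for i in range(0, batched_audio.shape[0], chunk_size):
        outputs.append(session.run(None, {"input": batched_audio[i:i + chunk_size]})[0])
    return np.concatenate(outputs, axis=0)
```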
| 1,733,855,723,000 | null | Performance Issue | [
"faster_whisper/vad.py:SileroVADModel.__init__",
"faster_whisper/vad.py:SileroVADModel.__call__"
] | [] | 2 | 548 |
||
SYSTRAN/faster-whisper | SYSTRAN__faster-whisper-1157 | bcd8ce0fc72d1fa4e42bdf5fd34d5d17bae680c2 | diff --git a/faster_whisper/transcribe.py b/faster_whisper/transcribe.py
index 067527f1..763d64ac 100644
--- a/faster_whisper/transcribe.py
+++ b/faster_whisper/transcribe.py
@@ -1699,12 +1699,14 @@ def find_alignment(
# array([0.])
# This results in crashes when we lookup jump_times with float, like
# IndexError: arrays used as indices must be of integer (or boolean) type
- return []
+ return_list.append([])
+ continue
word_boundaries = np.pad(
np.cumsum([len(t) for t in word_tokens[:-1]]), (1, 0)
)
if len(word_boundaries) <= 1:
- return []
+ return_list.append([])
+ continue
jumps = np.pad(np.diff(text_indices), (1, 0), constant_values=1).astype(
bool
@@ -1884,11 +1886,9 @@ def merge_punctuations(alignment: List[dict], prepended: str, appended: str) ->
if previous["word"].startswith(" ") and previous["word"].strip() in prepended:
# prepend it to the following word
following["word"] = previous["word"] + following["word"]
- if "tokens" in alignment[0].keys():
- following["tokens"] = previous["tokens"] + following["tokens"]
- previous["tokens"] = []
+ following["tokens"] = previous["tokens"] + following["tokens"]
previous["word"] = ""
-
+ previous["tokens"] = []
else:
j = i
i -= 1
@@ -1902,11 +1902,9 @@ def merge_punctuations(alignment: List[dict], prepended: str, appended: str) ->
if not previous["word"].endswith(" ") and following["word"] in appended:
# append it to the previous word
previous["word"] = previous["word"] + following["word"]
- if "tokens" in alignment[0].keys():
- previous["tokens"] = previous["tokens"] + following["tokens"]
- following["tokens"] = []
+ previous["tokens"] = previous["tokens"] + following["tokens"]
following["word"] = ""
-
+ following["tokens"] = []
else:
i = j
j += 1
| IndexError: list index out of range in add_word_timestamps function
Hi,
I found a rare condition, with a specific wav file, specific language and prompt, when I try to transcribe with word_timestamps=True, there is a list index out of range error in add_word_timestamps function:
```
File "/usr/local/src/transcriber/lib/python3.11/site-packages/faster_whisper/transcribe.py", line 1574, in add_word_timestamps
median_duration, max_duration = median_max_durations[segment_idx]
~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^
IndexError: list index out of range
```
It seems the median_max_durations list has fewer elements than the segments list.
I'm using large-v3-turbo model with these transcibe settings:
```
segments, _ = asr_model.transcribe(audio_to_analize, language="fr", condition_on_previous_text=False, initial_prompt="Free", task='transcribe', word_timestamps=True, suppress_tokens=[-1, 12], beam_size=5)
segments = list(segments) # The transcription will actually run here.
```
As far as I can see, median_max_durations is populated from the alignments, so maybe something is wrong there? If I change the language or prompt, or use another sound file, then there is no issue.
Thank you
| I'm aware that this error exists but I had no luck in reproducing it, can you write the exact steps to reproduce and upload the audio file?
Yes. The sample python code that generates the issue:
```
import torch
from faster_whisper import WhisperModel
asr_model = WhisperModel("large-v3-turbo", device="cuda", compute_type="int8", download_root="./models")
segments, _ = asr_model.transcribe('test.wav', language='fr', condition_on_previous_text=False, initial_prompt='Free', task='transcribe', word_timestamps=True, suppress_tokens=[-1, 12], beam_size=5)
segments = list(segments) # The transcription will actually run here.
```
And the audio sample is attached.
[test.zip](https://github.com/user-attachments/files/17646609/test.zip)
I was not able to reproduce it on my machine or using colab
Maybe python version, debian, pytorch... or something is slightly different on our setups. Anything I can do on my side to get more debug logs to see what is the issue?
are you using the master branch?
`median_max_durations` is initialized as an empty list, and since you are using sequential transcription, it will have a single value, The only reason that causes this error is that it is still an empty list which means the for loop in line 1565 was never executed, this will happen when `alignments` is an empty list, you need to figure why is this happening
https://github.com/SYSTRAN/faster-whisper/blob/203dddb047fd2c3ed2a520fe1416467a527e0f37/faster_whisper/transcribe.py#L1561-L1595
The same happens here while testing whisper_streaming:
```shell
Traceback (most recent call last):
File "C:\Users\kr.mao\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 187, in _run_module_as_main
mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
File "C:\Users\kr.mao\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 110, in _get_module_details
__import__(pkg_name)
File "F:\Workspace\skills\python3\whisper_streaming\whisper_online_server.py", line 183, in <module>
proc.process()
File "F:\Workspace\skills\python3\whisper_streaming\whisper_online_server.py", line 162, in process
o = online.process_iter()
File "F:\Workspace\skills\python3\whisper_streaming\whisper_online.py", line 378, in process_iter
res = self.asr.transcribe(self.audio_buffer, init_prompt=prompt)
File "F:\Workspace\skills\python3\whisper_streaming\whisper_online.py", line 138, in transcribe
return list(segments)
File "F:\Workspace\skills\python3\whisper_streaming\venv\lib\site-packages\faster_whisper\transcribe.py", line 2016, in restore_speech_timestamps
for segment in segments:
File "F:\Workspace\skills\python3\whisper_streaming\venv\lib\site-packages\faster_whisper\transcribe.py", line 1256, in generate_segments
self.add_word_timestamps(
File "F:\Workspace\skills\python3\whisper_streaming\venv\lib\site-packages\faster_whisper\transcribe.py", line 1595, in add_word_timestamps
median_duration, max_duration = median_max_durations[segment_idx]
IndexError: list index out of range
```
faster_whisper ***version.py***
```python
"""Version information."""
__version__ = "1.1.0rc0"
```
This problem is still non-reproducible regardless of all methods provided, it will not be solved without reproduction, someone who has the problem needs to create a colab notebook to reproduce it and if they weren't able to reproduce it on colab then they need to isolate where the problem is caused in their environment, without that there is nothing that can be done
> This problem is still non-reproducible regardless of all methods provided, it will not be solved without reproduction, someone who has the problem needs to create a colab notebook to reproduce it and if they weren't able to reproduce it on colab then they need to isolate where the problem is caused in their environment, without that there is nothing that can be done
https://gist.github.com/OliveSerg/cc6c409126567a40c94eb94339a13bae
Was able to reproduce it on Colab with the following files [test.zip](https://github.com/user-attachments/files/17818786/test.zip). Was not able to reproduce with @formater's test file though. Files are just a French bible verse from LibriVox and a [youtube](https://youtube.com/shorts/O32nnjAmpeM?si=vDHhKdbgV27r1n8b) short.
Used `ctranslate2==4.4.0` because of [1806](https://github.com/OpenNMT/CTranslate2/issues/1806).
Error occurs only when `compute_type="int8"` or `int8_float16`, `task="translate"`, and `word_timestamps=True`. No further debugging with the parameters was done aside from replacing these 3.
@MahmoudAshraf97
Maybe related to such weird output (that's from prebug [193 ](https://github.com/SYSTRAN/faster-whisper/tree/3d1de60ef3ce7d34f7c0ae6547f8a616aa060ac2)revision of faster-whisper):
```
{
"id": 279,
"seek": 132430,
"start": 1542.84,
"end": 1545.14,
"text": " Nuðarr你可以 það hverðesskj af april",
"tokens": [51225, 13612, 23436, 289, 81, 42766, 43219, 64, 23436, 276, 331, 23436, 442, 74, 73, 3238, 10992, 388, 51350],
"temperature": 1.0,
"avg_logprob": -4.741359252929687,
"compression_ratio": 1.335164835164835,
"no_speech_prob": 0.12347412109375,
"words": [
{"start": 1542.84, "end": 1542.84, "word": "af", "probability": 0.002758026123046875},
{"start": 1542.84, "end": 1542.84, "word": "aprilð", "probability": 0.057145535945892334},
{"start": 1542.84, "end": 1542.84, "word": "jævîr", "probability": 0.1567896842956543},
{"start": 1542.84, "end": 1542.84, "word": "til", "probability": 0.0018939971923828125},
{"start": 1542.84, "end": 1542.84, "word": "det", "probability": 0.0033779144287109375},
{"start": 1542.84, "end": 1543.44, "word": "bældat", "probability": 0.11750292778015137},
{"start": 1543.44, "end": 1544.36, "word": "brilliant", "probability": 7.152557373046875e-07},
{"start": 1544.36, "end": 1545.14, "word": "með", "probability": 0.2783784866333008}
]
},
{
"id": 280,
"seek": 132430,
"start": 1541.32,
"end": 1543.04,
"text": "ð jævîr til det bældat brilliant með",
"tokens": [51350, 23436, 361, 7303, 85, 7517, 81, 8440, 1141, 272, 7303, 348, 267, 10248, 385, 23436, 51436],
"temperature": 1.0,
"avg_logprob": -4.741359252929687,
"compression_ratio": 1.335164835164835,
"no_speech_prob": 0.12347412109375,
"words": []
},
{
"id": 281,
"seek": 135430,
"start": 1545.14,
"end": 1546.3,
"text": " Duð ena porgna prákankenin.",
"tokens": [50364, 5153, 23436, 465, 64, 1515, 70, 629, 582, 842, 5225, 2653, 259, 13, 50431],
"temperature": 1.0,
"avg_logprob": -4.655551255031784,
"compression_ratio": 1.3051771117166213,
"no_speech_prob": 0.036651611328125,
"words": [
{"start": 1545.14, "end": 1545.36, "word": "Duð", "probability": 0.051422119140625},
{"start": 1545.36, "end": 1545.36, "word": "ena", "probability": 0.010187149047851562},
{"start": 1545.36, "end": 1545.44, "word": "porgna", "probability": 0.004482746124267578},
{"start": 1545.44, "end": 1546.3, "word": "prákankenin.", "probability": 0.04590331315994263}
]
}
```
> https://gist.github.com/OliveSerg/cc6c409126567a40c94eb94339a13bae
>
> Was able to reproduce it on Colab with the following files [test.zip](https://github.com/user-attachments/files/17818786/test.zip). Was not able to reproduce with @formater's test file though. Files are just a French bible verse from LibriVox and a [youtube](https://youtube.com/shorts/O32nnjAmpeM?si=vDHhKdbgV27r1n8b) short.
>
> Used `ctranslate2==4.4.0` because of [1806](https://github.com/OpenNMT/CTranslate2/issues/1806).
>
> Error occurs only when `compute_type="int8"` or `int8_float16`, `task="translate"`, and `word_timestamps=True`. No further debugging with the parameters was done aside from replacing these 3.
I managed to reproduce it consistently on Colab. I also reproduced it on my machine, but not consistently: reproduction needs the exact encoder input and generated tokens, and using `int8` does not guarantee that, at least on my hardware (RTX 3070 Ti), so I have to try transcribing several times to reproduce.
What causes the issue is that some segments produce a single timestamp token with no text tokens. The `find_alignment` function returned an empty list when no words were found, which was fine before #856; after it, we expect `find_alignment` to return a list of lists, which happens as long as there are text tokens. In the edge case where there are none, it returned a single list and skipped the rest of the loop over the other segments in the batch, hence returning fewer alignments than segments and causing the `list index out of range` error.
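A simplified sketch of the control-flow fix (hypothetical helper names):

```python
def align_batch(segments, align_one):
    alignments = []
    for seg in segments:
        if not seg["text_tokens"]:   # timestamp-only segment: no words
            alignments.append([])    # keep list lengths in sync with segments
            continue                 # (the bug: an early `return []` here)
        alignments.append(align_one(seg))
    return alignments
```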
I'll open a PR to solve the problem soon | 1,732,098,639,000 | null | Bug Report | [
"faster_whisper/transcribe.py:WhisperModel.find_alignment",
"faster_whisper/transcribe.py:merge_punctuations"
] | [] | 2 | 549 |
|
SYSTRAN/faster-whisper | SYSTRAN__faster-whisper-1141 | 85e61ea11173dce3f10ce05e4b4bc1a2939d9e4e | diff --git a/faster_whisper/transcribe.py b/faster_whisper/transcribe.py
index 6d18a173..80e5d92c 100644
--- a/faster_whisper/transcribe.py
+++ b/faster_whisper/transcribe.py
@@ -174,6 +174,9 @@ def forward(self, features, chunks_metadata, **forward_params):
compression_ratio=get_compression_ratio(
self.tokenizer.decode(subsegment["tokens"])
),
+ seek=int(
+ chunk_metadata["start_time"] * self.model.frames_per_second
+ ),
)
for subsegment in subsegments
]
@@ -496,7 +499,7 @@ def _batched_segments_generator(
for segment in result:
seg_idx += 1
yield Segment(
- seek=int(result[-1]["end"] * self.model.frames_per_second),
+ seek=segment["seek"],
id=seg_idx,
text=segment["text"],
start=round(segment["start"], 3),
@@ -1318,7 +1321,7 @@ def next_words_segment(segments: List[dict]) -> Optional[dict]:
yield Segment(
id=idx,
- seek=seek,
+ seek=previous_seek,
start=segment["start"],
end=segment["end"],
text=text,
@@ -1585,7 +1588,7 @@ def add_word_timestamps(
for segment_idx, segment in enumerate(segments):
word_index = 0
- time_offset = segment[0]["start"]
+ time_offset = segment[0]["seek"] / self.frames_per_second
median_duration, max_duration = median_max_durations[segment_idx]
for subsegment_idx, subsegment in enumerate(segment):
saved_tokens = 0
| Some segments have a 1-second shift after PR #856
I appreciate your hard work.
---
audio (2 minutes): [01.aac.zip](https://github.com/user-attachments/files/17751633/01.aac.zip)
The correct SRT result (using commit fbcf58b, which is before the huge PR #856): [01.old.srt.zip](https://github.com/user-attachments/files/17751733/01.old.srt.zip)
The wrong SRT result (using latest commit 85e61ea): [01.new.srt.zip](https://github.com/user-attachments/files/17751755/01.new.srt.zip)
---
I am **not** using the batch version
```python
model = faster_whisper.WhisperModel(
model_size_or_path='large-v2',
device='cuda',
cpu_threads=4,
)
model.transcribe(
audio=audio,
language=None,
task='transcribe',
vad_filter=False,
initial_prompt=None,
word_timestamps=True,
repetition_penalty=1.0,
)
```
script from this project https://github.com/heimoshuiyu/whisper-fastapi
---

Some segments on the left (wrong) have a 1-second mismatch (shifted +1s) compared to the right (correct).
---
I also tested on the commit of PR #856 (eb839023), which is worse.
result SRT:
[01.eb839023.srt.zip](https://github.com/user-attachments/files/17752205/01.eb839023.srt.zip)

left: commit eb839023 PR #856
middle: latest commit 85e61ea
right: commit fbcf58b
| 1,731,607,572,000 | null | Bug Report | [
"faster_whisper/transcribe.py:BatchedInferencePipeline.forward",
"faster_whisper/transcribe.py:BatchedInferencePipeline._batched_segments_generator",
"faster_whisper/transcribe.py:WhisperModel.generate_segments",
"faster_whisper/transcribe.py:WhisperModel.add_word_timestamps"
] | [] | 4 | 550 |
||
mlflow/mlflow | mlflow__mlflow-13821 | 15dbca59de6974d1ed9ce1e801edefd86b6a87ef | diff --git a/mlflow/models/model.py b/mlflow/models/model.py
index 2326c3df57402..7ae1fbede42db 100644
--- a/mlflow/models/model.py
+++ b/mlflow/models/model.py
@@ -1116,9 +1116,20 @@ def update_model_requirements(
def _validate_langchain_model(model):
- from mlflow.langchain import _validate_and_prepare_lc_model_or_path
+ from langchain_core.runnables.base import Runnable
- return _validate_and_prepare_lc_model_or_path(model, None)
+ from mlflow.models.utils import _validate_and_get_model_code_path
+
+ if isinstance(model, str):
+ return _validate_and_get_model_code_path(model, None)
+
+ if not isinstance(model, Runnable):
+ raise MlflowException.invalid_parameter_value(
+ "Model must be a Langchain Runnable type or path to a Langchain model, "
+ f"got {type(model)}"
+ )
+
+ return model
def _validate_llama_index_model(model):
| [BUG] MLflow langchain does not support logging RunnableWithMessageHistory
### Issues Policy acknowledgement
- [X] I have read and agree to submit bug reports in accordance with the [issues policy](https://www.github.com/mlflow/mlflow/blob/master/ISSUE_POLICY.md)
### Where did you encounter this bug?
Databricks
### Willingness to contribute
No. I cannot contribute a bug fix at this time.
### MLflow version
- Client: 2.16.2
### System information
- **OS Platform and Distribution**: Linux (5.4.0-1135-azure-fips)
- **Python version**: 3.11.0
### Describe the problem
I am trying to log a Langchain chain for conversational RAG with memory using Langchains RunnableWithMessageHistory. However, I get an error that says that this flavor is not supported. Is there a workaround for this?
### Tracking information
```shell
System information: Linux #142+fips1-Ubuntu SMP Tue Jul 30 21:00:25 UTC 2024
Python version: 3.11.0rc1
MLflow version: 2.16.2
MLflow module location: /local_disk0/.ephemeral_nfs/envs/pythonEnv-ada40d50-865d-42f8-8ded-3ea7c11f55a6/lib/python3.11/site-packages/mlflow/__init__.py
Tracking URI: databricks
Registry URI: databricks-uc
Databricks runtime version: 15.4
MLflow environment variables:
MLFLOW_CONDA_HOME: /databricks/conda
MLFLOW_DEPLOYMENTS_TARGET: databricks
MLFLOW_GATEWAY_URI: databricks
MLFLOW_PYTHON_EXECUTABLE: /databricks/spark/scripts/mlflow_python.sh
MLFLOW_REGISTRY_URI: databricks-uc
MLFLOW_TRACKING_URI: databricks
MLflow dependencies:
Flask: 2.2.5
Jinja2: 3.1.2
aiohttp: 3.10.5
alembic: 1.13.3
azure-storage-file-datalake: 12.14.0
boto3: 1.34.39
botocore: 1.34.39
docker: 7.1.0
fastapi: 0.115.0
google-cloud-storage: 2.10.0
graphene: 3.3
gunicorn: 20.1.0
kubernetes: 31.0.0
langchain: 0.3.0
markdown: 3.4.1
matplotlib: 3.7.2
mlflow-skinny: 2.16.2
numpy: 1.23.5
pandas: 1.5.3
pyarrow: 14.0.1
pydantic: 2.9.2
scikit-learn: 1.3.0
scipy: 1.11.1
sqlalchemy: 2.0.35
tiktoken: 0.7.0
uvicorn: 0.30.6
virtualenv: 20.24.2
watchfiles: 0.24.0
```
### Code to reproduce issue
```
from langchain_openai import AzureChatOpenAI
from langchain.chains import create_history_aware_retriever, create_retrieval_chain
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_core.chat_history import BaseChatMessageHistory
from langchain_community.chat_message_histories import ChatMessageHistory
from langchain_core.runnables.history import RunnableWithMessageHistory
# fill with details
llm = AzureChatOpenAI()
vector_search_as_retriever = DatabricksVectorSearch().as_retriever()
contextualize_q_prompt = ChatPromptTemplate.from_messages(
[
("system", contextualize_q_system_prompt),
MessagesPlaceholder("chat_history"),
("human", "{input}"),
]
)
history_aware_retriever = create_history_aware_retriever(
llm, vector_search_as_retriever, contextualize_q_prompt
)
qa_prompt = ChatPromptTemplate.from_messages(
[
("system", system_prompt),
MessagesPlaceholder("chat_history"),
("human", "{input}"),
]
)
question_answer_chain = create_stuff_documents_chain(llm, qa_prompt)
rag_chain = create_retrieval_chain(history_aware_retriever, question_answer_chain)
def get_session_history(session_id: str) -> BaseChatMessageHistory:
if session_id not in store:
store[session_id] = ChatMessageHistory()
return store[session_id]
conversational_rag_chain = RunnableWithMessageHistory(
rag_chain,
get_session_history,
input_messages_key="input",
history_messages_key="chat_history",
output_messages_key="answer",
)
# Error
with mlflow.start_run(run_name="test"):
mlflow.set_tag("type", "chain")
logged_chain_info = mlflow.langchain.log_model(
lc_model=conversational_rag_chain,
artifact_path="chain"
)
```
### Stack trace
```
MlflowException: MLflow langchain flavor only supports subclasses of (<class 'langchain.chains.base.Chain'>, <class 'langchain.agents.agent.AgentExecutor'>, <class 'langchain_core.retrievers.BaseRetriever'>, <class 'langchain_core.language_models.chat_models.SimpleChatModel'>, <class 'langchain_core.prompts.chat.ChatPromptTemplate'>, <class 'langchain_core.runnables.passthrough.RunnablePassthrough'>, <class 'langchain_core.runnables.base.RunnableLambda'>, <class 'langchain_core.runnables.base.RunnableParallel'>, <class 'langchain_core.runnables.base.RunnableSequence'>, <class 'langchain_core.runnables.branch.RunnableBranch'>, <class 'langchain_core.runnables.passthrough.RunnableAssign'>, <class 'langchain_core.runnables.base.RunnableBinding'>), found RunnableWithMessageHistory.
File <command-2576690084880631>, line 5
3 with mlflow.start_run(run_name=f"dbdemos_rag_azure"):
4 mlflow.set_tag("type", "chain")
----> 5 logged_chain_info = mlflow.langchain.log_model(
6 lc_model=conversational_rag_chain, # Chain code file e.g., /path/to/the/chain.py
7 model_config='rag_multi_chain_config.yaml', # Chain configuration
8 artifact_path="chain"
9 )
11 # Test the chain locally
12 chain = mlflow.langchain.load_model(logged_chain_info.model_uri)
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-ada40d50-865d-42f8-8ded-3ea7c11f55a6/lib/python3.11/site-packages/mlflow/tracing/provider.py:268, in trace_disabled.<locals>.wrapper(*args, **kwargs)
266 disable()
267 try:
--> 268 is_func_called, result = True, f(*args, **kwargs)
269 finally:
270 enable()
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-ada40d50-865d-42f8-8ded-3ea7c11f55a6/lib/python3.11/site-packages/mlflow/langchain/__init__.py:549, in log_model(lc_model, artifact_path, conda_env, code_paths, registered_model_name, signature, input_example, await_registration_for, pip_requirements, extra_pip_requirements, metadata, loader_fn, persist_dir, example_no_conversion, run_id, model_config, streamable)
403 @experimental
404 @format_docstring(LOG_MODEL_PARAM_DOCS.format(package_name=FLAVOR_NAME))
405 @docstring_version_compatibility_warning(FLAVOR_NAME)
(...)
424 streamable=None,
425 ):
426 """
427 Log a LangChain model as an MLflow artifact for the current run.
428
(...)
547 metadata of the logged model.
548 """
--> 549 return Model.log(
550 artifact_path=artifact_path,
551 flavor=mlflow.langchain,
552 registered_model_name=registered_model_name,
553 lc_model=lc_model,
554 conda_env=conda_env,
555 code_paths=code_paths,
556 signature=signature,
557 input_example=input_example,
558 await_registration_for=await_registration_for,
559 pip_requirements=pip_requirements,
560 extra_pip_requirements=extra_pip_requirements,
561 metadata=metadata,
562 loader_fn=loader_fn,
563 persist_dir=persist_dir,
564 example_no_conversion=example_no_conversion,
565 run_id=run_id,
566 model_config=model_config,
567 streamable=streamable,
568 )
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-ada40d50-865d-42f8-8ded-3ea7c11f55a6/lib/python3.11/site-packages/mlflow/models/model.py:725, in Model.log(cls, artifact_path, flavor, registered_model_name, await_registration_for, metadata, run_id, resources, **kwargs)
721 run_id = mlflow.tracking.fluent._get_or_start_run().info.run_id
722 mlflow_model = cls(
723 artifact_path=artifact_path, run_id=run_id, metadata=metadata, resources=resources
724 )
--> 725 flavor.save_model(path=local_path, mlflow_model=mlflow_model, **kwargs)
726 # `save_model` calls `load_model` to infer the model requirements, which may result in
727 # __pycache__ directories being created in the model directory.
728 for pycache in Path(local_path).rglob("__pycache__"):
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-ada40d50-865d-42f8-8ded-3ea7c11f55a6/lib/python3.11/site-packages/mlflow/tracing/provider.py:272, in trace_disabled.<locals>.wrapper(*args, **kwargs)
270 enable()
271 else:
--> 272 is_func_called, result = True, f(*args, **kwargs)
273 # We should only catch the exception from disable() and enable()
274 # and let other exceptions propagate.
275 except MlflowTracingException as e:
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-ada40d50-865d-42f8-8ded-3ea7c11f55a6/lib/python3.11/site-packages/mlflow/langchain/__init__.py:262, in save_model(lc_model, path, conda_env, code_paths, mlflow_model, signature, input_example, pip_requirements, extra_pip_requirements, metadata, loader_fn, persist_dir, example_no_conversion, model_config, streamable)
259 import langchain
260 from langchain.schema import BaseRetriever
--> 262 lc_model_or_path = _validate_and_prepare_lc_model_or_path(lc_model, loader_fn, temp_dir)
264 _validate_env_arguments(conda_env, pip_requirements, extra_pip_requirements)
266 path = os.path.abspath(path)
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-ada40d50-865d-42f8-8ded-3ea7c11f55a6/lib/python3.11/site-packages/mlflow/langchain/utils/__init__.py:293, in _validate_and_prepare_lc_model_or_path(lc_model, loader_fn, temp_dir)
290 return _validate_and_get_model_code_path(lc_model, temp_dir)
292 if not isinstance(lc_model, supported_lc_types()):
--> 293 raise mlflow.MlflowException.invalid_parameter_value(
294 get_unsupported_model_message(type(lc_model).__name__)
295 )
297 _SUPPORTED_LLMS = _get_supported_llms()
298 if isinstance(lc_model, langchain.chains.llm.LLMChain) and not any(
299 isinstance(lc_model.llm, supported_llm) for supported_llm in _SUPPORTED_LLMS
300 ):
```
### What component(s) does this bug affect?
- [ ] `area/artifacts`: Artifact stores and artifact logging
- [ ] `area/build`: Build and test infrastructure for MLflow
- [ ] `area/deployments`: MLflow Deployments client APIs, server, and third-party Deployments integrations
- [ ] `area/docs`: MLflow documentation pages
- [ ] `area/examples`: Example code
- [ ] `area/model-registry`: Model Registry service, APIs, and the fluent client calls for Model Registry
- [X] `area/models`: MLmodel format, model serialization/deserialization, flavors
- [ ] `area/recipes`: Recipes, Recipe APIs, Recipe configs, Recipe Templates
- [ ] `area/projects`: MLproject format, project running backends
- [ ] `area/scoring`: MLflow Model server, model deployment tools, Spark UDFs
- [ ] `area/server-infra`: MLflow Tracking server backend
- [ ] `area/tracking`: Tracking Service, tracking client APIs, autologging
### What interface(s) does this bug affect?
- [ ] `area/uiux`: Front-end, user experience, plotting, JavaScript, JavaScript dev server
- [ ] `area/docker`: Docker use across MLflow's components, such as MLflow Projects and MLflow Models
- [ ] `area/sqlalchemy`: Use of SQLAlchemy in the Tracking Service or Model Registry
- [ ] `area/windows`: Windows support
### What language(s) does this bug affect?
- [ ] `language/r`: R APIs and clients
- [ ] `language/java`: Java APIs and clients
- [ ] `language/new`: Proposals for new client languages
### What integration(s) does this bug affect?
- [ ] `integrations/azure`: Azure and Azure ML integrations
- [ ] `integrations/sagemaker`: SageMaker integrations
- [ ] `integrations/databricks`: Databricks integrations
| @VarunUllanat The workaround is to use `models from code` for saving the langchain model https://mlflow.org/docs/latest/models.html#models-from-code. This will be the recommended way for saving langchain models.
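A minimal sketch of that workaround (the file name and builder helper are placeholders):
```python
# chain.py — the model-as-code file: build the chain at import time and
# register it with set_model.
import mlflow
from my_chain_builder import build_conversational_rag_chain  # hypothetical helper

mlflow.models.set_model(build_conversational_rag_chain())
```
Then log the *path* to that file instead of the chain object:
```python
import mlflow

with mlflow.start_run(run_name="test"):
    logged_chain_info = mlflow.langchain.log_model(
        lc_model="chain.py",  # path to the code file, not the object
        artifact_path="chain",
    )
```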
Thanks for the response. When I set that:
`mlflow.models.set_model(model=conversational_rag_chain)`
I get the following error:
```
MlflowException Traceback (most recent call last)
File <command-832405214942020>, line 1
----> 1 mlflow.models.set_model(model=conversational_rag_chain)
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-b9c2956b-9a79-418a-8f13-d08539e9b4d1/lib/python3.11/site-packages/mlflow/models/model.py:1068, in set_model(model)
1065 except Exception:
1066 pass
-> 1068 raise mlflow.MlflowException(SET_MODEL_ERROR)
MlflowException: Model should either be an instance of PyFuncModel, Langchain type, or LlamaIndex index.
```
For clarity, `type(conversational_rag_chain)` is `langchain_core.runnables.history.RunnableWithMessageHistory` and not `langchain_core.runnables.base.RunnableSequence` like a normal chain. Is the former not supported by mlflow?
Could you try `pip install git+https://github.com/serena-ruan/mlflow.git@langchain` then retry and see if it works?
@mlflow/mlflow-team Please assign a maintainer and start triaging this issue.
@serena-ruan your fix works (when will it be implemented?). Also, how would the mlflow model logging work for this with two arguments in the invoke method?
```
{"input": "What is langchain?"},
{"configurable": {"session_id": "123"}})
```
@tkernr Unfortunately, your input example requires us to support dictionaries as params; this is on our roadmap and will be supported in the next release, so please stay tuned.
BTW I think the change is merged, could you run with latest MLflow version?
Sorry for the confusion, seems the fix isn't included, let me file a PR and include it in the next release | 1,731,987,688,000 | null | Bug Report | [
"mlflow/models/model.py:_validate_langchain_model"
] | [] | 1 | 551 |
|
jax-ml/jax | jax-ml__jax-25487 | c73f3060997ac3b1c6de4f075111b684ea20b6ac | diff --git a/jax/_src/random.py b/jax/_src/random.py
index 13c4ab4dbce4..12aa5b93efbf 100644
--- a/jax/_src/random.py
+++ b/jax/_src/random.py
@@ -291,15 +291,18 @@ def split(key: ArrayLike, num: int | tuple[int, ...] = 2) -> Array:
return _return_prng_keys(wrapped, _split(typed_key, num))
-def _key_impl(keys: Array) -> str | PRNGSpec:
+def _key_impl(keys: Array) -> PRNGImpl:
assert jnp.issubdtype(keys.dtype, dtypes.prng_key)
keys_dtype = typing.cast(prng.KeyTy, keys.dtype)
- impl = keys_dtype._impl
+ return keys_dtype._impl
+
+def _key_spec(keys: Array) -> str | PRNGSpec:
+ impl = _key_impl(keys)
return impl.name if impl.name in prng.prngs else PRNGSpec(impl)
def key_impl(keys: ArrayLike) -> str | PRNGSpec:
typed_keys, _ = _check_prng_key("key_impl", keys, allow_batched=True)
- return _key_impl(typed_keys)
+ return _key_spec(typed_keys)
def _key_data(keys: Array) -> Array:
| `jax.random.beta` 3 orders of magnitude slower from 0.4.36 on GPU
### Description
My code runs substantially slower than it did one month ago, and I figured out a key bottleneck: sampling from the beta distribution has gotten around 1000 times slower on GPU.
On Colab, I run the following code on different versions of jax
```
@jax.jit
def sample_beta(rng_key):
return jax.random.beta(key=rng_key, a=1, b=1, shape=(1000, 2))
seed = jrand.PRNGKey(1)
sample_beta(seed)
%timeit sample_beta(seed)
```
* Time taken on version 0.4.35: **0.784ms**
* Time taken on version 0.4.36: **351ms**
* Time taken on version 0.4.37: **354ms**



### System info (python version, jaxlib version, accelerator, etc.)
jax: 0.4.36
jaxlib: 0.4.36
numpy: 1.26.4
python: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0]
device info: Tesla T4-1, 1 local devices"
process_count: 1
platform: uname_result(system='Linux', node='d36852658d94', release='6.1.85+', version='#1 SMP PREEMPT_DYNAMIC Thu Jun 27 21:05:47 UTC 2024', machine='x86_64')
$ nvidia-smi
Fri Dec 13 13:13:50 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.104.05 Driver Version: 535.104.05 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 Tesla T4 Off | 00000000:00:04.0 Off | 0 |
| N/A 72C P0 31W / 70W | 109MiB / 15360MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
+---------------------------------------------------------------------------------------+
| I can reproduce this, but I'm not totally sure where this would be coming from. Perhaps @jakevdp or @froystig could take a look re: recent changes to PRNGs?
My bisection points to https://github.com/jax-ml/jax/pull/24593 | 1,734,133,002,000 | null | Performance Issue | [
"jax/_src/random.py:_key_impl",
"jax/_src/random.py:key_impl"
] | [
"jax/_src/random.py:_key_spec"
] | 2 | 552 |
|
jax-ml/jax | jax-ml__jax-24733 | 4b4fb9dae9eb7e2740d70de5b4a610f979530382 | diff --git a/jax/_src/numpy/reductions.py b/jax/_src/numpy/reductions.py
index fa8d73361e2b..be1e55675079 100644
--- a/jax/_src/numpy/reductions.py
+++ b/jax/_src/numpy/reductions.py
@@ -2360,7 +2360,8 @@ def _quantile(a: Array, q: Array, axis: int | tuple[int, ...] | None,
index[axis] = high
high_value = a[tuple(index)]
else:
- a = _where(any(lax_internal._isnan(a), axis=axis, keepdims=True), np.nan, a)
+ with jax.debug_nans(False):
+ a = _where(any(lax_internal._isnan(a), axis=axis, keepdims=True), np.nan, a)
a = lax.sort(a, dimension=axis)
n = lax.convert_element_type(a_shape[axis], lax_internal._dtype(q))
q = lax.mul(q, n - 1)
| median FloatingPointError: invalid value (nan) encountered in jit(convert_element_type)
### Description
Hello,
I got this error in jnp.median when I set JAX_DISABLE_JIT=True and JAX_DEBUG_NANS=True.
```
Traceback (most recent call last):
File "/data1/home/hhu17/zyl/PINE/H2+/3/test.py", line 29, in <module>
c = jnp.median(b)
^^^^^^^^^^^^^
File "/data1/home/hhu17/.conda/envs/jax/lib/python3.11/site-packages/jax/_src/numpy/reductions.py", line 2517, in median
return quantile(a, 0.5, axis=axis, out=out, overwrite_input=overwrite_input,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data1/home/hhu17/.conda/envs/jax/lib/python3.11/site-packages/jax/_src/numpy/reductions.py", line 2172, in quantile
return _quantile(lax_internal.asarray(a), lax_internal.asarray(q), axis, method, keepdims, False)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data1/home/hhu17/.conda/envs/jax/lib/python3.11/site-packages/jax/_src/numpy/reductions.py", line 2302, in _quantile
a = _where(any(lax_internal._isnan(a), axis=axis, keepdims=True), np.nan, a)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data1/home/hhu17/.conda/envs/jax/lib/python3.11/site-packages/jax/_src/numpy/util.py", line 426, in _where
x, y = promote_dtypes(x, y)
^^^^^^^^^^^^^^^^^^^^
File "/data1/home/hhu17/.conda/envs/jax/lib/python3.11/site-packages/jax/_src/numpy/util.py", line 259, in promote_dtypes
return [lax._convert_element_type(x, to_dtype, weak_type) for x in args]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data1/home/hhu17/.conda/envs/jax/lib/python3.11/site-packages/jax/_src/numpy/util.py", line 259, in <listcomp>
return [lax._convert_element_type(x, to_dtype, weak_type) for x in args]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data1/home/hhu17/.conda/envs/jax/lib/python3.11/site-packages/jax/_src/lax/lax.py", line 587, in _convert_element_type
return convert_element_type_p.bind(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data1/home/hhu17/.conda/envs/jax/lib/python3.11/site-packages/jax/_src/lax/lax.py", line 2981, in _convert_element_type_bind
operand = core.Primitive.bind(convert_element_type_p, operand,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data1/home/hhu17/.conda/envs/jax/lib/python3.11/site-packages/jax/_src/core.py", line 438, in bind
return self.bind_with_trace(find_top_trace(args), args, params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data1/home/hhu17/.conda/envs/jax/lib/python3.11/site-packages/jax/_src/core.py", line 442, in bind_with_trace
out = trace.process_primitive(self, map(trace.full_raise, args), params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data1/home/hhu17/.conda/envs/jax/lib/python3.11/site-packages/jax/_src/core.py", line 955, in process_primitive
return primitive.impl(*tracers, **params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data1/home/hhu17/.conda/envs/jax/lib/python3.11/site-packages/jax/_src/dispatch.py", line 91, in apply_primitive
outs = fun(*args)
^^^^^^^^^^
FloatingPointError: invalid value (nan) encountered in jit(convert_element_type). Because jax_config.debug_nans.value and/or config.jax_debug_infs is set, the de-optimized function (i.e., the function as if the `jit` decorator were removed) was called in an attempt to get a more precise error message. However, the de-optimized function did not produce invalid values during its execution. This behavior can result from `jit` optimizations causing the invalid value to be produced. It may also arise from having nan/inf constants as outputs, like `jax.jit(lambda ...: jax.numpy.nan)(...)`.
```
Following is the minimal code to reproduce the error.
```
import jax.numpy as jnp
import jax
key = jax.random.PRNGKey(12)
a = jax.random.normal(key, 128)
b = jnp.array(a)
c = jnp.median(b)
jit_median = jax.jit(jnp.median)
c = jit_median(b)
print(c)
```
Any help would be greatly appreciated!
### System info (python version, jaxlib version, accelerator, etc.)
jax: 0.4.35
jaxlib: 0.4.34
numpy: 2.1.1
python: 3.11.9 (main, Apr 19 2024, 16:48:06) [GCC 11.2.0]
device info: cpu-1, 1 local devices"
process_count: 1
platform: uname_result(system='Linux', node='login4', release='3.10.0-957.el7.x86_64', version='#1 SMP Mon Dec 7 11:30:56 UTC 2020', machine='x86_64')
| Looks like it's coming from the NaN introduced on this line:
https://github.com/jax-ml/jax/blob/4b4fb9dae9eb7e2740d70de5b4a610f979530382/jax/_src/numpy/reductions.py#L2363
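For reference, the pattern the eventual fix uses (matching the patch above) is to treat that NaN fill as intentional and suspend the check locally:
```python
import jax
import jax.numpy as jnp

jax.config.update("jax_debug_nans", True)
a = jnp.arange(6.0)
# The NaN constant below is deliberate, so the debug-nans check is
# disabled just around this statement.
with jax.debug_nans(False):
    masked = jnp.where(jnp.isnan(a).any(keepdims=True), jnp.nan, a)
print(masked)
```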
@jakevdp Can I tag you here since you wrote the implementation for _quantile? | 1,730,849,863,000 | null | Bug Report | [
"jax/_src/numpy/reductions.py:_quantile"
] | [] | 1 | 553 |
|
jax-ml/jax | jax-ml__jax-24717 | 34b4787e2eff9edbd8eca242a74f1c165388b871 | diff --git a/jax/_src/scipy/stats/_core.py b/jax/_src/scipy/stats/_core.py
index 08d1c0b6b538..f7b28d3ac301 100644
--- a/jax/_src/scipy/stats/_core.py
+++ b/jax/_src/scipy/stats/_core.py
@@ -198,13 +198,12 @@ def rankdata(
return jnp.apply_along_axis(rankdata, axis, a, method)
arr = jnp.ravel(a)
- sorter = jnp.argsort(arr)
+ arr, sorter = jax.lax.sort_key_val(arr, jnp.arange(len(arr)))
inv = invert_permutation(sorter)
if method == "ordinal":
return inv + 1
- arr = arr[sorter]
- obs = jnp.insert(arr[1:] != arr[:-1], 0, True)
+ obs = jnp.concatenate([jnp.array([True]), arr[1:] != arr[:-1]])
dense = obs.cumsum()[inv]
if method == "dense":
return dense
| scipy.stats.rankdata causes constant folding warning for method='dense' but not method='ordinal'
### Description
[`scipy.stats.rankdata`](https://jax.readthedocs.io/en/latest/_autosummary/jax.scipy.stats.rankdata.html) causes a constant folding warning for `method='dense'` but not `method='ordinal'`:
```
$ py -c "import jax; jax.scipy.stats.rankdata(jax.numpy.zeros(10**7), 'ordinal')"
$ py -c "import jax; jax.scipy.stats.rankdata(jax.numpy.zeros(10**7), 'dense')"
2024-11-04 20:21:27.997499: E external/xla/xla/service/slow_operation_alarm.cc:65] Constant folding an instruction is taking > 1s:
%reduce-window.6 = s32[625000,16]{0,1} reduce-window(s32[625000,16]{0,1} %constant.174, s32[] %constant.17), window={size=1x16 pad=0_0x15_0}, to_apply=%region_5.113
This isn't necessarily a bug; constant-folding is inherently a trade-off between compilation time and speed at runtime. XLA has some guards that attempt to keep constant folding from taking too long, but fundamentally you'll always be able to come up with an input program that takes a long time.
If you'd like to file a bug, run with envvar XLA_FLAGS=--xla_dump_to=/tmp/foo and attach the results.
2024-11-04 20:21:33.721446: E external/xla/xla/service/slow_operation_alarm.cc:133] The operation took 6.728977s
Constant folding an instruction is taking > 1s:
%reduce-window.6 = s32[625000,16]{0,1} reduce-window(s32[625000,16]{0,1} %constant.174, s32[] %constant.17), window={size=1x16 pad=0_0x15_0}, to_apply=%region_5.113
This isn't necessarily a bug; constant-folding is inherently a trade-off between compilation time and speed at runtime. XLA has some guards that attempt to keep constant folding from taking too long, but fundamentally you'll always be able to come up with an input program that takes a long time.
If you'd like to file a bug, run with envvar XLA_FLAGS=--xla_dump_to=/tmp/foo and attach the results.
```
Looking at the code for `rankdata`, the culprit might be one of the 3 lines of code starting [here](https://github.com/jax-ml/jax/blob/ab47d4687f647de3aa145a9a782fb7b4aaf92af4/jax/_src/scipy/stats/_core.py#L206).
XLA dump [here](https://www.dropbox.com/scl/fo/rruuywlngh1r03hj9c2r1/AM-ym1pWfIUhkHA2hOiQNko?rlkey=2xxwdrmssgfyk7yz61xrt1t7d&st=h6yp3a8x&dl=0).
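For reference, the patch above swaps that `argsort`-then-gather pattern for a single fused primitive; in miniature:
```python
import jax
import jax.numpy as jnp

arr = jnp.zeros(8)
# sort_key_val returns the sorted values and the sorting permutation in
# one primitive, instead of argsort followed by a separate gather.
sorted_arr, sorter = jax.lax.sort_key_val(arr, jnp.arange(len(arr)))
print(sorted_arr, sorter)
```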
### System info (python version, jaxlib version, accelerator, etc.)
jax: 0.4.35
jaxlib: 0.4.34
numpy: 1.26.4
python: 3.12.7 (main, Oct 1 2024, 02:05:46) [Clang 15.0.0 (clang-1500.3.9.4)]
device info: cpu-1, 1 local devices"
process_count: 1
platform: uname_result(system='Darwin', node='Carloss-MacBook-Pro-2.local', release='23.6.0', version='Darwin Kernel Version 23.6.0: Mon Jul 29 21:14:46 PDT 2024; root:xnu-10063.141.2~1/RELEASE_ARM64_T6031', machine='arm64')
| 1,730,812,512,000 | null | Performance Issue | [
"jax/_src/scipy/stats/_core.py:rankdata"
] | [] | 1 | 554 |
||
phidatahq/phidata | phidatahq__phidata-1589 | 2c18b480f349eee62e16a794a250ed8549558cb1 | diff --git a/phi/document/chunking/recursive.py b/phi/document/chunking/recursive.py
index 662a9218c..47c552294 100644
--- a/phi/document/chunking/recursive.py
+++ b/phi/document/chunking/recursive.py
@@ -38,6 +38,7 @@ def chunk(self, document: Document) -> List[Document]:
chunk_id = None
if document.id:
chunk_id = f"{document.id}_{chunk_number}"
+ chunk_number += 1
meta_data["chunk_size"] = len(chunk)
chunks.append(Document(id=chunk_id, name=document.name, meta_data=meta_data, content=chunk))
| Duplicate key value violates unique constraint with recursive chunking
When using `RecursiveChunking` with large files, errors like the following occur:
```
ERROR Error with batch starting at index 0: (psycopg.errors.UniqueViolation) duplicate key value violates unique constraint "recipes_agentic_recursive_chunking_pkey"
DETAIL: Key (id)=(relativity_1) already exists.
```
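In miniature, the counter the patch advances (placeholder data):
```python
document_id = "relativity"
chunks = ["chunk a", "chunk b", "chunk c"]

chunk_number = 1
ids = []
for chunk in chunks:
    ids.append(f"{document_id}_{chunk_number}")
    chunk_number += 1  # without this line every id stays "relativity_1"
print(ids)  # ['relativity_1', 'relativity_2', 'relativity_3']
```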
| 1,734,420,482,000 | null | Bug Report | [
"phi/document/chunking/recursive.py:RecursiveChunking.chunk"
] | [] | 1 | 555 |
||
phidatahq/phidata | phidatahq__phidata-1583 | 54f7a22970f66c32409607e2f1e3474a7a11a395 | diff --git a/phi/memory/agent.py b/phi/memory/agent.py
index 6bfd6c185..5f3a7dea1 100644
--- a/phi/memory/agent.py
+++ b/phi/memory/agent.py
@@ -1,5 +1,6 @@
from enum import Enum
from typing import Dict, List, Any, Optional, Tuple
+from copy import deepcopy
from pydantic import BaseModel, ConfigDict
@@ -357,8 +358,22 @@ def clear(self) -> None:
self.summary = None
self.memories = None
- def deep_copy(self, *, update: Optional[Dict[str, Any]] = None) -> "AgentMemory":
- new_memory = self.model_copy(deep=True, update=update)
- # clear the new memory to remove any references to the old memory
- new_memory.clear()
- return new_memory
+ def deep_copy(self):
+ # Create a shallow copy of the object
+ copied_obj = self.__class__(**self.model_dump())
+
+ # Manually deepcopy fields that are known to be safe
+ for field_name, field_value in self.__dict__.items():
+ if field_name not in ["db", "classifier", "manager", "summarizer"]:
+ try:
+ setattr(copied_obj, field_name, deepcopy(field_value))
+ except Exception as e:
+ logger.warning(f"Failed to deepcopy field: {field_name} - {e}")
+ setattr(copied_obj, field_name, field_value)
+
+ copied_obj.db = self.db
+ copied_obj.classifier = self.classifier
+ copied_obj.manager = self.manager
+ copied_obj.summarizer = self.summarizer
+
+ return copied_obj
| Agents with memory dont work in playground
Repro Steps
```
memory_db = SqliteMemoryDb(table_name="memories", db_file="tmp/agents.db")
agent = Agent(
name="my_agent",
agent_id="my_agent",
model=models["gpt-4o"],
debug_mode=True,
memory=AgentMemory(
db=memory_db,
create_user_memories=True,
create_session_summary=True,
classifier=MemoryClassifier(
model=models["gpt-4o-mini"],
),
summarizer=MemorySummarizer(
model=models["gpt-4o-mini"],
),
manager=MemoryManager(
model=models["gpt-4o-mini"],
),
),
storage=agent_storage,
)
# This works
agent.print_response(
"Who am i?",
stream=True,
)
```
With the playground, it fails to `deepcopy` in `router.py`:
```
File "phi/playground/router.py", line 269, in agent_run
new_agent_instance = agent.deep_copy(update={"session_id": body.session_id})
File "phi/agent/agent.py", line 277, in deep_copy
fields_for_new_agent[field_name] = self._deep_copy_field(field_name, field_value)
File "phi/agent/agent.py", line 294, in _deep_copy_field
return field_value.deep_copy()
File "phi/memory/agent.py", line 361, in deep_copy
new_memory = self.model_copy(deep=True, update=update)
File ".venv/lib/python3.9/site-packages/pydantic/main.py", line 337, in model_copy
copied = self.__deepcopy__() if deep else self.__copy__()
File ".venv/lib/python3.9/site-packages/pydantic/main.py", line 805, in __deepcopy__
_object_setattr(m, '__dict__', deepcopy(self.__dict__, memo=memo))
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/copy.py", line 146, in deepcopy
y = copier(x, memo)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/copy.py", line 230, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/copy.py", line 270, in _reconstruct
state = deepcopy(state, memo)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/copy.py", line 146, in deepcopy
y = copier(x, memo)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/copy.py", line 230, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/copy.py", line 270, in _reconstruct
state = deepcopy(state, memo)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/copy.py", line 146, in deepcopy
y = copier(x, memo)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/copy.py", line 230, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/copy.py", line 270, in _reconstruct
state = deepcopy(state, memo)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/copy.py", line 146, in deepcopy
y = copier(x, memo)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/copy.py", line 230, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/copy.py", line 270, in _reconstruct
state = deepcopy(state, memo)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/copy.py", line 146, in deepcopy
y = copier(x, memo)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/copy.py", line 230, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/copy.py", line 161, in deepcopy
rv = reductor(4)
```
| Hey @nikhil-pandey, did you push a fix for this in your PR? Or are you still encountering this issue?
@manthanguptaa I have same issue
```
File "/Users/fireharp/.pyenv/versions/3.11.9/lib/python3.11/copy.py", line 161, in deepcopy
rv = reductor(4)
^^^^^^^^^^^
TypeError: cannot pickle 'module' object
Exception ignored in: <function SqliteMemoryDb.__del__ at 0x1086cca40>
Traceback (most recent call last):
File "/Users/fireharp/Prog/Playgrounds/phidata/.venv/lib/python3.11/site-packages/phi/memory/db/sqlite.py", line 192, in __del__
self.Session.remove()
^^^^^^^^^^^^
AttributeError: 'SqliteMemoryDb' object has no attribute 'Session'
INFO: 127.0.0.1:59086 - "GET /v1/playground/status HTTP/1.1" 200 OK
```
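For context, an analogous failure in miniature (a stand-in for the memory object's live DB handle, which a pydantic deep copy hands to the pickle protocol):
```python
import copy
import sqlite3

class Memory:  # stand-in for AgentMemory holding an unpicklable resource
    def __init__(self):
        self.db = sqlite3.connect(":memory:")

try:
    copy.deepcopy(Memory())  # what model_copy(deep=True) does internally
except TypeError as e:
    print(e)  # cannot pickle 'sqlite3.Connection' object
```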
@fireharp allow me some time. I will take a look at it
This issue has been automatically marked as stale due to 14 days of inactivity and will now be closed. | 1,734,372,194,000 | null | Bug Report | [
"phi/memory/agent.py:AgentMemory.deep_copy"
] | [] | 1 | 556 |
|
phidatahq/phidata | phidatahq__phidata-1582 | 54f7a22970f66c32409607e2f1e3474a7a11a395 | diff --git a/phi/tools/function.py b/phi/tools/function.py
index 24d103165..89520833e 100644
--- a/phi/tools/function.py
+++ b/phi/tools/function.py
@@ -175,7 +175,7 @@ def process_entrypoint(self, strict: bool = False):
except Exception as e:
logger.warning(f"Could not parse args for {self.name}: {e}", exc_info=True)
- self.description = getdoc(self.entrypoint)
+ self.description = getdoc(self.entrypoint) or self.description
self.parameters = parameters
self.entrypoint = validate_call(self.entrypoint)
| Bedrock - Claude 3.5 Sonnet not working for Multi Agent Team
**When trying to run a Multi-Agent Team using Amazon Bedrock Claude 3.5 Sonnet, then I get the following error.**
Traceback (most recent call last):
File "/Users/RyanBlake/Desktop/Source Control/PhiData Agents/FinanceAgentTeam.py", line 34, in <module>
agent_team.print_response("Summarize analyst recommendations and share the latest news for CPI Capitec", stream=True)
File "/opt/homebrew/lib/python3.11/site-packages/phi/agent/agent.py", line 2765, in print_response
for resp in self.run(message=message, messages=messages, stream=True, **kwargs):
File "/opt/homebrew/lib/python3.11/site-packages/phi/agent/agent.py", line 1787, in _run
for model_response_chunk in self.model.response_stream(messages=messages_for_model):
File "/opt/homebrew/lib/python3.11/site-packages/phi/model/aws/bedrock.py", line 493, in response_stream
for chunk in response:
File "/opt/homebrew/lib/python3.11/site-packages/phi/model/aws/bedrock.py", line 126, in invoke_stream
response = self.bedrock_runtime_client.converse_stream(**body)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/botocore/client.py", line 569, in _api_call
return self._make_api_call(operation_name, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/botocore/client.py", line 980, in _make_api_call
request_dict = self._convert_to_request_dict(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/botocore/client.py", line 1047, in _convert_to_request_dict
request_dict = self._serializer.serialize_to_request(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/botocore/validate.py", line 381, in serialize_to_request
raise ParamValidationError(report=report.generate_report())
botocore.exceptions.ParamValidationError: Parameter validation failed:
Invalid length for parameter toolConfig.tools[0].toolSpec.description, value: 0, valid min length: 1
Invalid length for parameter toolConfig.tools[1].toolSpec.description, value: 0, valid min length: 1
```
**I used the example as-is, but just swapped out the model.
Here is my Python script.**
```python
from phi.agent import Agent
from phi.tools.googlesearch import GoogleSearch
from phi.model.aws.claude import Claude
from phi.tools.yfinance import YFinanceTools
web_agent = Agent(
name="Web Agent",
model=Claude(id="anthropic.claude-3-5-sonnet-20240620-v1:0"),
tools=[GoogleSearch()],
#description="You are a professional financial analyst agent that helps people find financial data.",
instructions=["Always include sources"],
markdown=True,
)
finance_agent = Agent(
name="Finance Agent",
role="Get financial data",
model=Claude(id="anthropic.claude-3-5-sonnet-20240620-v1:0"),
tools=[YFinanceTools(stock_price=True, analyst_recommendations=True, company_info=True)],
#description="You are a professional financial analyst agent that helps people find financial data.",
instructions=["Use tables to display data"],
show_tool_calls=True,
markdown=True,
)
agent_team = Agent(
team=[web_agent, finance_agent],
model=Claude(id="anthropic.claude-3-5-sonnet-20240620-v1:0"),
instructions=["Always include sources", "Use tables to display data"],
show_tool_calls=True,
markdown=True,
)
agent_team.print_response("Summarize analyst recommendations and share the latest news for NVDA Nvidia", stream=True)
```
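For context, the failure is consistent with the fix in the patch above: `inspect.getdoc` returns `None` for undocumented tool entrypoints, which wiped out the description that Bedrock requires to be non-empty (`valid min length: 1`). In miniature:
```python
from inspect import getdoc

def undocumented_tool():
    pass

description = "Transfer a task to another agent."  # assumed pre-set description
# Old behavior: description = getdoc(undocumented_tool)  ->  None
description = getdoc(undocumented_tool) or description  # patched fallback
print(description)  # the non-empty fallback survives
```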
| hey @billybobpersonal, I am going to try to replicate the issue today. Allow me some time
@manthanguptaa thanks.
Were you able to replicate it?
Or would you like me to send more info?
Hey @billybobpersonal, I was able to replicate it. I am working on a fix for it | 1,734,369,612,000 | null | Bug Report | [
"phi/tools/function.py:Function.process_entrypoint"
] | [] | 1 | 557 |
|
phidatahq/phidata | phidatahq__phidata-1563 | 8f55f8b1d3fc13d46ad840666225ff2f9885cb68 | diff --git a/phi/tools/crawl4ai_tools.py b/phi/tools/crawl4ai_tools.py
index a7ca95c78..172953744 100644
--- a/phi/tools/crawl4ai_tools.py
+++ b/phi/tools/crawl4ai_tools.py
@@ -1,9 +1,10 @@
+import asyncio
from typing import Optional
from phi.tools import Toolkit
try:
- from crawl4ai import WebCrawler
+ from crawl4ai import AsyncWebCrawler, CacheMode
except ImportError:
raise ImportError("`crawl4ai` not installed. Please install using `pip install crawl4ai`")
@@ -31,21 +32,31 @@ def web_crawler(self, url: str, max_length: Optional[int] = None) -> str:
if url is None:
return "No URL provided"
- # Create an instance of WebCrawler
- crawler = WebCrawler(verbose=True)
- crawler.warmup()
+ # Run the async crawler function synchronously
+ return asyncio.run(self._async_web_crawler(url, max_length))
- # Run the crawler on a URL
- result = crawler.run(url=url)
+ async def _async_web_crawler(self, url: str, max_length: Optional[int] = None) -> str:
+ """
+ Asynchronous method to crawl a website using AsyncWebCrawler.
+
+ :param url: The URL to crawl.
+
+ :return: The results of the crawling as a markdown string, or None if no result.
+ """
+
+ async with AsyncWebCrawler(thread_safe=True) as crawler:
+ result = await crawler.arun(url=url, cache_mode=CacheMode.BYPASS)
- # Determine the length to use
- length = self.max_length or max_length
+ # Determine the length to use
+ length = self.max_length or max_length
+ if not result.markdown:
+ return "No result"
- # Remove spaces and truncate if length is specified
- if length:
- result = result.markdown[:length]
- result = result.replace(" ", "")
- return result
+ # Remove spaces and truncate if length is specified
+ if length:
+ result = result.markdown[:length]
+ result = result.replace(" ", "")
+ return result
- result = result.markdown.replace(" ", "")
+ result = result.markdown.replace(" ", "")
return result
| Crawl4AI tool has error
I tweaked example code from here:
https://docs.phidata.com/tools/crawl4ai
and used this code:
```
from phi.agent import Agent
from phi.model.openai import OpenAIChat
from phi.tools.crawl4ai_tools import Crawl4aiTools
from dotenv import load_dotenv
load_dotenv()
agent = Agent(
model=OpenAIChat(id="gpt-4o"),
tools=[Crawl4aiTools(max_length=None)],
show_tool_calls=True
)
agent.print_response("Summarize me the key points of this: https://blog.google/products/gemini/google-gemini-deep-research/")
```
but I got this error:
```
(phidata-venv) PS D:\Projects\AI_testing\phidata> python .\crawl4ai_example.py
▰▰▱▱▱▱▱ Thinking...INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
WARNING Could not run function web_crawler(url=https://blog.google/products/gemini/google-gemini-deep-research, max_length=500)
ERROR 'NoneType' object is not callable
Traceback (most recent call last):
File "D:\Projects\AI_testing\phidata\phidata-venv\Lib\site-packages\phi\tools\function.py", line 313, in execute
self.result = self.function.entrypoint(**entrypoint_args, **self.arguments)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Projects\AI_testing\phidata\phidata-venv\Lib\site-packages\pydantic\_internal\_validate_call.py", line 38, in wrapper_function
return wrapper(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Projects\AI_testing\phidata\phidata-venv\Lib\site-packages\pydantic\_internal\_validate_call.py", line 111, in __call__
res = self.__pydantic_validator__.validate_python(pydantic_core.ArgsKwargs(args, kwargs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Projects\AI_testing\phidata\phidata-venv\Lib\site-packages\phi\tools\crawl4ai_tools.py", line 35, in web_crawler
crawler = WebCrawler(verbose=True)
^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: 'NoneType' object is not callable
```
**phidata v2.7.2**
and
**crawl4ai v0.4.1**
is used.
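For context, a minimal sketch of the async API that crawl4ai 0.4 exposes (the URL is a placeholder); the patch above migrates the tool to the same pattern:
```python
import asyncio
from crawl4ai import AsyncWebCrawler

async def crawl(url: str) -> str:
    # The synchronous WebCrawler entry point is no longer usable in 0.4,
    # so crawling goes through AsyncWebCrawler instead.
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun(url=url)
        return result.markdown

print(asyncio.run(crawl("https://example.com"))[:500])
```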
| Hey @vanetreg, I am able to replicate this error. Allow me some time to fix this issue. | 1,734,095,142,000 | null | Bug Report | [
"phi/tools/crawl4ai_tools.py:Crawl4aiTools.web_crawler"
] | [
"phi/tools/crawl4ai_tools.py:Crawl4aiTools._async_web_crawler"
] | 1 | 558 |
|
phidatahq/phidata | phidatahq__phidata-1562 | bd734bc8528aec12d1387064ab9cac571508fc7f | diff --git a/phi/model/google/gemini.py b/phi/model/google/gemini.py
index 4a11c1c43..263d3afb0 100644
--- a/phi/model/google/gemini.py
+++ b/phi/model/google/gemini.py
@@ -23,7 +23,7 @@
GenerateContentResponse as ResultGenerateContentResponse,
)
from google.protobuf.struct_pb2 import Struct
-except ImportError:
+except (ModuleNotFoundError, ImportError):
logger.error("`google-generativeai` not installed. Please install it using `pip install google-generativeai`")
raise
@@ -301,6 +301,7 @@ def format_functions(self, params: Dict[str, Any]) -> Dict[str, Any]:
Dict[str, Any]: The converted parameters dictionary compatible with Gemini.
"""
formatted_params = {}
+
for key, value in params.items():
if key == "properties" and isinstance(value, dict):
converted_properties = {}
@@ -322,8 +323,33 @@ def format_functions(self, params: Dict[str, Any]) -> Dict[str, Any]:
formatted_params[key] = converted_properties
else:
formatted_params[key] = value
+
return formatted_params
+ def _build_function_declaration(self, func: Function) -> FunctionDeclaration:
+ """
+ Builds the function declaration for Gemini tool calling.
+
+ Args:
+ func: An instance of the function.
+
+ Returns:
+ FunctionDeclaration: The formatted function declaration.
+ """
+ formatted_params = self.format_functions(func.parameters)
+ if "properties" in formatted_params and formatted_params["properties"]:
+ # We have parameters to add
+ return FunctionDeclaration(
+ name=func.name,
+ description=func.description,
+ parameters=formatted_params,
+ )
+ else:
+ return FunctionDeclaration(
+ name=func.name,
+ description=func.description,
+ )
+
def add_tool(
self,
tool: Union["Tool", "Toolkit", Callable, dict, "Function"],
@@ -356,11 +382,7 @@ def add_tool(
func._agent = agent
func.process_entrypoint()
self.functions[name] = func
- function_declaration = FunctionDeclaration(
- name=func.name,
- description=func.description,
- parameters=self.format_functions(func.parameters),
- )
+ function_declaration = self._build_function_declaration(func)
self.function_declarations.append(function_declaration)
logger.debug(f"Function {name} from {tool.name} added to model.")
@@ -369,11 +391,8 @@ def add_tool(
tool._agent = agent
tool.process_entrypoint()
self.functions[tool.name] = tool
- function_declaration = FunctionDeclaration(
- name=tool.name,
- description=tool.description,
- parameters=self.format_functions(tool.parameters),
- )
+
+ function_declaration = self._build_function_declaration(tool)
self.function_declarations.append(function_declaration)
logger.debug(f"Function {tool.name} added to model.")
@@ -383,11 +402,7 @@ def add_tool(
if function_name not in self.functions:
func = Function.from_callable(tool)
self.functions[func.name] = func
- function_declaration = FunctionDeclaration(
- name=func.name,
- description=func.description,
- parameters=self.format_functions(func.parameters),
- )
+ function_declaration = self._build_function_declaration(func)
self.function_declarations.append(function_declaration)
logger.debug(f"Function '{func.name}' added to model.")
except Exception as e:
| ToolKit functions with no arguments cause an error when using Gemini models.
phidata version: 2.7.2
**To reproduce**: Use a Gemini model and provide a toolkit with a registered method that takes no arguments.
**Expected behaviour**: Model can successfully use the tool.
**Actual behaviour**: The gemini library returns this error:
```
400 * GenerateContentRequest.tools[0].function_declarations[11].parameters.properties: should be non-empty for OBJECT type
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.INVALID_ARGUMENT
details = "* GenerateContentRequest.tools[0].function_declarations[11].parameters.properties: should be non-empty for OBJECT type
```
**A minimal code example to reproduce is attached** [reproduce.zip](https://github.com/user-attachments/files/18121422/reproduce.zip).
Workaround: Adding a dummy parameter to the method seems to fix the issue.
Can this be fixed with ToolKit.register(), or where are the model.function_declarations being set up? In the former case, adding a dummy parameter would be easy, but it feels messy.
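A sketch of that dummy-parameter workaround (the toolkit and method names are made up):
```python
from phi.tools import Toolkit

class PingToolkit(Toolkit):
    def __init__(self):
        super().__init__(name="ping_toolkit")
        self.register(self.ping)

    def ping(self, dummy: str = "") -> str:
        """Return 'pong'. The unused `dummy` argument keeps the generated
        parameter schema non-empty, so Gemini's OBJECT validation passes."""
        return "pong"
```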
| 1,734,091,646,000 | null | Bug Report | [
"phi/model/google/gemini.py:Gemini.add_tool"
] | [
"phi/model/google/gemini.py:Gemini._build_function_declaration"
] | 1 | 559 |
||
nltk/nltk | nltk__nltk-3335 | 9a5622f8a5b228df9499cd03181d9f8491e39f17 | diff --git a/nltk/app/wordnet_app.py b/nltk/app/wordnet_app.py
index 48fe1e30f6..437eb0f755 100644
--- a/nltk/app/wordnet_app.py
+++ b/nltk/app/wordnet_app.py
@@ -414,7 +414,7 @@ def get_relations_data(word, synset):
),
),
)
- elif synset.pos() == wn.ADJ or synset.pos == wn.ADJ_SAT:
+ elif synset.pos() == wn.ADJ or synset.pos() == wn.ADJ_SAT:
return (
(ANTONYM, "Antonym", lemma_property(word, synset, lambda l: l.antonyms())),
(SIMILAR, "Similar to", synset.similar_tos()),
@@ -435,7 +435,7 @@ def get_relations_data(word, synset):
)
# Derived from adjective - not supported by corpus
else:
- raise TypeError("Unhandles synset POS type: " + str(synset.pos()))
+ raise TypeError("Unhandled synset POS type: " + str(synset.pos()))
html_header = """
| Missing procedure call in line 417
Line 417 of the file "nltk/app/wordnet_app.py" should look like this:
elif synset.pos() == wn.ADJ or synset.pos() == wn.ADJ_SAT:
but instead looks like this:
elif synset.pos() == wn.ADJ or synset.pos == wn.ADJ_SAT:
which will generate this error (complete with spelling mistake) :
"Unhandles synset POS type: s"
| Thanks @drewvid, would you consider correcting both spelling errors in a PR?
Sure | 1,729,499,882,000 | null | Bug Report | [
"nltk/app/wordnet_app.py:get_relations_data"
] | [] | 1 | 560 |
|
kedro-org/kedro | kedro-org__kedro-4299 | 84b71b1436942d70f181a083991806cf75d5cd6d | diff --git a/kedro/framework/cli/cli.py b/kedro/framework/cli/cli.py
index f5917e1b87..6ad4e24e97 100644
--- a/kedro/framework/cli/cli.py
+++ b/kedro/framework/cli/cli.py
@@ -217,7 +217,7 @@ def global_groups(self) -> Sequence[click.MultiCommand]:
combines them with the built-in ones (eventually overriding the
built-in ones if they are redefined by plugins).
"""
- return [*load_entry_points("global"), cli, global_commands]
+ return [cli, *load_entry_points("global"), global_commands]
@property
def project_groups(self) -> Sequence[click.MultiCommand]:
| `kedro --version` isn't working
## Description
Reported by @noklam, since adding lazy loading of Kedro subcommands, the `--version`/`-V` option isn't working.
## Context
This bug is originating in Kedro 0.19.7 -> https://github.com/kedro-org/kedro/pull/3883
| > Usage: kedro [OPTIONS] COMMAND [ARGS]...
> Try 'kedro -h' for help.
>
> Error: No such option: -v
>
This is the stack trace when running `kedro -V`, `kedro -v`, or `kedro --version`.
While investigating this issue, I think it's worth checking why CI didn't catch this error, since we have this test in place:
```python
def test_print_version(self):
"""Check that `kedro --version` and `kedro -V` outputs contain
the current package version."""
result = CliRunner().invoke(cli, ["--version"])
assert result.exit_code == 0
assert version in result.output
```
How do I reproduce the error? The command works well for me.
@DimedS `kedro -V` or `kedro --version`, as mentioned. Are you using the `main` branch? Can you copy the terminal log when you do `kedro`?
I confirm `kedro -V` and `kedro --version` both give `No such option` errors with 0.19.9
I figured out what the problem is:
https://github.com/kedro-org/kedro/blob/a1fae5018f35243a5e49a54a9dd3223b2c4ea743/kedro/framework/cli/cli.py#L220
Due to the changes in lazy loading PR, I re-ordered the global commands list to consider
- first the commands loaded from plugins,
- then `cli` which is the group with `info` and the `version_option` decorator
- and then the `global_commands` group which contains the `new` and `starter` lazy commands.
So if any plugin with global commands (e.g. Kedro-Viz) is installed in your env, the `--version` option doesn't work; it works when you uninstall Kedro-Viz, which is why it must be working in CI and for @DimedS.
The solution is simply to re-order the command groups to `[cli, *load_entry_points("global"), global_commands]` but that would mean that users can't overwrite `kedro info` which I think is acceptable. | 1,730,797,930,000 | null | Bug Report | [
"kedro/framework/cli/cli.py:KedroCLI.global_groups"
] | [] | 1 | 561 |
|
dask/dask | dask__dask-11608 | 24c492095a791696ce6611e9d2294274f4592911 | diff --git a/dask/_task_spec.py b/dask/_task_spec.py
index 316f1805aa6..c108bbb5b6b 100644
--- a/dask/_task_spec.py
+++ b/dask/_task_spec.py
@@ -799,6 +799,7 @@ def __init__(
None,
self.to_container,
*args,
+ klass=self.klass,
_dependencies=_dependencies,
**kwargs,
)
@@ -832,9 +833,9 @@ def __dask_tokenize__(self):
return super().__dask_tokenize__()
- @classmethod
- def to_container(cls, *args, **kwargs):
- return cls.klass(args)
+ @staticmethod
+ def to_container(*args, klass):
+ return klass(args)
class List(NestedContainer):
| `NestedContainer.to_container` method gets tracked individually per NestedContainer object
Looking into https://github.com/dask/distributed/issues/8958, I've noticed that for each `NestedContainer` object, its bound `to_container` method is tracked individually by the GC. This accounts for ~500k of 9MM objects in my workload. It would probably be better to stop tracking these individually.
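A small illustration of the cycle (stand-in names, not the actual dask classes):
```python
import gc
from functools import partial

class Container:  # stand-in for NestedContainer
    def __init__(self, *args):
        # Holding the *bound* method creates instance -> partial -> bound
        # method -> instance: a GC-tracked reference cycle per object.
        self.builder = partial(self.to_container, *args)

    def to_container(self, *args):
        return list(args)

c = Container(1, 2, 3)
print(gc.is_tracked(c.builder))      # True: a tracked object per instance
print(c.builder.func.__self__ is c)  # True: the cycle back to the instance
```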
| On top, that is very likely a self referencing cycle so breaking this will benefit GC in more than one way | 1,734,442,914,000 | null | Performance Issue | [
"dask/_task_spec.py:NestedContainer.__init__",
"dask/_task_spec.py:NestedContainer.to_container"
] | [] | 2 | 562 |
|
dask/dask | dask__dask-11539 | 5b115c4360fec6a4aa6e0edf8ad1d89a87c986dd | diff --git a/dask/array/core.py b/dask/array/core.py
index 10736af6f9d..0a7ebeb1b7c 100644
--- a/dask/array/core.py
+++ b/dask/array/core.py
@@ -3754,9 +3754,9 @@ def from_zarr(
store = zarr.storage.FSStore(url, **storage_options)
else:
store = url
- z = zarr.open_array(store=store, read_only=True, path=component, **kwargs)
+ z = zarr.open_array(store=store, path=component, **kwargs)
else:
- z = zarr.open_array(store=url, read_only=True, path=component, **kwargs)
+ z = zarr.open_array(store=url, path=component, **kwargs)
chunks = chunks if chunks is not None else z.chunks
if name is None:
name = "from-zarr-" + tokenize(z, component, storage_options, chunks, **kwargs)
| Warning raised with default `from_zarr` settings
**Describe the issue**:
Reading a zarr array with `dask.array.from_zarr` raises a `UserWarning`, but I'm not doing anything wrong.
**Minimal Complete Verifiable Example**:
```python
import dask.array
import zarr
zarr_arr = zarr.open(shape=(6, 6, 6), store="./zeros.zarr", chunks=(3, 3, 2), mode='w')
zarr_arr[:] = 0
dask_arr = dask.array.from_zarr("./zeros.zarr")
```
Raises:
```
/Users/dstansby/software/zarr/hackathon/.venv/lib/python3.12/site-packages/zarr/creation.py:614: UserWarning: ignoring keyword argument 'read_only'
compressor, fill_value = _kwargs_compat(compressor, fill_value, kwargs)
```
**Anything else we need to know?**:
**Environment**:
- Dask version: 2024.11.2
- zarr version: 2.18.3
- Python version: 3.12
- Operating System: macOS
- Install method (conda, pip, source): pip
| 1,732,053,620,000 | null | Bug Report | [
"dask/array/core.py:from_zarr"
] | [] | 1 | 563 |
||
dask/dask | dask__dask-11491 | fa8fecf10a94971f2f31df57d504d25bef4dd57e | diff --git a/dask/array/core.py b/dask/array/core.py
index fdf65bd24a4..3065406a922 100644
--- a/dask/array/core.py
+++ b/dask/array/core.py
@@ -562,7 +562,9 @@ def map_blocks(
Dimensions lost by the function.
new_axis : number or iterable, optional
New dimensions created by the function. Note that these are applied
- after ``drop_axis`` (if present).
+ after ``drop_axis`` (if present). The size of each chunk along this
+ dimension will be set to 1. Please specify ``chunks`` if the individual
+ chunks have a different size.
enforce_ndim : bool, default False
Whether to enforce at runtime that the dimensionality of the array
produced by ``func`` actually matches that of the array returned by
| `map_blocks()` with `new_axis` output has incorrect shape
**Describe the issue**:
When running `map_blocks()` with `new_axis` specified, the output shape of the dask array is not set correctly. In the below example I would expect it to be the same as the shape after computation.
**Minimal Complete Verifiable Example**:
```python
import dask.array as da
import numpy as np
def func(x):
return np.stack([x, x + 0.5])
x = da.arange(6, chunks=2)
x_mapped = x.map_blocks(func, new_axis=[0])
print(x_mapped.shape)
# (1, 6)
print(x_mapped.compute().shape)
# (2, 6)
```
**Anything else we need to know?**:
**Environment**:
- Dask version: b7d9bf49f682de8d2ef51f4617e3da782400c290
- Python version: 3.12.3
- Operating System: macOS
- Install method (conda, pip, source): source
| I don't think that we can guess the output shape with a high degree of fidelity. We should probably either set all chunks to NaN or force the specification of chunks.
Being able to specify the size of new output dimensions if known would be nice. e.g., in the above toy example we know the size of the new dimension is going to be `2` ahead of time.
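For reference, pinning the output chunks with the existing `chunks=` argument already yields the right shape (a sketch using the toy example above):
```python
import dask.array as da
import numpy as np

def func(x):
    return np.stack([x, x + 0.5])

x = da.arange(6, chunks=2)
# Spell out the output chunks: one chunk of size 2 along the new axis.
x_mapped = x.map_blocks(func, new_axis=[0], chunks=((2,), *x.chunks))
print(x_mapped.shape)            # (2, 6)
print(x_mapped.compute().shape)  # (2, 6)
```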
Yeah, the interesting thing for Dask is the chunk sizes, the shape is just a convenient result from that, so we would need this information. | 1,730,759,801,000 | null | Bug Report | [
"dask/array/core.py:map_blocks"
] | [] | 1 | 564 |
|
feast-dev/feast | feast-dev__feast-4727 | e9cd3733f041da99bb1e84843ffe5af697085c34 | diff --git a/sdk/python/feast/feature_server.py b/sdk/python/feast/feature_server.py
index 26ee604e79..1f4918fe7a 100644
--- a/sdk/python/feast/feature_server.py
+++ b/sdk/python/feast/feature_server.py
@@ -24,6 +24,7 @@
FeastError,
FeatureViewNotFoundException,
)
+from feast.feast_object import FeastObject
from feast.permissions.action import WRITE, AuthzedAction
from feast.permissions.security_manager import assert_permissions
from feast.permissions.server.rest import inject_user_details
@@ -218,21 +219,25 @@ async def push(request: PushFeaturesRequest) -> None:
else:
store.push(**push_params)
- @app.post("/write-to-online-store", dependencies=[Depends(inject_user_details)])
- def write_to_online_store(request: WriteToFeatureStoreRequest) -> None:
- df = pd.DataFrame(request.df)
- feature_view_name = request.feature_view_name
- allow_registry_cache = request.allow_registry_cache
+ def _get_feast_object(
+ feature_view_name: str, allow_registry_cache: bool
+ ) -> FeastObject:
try:
- feature_view = store.get_stream_feature_view( # type: ignore
+ return store.get_stream_feature_view( # type: ignore
feature_view_name, allow_registry_cache=allow_registry_cache
)
except FeatureViewNotFoundException:
- feature_view = store.get_feature_view( # type: ignore
+ return store.get_feature_view( # type: ignore
feature_view_name, allow_registry_cache=allow_registry_cache
)
- assert_permissions(resource=feature_view, actions=[AuthzedAction.WRITE_ONLINE])
+ @app.post("/write-to-online-store", dependencies=[Depends(inject_user_details)])
+ def write_to_online_store(request: WriteToFeatureStoreRequest) -> None:
+ df = pd.DataFrame(request.df)
+ feature_view_name = request.feature_view_name
+ allow_registry_cache = request.allow_registry_cache
+ resource = _get_feast_object(feature_view_name, allow_registry_cache)
+ assert_permissions(resource=resource, actions=[AuthzedAction.WRITE_ONLINE])
store.write_to_online_store(
feature_view_name=feature_view_name,
df=df,
@@ -250,9 +255,8 @@ async def health():
@app.post("/materialize", dependencies=[Depends(inject_user_details)])
def materialize(request: MaterializeRequest) -> None:
for feature_view in request.feature_views or []:
- # TODO: receives a str for resource but isn't in the Union. is str actually allowed?
assert_permissions(
- resource=feature_view, # type: ignore
+ resource=_get_feast_object(feature_view, True),
actions=[AuthzedAction.WRITE_ONLINE],
)
store.materialize(
@@ -264,9 +268,8 @@ def materialize(request: MaterializeRequest) -> None:
@app.post("/materialize-incremental", dependencies=[Depends(inject_user_details)])
def materialize_incremental(request: MaterializeIncrementalRequest) -> None:
for feature_view in request.feature_views or []:
- # TODO: receives a str for resource but isn't in the Union. is str actually allowed?
assert_permissions(
- resource=feature_view, # type: ignore
+ resource=_get_feast_object(feature_view, True),
actions=[AuthzedAction.WRITE_ONLINE],
)
store.materialize_incremental(
| Wrong permission asserts on materialize endpoints
## Expected Behavior
The `assert_permissions` function expects a `resource` of type `FeastObject`.
## Current Behavior
Materialization endpoints in the `feature_server` module instead receive a `str`, as in [/materialize](https://github.com/feast-dev/feast/blob/60fbc62080950549f28b9411e00926be168bea56/sdk/python/feast/feature_server.py#L256-L258) and [/materialize_incremental](https://github.com/feast-dev/feast/blob/60fbc62080950549f28b9411e00926be168bea56/sdk/python/feast/feature_server.py#L269-L271)
## Possible Solution
Fetch the `FeatureView`s like the [/write-to-online-store](https://github.com/feast-dev/feast/blob/60fbc62080950549f28b9411e00926be168bea56/sdk/python/feast/feature_server.py#L226C9-L235C14) endpoint
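A sketch of that approach (`store` is assumed to be the server's `FeatureStore`; the helper name mirrors the one the fix introduces):
```python
from feast.errors import FeatureViewNotFoundException

def _get_feast_object(store, feature_view_name: str, allow_registry_cache: bool):
    # Resolve the name to a concrete FeastObject so that assert_permissions
    # receives the expected type instead of a plain str.
    try:
        return store.get_stream_feature_view(
            feature_view_name, allow_registry_cache=allow_registry_cache
        )
    except FeatureViewNotFoundException:
        return store.get_feature_view(
            feature_view_name, allow_registry_cache=allow_registry_cache
        )
```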
| 1,730,404,565,000 | null | Bug Report | [
"sdk/python/feast/feature_server.py:get_app"
] | [] | 1 | 565 |
||
python/mypy | python__mypy-18292 | c4f5056d6c43db556b5215cb3c330fcde25a77cd | diff --git a/mypy/main.py b/mypy/main.py
index e1c9f20400bc..d2a28a18c6a8 100644
--- a/mypy/main.py
+++ b/mypy/main.py
@@ -9,6 +9,7 @@
import time
from collections import defaultdict
from gettext import gettext
+from io import TextIOWrapper
from typing import IO, Any, Final, NoReturn, Sequence, TextIO
from mypy import build, defaults, state, util
@@ -74,6 +75,10 @@ def main(
if args is None:
args = sys.argv[1:]
+ # Write an escape sequence instead of raising an exception on encoding errors.
+ if isinstance(stdout, TextIOWrapper) and stdout.errors == "strict":
+ stdout.reconfigure(errors="backslashreplace")
+
fscache = FileSystemCache()
sources, options = process_options(args, stdout=stdout, stderr=stderr, fscache=fscache)
if clean_exit:
| Error when displaying error that contains unicode characters in Windows
**Bug Report**
When displaying a type error that involves e.g. a variable name containing Unicode characters, mypy crashes.
**To Reproduce**
1. Make a file `file.py` containing the line `x=γ`.
2. Run `mypy.exe --show-column-numbers file.py` through flycheck (python-mypy) in Emacs
**Expected Behavior**
An error message like `file.py:1:5: error: Name "γ" is not defined`
**Actual Behavior**
It crashes and prints a stack trace:
```
File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python39_64\lib\runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python39_64\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "c:\Program Files (x86)\Microsoft Visual Studio\Shared\Python39_64\Scripts\mypy.exe\__main__.py", line 7, in <module>
File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python39_64\lib\site-packages\mypy\__main__.py", line 15, in console_entry
main(None, sys.stdout, sys.stderr)
File "mypy\main.py", line 96, in main
File "mypy\main.py", line 173, in run_build
File "mypy\build.py", line 180, in build
File "mypy\build.py", line 256, in _build
File "mypy\build.py", line 2717, in dispatch
File "mypy\build.py", line 3048, in process_graph
File "mypy\build.py", line 3164, in process_stale_scc
File "mypy\main.py", line 165, in flush_errors
File "mypy\main.py", line 199, in show_messages
File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python39_64\lib\encodings\cp1252.py", line 19, in encode
return codecs.charmap_encode(input,self.errors,encoding_table)[0]
UnicodeEncodeError: 'charmap' codec can't encode character '\u03b3' in position 33: character maps to <undefined>
```
I've fixed it locally by adding
```
sys.stdout.reconfigure(encoding='utf-8')
sys.stderr.reconfigure(encoding='utf-8')
```
in `mypy/__main__.py`. It works for me, but I don't know whether it's the right thing for mypy.
**Your Environment**
Python 3.9.7, mypy 0.931 on Windows
- Mypy version used: 0.931
- Mypy command-line flags: --show-column-numbers
- Mypy configuration options from `mypy.ini` (and other config files): None, I think
- Python version used: 3.9.7
- Operating system and version: Windows 11
| My 'fix' doesn't really work perfectly. Something in Windows+emacs+flycheck doesn't decode the mypy output as unicode, and what I see in Emacs is `file.py:1:5: error: Name "γ" is not defined`. But that's probably not a mypy issue.
Update: I tested this with updated mypy 0.950 in Windows and Ubuntu, and couldn't reproduce by calling `mypy.exe --show-column-numbers file.py` in the command line. The issue happens only in flycheck in Emacs. I guess that flycheck's python-mypy runs in a special environment where stderr and stdout are opened as TextIO buffers with a non-utf-8 encoding.
This can still happen anytime the output encoding can't represent a codepoint in the error message. For example, this can be reproduced on a unix system by running
```shell
$ PYTHONIOENCODING=cp1252 mypy -c "x=γ"
Traceback (most recent call last):
...
File "/home/brian/Projects/open-contrib/mypy/mypy/main.py", line 230, in show_messages
f.write(msg + "\n")
File "/usr/lib/python3.12/encodings/cp1252.py", line 19, in encode
return codecs.charmap_encode(input,self.errors,encoding_table)[0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
UnicodeEncodeError: 'charmap' codec can't encode character '\u03b3' in position 50: character maps to <undefined>
```
If this comes up again, we could look into using a different error handler when writing the output to stdout/stderr (the current default is `"strict"`, which raises an exception). Something like `"backslashreplace"` might make sense. For now, affected environments can try setting a different encoding or error handler via `PYTHONIOENCODING` or any other method. For example:
```shell
$ PYTHONIOENCODING=utf-8 mypy -c "x=γ"
<string>:1: error: Name "γ" is not defined [name-defined]
$ PYTHONIOENCODING=cp1252:backslashreplace mypy -c "x=γ"
<string>:1: error: Name "\u03b3" is not defined [name-defined]
```
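In code, the same idea applied inside mypy's entry point looks roughly like this (a sketch of the eventual fix, run before any messages are written):
```python
import sys
from io import TextIOWrapper

# Write an escape sequence instead of raising on unencodable characters,
# but only if nothing else has already customized the error handler.
if isinstance(sys.stdout, TextIOWrapper) and sys.stdout.errors == "strict":
    sys.stdout.reconfigure(errors="backslashreplace")

print('Name "\u03b3" is not defined')  # prints `Name "\u03b3" ...` under cp1252
```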
Setting backslashreplace as the error handler seems like a good idea here. | 1,734,121,592,000 | null | Bug Report | [
"mypy/main.py:main"
] | [] | 1 | 566 |
|
albumentations-team/albumentations | albumentations-team__albumentations-2183 | 47c24503e0636f258e2af2b18e552d52271308bf | diff --git a/albumentations/augmentations/functional.py b/albumentations/augmentations/functional.py
index 52adf80df..2dc1dd07f 100644
--- a/albumentations/augmentations/functional.py
+++ b/albumentations/augmentations/functional.py
@@ -925,7 +925,12 @@ def add_sun_flare_overlay(
overlay = img.copy()
output = img.copy()
+ weighted_brightness = 0.0
+ total_radius_length = 0.0
+
for alpha, (x, y), rad3, (r_color, g_color, b_color) in circles:
+ weighted_brightness += alpha * rad3
+ total_radius_length += rad3
cv2.circle(overlay, (x, y), rad3, (r_color, g_color, b_color), -1)
output = add_weighted(overlay, alpha, output, 1 - alpha)
@@ -933,7 +938,13 @@ def add_sun_flare_overlay(
overlay = output.copy()
num_times = src_radius // 10
- alpha = np.linspace(0.0, 1, num=num_times)
+
+ # max_alpha is calculated using weighted_brightness and total_radii_length times 5
+ # meaning the higher the alpha with larger area, the brighter the bright spot will be
+ # for list of alphas in range [0.05, 0.2], the max_alpha should below 1
+ max_alpha = weighted_brightness / total_radius_length * 5
+ alpha = np.linspace(0.0, min(max_alpha, 1.0), num=num_times)
+
rad = np.linspace(1, src_radius, num=num_times)
for i in range(num_times):
| [RandomSunFlare] Add transparency to RandomSunFlare

Sunflare obscures the object
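For reference, a sketch of the transparency weighting that addresses this (variable names follow the eventual fix; the input values are made up for illustration):
```python
import numpy as np

# The bright spot's alpha ramp is capped by the radius-weighted mean of the
# circle alphas (times 5), so large, dim flares stay translucent.
alphas = [0.05, 0.2, 0.1]   # per-circle alpha values
radii = [40.0, 25.0, 10.0]  # per-circle radii
max_alpha = sum(a * r for a, r in zip(alphas, radii)) / sum(radii) * 5
alpha_ramp = np.linspace(0.0, min(max_alpha, 1.0), num=10)
```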
| Can I assume explore.albumentations.ai hosts latest commit on main?
Typically yes, unless I forget to update the explore.albumentations.ai
Right now it is the latest. | 1,733,844,294,000 | null | Feature Request | [
"albumentations/augmentations/functional.py:add_sun_flare_overlay"
] | [] | 1 | 567 |
|
bridgecrewio/checkov | bridgecrewio__checkov-6826 | 24535627d7315014328ec034daa3362a72948d09 | diff --git a/checkov/terraform/checks/resource/aws/EKSPlatformVersion.py b/checkov/terraform/checks/resource/aws/EKSPlatformVersion.py
index 563798a01d0..d2011578ec6 100644
--- a/checkov/terraform/checks/resource/aws/EKSPlatformVersion.py
+++ b/checkov/terraform/checks/resource/aws/EKSPlatformVersion.py
@@ -24,7 +24,8 @@ def get_inspected_key(self) -> str:
return "version"
def get_expected_values(self) -> list[Any]:
- return ["1.23", "1.24", "1.25", "1.26", "1.27", "1.28", "1.29", "1.30"]
+ # https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html
+ return ["1.24", "1.25", "1.26", "1.27", "1.28", "1.29", "1.30", "1.31"]
check = EKSPlatformVersion()
| Add EKS 1.31 as a supported version
**Describe the issue**
EKS 1.31 has been released. However `CKV_AWS_339` fails as this is not listed as a supported version.
**Examples**
```
resource "aws_eks_cluster" "eks_cluster" {
...
version = "1.31"
```
**Version (please complete the following information):**
- Checkov Version 3.2.256 (latest)
**Additional context**
https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/resource/aws/EKSPlatformVersion.py seems to be where the fix is needed
| @zvickery thanks for the comment, please feel free to contribute as this is the fastest way our checks could be updated :) | 1,731,355,205,000 | null | Feature Request | [
"checkov/terraform/checks/resource/aws/EKSPlatformVersion.py:EKSPlatformVersion.get_expected_values"
] | [] | 1 | 568 |
|
spotify/luigi | spotify__luigi-3324 | 80549f6b6f8c143effb81f3cf4a411b6068d9e2c | diff --git a/luigi/contrib/postgres.py b/luigi/contrib/postgres.py
index 719b80a4d7..19e96e8180 100644
--- a/luigi/contrib/postgres.py
+++ b/luigi/contrib/postgres.py
@@ -356,16 +356,15 @@ def copy(self, cursor, file):
else:
raise Exception('columns must consist of column strings or (column string, type string) tuples (was %r ...)' % (self.columns[0],))
- # cursor.copy_from is not available in pg8000
- if hasattr(cursor, 'copy_from'):
- cursor.copy_from(
- file, self.table, null=r'\\N', sep=self.column_separator, columns=column_names)
+ copy_sql = (
+ "COPY {table} ({column_list}) FROM STDIN "
+ "WITH (FORMAT text, NULL '{null_string}', DELIMITER '{delimiter}')"
+ ).format(table=self.table, delimiter=self.column_separator, null_string=r'\\N',
+ column_list=", ".join(column_names))
+ # cursor.copy_expert is not available in pg8000
+ if hasattr(cursor, 'copy_expert'):
+ cursor.copy_expert(copy_sql, file)
else:
- copy_sql = (
- "COPY {table} ({column_list}) FROM STDIN "
- "WITH (FORMAT text, NULL '{null_string}', DELIMITER '{delimiter}')"
- ).format(table=self.table, delimiter=self.column_separator, null_string=r'\\N',
- column_list=", ".join(column_names))
cursor.execute(copy_sql, stream=file)
def run(self):
| [contrib.postgres] copy_from does not accept schema.table notation in most recent psycopg2 versions
## Description
I'm trying to maintain an old (2018) project that includes a lot of Luigi tasks, amongst which there are some tasks derived from [`CopyToTable`](https://github.com/spotify/luigi/blob/master/luigi/contrib/postgres.py#L299).
In this project, the PostgreSQL database contains several schemas, and some data may be added to tables that are not in the default `public` schema through `CopyToTable`-derived tasks.
However, `CopyToTable` uses the `cursor.copy_from` method, which has recently been modified in the `psycopg2` API (see *e.g.* https://github.com/psycopg/psycopg2/issues/1294). Hence using Luigi with a recent `psycopg2` raises an error like `psycopg2.errors.UndefinedTable: relation "schema.table" does not exist`.
## Expected behavior
Taking the behavior change in psycopg2 into account, `schema.table` notation should be supported for Postgres tables that are located in a dedicated schema.
## Minimal Working Example
Let's consider the following Python module (let's call it `luigi_copytotable.py`):
```python
from luigi.contrib.postgres import CopyToTable
import pandas as pd


class SendToDB(CopyToTable):
    """Insert example data into a PostgreSQL table in a non-default schema."""

    host = "localhost"
    database = "my_db"
    user = "my_username"
    password = "my_password"

    columns = [('a', 'VARCHAR'), ('b', 'INT')]

    @property
    def table(self):
        return 'my_schema.my_table'

    def rows(self):
        df = pd.DataFrame({"a": ["foo", "bar", "wiz"], "b": [1, 2, 3]})
        for idx, row in df.iterrows():
            yield row.values
```
Running `luigi --local-scheduler --module luigi_copytotable SendToDB` throws:
```bash
16:04 $ luigi --local-scheduler --module luigi_copytotable SendToDB_
DEBUG: Checking if SendToDB() is complete
INFO: Informed scheduler that task SendToDB__99914b932b has status PENDING
INFO: Done scheduling tasks
INFO: Running Worker with 1 processes
DEBUG: Asking scheduler for work...
DEBUG: Pending tasks: 1
INFO: [pid 494717] Worker Worker(salt=450412579, workers=1, host=*******, username=my_username, pid=494717) running SendToDB_()
INFO: Done writing, importing at 2022-09-07 16:04:05.364381
INFO: Creating table my_schema.my_table
ERROR: [pid 494717] Worker Worker(salt=450412579, workers=1, host=*******, username=my_username, pid=494717) failed SendToDB()
Traceback (most recent call last):
File "/home/rdelhome/.virtualenvs/jitenv/lib/python3.10/site-packages/luigi/worker.py", line 198, in run
new_deps = self._run_get_new_deps()
File "/home/rdelhome/.virtualenvs/jitenv/lib/python3.10/site-packages/luigi/worker.py", line 138, in _run_get_new_deps
task_gen = self.task.run()
File "/home/rdelhome/.virtualenvs/jitenv/lib/python3.10/site-packages/luigi/contrib/postgres.py", line 403, in run
self.copy(cursor, tmp_file)
File "/home/rdelhome/.virtualenvs/jitenv/lib/python3.10/site-packages/luigi/contrib/postgres.py", line 358, in copy
cursor.copy_from(
psycopg2.errors.UndefinedTable: relation "my_schema.my_table" does not exist
DEBUG: 1 running tasks, waiting for next task to finish
INFO: Informed scheduler that task SendToDB__99914b932b has status FAILED
DEBUG: Asking scheduler for work...
DEBUG: Done
DEBUG: There are no more tasks to run at this time
DEBUG: There are 1 pending tasks possibly being run by other workers
DEBUG: There are 1 pending tasks unique to this worker
DEBUG: There are 1 pending tasks last scheduled by this worker
INFO: Worker Worker(salt=450412579, workers=1, host=*********, username=my_username, pid=494717) was stopped. Shutting down Keep-Alive thread
INFO:
===== Luigi Execution Summary =====
Scheduled 1 tasks of which:
* 1 failed:
- 1 SendToDB()
This progress looks :( because there were failed tasks
===== Luigi Execution Summary =====
```
## Hints for resolution
As suggested in the psycopg2 issue, use `copy_expert`? Or maybe modify the `if` predicate in https://github.com/spotify/luigi/blob/master/luigi/contrib/postgres.py#L357 so that the `else` branch is used when `copy_from` cannot handle the table name. A sketch of the `copy_expert` route is shown below.
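A minimal sketch (connection details are hypothetical); because the COPY statement is plain SQL, a schema-qualified table name is resolved by the server as usual:
```python
import io
import psycopg2

conn = psycopg2.connect(host="localhost", dbname="my_db", user="my_username")
with conn, conn.cursor() as cursor:
    data = io.StringIO("foo\t1\nbar\t2\n")
    # copy_expert streams the file through an arbitrary COPY statement.
    cursor.copy_expert(
        "COPY my_schema.my_table (a, b) FROM STDIN WITH (FORMAT text)",
        data,
    )
```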
(Note: As a temporary solution, I've downgraded my `psycopg2` version to `<2.9` to make it work.)
## Related issue
See on psycopg2 project: https://github.com/psycopg/psycopg2/issues/1294
| 1,732,801,325,000 | null | Bug Report | [
"luigi/contrib/postgres.py:CopyToTable.copy"
] | [] | 1 | 569 |
||
robotframework/robotframework | robotframework__robotframework-5265 | 6f58c00b10bd0b755657eb2a615b9a29a063f6ce | diff --git a/src/robot/output/pyloggingconf.py b/src/robot/output/pyloggingconf.py
index fdccb16329d..b2300a5ad21 100644
--- a/src/robot/output/pyloggingconf.py
+++ b/src/robot/output/pyloggingconf.py
@@ -36,6 +36,7 @@ def robot_handler_enabled(level):
return
handler = RobotHandler()
old_raise = logging.raiseExceptions
+ old_level = root.level
root.addHandler(handler)
logging.raiseExceptions = False
set_level(level)
@@ -43,6 +44,7 @@ def robot_handler_enabled(level):
yield
finally:
root.removeHandler(handler)
+ root.setLevel(old_level)
logging.raiseExceptions = old_raise
| `logging` module log level is not restored after execution
Hi,
It seems that the robot handler changes the root logger's log level via the ``set_level`` function (``robot.output.pyloggingconf``), but the original root logger level is not restored after the end of the ``robot.running.model.TestSuite.run`` method or the ``robot.run`` module.
The original context manager:
```python
@contextmanager
def robot_handler_enabled(level):
root = logging.getLogger()
if any(isinstance(h, RobotHandler) for h in root.handlers):
yield
return
handler = RobotHandler()
old_raise = logging.raiseExceptions
root.addHandler(handler)
logging.raiseExceptions = False
set_level(level)
try:
yield
finally:
root.removeHandler(handler)
logging.raiseExceptions = old_raise
```
Would it be necessary to restore the log level after changing it, in case the test script or any other third-party tool has already modified it for any reason?
```python
@contextmanager
def robot_handler_enabled(level):
root = logging.getLogger()
if any(isinstance(h, RobotHandler) for h in root.handlers):
yield
return
handler = RobotHandler()
old_raise = logging.raiseExceptions
* -> old_level = logging.getLevelName(root.level)
root.addHandler(handler)
logging.raiseExceptions = False
set_level(level)
try:
yield
finally:
root.removeHandler(handler)
logging.raiseExceptions = old_raise
* -> set_level(old_level)
```
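A small script that makes the leak visible (a sketch; `tests.robot` is a placeholder suite path):
```python
import logging
from robot import run

logging.basicConfig(level=logging.WARNING)
root = logging.getLogger()
print(root.level)    # 30 (WARNING)

run("tests.robot")   # executes the suite with Robot's handler installed

# Without restoring the old configuration, the root level set via
# set_level() is still in effect here.
print(root.level)
```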
| Restoring old configuration sounds good to me. Interested to create a PR?
Definitely! Thank you @pekkaklarck ! | 1,731,601,814,000 | null | Bug Report | [
"src/robot/output/pyloggingconf.py:robot_handler_enabled"
] | [] | 1 | 570 |
|
ShishirPatil/gorilla | ShishirPatil__gorilla-754 | 3b240551fe7ecb57ddd2c415b40872ce17dfb784 | diff --git a/berkeley-function-call-leaderboard/bfcl/model_handler/oss_model/base_oss_handler.py b/berkeley-function-call-leaderboard/bfcl/model_handler/oss_model/base_oss_handler.py
index c58812641..c3fc3c8e5 100644
--- a/berkeley-function-call-leaderboard/bfcl/model_handler/oss_model/base_oss_handler.py
+++ b/berkeley-function-call-leaderboard/bfcl/model_handler/oss_model/base_oss_handler.py
@@ -224,7 +224,7 @@ def _multi_threaded_inference(self, test_case, include_input_log: bool, include_
if "multi_turn" in test_case["id"]:
model_responses, metadata = self.inference_multi_turn_prompting(test_case, include_input_log, include_state_log)
else:
- model_responses, metadata = self.inference_single_turn_prompting(test_case, include_input_log, include_state_log)
+ model_responses, metadata = self.inference_single_turn_prompting(test_case, include_input_log)
except Exception as e:
print("-" * 100)
print(
| [BFCL] bugs in function def _multi_threaded_inference(self, test_case, include_input_log: bool, include_state_log: bool):
**Describe the issue**
I encountered an error while running `bfcl generate`. The error occurred in the file `gorilla/berkeley-function-call-leaderboard/bfcl/model_handler/oss_model/base_oss_handler.py`, in the function `_multi_threaded_inference(self, test_case, include_input_log: bool, include_state_log: bool)`.
The line `model_responses, metadata = self.inference_single_turn_prompting(test_case, include_input_log, include_state_log)` caused a runtime error. The function `inference_single_turn_prompting` only accepts the parameters `test_case` and `include_input_log`, but the code additionally passes `include_state_log`, which leads to the runtime error. When I removed `include_state_log`, the code ran successfully.
**What is the issue**
The function `inference_single_turn_prompting` does not accept `include_state_log` as a parameter, causing a runtime error when it is passed.
| 1,731,442,886,000 | null | Bug Report | [
"berkeley-function-call-leaderboard/bfcl/model_handler/oss_model/base_oss_handler.py:OSSHandler._multi_threaded_inference"
] | [] | 1 | 571 |
||
Netflix/metaflow | Netflix__metaflow-2141 | 0bc4a9683ba67eedd756a8dc777916020587d5f7 | diff --git a/metaflow/cli.py b/metaflow/cli.py
index 1fc6a14953..a318b84a3e 100644
--- a/metaflow/cli.py
+++ b/metaflow/cli.py
@@ -282,31 +282,21 @@ def dump(obj, input_path, private=None, max_value_size=None, include=None, file=
else:
ds_list = list(datastore_set) # get all tasks
- tasks_processed = False
for ds in ds_list:
- if ds is not None:
- tasks_processed = True
- echo(
- "Dumping output of run_id=*{run_id}* "
- "step=*{step}* task_id=*{task_id}*".format(
- run_id=ds.run_id, step=ds.step_name, task_id=ds.task_id
- ),
- fg="magenta",
- )
-
- if file is None:
- echo_always(
- ds.format(**kwargs),
- highlight="green",
- highlight_bold=False,
- err=False,
- )
- else:
- output[ds.pathspec] = ds.to_dict(**kwargs)
+ echo(
+ "Dumping output of run_id=*{run_id}* "
+ "step=*{step}* task_id=*{task_id}*".format(
+ run_id=ds.run_id, step=ds.step_name, task_id=ds.task_id
+ ),
+ fg="magenta",
+ )
- if not tasks_processed:
- echo(f"No task(s) found for pathspec {input_path}", fg="red")
- return
+ if file is None:
+ echo_always(
+ ds.format(**kwargs), highlight="green", highlight_bold=False, err=False
+ )
+ else:
+ output[ds.pathspec] = ds.to_dict(**kwargs)
if file is not None:
with open(file, "wb") as f:
| BUG: Data store error - AWS batch/step execution
**Environment:**
metaflow version: 2.12.29
Python 3.11 (Docker Image from public.ecr.aws/docker/library/python:3.11)
Running on AWS Batch
**Description:**
Tested with version 2.12.28, it runs successfully; with this latest version we get:
Data store error: No completed attempts of the task was found for task `MyFlow/sfn-*/_parameters/*-params`.
It may be worth mentioning that we include a JSON file in `MyFlow` like this:
```
json_config = IncludeFile(
name="my_config",
required=True,
help="The Configuration",
default=f"./{PARAMS_JSON}",
)
```
| I also got this error when running on argo workflows. My flow does not use `IncludeFile` but just usual parameters.
I can also confirm it happens for `2.12.29` but not `2.12.28`
And another confirmation with step on batch. 2.12.29 displays the error, 2.12.28 does not.
I also got this error on Argo Workflows. Same problematic version (`2.12.29`) and the same fix (downgrade to `2.12.28`).
we are triaging
also, for quicker resolution/response, you can always ping us on chat.metaflow.org | 1,731,502,413,000 | null | Bug Report | [
"metaflow/cli.py:dump"
] | [] | 1 | 572 |
|
ray-project/ray | ray-project__ray-49071 | f498afc76dfafcf447106471e8df33578a6293be | diff --git a/rllib/examples/rl_modules/classes/action_masking_rlm.py b/rllib/examples/rl_modules/classes/action_masking_rlm.py
index 992802ebb13a..626554a6434c 100644
--- a/rllib/examples/rl_modules/classes/action_masking_rlm.py
+++ b/rllib/examples/rl_modules/classes/action_masking_rlm.py
@@ -1,10 +1,11 @@
import gymnasium as gym
-from typing import Dict, Optional, Tuple
+from typing import Dict, Optional, Tuple, Union
from ray.rllib.algorithms.ppo.torch.ppo_torch_rl_module import PPOTorchRLModule
from ray.rllib.core.columns import Columns
from ray.rllib.core.rl_module.apis.value_function_api import ValueFunctionAPI
-from ray.rllib.core.rl_module.rl_module import RLModule, RLModuleConfig
+from ray.rllib.core.rl_module.default_model_config import DefaultModelConfig
+from ray.rllib.core.rl_module.rl_module import RLModule
from ray.rllib.utils.annotations import override
from ray.rllib.utils.framework import try_import_torch
from ray.rllib.utils.torch_utils import FLOAT_MIN
@@ -32,9 +33,17 @@ class ActionMaskingRLModule(RLModule):
"""
@override(RLModule)
- def __init__(self, config: RLModuleConfig):
+ def __init__(
+ self,
+ observation_space: Optional[gym.Space] = None,
+ action_space: Optional[gym.Space] = None,
+ inference_only: Optional[bool] = None,
+ learner_only: bool = False,
+ model_config: Optional[Union[dict, DefaultModelConfig]] = None,
+ catalog_class=None,
+ ):
# If observation space is not of type `Dict` raise an error.
- if not isinstance(config.observation_space, gym.spaces.dict.Dict):
+ if not isinstance(observation_space, gym.spaces.dict.Dict):
raise ValueError(
"This RLModule requires the environment to provide a "
"`gym.spaces.Dict` observation space of the form: \n"
@@ -46,15 +55,22 @@ def __init__(self, config: RLModuleConfig):
# the action mask and the original observation space, the 'RLModule'
# receives only the `"observation"` element of the space, but not the
# action mask.
- self.observation_space_with_mask = config.observation_space
- config.observation_space = config.observation_space["observations"]
+ self.observation_space_with_mask = observation_space
+ self.observation_space = observation_space["observations"]
# Keeps track if observation specs have been checked already.
self._checked_observations = False
# The PPORLModule, in its constructor will build networks for the original
# observation space (i.e. without the action mask).
- super().__init__(config)
+ super().__init__(
+ observation_space=self.observation_space,
+ action_space=action_space,
+ inference_only=inference_only,
+ learner_only=learner_only,
+ model_config=model_config,
+ catalog_class=catalog_class,
+ )
class ActionMaskingTorchRLModule(ActionMaskingRLModule, PPOTorchRLModule):
@@ -100,11 +116,13 @@ def _forward_train(
@override(ValueFunctionAPI)
def compute_values(self, batch: Dict[str, TensorType], embeddings=None):
- # Preprocess the batch to extract the `observations` to `Columns.OBS`.
- action_mask, batch = self._preprocess_batch(batch)
- # NOTE: Because we manipulate the batch we need to add the `action_mask`
- # to the batch to access them in `_forward_train`.
- batch["action_mask"] = action_mask
+ # Check, if the observations are still in `dict` form.
+ if isinstance(batch[Columns.OBS], dict):
+ # Preprocess the batch to extract the `observations` to `Columns.OBS`.
+ action_mask, batch = self._preprocess_batch(batch)
+ # NOTE: Because we manipulate the batch we need to add the `action_mask`
+ # to the batch to access them in `_forward_train`.
+ batch["action_mask"] = action_mask
# Call the super's method to compute values for GAE.
return super().compute_values(batch, embeddings)
| [RLlib] action_masking_example.py fails - RLModule build fails with "unexpected keyword argument 'observation_space'"
### What happened + What you expected to happen
Running the `action_masking_rl_module.py` example, which is shipped with the 2.39 release, fails at RLModule instantiation.
> File "C:\Users\Philipp\anaconda3\envs\py311-raynew\Lib\site-packages\ray\rllib\core\rl_module\rl_module.py", line 100, in build
> module = self.module_class(
> ^^^^^^^^^^^^^^^^^^
> TypeError: ActionMaskingRLModule.__init__() got an unexpected keyword argument 'observation_space'
I made no local changes to the file. I skipped the CLI arg "--enable-new-api-stack", since for PPO the new API stack is enabled by default as of release 2.39.
### Versions / Dependencies
python==3.11.9
ray===2.39.0
torch==2.3.1+cu118
gymnasium==1.0.0
### Reproduction script
python ray/rllib/examples/rl_modules/action_masking_rl_module.py
### Issue Severity
Medium: It is a significant difficulty but I can work around it.
| 1,733,320,563,000 | null | Bug Report | [
"rllib/examples/rl_modules/classes/action_masking_rlm.py:ActionMaskingRLModule.__init__",
"rllib/examples/rl_modules/classes/action_masking_rlm.py:ActionMaskingTorchRLModule.compute_values"
] | [] | 2 | 573 |
||
ray-project/ray | ray-project__ray-48891 | 37aa0c66110fc235762c29612b90f1c73869e6cf | diff --git a/python/ray/scripts/scripts.py b/python/ray/scripts/scripts.py
index 1f26a483a7aa..eed702bb7438 100644
--- a/python/ray/scripts/scripts.py
+++ b/python/ray/scripts/scripts.py
@@ -622,6 +622,15 @@ def debug(address: str, verbose: bool):
type=str,
help="a JSON serialized dictionary mapping label name to label value.",
)
[email protected](
+ "--include-log-monitor",
+ default=None,
+ type=bool,
+ help="If set to True or left unset, a log monitor will start monitoring "
+ "the log files of all processes on this node and push their contents to GCS. "
+ "Only one log monitor should be started per physical host to avoid log "
+ "duplication on the driver process.",
+)
@add_click_logging_options
@PublicAPI
def start(
@@ -668,6 +677,7 @@ def start(
ray_debugger_external,
disable_usage_stats,
labels,
+ include_log_monitor,
):
"""Start Ray processes manually on the local machine."""
@@ -757,6 +767,7 @@ def start(
no_monitor=no_monitor,
tracing_startup_hook=tracing_startup_hook,
ray_debugger_external=ray_debugger_external,
+ include_log_monitor=include_log_monitor,
)
if ray_constants.RAY_START_HOOK in os.environ:
| [Core] Logs are duplicated if multiple nodes are running on same machine
### What happened + What you expected to happen
I encountered this https://github.com/ray-project/ray/issues/10392 issue when I was experimenting with ray.
This issue was closed due to the inability to provide a reproducible example.
### Versions / Dependencies
ray[all] 2.38.0
MacOS
### Reproduction script
```python
# example.py
import ray
@ray.remote
def foo():
print('hello')
if __name__ == '__main__':
ray.init()
handle = foo.remote()
ray.get(handle)
```
```shell
RAY_ENABLE_WINDOWS_OR_OSX_CLUSTER=1 ray start --head
RAY_ENABLE_WINDOWS_OR_OSX_CLUSTER=1 ray start --address='192.168.0.196:6379'
python example.py
```
Output:
24-11-08 13:54:19,817 INFO worker.py:1601 -- Connecting to existing Ray cluster at address: 192.168.0.196:6379...
2024-11-08 13:54:19,831 INFO worker.py:1777 -- Connected to Ray cluster. View the dashboard at http://127.0.0.1:8265
(foo pid=45881) hello
(foo pid=45881) hello
### Issue Severity
Low: It annoys or frustrates me.
A workaround is at: https://github.com/intel-analytics/BigDL-2.x/pull/2799/files
I mitigated this issue by calling this function after starting the worker node. Of course, it has many downsides and is not the way to go in the long term.
```python
def kill_redundant_log_monitors():
"""
Killing redundant log_monitor.py processes.
If multiple ray nodes are started on the same machine,
there will be multiple ray log_monitor.py processes
monitoring the same log dir. As a result, the logs
will be replicated multiple times and forwarded to driver.
See issue https://github.com/ray-project/ray/issues/10392
"""
import psutil
import subprocess
log_monitor_processes = []
for proc in psutil.process_iter(["name", "cmdline"]):
try:
cmdline = subprocess.list2cmdline(proc.cmdline())
except (psutil.AccessDenied, psutil.NoSuchProcess):
continue
is_log_monitor = "log_monitor.py" in cmdline
if is_log_monitor:
log_monitor_processes.append(proc)
if len(log_monitor_processes) > 1:
for proc in log_monitor_processes[1:]:
proc.kill()
```
| thank you for reporting the issue! | 1,732,341,280,000 | null | Bug Report | [
"python/ray/scripts/scripts.py:start"
] | [] | 1 | 574 |
|
ray-project/ray | ray-project__ray-48793 | 4b4f3c669bc71027cbae99d5b12ec750b70d96d4 | diff --git a/python/ray/setup-dev.py b/python/ray/setup-dev.py
index 31d722b89984..d26d377a65f5 100755
--- a/python/ray/setup-dev.py
+++ b/python/ray/setup-dev.py
@@ -73,9 +73,27 @@ def do_link(package, force=False, skip_list=None, local_path=None):
print("You don't have write permission " f"to {package_home}, using sudo:")
sudo = ["sudo"]
print(f"Creating symbolic link from \n {local_home} to \n {package_home}")
+
+ # Preserve ray/serve/generated
+ if package == "serve":
+ # Copy generated folder to a temp dir
+ generated_folder = os.path.join(package_home, "generated")
+ temp_dir = "/tmp/ray/_serve/"
+ if not os.path.exists(temp_dir):
+ os.makedirs(temp_dir)
+ subprocess.check_call(["cp", "-r", generated_folder, temp_dir])
+
subprocess.check_call(sudo + ["rm", "-rf", package_home])
subprocess.check_call(sudo + ["ln", "-s", local_home, package_home])
+ # Move generated folder to local_home
+ if package == "serve":
+ tmp_generated_folder = os.path.join(temp_dir, "generated")
+ package_generated_folder = os.path.join(package_home, "generated")
+ subprocess.check_call(
+ ["mv", tmp_generated_folder, package_generated_folder]
+ )
+
if __name__ == "__main__":
parser = argparse.ArgumentParser(
| ray/serve/generated file is missing after running setup-dev.py
### What happened + What you expected to happen
Running `python setup-dev.py` creates a symlink for each Python package. However, since the `generated` folder is not part of the repository, creating the symbolic link for the `serve` package inadvertently replaces the folder, and the `generated` folder can no longer be found.
### Versions / Dependencies
Lastest
### Reproduction script
```
pip install -U "ray[serve] @ https://s3-us-west-2.amazonaws.com/ray-wheels/latest/ray-3.0.0.dev0-cp39-cp39-macosx_11_0_arm64.whl"
python python/ray/setup-dev.py
```
### Issue Severity
Medium: It is a significant difficulty but I can work around it.
| 1,731,977,311,000 | null | Bug Report | [
"python/ray/setup-dev.py:do_link"
] | [] | 1 | 575 |
||
ray-project/ray | ray-project__ray-48790 | e70b37a435122609f88e02ce3377b8dd7f780e6b | diff --git a/python/ray/serve/api.py b/python/ray/serve/api.py
index 182795889d47..13b92c7fcaae 100644
--- a/python/ray/serve/api.py
+++ b/python/ray/serve/api.py
@@ -474,6 +474,7 @@ def _run(
else:
client = _private_api.serve_start(
http_options={"location": "EveryNode"},
+ global_logging_config=logging_config,
)
# Record after Ray has been started.
ServeUsageTag.API_VERSION.record("v2")
| [serve] logging_config specified in `serve.run` is not propagated cluster-wide
### Description
Specifying `logging_config` in `serve.run(..., logging_config={...})` does not configure logging for the cluster, as would be expected. This is because `logging_config` is not propagated to `.serve_start(...)` here:
https://github.com/ray-project/ray/blob/master/python/ray/serve/api.py#L475-L477
A simple workaround for now is,
```Python
logging_config = {"log_level": "..."}
serve.start(logging_config=logging_config)
serve.run(logging_config=logging_config)
```
### Use case
This issue arose when trying to configure Serve logging holistically for tests.
| 1,731,975,189,000 | null | Bug Report | [
"python/ray/serve/api.py:_run"
] | [] | 1 | 576 |
||
ray-project/ray | ray-project__ray-48786 | 5cd8967f1c0c16d3ae5fedb8449d0d25dd4f9f3e | diff --git a/python/ray/autoscaler/_private/commands.py b/python/ray/autoscaler/_private/commands.py
index 3c03738854f7..9a9b9d91cc2f 100644
--- a/python/ray/autoscaler/_private/commands.py
+++ b/python/ray/autoscaler/_private/commands.py
@@ -1153,16 +1153,15 @@ def exec_cluster(
},
docker_config=config.get("docker"),
)
- shutdown_after_run = False
if cmd and stop:
cmd = "; ".join(
[
cmd,
"ray stop",
"ray teardown ~/ray_bootstrap_config.yaml --yes --workers-only",
+ "sudo shutdown -h now",
]
)
- shutdown_after_run = True
result = _exec(
updater,
@@ -1172,7 +1171,7 @@ def exec_cluster(
port_forward=port_forward,
with_output=with_output,
run_env=run_env,
- shutdown_after_run=shutdown_after_run,
+ shutdown_after_run=False,
extra_screen_args=extra_screen_args,
)
if tmux or screen:
| [Ray Clusters] `ray exec ... --stop --tmux ...` doesn't work with both `--stop` and `--tmux` specified
### What happened + What you expected to happen
When running `ray exec ...` with both `--stop` and `--tmux` flags, the `sudo shutdown -h now` command gets incorrectly left outside the tmux command and thus the machine is immediately shut down without the actual command finishing inside tmux.
For example, consider the following command:
```
ray exec \
--verbose \
--start \
--stop \
--tmux \
--no-config-cache \
./cluster-config.yml \
'echo "start" && sleep 10 && echo "done"'
```
This results in (as printed out by the command runner):
> Running `tmux new -d bash -c 'echo "start" && sleep 10 && echo "done"; ray stop; ray teardown ~/ray_bootstrap_config.yaml --yes --workers-only; exec bash'; sudo shutdown -h now`
The first part, `tmux new -d bash -c '...'`, returns immediately, and thus `sudo shutdown -h now` is executed before the command inside tmux finishes. I would expect the shutdown command to run only after the actual command.
### Versions / Dependencies
```
$ ray --version
ray, version 2.37.0
```
### Reproduction script
`cluster-config.yml`:
```yml
auth:
ssh_user: ubuntu
cluster_name: minimal
provider:
type: gcp
region: us-east1
availability_zone: us-east1-b
project_id: [project-id] # Globally unique project id
```
Command:
```
ray exec \
--verbose \
--start \
--stop \
--tmux \
--no-config-cache \
./cluster-config.yml \
'echo "start" && sleep 10 && echo "done"'
```
### Issue Severity
Medium: It is a significant difficulty but I can work around it.
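For reference, the shape of the fix: fold the shutdown into the command string itself so it runs inside tmux after the user command, instead of relying on `shutdown_after_run` (a sketch of the relevant logic in `exec_cluster`, matching the patch above):
```python
# cmd is the user command; with --stop, the teardown steps are chained
# onto it so they execute in the same (tmux) shell, in order.
cmd = 'echo "start" && sleep 10 && echo "done"'
stop = True
if cmd and stop:
    cmd = "; ".join([
        cmd,
        "ray stop",
        "ray teardown ~/ray_bootstrap_config.yaml --yes --workers-only",
        "sudo shutdown -h now",
    ])
print(cmd)
```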
| @hartikainen do you want to create a PR to fix it? We are happy to review the PR. | 1,731,968,780,000 | null | Bug Report | [
"python/ray/autoscaler/_private/commands.py:exec_cluster"
] | [] | 1 | 577 |
|
ray-project/ray | ray-project__ray-48756 | e70b37a435122609f88e02ce3377b8dd7f780e6b | diff --git a/python/ray/dashboard/modules/metrics/install_and_start_prometheus.py b/python/ray/dashboard/modules/metrics/install_and_start_prometheus.py
index a65050212950..cf7cb31c3607 100644
--- a/python/ray/dashboard/modules/metrics/install_and_start_prometheus.py
+++ b/python/ray/dashboard/modules/metrics/install_and_start_prometheus.py
@@ -26,6 +26,9 @@ def get_system_info():
if architecture == "x86_64":
# In the Prometheus filename, it's called amd64
architecture = "amd64"
+ elif architecture == "aarch64":
+ # In the Prometheus filename, it's called arm64
+ architecture = "arm64"
return os_type, architecture
| [ray metrics launch-prometheus] Incorrect download URL generation for aarch64 architecture
### What happened + What you expected to happen
<img width="802" alt="image" src="https://github.com/user-attachments/assets/e370ab29-db28-432b-b2c5-4c50e8e2dcf6">
- When executing the "ray metrics launch-prometheus" command on an aarch64 architecture system, the download URL is incorrectly generated, leading to a "not found" error.
- This occurs because the command attempts to download the Prometheus build file from the GitHub releases page (https://github.com/prometheus/prometheus/releases) using "aarch64" in the URL, while Prometheus classifies this architecture as "arm64".
### Versions / Dependencies
- Ray: rayproject/ray:nightly-aarch64
### Reproduction script
1. Run "ray metrics launch-prometheus" on an aarch64 system
2. Observe that the command attempts to download a file with "aarch64" in the URL
### Issue Severity
Low: It annoys or frustrates me.
| 1,731,661,282,000 | null | Bug Report | [
"python/ray/dashboard/modules/metrics/install_and_start_prometheus.py:get_system_info"
] | [] | 1 | 578 |
||
optuna/optuna | optuna__optuna-5828 | 81d1d36cce68e7de0384951689cdbcd4ae8b6866 | diff --git a/optuna/cli.py b/optuna/cli.py
index 16fa3a6df1..7246a86e21 100644
--- a/optuna/cli.py
+++ b/optuna/cli.py
@@ -215,7 +215,10 @@ def _dump_table(records: list[dict[str, Any]], header: list[str]) -> str:
for t in value_types:
if t == ValueType.STRING:
value_type = ValueType.STRING
- max_width = max(len(header[column]), max(row[column].width() for row in rows))
+ if len(rows) == 0:
+ max_width = len(header[column])
+ else:
+ max_width = max(len(header[column]), max(row[column].width() for row in rows))
separator += "-" * (max_width + 2) + "+"
if value_type == ValueType.NUMERIC:
header_string += f" {header[column]:>{max_width}} |"
@@ -228,7 +231,8 @@ def _dump_table(records: list[dict[str, Any]], header: list[str]) -> str:
ret += separator + "\n"
ret += header_string + "\n"
ret += separator + "\n"
- ret += "\n".join(rows_string) + "\n"
+ for row_string in rows_string:
+ ret += row_string + "\n"
ret += separator + "\n"
return ret
| CLI for empty DB raises `ValueError`
### Expected behavior
CLI for empty DB should output empty result, but the current implementation raises `ValueError`.
### Environment
- Optuna version: 4.2.0.dev
- Python version: 3.13.0
- OS: macOS-15.1-x86_64-i386-64bit-Mach-O
- (Optional) Other libraries and their versions:
### Error messages, stack traces, or logs
```shell
See below.
```
### Steps to reproduce
For an empty DB (`tmp.db` does not exist before the command), the `optuna studies` command raises `ValueError`.
```bash
$ optuna --storage sqlite:///tmp.db studies
Traceback (most recent call last):
File "/Users/naotomizuno/.pyenv/versions/optuna-3.13.0/bin/optuna", line 8, in <module>
sys.exit(main())
~~~~^^
File "/Users/naotomizuno/optuna/optuna/cli.py", line 991, in main
return args.handler(args)
~~~~~~~~~~~~^^^^^^
File "/Users/naotomizuno/optuna/optuna/cli.py", line 463, in take_action
_format_output(
~~~~~~~~~~~~~~^
records, self._study_list_header, parsed_args.format, parsed_args.flatten
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/Users/naotomizuno/optuna/optuna/cli.py", line 258, in _format_output
return _dump_table(values, header).strip()
~~~~~~~~~~~^^^^^^^^^^^^^^^^
File "/Users/naotomizuno/optuna/optuna/cli.py", line 222, in _dump_table
max_width = max(len(header[column]), max(row[column].width() for row in rows))
~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: max() iterable argument is empty
```
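The crash boils down to `max()` over an empty iterable; a minimal sketch, including a `default=` alternative to an explicit empty-rows branch:
```python
rows = []
try:
    max(len("name"), max(len(r) for r in rows))
except ValueError as e:
    print(e)  # max() iterable argument is empty

# Supplying default= avoids the branch entirely:
print(max(len("name"), max((len(r) for r in rows), default=0)))  # 4
```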
### Additional context (optional)
_No response_
| 1,733,375,129,000 | null | Bug Report | [
"optuna/cli.py:_dump_table"
] | [] | 1 | 579 |
||
BerriAI/litellm | BerriAI__litellm-6915 | fd2d4254bcd01e924ca4dded36ee4714c33734af | diff --git a/litellm/llms/fireworks_ai/chat/fireworks_ai_transformation.py b/litellm/llms/fireworks_ai/chat/fireworks_ai_transformation.py
index 4d5b2d6eb3ba..10d8a5913328 100644
--- a/litellm/llms/fireworks_ai/chat/fireworks_ai_transformation.py
+++ b/litellm/llms/fireworks_ai/chat/fireworks_ai_transformation.py
@@ -25,6 +25,7 @@ class FireworksAIConfig:
stop: Optional[Union[str, list]] = None
response_format: Optional[dict] = None
user: Optional[str] = None
+ logprobs: Optional[int] = None
# Non OpenAI parameters - Fireworks AI only params
prompt_truncate_length: Optional[int] = None
@@ -44,6 +45,7 @@ def __init__(
stop: Optional[Union[str, list]] = None,
response_format: Optional[dict] = None,
user: Optional[str] = None,
+ logprobs: Optional[int] = None,
prompt_truncate_length: Optional[int] = None,
context_length_exceeded_behavior: Optional[Literal["error", "truncate"]] = None,
) -> None:
@@ -86,6 +88,7 @@ def get_supported_openai_params(self):
"stop",
"response_format",
"user",
+ "logprobs",
"prompt_truncate_length",
"context_length_exceeded_behavior",
]
| [Bug]: supported params are out of date for fireworks AI
### What happened?
When calling Fireworks models, litellm complains that `logprobs` is not supported, but it is actually supported by Fireworks AI.
ref: https://docs.fireworks.ai/api-reference/post-completions
### Relevant log output
_No response_
### Twitter / LinkedIn details
_No response_
| 1,732,617,202,000 | null | Bug Report | [
"litellm/llms/fireworks_ai/chat/fireworks_ai_transformation.py:FireworksAIConfig.__init__",
"litellm/llms/fireworks_ai/chat/fireworks_ai_transformation.py:FireworksAIConfig.get_supported_openai_params"
] | [] | 2 | 580 |
||
matplotlib/matplotlib | matplotlib__matplotlib-29265 | 0406a56b051a371ccf81d2946126580651a645f2 | diff --git a/lib/matplotlib/collections.py b/lib/matplotlib/collections.py
index a78f1838357e..f18d5a4c3a8c 100644
--- a/lib/matplotlib/collections.py
+++ b/lib/matplotlib/collections.py
@@ -1612,14 +1612,13 @@ def __init__(self, segments, # Can be None.
"""
Parameters
----------
- segments : list of array-like
- A sequence (*line0*, *line1*, *line2*) of lines, where each line is a list
- of points::
+ segments : list of (N, 2) array-like
+ A sequence ``[line0, line1, ...]`` where each line is a (N, 2)-shape
+ array-like containing points::
- lineN = [(x0, y0), (x1, y1), ... (xm, ym)]
+ line0 = [(x0, y0), (x1, y1), ...]
- or the equivalent Mx2 numpy array with two columns. Each line
- can have a different number of segments.
+ Each line can contain a different number of points.
linewidths : float or list of float, default: :rc:`lines.linewidth`
The width of each line in points.
colors : :mpltype:`color` or list of color, default: :rc:`lines.color`
| Improve LineCollection docstring further
(M, 2)
I would perhaps completely drop the "list of points" and just write
```
A sequence ``[line0, line1, ...]`` where each line is a (N, 2)-shape
array-like of points::
line0 = [(x0, y0), (x1, y1), ...]
Each line can...
```
_Originally posted by @anntzer in https://github.com/matplotlib/matplotlib/pull/26676#discussion_r1313026557_
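A short usage example matching the proposed wording — each line is an (N, 2) array-like of points, and lines may have different lengths:
```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.collections import LineCollection

line0 = np.column_stack([np.linspace(0, 1, 5), np.zeros(5)])  # (5, 2) array
line1 = [(0.0, 1.0), (0.5, 1.5), (1.0, 1.0)]                  # three points

fig, ax = plt.subplots()
ax.add_collection(LineCollection([line0, line1], colors=["C0", "C1"]))
ax.set_xlim(-0.1, 1.1)
ax.set_ylim(-0.5, 2.0)
plt.show()
```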
| 1,733,753,596,000 | null | Feature Request | [
"lib/matplotlib/collections.py:LineCollection.__init__"
] | [] | 1 | 581 |
||
matplotlib/matplotlib | matplotlib__matplotlib-29254 | 671177c08613136fd5004092b8b56449d419c12a | diff --git a/lib/matplotlib/figure.py b/lib/matplotlib/figure.py
index e5cf88131178..3d6f9a7f4c16 100644
--- a/lib/matplotlib/figure.py
+++ b/lib/matplotlib/figure.py
@@ -1382,8 +1382,8 @@ def align_xlabels(self, axs=None):
Notes
-----
- This assumes that ``axs`` are from the same `.GridSpec`, so that
- their `.SubplotSpec` positions correspond to figure positions.
+ This assumes that all Axes in ``axs`` are from the same `.GridSpec`,
+ so that their `.SubplotSpec` positions correspond to figure positions.
Examples
--------
@@ -1444,8 +1444,8 @@ def align_ylabels(self, axs=None):
Notes
-----
- This assumes that ``axs`` are from the same `.GridSpec`, so that
- their `.SubplotSpec` positions correspond to figure positions.
+ This assumes that all Axes in ``axs`` are from the same `.GridSpec`,
+ so that their `.SubplotSpec` positions correspond to figure positions.
Examples
--------
@@ -1500,8 +1500,8 @@ def align_titles(self, axs=None):
Notes
-----
- This assumes that ``axs`` are from the same `.GridSpec`, so that
- their `.SubplotSpec` positions correspond to figure positions.
+ This assumes that all Axes in ``axs`` are from the same `.GridSpec`,
+ so that their `.SubplotSpec` positions correspond to figure positions.
Examples
--------
@@ -1544,6 +1544,11 @@ def align_labels(self, axs=None):
matplotlib.figure.Figure.align_xlabels
matplotlib.figure.Figure.align_ylabels
matplotlib.figure.Figure.align_titles
+
+ Notes
+ -----
+ This assumes that all Axes in ``axs`` are from the same `.GridSpec`,
+ so that their `.SubplotSpec` positions correspond to figure positions.
"""
self.align_xlabels(axs=axs)
self.align_ylabels(axs=axs)
| [Bug]: Figure.align_labels() confused by GridSpecFromSubplotSpec
### Bug summary
In a composite figure with nested gridspecs, `Figure.align_labels()` (and `align_xlabels()`, `align_ylabels()`) can end up aligning labels that should not intuitively be. Likewise with `align_titles()`.
### Code for reproduction
```Python
fig = plt.figure(figsize=(6, 4))
gs0 = gridspec.GridSpec(nrows=1, ncols=2, figure=fig)
gs00 = gs0[0].subgridspec(nrows=2, ncols=1, height_ratios=[8, 8])
gs01 = gs0[1].subgridspec(nrows=2, ncols=1, height_ratios=[9, 6])
left_axs = gs00.subplots()
right_axs = gs01.subplots()
left_axs[0].set_ylim(0, 0.02) # to force nontrivial alignment
left_axs[0].set_ylabel('foo')
left_axs[1].set_ylabel('bar')
right_axs[0].set_ylabel('baz')
right_axs[1].set_ylabel('qux')
left_axs[1].set_title('title')
right_axs[1].set_title('title')
fig.align_labels()
fig.align_titles()
```
### Actual outcome
All labels are aligned. Titles are aligned as well.

### Expected outcome
Labels in separate columns are aligned, but labels in different columns should not be. Titles are not aligned:

### Additional information
Right now, the ylabel (xlabel) alignment code seems to attempt to align labels on Axes with the same column index (resp. row index) without checking if those indexes are for the same gridspec. To fix this, we should probably add a check that two Axes share the same gridspec (in addition to being in the same row/col) before we align their labels. (This would not allow label alignment across gridspecs, but if a user wants to align labels between two Axes, it seems reasonable to expect them to put the Axes in the same gridspec.)
The same thing happens with align_titles().
For now, a workaround for labels is to call `Figure.align_labels()` separately for each sub-gridspec with the `axs` kwarg (as done for the expected outcome figure above).
### Operating system
macOS 14.1.1
### Matplotlib Version
3.9.2
### Matplotlib Backend
module://matplotlib_inline.backend_inline
### Python version
3.12.2
### Jupyter version
_No response_
### Installation
conda
| This is definitely an issue, but not sure we would prioritize or accept a complicated fix for this. Note the docs say
> Align the xlabels of subplots in the same subplot row if label alignment is being done automatically (i.e. the label position is not manually set).
This issue with subgridspecs not having a clear hierarchy is why we introduced [subfigures](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/subfigures.html) in v3.4. Your code would look like:
```
import matplotlib.pyplot as plt
import numpy as np
fig = plt.figure(figsize=(6, 4), layout='constrained')
sfigs = fig.subfigures(1, 2)
left_axs = sfigs[0].subplots(2, 1, height_ratios=[8, 8])
right_axs = sfigs[1].subplots(2, 1, height_ratios=[9, 6])
left_axs[0].set_ylim(0, 0.02) # to force nontrivial alignment
left_axs[0].set_ylabel('foo')
left_axs[1].set_ylabel('bar')
right_axs[0].set_ylabel('baz')
right_axs[1].set_ylabel('qux')
left_axs[1].set_title('title')
right_axs[1].set_title('title')
for sfig in sfigs:
sfig.align_labels()
sfig.align_titles()
plt.show()
```

I suppose one change we could entertain is `align_labels` and friends accepting a list of subplots to align.
This works with `subplot_mosaic`, which would be my recommended approach
```python
fig, axd = plt.subplot_mosaic("""
AC
AC
BC
BD
BD
""", layout="constrained")
axd["A"].set_ylim(0, 0.02) # to force nontrivial alignment
axd["A"].set_ylabel('foo')
axd["B"].set_ylabel('bar')
axd["C"].set_ylabel('baz')
axd["D"].set_ylabel('qux')
axd["B"].set_title('title')
axd["D"].set_title('title')
fig.align_labels()
fig.align_titles()
```

I suggest we simply declare that `align_labels` and `align_titles` do not work with subgridspecs.
For sure you could use subplot_mosaic for a similar layout as well, though note that it is very hard to use it to get the height ratios exactly as requested. Depends on what your constraints actually are.
Being slightly more specific in the docstring would be fine.
Actually,
https://matplotlib.org/devdocs/api/_as_gen/matplotlib.figure.Figure.align_titles.html
https://matplotlib.org/devdocs/api/_as_gen/matplotlib.figure.Figure.align_xlabels.html
https://matplotlib.org/devdocs/api/_as_gen/matplotlib.figure.Figure.align_ylabels.html
all have a note that they assume all Axes are from the same GridSpec.
That note is missing in https://matplotlib.org/devdocs/api/_as_gen/matplotlib.figure.Figure.align_labels.html and should be copied there. | 1,733,617,371,000 | null | Bug Report | [
"lib/matplotlib/figure.py:FigureBase.align_xlabels",
"lib/matplotlib/figure.py:FigureBase.align_ylabels",
"lib/matplotlib/figure.py:FigureBase.align_titles",
"lib/matplotlib/figure.py:FigureBase.align_labels"
] | [] | 4 | 582 |
|
matplotlib/matplotlib | matplotlib__matplotlib-29236 | 84fbae8eea3bb791ae9175dbe77bf5dee3368275 | diff --git a/lib/matplotlib/animation.py b/lib/matplotlib/animation.py
index 47f2f0f9515b..2be61284073a 100644
--- a/lib/matplotlib/animation.py
+++ b/lib/matplotlib/animation.py
@@ -492,8 +492,15 @@ def grab_frame(self, **savefig_kwargs):
buf = BytesIO()
self.fig.savefig(
buf, **{**savefig_kwargs, "format": "rgba", "dpi": self.dpi})
- self._frames.append(Image.frombuffer(
- "RGBA", self.frame_size, buf.getbuffer(), "raw", "RGBA", 0, 1))
+ im = Image.frombuffer(
+ "RGBA", self.frame_size, buf.getbuffer(), "raw", "RGBA", 0, 1)
+ if im.getextrema()[3][0] < 255:
+ # This frame has transparency, so we'll just add it as is.
+            self._frames.append(im)
+ else:
+ # Without transparency, we switch to RGB mode, which converts to P mode a
+ # little better if needed (specifically, this helps with GIF output.)
+ self._frames.append(im.convert("RGB"))
def finish(self):
self._frames[0].save(
| [Bug]: inconsistent ‘animation.FuncAnimation’ between display and save
### Bug summary
when i want to save images to gif, it's inconsistent between display and save;
It seems that the color information has been lost:

### Code for reproduction
```Python
def animation_test():
import matplotlib.pyplot as plt
import matplotlib.animation as animation
import numpy as np  # needed for np.fromfile below
file = r'example.dat'
num_frames = 72
nx = 8
ny = 9
data = np.fromfile(file, np.float32).reshape(num_frames, ny, nx)
fig, ax = plt.subplots()
img = data[0,]
# plt.imshow(img)
vmax = 100
vmin = 0
h = ax.imshow(img, cmap=plt.get_cmap('CMRmap_r'), origin='lower', interpolation='none', vmin=vmin, vmax=vmax, animated=True)
ax.set_xticks(range(nx))
ax.set_xticklabels(range(1, nx + 1))
ax.set_yticks(range(ny))
ax.set_yticklabels(range(1, ny + 1))
fig.tight_layout()
def update(frame):
img = data[frame, ]
h.set_array(img)
return h,
# create animation
interval = 100
ani = animation.FuncAnimation(fig, update, frames=range(num_frames), interval=interval, blit=True)
# ani = animation.FuncAnimation(fig, update, frames=frame_iter, interval=interval, blit=False, cache_frame_data=False)
ani.save('example.gif', writer='pillow', fps=2, dpi=300)
pass
if __name__ == '__main__':
animation_test()
```
### Actual outcome
above picture -> right
### Expected outcome
above picture -> left
### Additional information
_No response_
### Operating system
win10
### Matplotlib Version
'3.4.2'
### Matplotlib Backend
_No response_
### Python version
3.7.10
### Jupyter version
_No response_
### Installation
pip
| Do you mind also including the data points that you plotted?
I updated the code and uploaded the data file:
[example.zip](https://github.com/user-attachments/files/17945028/example.zip)
Thank you. I was able to reproduce the behavior now. It does seem like a bug.
It may be because the PillowWriter is renormalizing the color values frame-by-frame instead of using the original normalization that is still there when you directly .show() the plot. In that case, keeping around a Normalization object that the PillowWriter can reference later would solve it. But I'll let the veterans decide if that's the issue.
Um, can you provide a preliminary solution? :)
So far I'm only good enough to triage bugs. :(
Without having debugged this exactly, my guess is that this is a fundamental limitation of gif. From https://en.wikipedia.org/wiki/GIF
> The format can contain up to [8 bits per pixel](https://en.wikipedia.org/wiki/8-bit_color), allowing a single image to reference its own [palette](https://en.wikipedia.org/wiki/Palette_(computing)) of up to 256 different colors chosen from the [24-bit](https://en.wikipedia.org/wiki/24-bit_color) [RGB color space](https://en.wikipedia.org/wiki/RGB_color_model). It can also represent multiple images in a file, which can be used for [animations](https://en.wikipedia.org/wiki/Animation), and allows a separate palette of up to 256 colors for each frame. These palette limitations make GIF less suitable for reproducing color photographs and other [images with color gradients](https://en.wikipedia.org/wiki/Image_gradient) but well-suited for simpler images such as graphics or logos with solid areas of color.
This seems to be specific to the Pillow writer; it looks similar to the expected result when using ffmpeg.
This is a similar issue to https://github.com/matplotlib/matplotlib/issues/25806; Pillow converts the RGBA image to P(alette) and loses some colours. This is due to the inherent limitations of the GIF format as @timhoffm has mentioned. See for example the upstream issue https://github.com/python-pillow/Pillow/issues/6832
I think your best bet is to either switch to ffmpeg, which does this better, or switch to a more flexible format like `webp`.
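To make this concrete, here is a minimal Pillow sketch of the alpha check used in the patch above and the RGB-first conversion suggested below (the synthetic gradient is an illustrative stand-in for a rendered frame):
```python
from PIL import Image

# Illustrative stand-in for a rendered frame: a synthetic RGBA gradient.
im = Image.new("RGBA", (64, 64))
im.putdata([(x * 4, y * 4, 128, 255) for y in range(64) for x in range(64)])

# getextrema() on an RGBA image returns one (min, max) pair per band,
# so [3][0] is the minimum alpha; < 255 means some pixel is transparent.
has_transparency = im.getextrema()[3][0] < 255

# Without transparency, dropping the alpha channel first gives Pillow a
# better RGB -> P(alette) quantization path when saving as GIF.
frame = im if has_transparency else im.convert("RGB")
frame.save("frame.gif")
```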
Hmm, actually it looks like we can help Pillow a little bit here. If the image doesn't contain any transparency, then we can convert it to `RGB` mode, and Pillow's conversion from that mode to `P` mode is a bit better. | 1,733,387,401,000 | null | Bug Report | [
"lib/matplotlib/animation.py:PillowWriter.grab_frame"
] | [] | 1 | 583 |
|
tobymao/sqlglot | tobymao__sqlglot-4526 | 946cd4234a2ca403785b7c6a026a39ef604e8754 | diff --git a/sqlglot/planner.py b/sqlglot/planner.py
index 2e42b32c4..687bffb9f 100644
--- a/sqlglot/planner.py
+++ b/sqlglot/planner.py
@@ -201,11 +201,13 @@ def set_ops_and_aggs(step):
aggregate.add_dependency(step)
step = aggregate
+ else:
+ aggregate = None
order = expression.args.get("order")
if order:
- if isinstance(step, Aggregate):
+ if aggregate and isinstance(step, Aggregate):
for i, ordered in enumerate(order.expressions):
if extract_agg_operands(exp.alias_(ordered.this, f"_o_{i}", quoted=True)):
ordered.this.replace(exp.column(f"_o_{i}", step.name, quoted=True))
| getting UnboundLocalError: cannot access local variable 'aggregate' where it is not associated with a value when running sqlglot.planner.Plan
**Fully reproducible code snippet**
```
import sqlglot
import sqlglot.planner
r = 'select suma from ( select sum(a) as suma from table1) order by suma'
parsed = sqlglot.parse_one(r, dialect='snowflake')
p = sqlglot.planner.Plan(parsed)
```
Throws:
```
File venv/lib/python3.11/site-packages/sqlglot/planner.py:14, in Plan.__init__(self, expression)
12 def __init__(self, expression: exp.Expression) -> None:
13 self.expression = expression.copy()
---> 14 self.root = Step.from_expression(self.expression)
15 self._dag: t.Dict[Step, t.Set[Step]] = {}
File venv/lib/python3.11/site-packages/sqlglot/planner.py:213, in Step.from_expression(cls, expression, ctes)
210 if extract_agg_operands(exp.alias_(ordered.this, f"_o_{i}", quoted=True)):
211 ordered.this.replace(exp.column(f"_o_{i}", step.name, quoted=True))
--> 213 set_ops_and_aggs(aggregate)
215 sort = Sort()
216 sort.name = step.name
UnboundLocalError: cannot access local variable 'aggregate' where it is not associated with a value
```
On sqlglot 25.33.0
The dialect seems to be irrelevant; the same error occurs with Athena.
| You need to run the optimizer first:
```python
>>> import sqlglot
>>> import sqlglot.planner
>>>
>>> r = 'select suma from ( select sum(a) as suma from table1) order by suma'
>>> parsed = sqlglot.parse_one(r, dialect='snowflake')
>>> p = sqlglot.planner.Plan(parsed)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/georgesittas/Code/tobiko/sqlglot/sqlglot/planner.py", line 14, in __init__
self.root = Step.from_expression(self.expression)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/georgesittas/Code/tobiko/sqlglot/sqlglot/planner.py", line 213, in from_expression
set_ops_and_aggs(aggregate)
^^^^^^^^^
UnboundLocalError: cannot access local variable 'aggregate' where it is not associated with a value
>>> from sqlglot.optimizer import optimize
>>> optimized = optimize(parsed)
>>> optimized.sql()
'WITH "_q_0" AS (SELECT SUM("table1"."a") AS "suma" FROM "table1" AS "table1") SELECT "_q_0"."suma" AS "suma" FROM "_q_0" AS "_q_0" ORDER BY "suma" NULLS LAST'
>>>
>>> p = sqlglot.planner.Plan(optimized)
>>> p
Plan
----
- Sort: _q_0 (4376798720)
Context:
Key:
- "suma" NULLS LAST
Projections:
- "_q_0"."suma" AS "suma"
Dependencies:
- Scan: _q_0 (4343324816)
Context:
Source: "_q_0" AS "_q_0"
Projections:
Dependencies:
- Aggregate: _q_0 (4376798672)
Context:
Aggregations:
- SUM("table1"."a") AS "suma"
Projections:
- "table1"."suma"
Dependencies:
- Scan: table1 (4376798816)
Context:
Source: "table1" AS "table1"
Projections:
```
Looks like there's a code path where this _can_ happen; I think I may have made an incorrect assumption about needing the optimizer. Will double-check and re-close if needed.
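For reference, here is the working path from the discussion above as a self-contained sketch (optimizing first remains the recommended route; the patch additionally guards the raw-AST path):
```python
import sqlglot
import sqlglot.planner
from sqlglot.optimizer import optimize

# The original reproduction: optimize() qualifies columns and aliases,
# after which planning succeeds; the patched planner also no longer
# hits UnboundLocalError on `aggregate` for unoptimized input.
parsed = sqlglot.parse_one(
    "select suma from (select sum(a) as suma from table1) order by suma"
)
plan = sqlglot.planner.Plan(optimize(parsed))
print(plan.root)
```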
"sqlglot/planner.py:Step.from_expression"
] | [] | 1 | 584 |
|
tobymao/sqlglot | tobymao__sqlglot-4369 | a665030323b200f3bed241bb928993b9807c4100 | diff --git a/sqlglot/expressions.py b/sqlglot/expressions.py
index f04cece117..b0c2a7f560 100644
--- a/sqlglot/expressions.py
+++ b/sqlglot/expressions.py
@@ -767,6 +767,7 @@ def and_(
*expressions: t.Optional[ExpOrStr],
dialect: DialectType = None,
copy: bool = True,
+ wrap: bool = True,
**opts,
) -> Condition:
"""
@@ -781,18 +782,22 @@ def and_(
If an `Expression` instance is passed, it will be used as-is.
dialect: the dialect used to parse the input expression.
copy: whether to copy the involved expressions (only applies to Expressions).
+ wrap: whether to wrap the operands in `Paren`s. This is true by default to avoid
+ precedence issues, but can be turned off when the produced AST is too deep and
+ causes recursion-related issues.
opts: other options to use to parse the input expressions.
Returns:
The new And condition.
"""
- return and_(self, *expressions, dialect=dialect, copy=copy, **opts)
+ return and_(self, *expressions, dialect=dialect, copy=copy, wrap=wrap, **opts)
def or_(
self,
*expressions: t.Optional[ExpOrStr],
dialect: DialectType = None,
copy: bool = True,
+ wrap: bool = True,
**opts,
) -> Condition:
"""
@@ -807,12 +812,15 @@ def or_(
If an `Expression` instance is passed, it will be used as-is.
dialect: the dialect used to parse the input expression.
copy: whether to copy the involved expressions (only applies to Expressions).
+ wrap: whether to wrap the operands in `Paren`s. This is true by default to avoid
+ precedence issues, but can be turned off when the produced AST is too deep and
+ causes recursion-related issues.
opts: other options to use to parse the input expressions.
Returns:
The new Or condition.
"""
- return or_(self, *expressions, dialect=dialect, copy=copy, **opts)
+ return or_(self, *expressions, dialect=dialect, copy=copy, wrap=wrap, **opts)
def not_(self, copy: bool = True):
"""
@@ -6921,6 +6929,7 @@ def _combine(
operator: t.Type[Connector],
dialect: DialectType = None,
copy: bool = True,
+ wrap: bool = True,
**opts,
) -> Expression:
conditions = [
@@ -6930,10 +6939,10 @@ def _combine(
]
this, *rest = conditions
- if rest:
+ if rest and wrap:
this = _wrap(this, Connector)
for expression in rest:
- this = operator(this=this, expression=_wrap(expression, Connector))
+ this = operator(this=this, expression=_wrap(expression, Connector) if wrap else expression)
return this
@@ -7316,7 +7325,11 @@ def condition(
def and_(
- *expressions: t.Optional[ExpOrStr], dialect: DialectType = None, copy: bool = True, **opts
+ *expressions: t.Optional[ExpOrStr],
+ dialect: DialectType = None,
+ copy: bool = True,
+ wrap: bool = True,
+ **opts,
) -> Condition:
"""
Combine multiple conditions with an AND logical operator.
@@ -7330,16 +7343,23 @@ def and_(
If an Expression instance is passed, this is used as-is.
dialect: the dialect used to parse the input expression.
copy: whether to copy `expressions` (only applies to Expressions).
+ wrap: whether to wrap the operands in `Paren`s. This is true by default to avoid
+ precedence issues, but can be turned off when the produced AST is too deep and
+ causes recursion-related issues.
**opts: other options to use to parse the input expressions.
Returns:
The new condition
"""
- return t.cast(Condition, _combine(expressions, And, dialect, copy=copy, **opts))
+ return t.cast(Condition, _combine(expressions, And, dialect, copy=copy, wrap=wrap, **opts))
def or_(
- *expressions: t.Optional[ExpOrStr], dialect: DialectType = None, copy: bool = True, **opts
+ *expressions: t.Optional[ExpOrStr],
+ dialect: DialectType = None,
+ copy: bool = True,
+ wrap: bool = True,
+ **opts,
) -> Condition:
"""
Combine multiple conditions with an OR logical operator.
@@ -7353,16 +7373,23 @@ def or_(
If an Expression instance is passed, this is used as-is.
dialect: the dialect used to parse the input expression.
copy: whether to copy `expressions` (only applies to Expressions).
+ wrap: whether to wrap the operands in `Paren`s. This is true by default to avoid
+ precedence issues, but can be turned off when the produced AST is too deep and
+ causes recursion-related issues.
**opts: other options to use to parse the input expressions.
Returns:
The new condition
"""
- return t.cast(Condition, _combine(expressions, Or, dialect, copy=copy, **opts))
+ return t.cast(Condition, _combine(expressions, Or, dialect, copy=copy, wrap=wrap, **opts))
def xor(
- *expressions: t.Optional[ExpOrStr], dialect: DialectType = None, copy: bool = True, **opts
+ *expressions: t.Optional[ExpOrStr],
+ dialect: DialectType = None,
+ copy: bool = True,
+ wrap: bool = True,
+ **opts,
) -> Condition:
"""
Combine multiple conditions with an XOR logical operator.
@@ -7376,12 +7403,15 @@ def xor(
If an Expression instance is passed, this is used as-is.
dialect: the dialect used to parse the input expression.
copy: whether to copy `expressions` (only applies to Expressions).
+ wrap: whether to wrap the operands in `Paren`s. This is true by default to avoid
+ precedence issues, but can be turned off when the produced AST is too deep and
+ causes recursion-related issues.
**opts: other options to use to parse the input expressions.
Returns:
The new condition
"""
- return t.cast(Condition, _combine(expressions, Xor, dialect, copy=copy, **opts))
+ return t.cast(Condition, _combine(expressions, Xor, dialect, copy=copy, wrap=wrap, **opts))
def not_(expression: ExpOrStr, dialect: DialectType = None, copy: bool = True, **opts) -> Not:
| Excessive Recursion in Query Optimization with Multiple OR Clauses
## Context
We are encountering an issue where a query with a high number of OR operators is causing excessive recursion during the optimization phase. The resulting recursion depth leads to stack overflow errors. As a temporary workaround, we increased the stack size limit.
Despite the number of entries not being particularly high, we suspect that something in the optimization process is causing the recursion depth to increase unexpectedly.
## Reproducible example code snippet
```python
import sqlglot
import sqlglot.expressions as expressions
from sqlglot.expressions import column
is_equal_list = ['a'] * 500
is_equal = expressions.false()
for value in is_equal_list:
is_equal = is_equal.or_(column("a_column").eq(value))
```
If you try to access `is_equal`, you'll receive an error:
```python
is_equal
# throws
#sqlglot/expressions.py", line 256, in is_leaf
# return not any(isinstance(v, (Expression, list)) for v in self.args.values())
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
#RecursionError: maximum recursion depth exceeded
```
The default recursion depth is 1000.
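With the `wrap` parameter introduced in the patch above, the reproduction can build a flatter AST. A hedged sketch:
```python
import sqlglot.expressions as exp

cond = exp.false()
for value in ["a"] * 500:
    # wrap=False skips the Paren wrapper around each operand, roughly
    # halving the depth of the chained Or tree.
    cond = cond.or_(exp.column("a_column").eq(value), wrap=False)

# Generating SQL works; it was the recursive repr that hit the limit.
print(cond.sql()[:80])
```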
This is because the repr (to_s) shows the full nested tree; if you do `is_equal.sql()` it should be OK. | 1,731,329,118,000 | null | Bug Report | [
"sqlglot/expressions.py:Expression.and_",
"sqlglot/expressions.py:Expression.or_",
"sqlglot/expressions.py:_combine",
"sqlglot/expressions.py:xor"
] | [] | 4 | 585 |
|
flet-dev/flet | flet-dev__flet-4554 | be58db6a4120596c45172933432678105785d94a | diff --git a/sdk/python/packages/flet-cli/src/flet_cli/utils/project_dependencies.py b/sdk/python/packages/flet-cli/src/flet_cli/utils/project_dependencies.py
index 218705576..f39561bfc 100644
--- a/sdk/python/packages/flet-cli/src/flet_cli/utils/project_dependencies.py
+++ b/sdk/python/packages/flet-cli/src/flet_cli/utils/project_dependencies.py
@@ -14,29 +14,69 @@ def get_poetry_dependencies(
if poetry_dependencies is None:
return None
- def format_dependency_version(dependency: str, version_value: Any):
+ def format_dependency_version(dependency_name: str, dependency_value: Any):
+ sep = "@"
+ value = ""
suffix = ""
- if isinstance(version_value, dict):
- version = version_value["version"]
- markers = version_value.get("markers")
+
+ if isinstance(dependency_value, dict):
+ version = dependency_value.get("version")
+ if version:
+ sep = "=="
+ value = version
+ else:
+ git_url = dependency_value.get("git")
+ if git_url:
+ value = (
+ f"git+{git_url}" if not git_url.startswith("git@") else git_url
+ )
+ rev = (
+ dependency_value.get("branch")
+ or dependency_value.get("rev")
+ or dependency_value.get("tag")
+ )
+ if rev:
+ value = f"{value}@{rev}"
+ subdirectory = dependency_value.get("subdirectory")
+ if subdirectory:
+ value = f"{value}#subdirectory={subdirectory}"
+ else:
+ path = dependency_value.get("path")
+ if path:
+ value = path
+ dependency_name = ""
+ sep = ""
+ else:
+ url = dependency_value.get("url")
+ if url:
+ value = url
+ dependency_name = ""
+ sep = ""
+ else:
+ raise Exception(
+ f"Unsupported dependency specification: {dependency_name} = {dependency_value}"
+ )
+
+ # markers - common for all
+ markers = dependency_value.get("markers")
if markers is not None:
suffix = f";{markers}"
else:
- version = version_value
+ value = dependency_value
+ sep = "=="
- sep = "=="
- if version.startswith("^"):
+ if value.startswith("^"):
sep = ">="
- version = version[1:]
- elif version.startswith("~"):
+ value = value[1:]
+ elif value.startswith("~"):
sep = "~="
- version = version[1:]
- return f"{dependency}~={version[1:]}"
- elif "<" in version or ">" in version:
+ value = value[1:]
+ return f"{dependency_name}~={value[1:]}"
+ elif "<" in value or ">" in value:
sep = ""
- version = version.replace(" ", "")
+ value = value.replace(" ", "")
- return f"{dependency}{sep}{version}{suffix}"
+ return f"{dependency_name}{sep}{value}{suffix}"
dependencies: set[str] = {
format_dependency_version(dependency, version)
| `flet build` fails to parse file and git dependencies from `tool.poetry.dependencies` in `pyproject.toml`
### Discussed in https://github.com/flet-dev/flet/discussions/4546
<div type='discussions-op-text'>
<sup>Originally posted by **amcraig** December 11, 2024</sup>
### Question
Hi all,
I've tried including my python package(not on PyPi) through both relative paths to the whl/tar.gz and via git in both `requirements.txt` and in `pyproject.toml` (including poetry) but any attempts I do fail due to a `distutils Module not found` error or `KeyError: 'version'`.
Does anyone have a guaranteed way to provide a local/private python package to Flet in the build process?
Thanks!
### Code sample
```python
##### Pyproject Poetry
[tool.poetry]
name = "file_tracker"
version = "0.5.0"
description = "redacted"
authors = ["amcraig"]
[tool.poetry.dependencies]
python = "^3.10"
private_package = { git = "https://github.com/private/package.git" }
flet = "^0.25.1"
##### requirements.txt
python==3.10
flet
datateam @ git+https://github.com/private/package
```
### Error message
_No response_
### ------------------------------------------------------
- [X] I have searched for answers to my question both in the issues and in previous discussions.</div>
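As a hedged sketch of what the patched `format_dependency_version` produces for a git spec, the standalone helper below is illustrative and mirrors the git branch of the patch:
```python
def to_pip_requirement(name: str, spec: dict) -> str:
    # Mirrors the git-spec branch of the patched format_dependency_version.
    git_url = spec["git"]
    value = f"git+{git_url}" if not git_url.startswith("git@") else git_url
    rev = spec.get("branch") or spec.get("rev") or spec.get("tag")
    if rev:
        value = f"{value}@{rev}"
    return f"{name}@{value}"  # pip's "name @ url" direct-reference form

print(to_pip_requirement(
    "private_package", {"git": "https://github.com/private/package.git"}
))
# private_package@git+https://github.com/private/package.git
```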
| 1,734,034,325,000 | null | Bug Report | [
"sdk/python/packages/flet-cli/src/flet_cli/utils/project_dependencies.py:get_poetry_dependencies"
] | [] | 1 | 586 |
||
flet-dev/flet | flet-dev__flet-4452 | f62b5066ab79f3b99241e9c234baeac71fd60f95 | diff --git a/sdk/python/packages/flet-cli/src/flet_cli/commands/build.py b/sdk/python/packages/flet-cli/src/flet_cli/commands/build.py
index 0dcd8539a..212157549 100644
--- a/sdk/python/packages/flet-cli/src/flet_cli/commands/build.py
+++ b/sdk/python/packages/flet-cli/src/flet_cli/commands/build.py
@@ -1271,6 +1271,7 @@ def package_python_app(self):
assert self.options
assert self.get_pyproject
assert self.python_app_path
+ assert self.package_app_path
assert self.build_dir
assert self.flutter_dir
@@ -1282,7 +1283,7 @@ def package_python_app(self):
"run",
"serious_python:main",
"package",
- str(self.python_app_path),
+ str(self.package_app_path),
"--platform",
self.package_platform,
]
| `flet build` creates bundle but running it gives `ImportError: No module named main` error
### Duplicate Check
- [X] I have searched the [opened issues](https://github.com/flet-dev/flet/issues) and there are no duplicates
### Describe the bug
Traceback (most recent call last):
File "<string>", line 47, in <module>
File "<frozen runpy>", line 222, in run_module
File "<frozen runpy>", line 142, in _get_module_details
ImportError: No module named main
### Code sample
<details open><summary>Code</summary>
```python
print(error)
```
</details>
### To reproduce
...
### Expected behavior
_No response_
### Screenshots / Videos
<details open>
<summary>Captures</summary>

</details>
### Operating System
Windows
### Operating system details
11
### Flet version
0.25
### Regression
No, it isn't
### Suggestions
_No response_
### Additional details
_No response_
| What do you have in pyproject.toml and what is the file structure of your project?
### `pyproject.toml`
```toml
[project]
name = "weather-app"
version = "0.1.0"
description = ""
readme = "README.md"
requires-python = ">=3.8"
dependencies = [
"flet"
]
[tool.flet]
# org name in reverse domain name notation, e.g. "com.mycompany".
# Combined with project.name to build bundle ID for iOS and Android apps
org = "com.mycompany"
# project display name that is used as an app title on Android and iOS home screens,
# shown in window titles and about app dialogs on desktop.
product = "Weather Forcast"
# company name to display in about app dialogs
company = "Flet"
# copyright text to display in about app dialogs
copyright = "Copyright (C) 2024 by Flet"
[tool.flet.app]
path = "src"
```
### `Structure`
```
W:\dev-mobile\dev mobile (flet)\api-app\weather-app>flet build apk
[09:17:25] Created Flutter bootstrap project from gh:flet-dev/flet-build-template with ref 0.25.0 ✅
Customized app icons and splash images ✅
[09:18:41] Generated app icons ✅
[09:18:51] Generated splash screens ✅
[09:21:48] Packaged Python app ✅
[09:30:39] Built .apk for Android ✅
Copied build to build\apk directory ✅
Successfully built your .apk for Android! 🥳 Find it in build\apk directory. 📁
```



I'm having the same problem.
I created a basic project and built it right away, and it runs fine with flet run, but I get an error after building.
win11, 0.25.0
Commands used
```
mkdir flet-test
cd flet-test
flet create .
flet run
flet build windows -vv
cd flet build/windows
flet-test.exe
```
Error Description
```
Traceback (most recent call last):
File "<string>", line 47, in <module>
File "<frozen runpy>", line 222, in run_module
File "<frozen runpy>", line 142, in _get_module_details
ImportError: No module named main
```
I filmed a video of the process.
https://github.com/user-attachments/assets/c6033991-4dbb-4967-9206-2f8833cd2640
The terminal log is here.
[ps-log.txt](https://github.com/user-attachments/files/17961394/ps-log.txt)
Also, existing apps written with Flet hit the same error after updating from 0.24.1 to 0.25.0.
Thanks for the additional details. I'm on it. | 1,732,904,343,000 | null | Bug Report | [
"sdk/python/packages/flet-cli/src/flet_cli/commands/build.py:Command.package_python_app"
] | [] | 1 | 587 |