Dataset columns and value ranges:

| column | dtype | min | max |
| --- | --- | --- | --- |
| repo | string (length) | 5 | 51 |
| instance_id | string (length) | 11 | 56 |
| base_commit | string (length) | 40 | 40 |
| patch | string (length) | 400 | 333k |
| test_patch | string (length) | 0 | 895k |
| problem_statement | string (length) | 27 | 55.6k |
| hints_text | string (length) | 0 | 72k |
| created_at | int64 | 1,447B | 1,739B |
| labels | sequence (length) | 0 | 7 |
| category | string (4 classes) | | |
| edit_functions | sequence (length) | 1 | 10 |
| added_functions | sequence (length) | 0 | 20 |
| edit_functions_length | int64 | 1 | 10 |
yt-dlp/yt-dlp
yt-dlp__yt-dlp-11596
f9197295388b44ee0a8992cb00f361c7ef42acdb
diff --git a/yt_dlp/extractor/stripchat.py b/yt_dlp/extractor/stripchat.py index 31c8afbc6268..84846042f38f 100644 --- a/yt_dlp/extractor/stripchat.py +++ b/yt_dlp/extractor/stripchat.py @@ -28,24 +28,21 @@ class StripchatIE(InfoExtractor): def _real_extract(self, url): video_id = self._match_id(url) webpage = self._download_webpage(url, video_id, headers=self.geo_verification_headers()) + data = self._search_json( + r'<script\b[^>]*>\s*window\.__PRELOADED_STATE__\s*=', + webpage, 'data', video_id, transform_source=lowercase_escape) - data = self._parse_json( - self._search_regex( - r'<script\b[^>]*>\s*window\.__PRELOADED_STATE__\s*=(?P<value>.*?)<\/script>', - webpage, 'data', default='{}', group='value'), - video_id, transform_source=lowercase_escape, fatal=False) - if not data: - raise ExtractorError('Unable to find configuration for stream.') - - if traverse_obj(data, ('viewCam', 'show'), expected_type=dict): - raise ExtractorError('Model is in private show', expected=True) - elif not traverse_obj(data, ('viewCam', 'model', 'isLive'), expected_type=bool): + if traverse_obj(data, ('viewCam', 'show', {dict})): + raise ExtractorError('Model is in a private show', expected=True) + if not traverse_obj(data, ('viewCam', 'model', 'isLive', {bool})): raise UserNotLive(video_id=video_id) - model_id = traverse_obj(data, ('viewCam', 'model', 'id'), expected_type=int) + model_id = data['viewCam']['model']['id'] formats = [] - for host in traverse_obj(data, ('config', 'data', ( + # HLS hosts are currently found in .configV3.static.features.hlsFallback.fallbackDomains[] + # The rest of the path is for backwards compatibility and to guard against A/B testing + for host in traverse_obj(data, ((('config', 'data'), ('configV3', 'static')), ( (('features', 'featuresV2'), 'hlsFallback', 'fallbackDomains', ...), 'hlsStreamHost'))): formats = self._extract_m3u8_formats( f'https://edge-hls.{host}/hls/{model_id}/master/{model_id}_auto.m3u8', @@ -53,7 +50,7 @@ def _real_extract(self, url): if formats: break if not formats: - self.raise_no_formats('No active streams found', expected=True) + self.raise_no_formats('Unable to extract stream host', video_id=video_id) return { 'id': video_id,
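The patch above swaps a `_parse_json`/`_search_regex` pair for `_search_json` and widens the `traverse_obj` path so both the legacy `config.data` and the current `configV3.static` locations of the HLS fallback domains are tried. A minimal standalone sketch of the same extraction flow, outside the extractor framework (the sample HTML and function name are illustrative only; the real page data can be escaped, which `lowercase_escape` handles in yt-dlp):

```python
import json
import re

# Illustrative snippet of the markup the extractor parses; the real
# window.__PRELOADED_STATE__ object is far larger and may be escaped.
SAMPLE_HTML = (
    '<script>window.__PRELOADED_STATE__='
    '{"viewCam":{"model":{"id":164812713,"isLive":true}},'
    '"configV3":{"static":{"features":{"hlsFallback":'
    '{"fallbackDomains":["doppiocdn.live","doppiocdn.com"]}}}}}</script>'
)

def hls_master_candidates(webpage):
    # Pull the JSON assigned to window.__PRELOADED_STATE__ out of the page
    match = re.search(r'window\.__PRELOADED_STATE__\s*=\s*(\{.*?\})\s*</script>', webpage)
    if not match:
        raise ValueError('Unable to find stream configuration')
    data = json.loads(match.group(1))

    model = data['viewCam']['model']
    if not model.get('isLive'):
        raise ValueError('Model is not live')

    # Try every fallback CDN host, mirroring the widened traversal in the patch
    domains = (data.get('configV3', {}).get('static', {})
               .get('features', {}).get('hlsFallback', {}).get('fallbackDomains', []))
    return [f'https://edge-hls.{host}/hls/{model["id"]}/master/{model["id"]}_auto.m3u8'
            for host in domains]

print(hls_master_candidates(SAMPLE_HTML))
```

In the actual extractor, `_search_json` plus `traverse_obj` does this with proper error reporting, and the first host whose master playlist yields formats wins.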
stripchat extractor not working: "No active stream found" ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE - [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field ### Checklist - [X] I'm reporting that yt-dlp is broken on a **supported** site - [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels)) - [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details - [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command) - [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates - [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue) - [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required ### Region Israel ### Provide a description that is worded well enough to be understood When trying to download a stream from stripchat.com yt-dlp gives back "No active stream" even though a stream is live at the same time. If I paste the m3u8 of the same stream then yt-dlp works and downloads the stream. sidenote: I tried to set my yt-dlp to nightly but can't because I downloaded it from homebrew. and using yt-dlp --update-to nightly gives back: "ERROR: You installed yt-dlp from a manual build or with a package manager; Use that to update" and no documentation on how to exactly do that. 
### Provide verbose output that clearly demonstrates the problem - [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`) - [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead - [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below ### Complete Verbose Output ```shell [debug] Command-line config: ['-vU', 'https://stripchat.com/MagicLilu'] [debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8 [debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [7ea278792] [debug] Python 3.13.0 (CPython arm64 64bit) - macOS-14.6.1-arm64-arm-64bit-Mach-O (OpenSSL 3.4.0 22 Oct 2024) [debug] exe versions: ffmpeg 7.1 (setts), ffprobe 7.1 [debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, mutagen-1.47.0, requests-2.32.3, sqlite3-3.47.0, urllib3-2.2.3, websockets-13.1 [debug] Proxy map: {} [debug] Request Handlers: urllib, requests, websockets [debug] Loaded 1837 extractors [debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest Latest version: [email protected] from yt-dlp/yt-dlp yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp) [Stripchat] Extracting URL: https://stripchat.com/MagicLilu [Stripchat] MagicLilu: Downloading webpage ERROR: [Stripchat] MagicLilu: No active streams found File "/opt/homebrew/Cellar/yt-dlp/HEAD-f919729/libexec/lib/python3.13/site-packages/yt_dlp/extractor/common.py", line 742, in extract ie_result = self._real_extract(url) File "/opt/homebrew/Cellar/yt-dlp/HEAD-f919729/libexec/lib/python3.13/site-packages/yt_dlp/extractor/stripchat.py", line 56, in _real_extract self.raise_no_formats('No active streams found', expected=True) ~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/homebrew/Cellar/yt-dlp/HEAD-f919729/libexec/lib/python3.13/site-packages/yt_dlp/extractor/common.py", line 1276, in raise_no_formats raise ExtractorError(msg, expected=expected, video_id=video_id) ```
Looks like they're using .live as a host now https://edge-hls.doppiocdn.com/hls/164812713/master/164812713_auto.m3u8 doesn't work but https://edge-hls.doppiocdn.live/hls/164812713/master/164812713_auto.m3u8 does so Stripchat extractor needs .live as a fallback I think. Also is there a way to also use xHamsterLive as it uses the exact same manifests but doesn't work because of domain name.
1,732,141,453,000
null
Bug Report
[ "yt_dlp/extractor/stripchat.py:StripchatIE._real_extract" ]
[]
1
yt-dlp/yt-dlp
yt-dlp__yt-dlp-11555
f2a4983df7a64c4e93b56f79dbd16a781bd90206
diff --git a/yt_dlp/extractor/chaturbate.py b/yt_dlp/extractor/chaturbate.py index 864d61f9c2b8..aa70f26a1bcb 100644 --- a/yt_dlp/extractor/chaturbate.py +++ b/yt_dlp/extractor/chaturbate.py @@ -5,6 +5,7 @@ ExtractorError, lowercase_escape, url_or_none, + urlencode_postdata, ) @@ -40,14 +41,48 @@ class ChaturbateIE(InfoExtractor): 'only_matching': True, }] - _ROOM_OFFLINE = 'Room is currently offline' + _ERROR_MAP = { + 'offline': 'Room is currently offline', + 'private': 'Room is currently in a private show', + 'away': 'Performer is currently away', + 'password protected': 'Room is password protected', + 'hidden': 'Hidden session in progress', + } - def _real_extract(self, url): - video_id, tld = self._match_valid_url(url).group('id', 'tld') + def _extract_from_api(self, video_id, tld): + response = self._download_json( + f'https://chaturbate.{tld}/get_edge_hls_url_ajax/', video_id, + data=urlencode_postdata({'room_slug': video_id}), + headers={ + **self.geo_verification_headers(), + 'X-Requested-With': 'XMLHttpRequest', + 'Accept': 'application/json', + }, fatal=False, impersonate=True) or {} + + status = response.get('room_status') + if status != 'public': + if error := self._ERROR_MAP.get(status): + raise ExtractorError(error, expected=True) + self.report_warning('Falling back to webpage extraction') + return None + + m3u8_url = response.get('url') + if not m3u8_url: + self.raise_geo_restricted() + + return { + 'id': video_id, + 'title': video_id, + 'thumbnail': f'https://roomimg.stream.highwebmedia.com/ri/{video_id}.jpg', + 'is_live': True, + 'age_limit': 18, + 'formats': self._extract_m3u8_formats(m3u8_url, video_id, ext='mp4', live=True), + } + def _extract_from_webpage(self, video_id, tld): webpage = self._download_webpage( f'https://chaturbate.{tld}/{video_id}/', video_id, - headers=self.geo_verification_headers()) + headers=self.geo_verification_headers(), impersonate=True) found_m3u8_urls = [] @@ -85,8 +120,8 @@ def _real_extract(self, url): webpage, 'error', group='error', default=None) if not error: if any(p in webpage for p in ( - self._ROOM_OFFLINE, 'offline_tipping', 'tip_offline')): - error = self._ROOM_OFFLINE + self._ERROR_MAP['offline'], 'offline_tipping', 'tip_offline')): + error = self._ERROR_MAP['offline'] if error: raise ExtractorError(error, expected=True) raise ExtractorError('Unable to find stream URL') @@ -113,3 +148,7 @@ def _real_extract(self, url): 'is_live': True, 'formats': formats, } + + def _real_extract(self, url): + video_id, tld = self._match_valid_url(url).group('id', 'tld') + return self._extract_from_api(video_id, tld) or self._extract_from_webpage(video_id, tld)
[Chaturbate] Consider using the API ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE - [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field ### Checklist - [X] I'm requesting a site-specific feature - [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels)) - [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details - [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates - [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue) - [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required ### Region _No response_ ### Example URLs https://chaturbate.com/kira_censori/ ### Provide a description that is worded well enough to be understood Currently yt-dlp loads the entire page to find the m3u8 stream URL, but another way is to use the API: ```sh $ curl -X POST "https://chaturbate.com/get_edge_hls_url_ajax/" -H "X-Requested-With: XMLHttpRequest" -d "room_slug=kira_censori" ``` ```json { "success": true, "url": "https://edge17-hel.live.mmcdn.com/live-hls/amlst:kira_censori-sd-203fe4e99b463f0b5013d75b7f491286d7f8cbdad109cef79db409bfc80e33d0_trns_h264/playlist.m3u8", "room_status": "public", "hidden_message": "" } ``` This endpoint provides the same m3u8 stream URL that is embedded in HTML (specifically in `window.initialRoomDossier`). The advantage is that this is ~500 times smaller in size compared to downloading the entire HTML page and simplifies error handling. Here is a rough sequence of actions: ``` if "success": - true: if "room_status": - "public": if "url": - [m3u8 stream url] - "": [room is geo-blocked] - something else: [room is private or offline] - false: [room doesn't exist] ``` All possible `room_status` values can be found [here](https://devportal.cb.dev/wiki/api/$room#roomstatus-string). Not sure what we need to tell the user in non-`public` cases, above is just an example. Maybe someday I'll do a PR if I have time. What do you think about that? ### Provide verbose output that clearly demonstrates the problem - [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`) - [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead - [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below ### Complete Verbose Output _No response_
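A rough sketch of the endpoint usage described in the issue, following its decision tree (the URL, form field, and response keys are quoted from the issue; the error wording is a placeholder, and the merged patch additionally impersonates a browser, so a plain `requests` call may still be challenged):

```python
import requests

def chaturbate_hls_url(room_slug, tld='com'):
    # POST the room slug to the AJAX endpoint shown in the issue's curl example
    resp = requests.post(
        f'https://chaturbate.{tld}/get_edge_hls_url_ajax/',
        data={'room_slug': room_slug},
        headers={'X-Requested-With': 'XMLHttpRequest', 'Accept': 'application/json'},
        timeout=10,
    )
    info = resp.json()

    if not info.get('success'):
        raise RuntimeError('Room does not exist')             # success: false
    if info.get('room_status') != 'public':
        raise RuntimeError(f'Room is not public: {info.get("room_status")!r}')
    if not info.get('url'):
        raise RuntimeError('Room appears to be geo-blocked')  # public status but empty url
    return info['url']  # master m3u8 playlist URL

# Example (assumes the room is currently live and public):
# print(chaturbate_hls_url('kira_censori'))
```

The merged patch keeps the webpage path as a fallback for any non-public status it does not recognize.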
1,731,692,715,000
null
Feature Request
[ "yt_dlp/extractor/chaturbate.py:ChaturbateIE._real_extract" ]
[ "yt_dlp/extractor/chaturbate.py:ChaturbateIE._extract_from_api", "yt_dlp/extractor/chaturbate.py:ChaturbateIE._extract_from_webpage" ]
1
yt-dlp/yt-dlp
yt-dlp__yt-dlp-11545
c014fbcddcb4c8f79d914ac5bb526758b540ea33
diff --git a/yt_dlp/YoutubeDL.py b/yt_dlp/YoutubeDL.py index 3186a999deb8..3130deda3162 100644 --- a/yt_dlp/YoutubeDL.py +++ b/yt_dlp/YoutubeDL.py @@ -4381,7 +4381,9 @@ def _write_thumbnails(self, label, info_dict, filename, thumb_filename_base=None return None for idx, t in list(enumerate(thumbnails))[::-1]: - thumb_ext = (f'{t["id"]}.' if multiple else '') + determine_ext(t['url'], 'jpg') + thumb_ext = t.get('ext') or determine_ext(t['url'], 'jpg') + if multiple: + thumb_ext = f'{t["id"]}.{thumb_ext}' thumb_display_id = f'{label} thumbnail {t["id"]}' thumb_filename = replace_extension(filename, thumb_ext, info_dict.get('ext')) thumb_filename_final = replace_extension(thumb_filename_base, thumb_ext, info_dict.get('ext')) diff --git a/yt_dlp/extractor/afreecatv.py b/yt_dlp/extractor/afreecatv.py index 6682a898179c..572d1a38936b 100644 --- a/yt_dlp/extractor/afreecatv.py +++ b/yt_dlp/extractor/afreecatv.py @@ -66,6 +66,14 @@ def _call_api(self, endpoint, display_id, data=None, headers=None, query=None): extensions={'legacy_ssl': True}), display_id, 'Downloading API JSON', 'Unable to download API JSON') + @staticmethod + def _fixup_thumb(thumb_url): + if not url_or_none(thumb_url): + return None + # Core would determine_ext as 'php' from the url, so we need to provide the real ext + # See: https://github.com/yt-dlp/yt-dlp/issues/11537 + return [{'url': thumb_url, 'ext': 'jpg'}] + class AfreecaTVIE(AfreecaTVBaseIE): IE_NAME = 'soop' @@ -155,7 +163,7 @@ def _real_extract(self, url): 'uploader': ('writer_nick', {str}), 'uploader_id': ('bj_id', {str}), 'duration': ('total_file_duration', {int_or_none(scale=1000)}), - 'thumbnail': ('thumb', {url_or_none}), + 'thumbnails': ('thumb', {self._fixup_thumb}), }) entries = [] @@ -226,8 +234,7 @@ def _real_extract(self, url): return self.playlist_result(self._entries(data), video_id) - @staticmethod - def _entries(data): + def _entries(self, data): # 'files' is always a list with 1 element yield from traverse_obj(data, ( 'data', lambda _, v: v['story_type'] == 'catch', @@ -238,7 +245,7 @@ def _entries(data): 'title': ('title', {str}), 'uploader': ('writer_nick', {str}), 'uploader_id': ('writer_id', {str}), - 'thumbnail': ('thumb', {url_or_none}), + 'thumbnails': ('thumb', {self._fixup_thumb}), 'timestamp': ('write_timestamp', {int_or_none}), })) diff --git a/yt_dlp/extractor/common.py b/yt_dlp/extractor/common.py index 01915acf23be..23f6fc6c460f 100644 --- a/yt_dlp/extractor/common.py +++ b/yt_dlp/extractor/common.py @@ -279,6 +279,7 @@ class InfoExtractor: thumbnails: A list of dictionaries, with the following entries: * "id" (optional, string) - Thumbnail format ID * "url" + * "ext" (optional, string) - actual image extension if not given in URL * "preference" (optional, int) - quality of the image * "width" (optional, int) * "height" (optional, int)
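The underlying problem is that the SOOP thumbnail URLs point at a `.php` endpoint, so deriving the extension from the URL yields `php` and the extension sanitizer rejects it. A small illustration of the precedence the patch introduces (`determine_ext` is the real yt-dlp helper; the sample URL here is made up for demonstration):

```python
from yt_dlp.utils import determine_ext

# Thumbnail served through a PHP script: the URL suffix is misleading
thumb = {'url': 'https://example.sooplive.co.kr/thumbs/thumbnail.php?key=136504897', 'ext': 'jpg'}

# Old behaviour: derive the extension from the URL, falling back to 'jpg'
# only when the URL carries no extension at all
print(determine_ext(thumb['url'], 'jpg'))                      # -> 'php'

# Patched behaviour: prefer an explicit 'ext' supplied by the extractor
print(thumb.get('ext') or determine_ext(thumb['url'], 'jpg'))  # -> 'jpg'
```

The `_fixup_thumb` helper in the patch simply wraps the URL in a single-element `thumbnails` list with `'ext': 'jpg'`, which `_write_thumbnails` now honours.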
[soop] ERROR: unsafe file extension: 'php' when downloading thumbnail ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE - [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field ### Checklist - [X] I'm reporting that yt-dlp is broken on a **supported** site - [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels)) - [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details - [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command) - [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates - [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue) - [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required ### Region _No response_ ### Provide a description that is worded well enough to be understood The thumbnail file is actually jpeg, but yt-dlp tries to download it as php. So it leads ERROR: unsafe file extension: 'php' `yt-dlp --skip-download --write-thumbnail -vU https://vod.sooplive.co.kr/player/136504897` ### Provide verbose output that clearly demonstrates the problem - [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`) - [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead - [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below ### Complete Verbose Output ```shell [debug] Command-line config: ['--skip-download', '--write-thumbnail', '-vU', 'https://vod.sooplive.co.kr/player/136504897'] [debug] Encodings: locale cp949, fs utf-8, pref cp949, out utf-8, error utf-8, screen utf-8 [debug] yt-dlp version [email protected] from yt-dlp/yt-dlp-master-builds [f2a4983df] (win_exe) [debug] Python 3.10.11 (CPython AMD64 64bit) - Windows-10-10.0.26100-SP0 (OpenSSL 1.1.1t 7 Feb 2023) [debug] exe versions: ffmpeg 7.1-full_build-www.gyan.dev (setts), ffprobe 7.1-full_build-www.gyan.dev [debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.40.1, urllib3-2.2.3, websockets-13.1 [debug] Proxy map: {} [debug] Request Handlers: urllib, requests, websockets, curl_cffi [debug] Loaded 1838 extractors [debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-master-builds/releases/latest Latest version: [email protected] from yt-dlp/yt-dlp-master-builds yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp-master-builds) [soop] Extracting URL: https://vod.sooplive.co.kr/player/136504897 [soop] 136504897: Downloading API JSON [soop] 136504897: Downloading part 1 m3u8 information [soop] 136504897: Downloading part 2 m3u8 information [download] Downloading multi_video: [동출 99일차/유입 환영!]추석특집 신창섭 월드컵 엄선작 ERROR: unsafe file extension: 'php' Traceback (most recent call last): File "yt_dlp\YoutubeDL.py", line 1625, in wrapper File "yt_dlp\YoutubeDL.py", line 
1781, in __extract_info File "yt_dlp\YoutubeDL.py", line 1910, in process_ie_result File "yt_dlp\YoutubeDL.py", line 2010, in __process_playlist File "yt_dlp\YoutubeDL.py", line 4386, in _write_thumbnails File "yt_dlp\utils\_utils.py", line 2149, in _change_extension File "yt_dlp\utils\_utils.py", line 5232, in sanitize_extension yt_dlp.utils._UnsafeExtensionError: unsafe file extension: 'php' ```
This block, after assigning `common_info`, should prevent the problem (also import `utils.urlhandle_detect_ext`): ``` thumbnail = common_info.pop('thumbnail', None) if thumbnail: common_info['thumbnails'] = [{ 'url': thumbnail, 'ext': urlhandle_detect_ext(self._request_webpage( HEADRequest(thumbnail), video_id, note='Getting thumbnail ext')), }] ``` But actually, since `check_thumbnails()` within `YoutubeDL._sanitize_thumbnails()` is already fetching each thumbnail URL, shouldn't it be doing something like the above to set default extensions? And then extractors wouldn't need to do so. The above patch doesn't work
1,731,622,771,000
null
Bug Report
[ "yt_dlp/YoutubeDL.py:YoutubeDL._write_thumbnails", "yt_dlp/extractor/afreecatv.py:AfreecaTVIE._real_extract", "yt_dlp/extractor/afreecatv.py:AfreecaTVCatchStoryIE._entries" ]
[ "yt_dlp/extractor/afreecatv.py:AfreecaTVBaseIE._fixup_thumb" ]
3
yt-dlp/yt-dlp
yt-dlp__yt-dlp-11542
f2a4983df7a64c4e93b56f79dbd16a781bd90206
diff --git a/yt_dlp/extractor/spankbang.py b/yt_dlp/extractor/spankbang.py index 6805a72deb7b..05f0bb1468ed 100644 --- a/yt_dlp/extractor/spankbang.py +++ b/yt_dlp/extractor/spankbang.py @@ -71,9 +71,11 @@ class SpankBangIE(InfoExtractor): def _real_extract(self, url): mobj = self._match_valid_url(url) video_id = mobj.group('id') or mobj.group('id_2') + country = self.get_param('geo_bypass_country') or 'US' + self._set_cookie('.spankbang.com', 'country', country.upper()) webpage = self._download_webpage( url.replace(f'/{video_id}/embed', f'/{video_id}/video'), - video_id, headers={'Cookie': 'country=US'}) + video_id, impersonate=True) if re.search(r'<[^>]+\b(?:id|class)=["\']video_removed', webpage): raise ExtractorError(
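With this patch the hardcoded `Cookie: country=US` header becomes a cookie-jar entry derived from the geo-bypass country, and the webpage request is impersonated. A hedged usage sketch via the embedding API (assuming the patched extractor and an impersonation backend such as curl_cffi are installed; the URL is the one from the report):

```python
import yt_dlp

opts = {
    # Becomes the value of the 'country' cookie the patched extractor sets on .spankbang.com
    'geo_bypass_country': 'US',
    'skip_download': True,
    'quiet': True,
}
with yt_dlp.YoutubeDL(opts) as ydl:
    info = ydl.extract_info(
        'https://spankbang.com/6c6z5/video/fan+appreciation+event+english+sub',
        download=False)
    print(info.get('title'))
    print([f.get('format_id') for f in info.get('formats', [])])
```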
spankbang - 403 Forbidden errors ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE - [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field ### Checklist - [X] I'm reporting that a **supported** site is broken - [X] I've verified that I'm running yt-dlp version **2023.03.04** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit) - [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details - [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command) - [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates - [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue) - [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required ### Region USA ### Provide a description that is worded well enough to be understood All video url's from Spankbang are returning 403 forbidden errors. I have confirmed that they load and play in the browser just fine. Verbose output is provided. My `yt-dlp` version is completely up to date. ### Provide verbose output that clearly demonstrates the problem - [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`) - [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead - [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below ### Complete Verbose Output ```shell > yt-dlp -vU https://spankbang.com/6c6z5/video/fan+appreciation+event+english+sub [debug] Command-line config: ['-vU', 'https://spankbang.com/6c6z5/video/fan+appreciation+event+english+sub'] [debug] Encodings: locale UTF-8, fs utf-8, out utf-8, err utf-8, pref UTF-8 [debug] yt-dlp version 2022.02.04 [c1653e9ef] (zip) [debug] Python version 3.8.10 (CPython 64bit) - Linux-4.4.0-19041-Microsoft-x86_64-with-glibc2.29 [debug] exe versions: ffmpeg 4.2.4, ffprobe 4.2.4 [debug] Optional libraries: Cryptodome, secretstorage, mutagen, sqlite, websockets [debug] Proxy map: {} Latest version: 2023.03.04, Current version: 2022.02.04 Current Build Hash a16fe3b3bd474d562c4b8645579b209377b967d58d4edffe6e31dc8de81d7283 Updating to version 2023.03.04 ... ERROR: Unable to write to /usr/local/bin/yt-dlp; Try running as administrator [debug] [SpankBang] Extracting URL: https://spankbang.com/6c6z5/video/fan+appreciation+event+english+sub [SpankBang] 6c6z5: Downloading webpage ERROR: [SpankBang] 6c6z5: Unable to download webpage: HTTP Error 403: Forbidden (caused by <HTTPError 403: 'Forbidden'>); please report this issue on https://github.com/yt-dlp/yt-dlp , filling out the "Broken site" issue template properly. Confirm you are on the latest version using -U (caused by <HTTPError 403: 'Forbidden'>); please report this issue on https://github.com/yt-dlp/yt-dlp , filling out the "Broken site" issue template properly. 
Confirm you are on the latest version using -U File "/usr/local/bin/yt-dlp/yt_dlp/extractor/common.py", line 730, in _request_webpage return self._downloader.urlopen(url_or_request) File "/usr/local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 3558, in urlopen return self._opener.open(req, timeout=self._socket_timeout) File "/usr/lib/python3.8/urllib/request.py", line 531, in open response = meth(req, response) File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response response = self.parent.error( File "/usr/lib/python3.8/urllib/request.py", line 569, in error return self._call_chain(*args) File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain result = func(*args) File "/usr/lib/python3.8/urllib/request.py", line 649, in http_error_default raise HTTPError(req.full_url, code, msg, hdrs, fp) ```
> My yt-dlp version is completely up to date. > [debug] yt-dlp version **2022.02.04** [c1653e9ef] (zip) > Latest version: **2023.03.04**, Current version: **2022.02.04** > Updating to version 2023.03.04 ... > **ERROR: Unable to write to /usr/local/bin/yt-dlp; Try running as administrator** I see in the logs that it shows my `yt-dlp` version as 2022.02.04 but when I try to update using python, it says that I'm up to date already. And when I try to update via `yt-dlp -U`: ``` Available version: [email protected], Current version: [email protected] Current Build Hash: 91cad9f121c1f6f0a81b747415c46ecba0ff331ed38cc6433040b4ac7b6e15ca yt-dlp is up to date ([email protected]) ``` ``` > yt-dlp --version 2023.03.04 ``` The log in the OP most definitely is **not** from version 2023.03.04. Are you sure you're not running two different versions? The first log looks like it's being run in WSL? > The log in the OP most definitely is **not** from version 2023.03.04. Are you sure you're not running two different versions? The first log looks like it's being run in WSL? Yep, looks like I was. I ran `pip3 uninstall yt-dlp` but the command was still accessible. So looks like I had two installations going. Removing the python version got everything working. Thanks, feel free to close this issue. Hi @bashonly , I use the master version will get 403 too ```bash [debug] Command-line config: ['-v', '--proxy', 'http://127.0.0.1:1080', 'https://spankbang.com/81sy6/video/japanese'] [debug] Encodings: locale cp936, fs utf-8, pref cp936, out utf-8, error utf-8, screen utf-8 [debug] yt-dlp version [email protected] [392389b7d] [debug] Lazy loading extractors is disabled [debug] Python 3.8.10 (CPython AMD64 32bit) - Windows-10-10.0.22621-SP0 (OpenSSL 1.1.1k 25 Mar 2021) [debug] exe versions: ffmpeg 4.2.1 (fdk), ffprobe 4.2.1 [debug] Optional libraries: Cryptodome-3.17, brotli-1.0.9, certifi-2020.12.05, mutagen-1.46.0, sqlite3-2.6.0, websockets-10.4 [debug] Proxy map: {'http': 'http://127.0.0.1:1080', 'https': 'http://127.0.0.1:1080'} [debug] Loaded 1791 extractors [SpankBang] Extracting URL: https://spankbang.com/81sy6/video/japanese [SpankBang] 81sy6: Downloading webpage ERROR: [SpankBang] 81sy6: Unable to download webpage: HTTP Error 403: Forbidden (caused by <HTTPError 403: 'Forbidden'>); please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. 
Confirm you are on the latest version using yt-dlp -U File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\site-packages\yt_dlp-2023.3.4-py3.8.egg\yt_dlp\extractor\common.py", line 694, in extract ie_result = self._real_extract(url) File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\site-packages\yt_dlp-2023.3.4-py3.8.egg\yt_dlp\extractor\spankbang.py", line 74, in _real_extract webpage = self._download_webpage( File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\site-packages\yt_dlp-2023.3.4-py3.8.egg\yt_dlp\extractor\common.py", line 1100, in _download_webpage return self.__download_webpage(url_or_request, video_id, note, errnote, None, fatal, *args, **kwargs) File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\site-packages\yt_dlp-2023.3.4-py3.8.egg\yt_dlp\extractor\common.py", line 1051, in download_content res = getattr(self, download_handle.__name__)(url_or_request, video_id, **kwargs) File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\site-packages\yt_dlp-2023.3.4-py3.8.egg\yt_dlp\extractor\common.py", line 885, in _download_webpage_handle urlh = self._request_webpage(url_or_request, video_id, note, errnote, fatal, data=data, headers=headers, query=query, expected_status=expected_status) File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\site-packages\yt_dlp-2023.3.4-py3.8.egg\yt_dlp\extractor\common.py", line 842, in _request_webpage raise ExtractorError(errmsg, cause=err) File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\site-packages\yt_dlp-2023.3.4-py3.8.egg\yt_dlp\extractor\common.py", line 824, in _request_webpage return self._downloader.urlopen(self._create_request(url_or_request, data, headers, query)) File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\site-packages\yt_dlp-2023.3.4-py3.8.egg\yt_dlp\YoutubeDL.py", line 3745, in urlopen return self._opener.open(req, timeout=self._socket_timeout) File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\urllib\request.py", line 531, in open response = meth(req, response) File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\urllib\request.py", line 640, in http_response response = self.parent.error( File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\urllib\request.py", line 569, in error return self._call_chain(*args) File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\urllib\request.py", line 502, in _call_chain result = func(*args) File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\urllib\request.py", line 649, in http_error_default raise HTTPError(req.full_url, code, msg, hdrs, fp) urllib.error.HTTPError: HTTP Error 403: Forbidden ``` but the 2022.07.18 on my WSL will succeed ```bash [debug] Command-line config: ['-v', '--proxy', 'http://127.0.0.1:1080', 'https://spankbang.com/81sy6/video/japanese'] [debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out UTF-8, error UTF-8, screen UTF-8 [debug] yt-dlp version 2022.07.18 [135f05ef6] [debug] Lazy loading extractors is disabled [debug] Python 3.7.5 (CPython 64bit) - Linux-4.4.0-22621-Microsoft-x86_64-with-Ubuntu-18.04-bionic (glibc 2.26) [debug] Checking exe version: ffmpeg -bsfs [debug] Checking exe version: ffprobe -bsfs [debug] exe versions: ffmpeg 3.4.8, ffprobe 3.4.8 [debug] Optional libraries: Cryptodome-3.15.0, brotli-1.0.9, certifi-2020.12.05, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3 [debug] Proxy map: {'http': 'http://127.0.0.1:1080', 'https': 'http://127.0.0.1:1080'} [debug] 
[SpankBang] Extracting URL: https://spankbang.com/81sy6/video/japanese [SpankBang] 81sy6: Downloading webpage [SpankBang] 81sy6: Downloading stream JSON [SpankBang] 81sy6: Downloading m3u8 information [SpankBang] 81sy6: Downloading m3u8 information [SpankBang] 81sy6: Downloading m3u8 information [SpankBang] 81sy6: Downloading m3u8 information [SpankBang] 81sy6: Downloading m3u8 information [debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), acodec, filesize, fs_approx, tbr, vbr, abr, asr, proto, vext, aext, hasaud, source, id [debug] Default format spec: bestvideo*+bestaudio/best [info] 81sy6: Downloading 1 format(s): hls-2564-1 [debug] Invoking hlsnative downloader on "https://hls-uranus.sb-cd.com/hls/1/3/13521102-1080p.mp4/index-v1-a1.m3u8?_tid=13521102&d=1&m=43&secure=nxKExkSSNg5q0juEWzONGA,1680835039" [hlsnative] Downloading m3u8 manifest [hlsnative] Total fragments: 810 [download] Destination: Japanese [81sy6].mp4 [download] 0.5% of ~662.91MiB at 151.59KiB/s ETA 27:28 (frag 5/810) ``` I can't reproduce the 403. Maybe it's due to a change in network/proxy code? ``` $ yt-dlp --ignore-config -vF "https://spankbang.com/81sy6/video/japanese" [debug] Command-line config: ['--ignore-config', '-vF', 'https://spankbang.com/81sy6/video/japanese'] [debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8 [debug] yt-dlp version [email protected] [392389b7d] [debug] Lazy loading extractors is disabled [debug] Python 3.10.10 (CPython x86_64 64bit) - Linux-6.2.8-arch1-1-x86_64-with-glibc2.37 (OpenSSL 3.0.8 7 Feb 2023, glibc 2.37) [debug] exe versions: ffmpeg 6.0 (setts), ffprobe 6.0 [debug] Optional libraries: Cryptodome-3.12.0, brotlicffi-1.0.9.2, certifi-2022.12.07, mutagen-1.46.0, sqlite3-2.6.0, websockets-10.4 [debug] Proxy map: {} [debug] Loaded 1791 extractors [SpankBang] Extracting URL: https://spankbang.com/81sy6/video/japanese [SpankBang] 81sy6: Downloading webpage [SpankBang] 81sy6: Downloading stream JSON [SpankBang] 81sy6: Downloading m3u8 information [SpankBang] 81sy6: Downloading m3u8 information [SpankBang] 81sy6: Downloading m3u8 information [SpankBang] 81sy6: Downloading m3u8 information [SpankBang] 81sy6: Downloading m3u8 information [debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, filesize, fs_approx, tbr, vbr, abr, asr, proto, vext, aext, hasaud, source, id [info] Available formats for 81sy6: ID EXT RESOLUTION FPS │ FILESIZE TBR PROTO │ VCODEC VBR ACODEC ABR ──────────────────────────────────────────────────────────────────────────────────────── 240p mp4 240p │ https │ unknown unknown hls-379-0 mp4 426x240 25 │ ~224.74MiB 380k m3u8 │ avc1.42c01e 380k mp4a.40.2 0k hls-379-1 mp4 426x240 25 │ ~224.74MiB 380k m3u8 │ avc1.42c01e 380k mp4a.40.2 0k 480p mp4 480p │ https │ unknown unknown hls-1090-0 mp4 852x480 25 │ ~645.52MiB 1091k m3u8 │ avc1.4d401f 1091k mp4a.40.2 0k hls-1090-1 mp4 852x480 25 │ ~645.52MiB 1091k m3u8 │ avc1.4d401f 1091k mp4a.40.2 0k 720p mp4 720p │ https │ unknown unknown hls-1996-0 mp4 1280x720 25 │ ~ 1.15GiB 1996k m3u8 │ avc1.640020 1996k mp4a.40.2 0k hls-1996-1 mp4 1280x720 25 │ ~ 1.15GiB 1996k m3u8 │ avc1.640020 1996k mp4a.40.2 0k 1080p mp4 1080p │ https │ unknown unknown hls-2564-0 mp4 1920x1080 25 │ ~ 1.48GiB 2565k m3u8 │ avc1.64002a 2565k mp4a.40.2 0k hls-2564-1 mp4 1920x1080 25 │ ~ 1.48GiB 2565k m3u8 │ avc1.64002a 2565k mp4a.40.2 0k ``` Termux also get `403` ``` yt-dlp --ignore-config -vF 
https://spankbang.com/782eu/video/band+girl+part+2+1 [debug] Command-line config: ['--ignore-config', '-vF', 'https://spankbang.com/782eu/video/band+girl+part+2+1'] [debug] Encodings: locale utf-8, fs utf-8, pref utf-8, out utf-8, error utf-8, screen utf-8 [debug] yt-dlp version [email protected] [392389b7d] (pip) [debug] Python 3.11.2 (CPython aarch64 64bit) - Linux-4.14.309-classified+-aarch64-with-libc (OpenSSL 3.1.0 14 Mar 2023, libc) [debug] exe versions: none [debug] Optional libraries: Cryptodome-3.17, brotli-1.0.9, certifi-2022.12.07, mutagen-1.46.0, sqlite3-2.6.0, websockets-10.4 [debug] Proxy map: {} [debug] Loaded 1786 extractors [SpankBang] Extracting URL: https://spankbang.com/782eu/video/band+girl+part+2+1 [SpankBang] 782eu: Downloading webpage ERROR: [SpankBang] 782eu: Unable to download webpage: HTTP Error 403: Forbidden (caused by <HTTPError 403: 'Forbidden'>); please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U File "/data/data/com.termux/files/usr/lib/python3.11/site-packages/yt_dlp/extractor/common.py", line 694, in extract ie_result = self._real_extract(url) ^^^^^^^^^^^^^^^^^^^^^^^ File "/data/data/com.termux/files/usr/lib/python3.11/site-packages/yt_dlp/extractor/spankbang.py", line 74, in _real_extract webpage = self._download_webpage( ^^^^^^^^^^^^^^^^^^^^^^^ File "/data/data/com.termux/files/usr/lib/python3.11/site-packages/yt_dlp/extractor/common.py", line 1097, in _download_webpage return self.__download_webpage(url_or_request, video_id, note, errnote, None, fatal, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/data/data/com.termux/files/usr/lib/python3.11/site-packages/yt_dlp/extractor/common.py", line 1048, in download_content res = getattr(self, download_handle.__name__)(url_or_request, video_id, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/data/data/com.termux/files/usr/lib/python3.11/site-packages/yt_dlp/extractor/common.py", line 882, in _download_webpage_handle urlh = self._request_webpage(url_or_request, video_id, note, errnote, fatal, data=data, headers=headers, query=query, expected_status=expected_status) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/data/data/com.termux/files/usr/lib/python3.11/site-packages/yt_dlp/extractor/common.py", line 839, in _request_webpage raise ExtractorError(errmsg, cause=err) File "/data/data/com.termux/files/usr/lib/python3.11/site-packages/yt_dlp/extractor/common.py", line 821, in _request_webpage return self._downloader.urlopen(self._create_request(url_or_request, data, headers, query)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/data/data/com.termux/files/usr/lib/python3.11/site-packages/yt_dlp/YoutubeDL.py", line 3742, in urlopen return self._opener.open(req, timeout=self._socket_timeout) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/data/data/com.termux/files/usr/lib/python3.11/urllib/request.py", line 525, in open response = meth(req, response) ^^^^^^^^^^^^^^^^^^^ File "/data/data/com.termux/files/usr/lib/python3.11/urllib/request.py", line 634, in http_response response = self.parent.error( ^^^^^^^^^^^^^^^^^^ File "/data/data/com.termux/files/usr/lib/python3.11/urllib/request.py", line 563, in error return 
self._call_chain(*args) ^^^^^^^^^^^^^^^^^^^^^^^ File "/data/data/com.termux/files/usr/lib/python3.11/urllib/request.py", line 496, in _call_chain result = func(*args) ^^^^^^^^^^^ File "/data/data/com.termux/files/usr/lib/python3.11/urllib/request.py", line 643, in http_error_default raise HTTPError(req.full_url, code, msg, hdrs, fp) urllib.error.HTTPError: HTTP Error 403: Forbidden ``` yt-dlp [email protected] isn't showing 403 for me with `81sy6`, but yt-dl (essentially identical extractor code) is getting 403 on the webpage itself, as is is _wget_, not showing any page content in the response. UA 'Mozilla/5.0' may break through CloudFlare: it works with yt-dl and _wget_ now, though not when I first tried. Hi, @bashonly the proxy network can work on my wsl with version 2022.07.18, the wsl on the same computer, so they use the same proxy ```bash [debug] Command-line config: ['-v', '--proxy', 'http://127.0.0.1:1080', 'https://spankbang.com/81sy6/video/japanese'] [debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out UTF-8, error UTF-8, screen UTF-8 [debug] yt-dlp version 2022.07.18 [135f05ef6] [debug] Lazy loading extractors is disabled [debug] Python 3.7.5 (CPython 64bit) - Linux-4.4.0-22621-Microsoft-x86_64-with-Ubuntu-18.04-bionic (glibc 2.26) [debug] Checking exe version: ffmpeg -bsfs [debug] Checking exe version: ffprobe -bsfs [debug] exe versions: ffmpeg 3.4.8, ffprobe 3.4.8 [debug] Optional libraries: Cryptodome-3.15.0, brotli-1.0.9, certifi-2020.12.05, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3 [debug] Proxy map: {'http': 'http://127.0.0.1:1080', 'https': 'http://127.0.0.1:1080'} [debug] [SpankBang] Extracting URL: https://spankbang.com/81sy6/video/japanese [SpankBang] 81sy6: Downloading webpage [SpankBang] 81sy6: Downloading stream JSON [SpankBang] 81sy6: Downloading m3u8 information [SpankBang] 81sy6: Downloading m3u8 information [SpankBang] 81sy6: Downloading m3u8 information [SpankBang] 81sy6: Downloading m3u8 information [SpankBang] 81sy6: Downloading m3u8 information [debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), acodec, filesize, fs_approx, tbr, vbr, abr, asr, proto, vext, aext, hasaud, source, id [debug] Default format spec: bestvideo*+bestaudio/best [info] 81sy6: Downloading 1 format(s): hls-2564-1 [debug] Invoking hlsnative downloader on "https://hls-uranus.sb-cd.com/hls/1/3/13521102-1080p.mp4/index-v1-a1.m3u8?_tid=13521102&d=1&m=43&secure=nxKExkSSNg5q0juEWzONGA,1680835039" [hlsnative] Downloading m3u8 manifest [hlsnative] Total fragments: 810 [download] Destination: Japanese [81sy6].mp4 [download] 0.5% of ~662.91MiB at 151.59KiB/s ETA 27:28 (frag 5/810) ``` @dirkf if I add set UA to 'Mozilla/5.0', I will get this output ```bash [debug] Command-line config: ['-v', '--proxy', '127.0.0.1:1080', '--add-header', 'User-Agent:Mozilla/5.0', 'http://127.0.0.1:1080', 'https://spankbang.com/81sy6/video/japanese'] [debug] Encodings: locale cp936, fs utf-8, pref cp936, out utf-8, error utf-8, screen utf-8 [debug] yt-dlp version [email protected] [392389b7d] [debug] Lazy loading extractors is disabled [debug] Python 3.8.10 (CPython AMD64 32bit) - Windows-10-10.0.22621-SP0 (OpenSSL 1.1.1k 25 Mar 2021) [debug] exe versions: ffmpeg 4.2.1 (fdk), ffprobe 4.2.1 [debug] Optional libraries: Cryptodome-3.17, brotli-1.0.9, certifi-2020.12.05, mutagen-1.46.0, sqlite3-2.6.0, websockets-10.4 [debug] Proxy map: {'http': '127.0.0.1:1080', 'https': '127.0.0.1:1080'} [debug] Loaded 1791 extractors [generic] Extracting URL: http://127.0.0.1:1080 
[generic] 127.0.0: Downloading webpage ERROR: [generic] None: Unable to download webpage: HTTP Error 400: Invalid header received from client (caused by <HTTPError 400: 'Invalid header received from client'>); please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\site-packages\yt_dlp-2023.3.4-py3.8.egg\yt_dlp\extractor\common.py", line 694, in extract ie_result = self._real_extract(url) File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\site-packages\yt_dlp-2023.3.4-py3.8.egg\yt_dlp\extractor\generic.py", line 2385, in _real_extract full_response = self._request_webpage(url, video_id, headers={ File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\site-packages\yt_dlp-2023.3.4-py3.8.egg\yt_dlp\extractor\common.py", line 842, in _request_webpage raise ExtractorError(errmsg, cause=err) File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\site-packages\yt_dlp-2023.3.4-py3.8.egg\yt_dlp\extractor\common.py", line 824, in _request_webpage return self._downloader.urlopen(self._create_request(url_or_request, data, headers, query)) File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\site-packages\yt_dlp-2023.3.4-py3.8.egg\yt_dlp\YoutubeDL.py", line 3745, in urlopen return self._opener.open(req, timeout=self._socket_timeout) File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\urllib\request.py", line 531, in open response = meth(req, response) File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\urllib\request.py", line 640, in http_response response = self.parent.error( File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\urllib\request.py", line 569, in error return self._call_chain(*args) File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\urllib\request.py", line 502, in _call_chain result = func(*args) File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\urllib\request.py", line 649, in http_error_default raise HTTPError(req.full_url, code, msg, hdrs, fp) urllib.error.HTTPError: HTTP Error 400: Invalid header received from client [SpankBang] Extracting URL: https://spankbang.com/81sy6/video/japanese [SpankBang] 81sy6: Downloading webpage ERROR: [SpankBang] 81sy6: Unable to download webpage: HTTP Error 403: Forbidden (caused by <HTTPError 403: 'Forbidden'>); please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. 
Confirm you are on the latest version using yt-dlp -U File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\site-packages\yt_dlp-2023.3.4-py3.8.egg\yt_dlp\extractor\common.py", line 694, in extract ie_result = self._real_extract(url) File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\site-packages\yt_dlp-2023.3.4-py3.8.egg\yt_dlp\extractor\spankbang.py", line 74, in _real_extract webpage = self._download_webpage( File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\site-packages\yt_dlp-2023.3.4-py3.8.egg\yt_dlp\extractor\common.py", line 1100, in _download_webpage return self.__download_webpage(url_or_request, video_id, note, errnote, None, fatal, *args, **kwargs) File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\site-packages\yt_dlp-2023.3.4-py3.8.egg\yt_dlp\extractor\common.py", line 1051, in download_content res = getattr(self, download_handle.__name__)(url_or_request, video_id, **kwargs) File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\site-packages\yt_dlp-2023.3.4-py3.8.egg\yt_dlp\extractor\common.py", line 885, in _download_webpage_handle urlh = self._request_webpage(url_or_request, video_id, note, errnote, fatal, data=data, headers=headers, query=query, expected_status=expected_status) File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\site-packages\yt_dlp-2023.3.4-py3.8.egg\yt_dlp\extractor\common.py", line 842, in _request_webpage raise ExtractorError(errmsg, cause=err) File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\site-packages\yt_dlp-2023.3.4-py3.8.egg\yt_dlp\extractor\common.py", line 824, in _request_webpage return self._downloader.urlopen(self._create_request(url_or_request, data, headers, query)) File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\site-packages\yt_dlp-2023.3.4-py3.8.egg\yt_dlp\YoutubeDL.py", line 3745, in urlopen return self._opener.open(req, timeout=self._socket_timeout) File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\urllib\request.py", line 531, in open response = meth(req, response) File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\urllib\request.py", line 640, in http_response response = self.parent.error( File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\urllib\request.py", line 569, in error return self._call_chain(*args) File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\urllib\request.py", line 502, in _call_chain result = func(*args) File "C:\Users\test\AppData\Local\Programs\Python\Python38-32\lib\urllib\request.py", line 649, in http_error_default raise HTTPError(req.full_url, code, msg, hdrs, fp) urllib.error.HTTPError: HTTP Error 403: Forbidden ``` workaround: try adding `--legacy-server-connect` to your command I was able to fix this by switching to a different proxy/vpn In my case, `'Cookie': 'country=US'` in `_real_extract` of `extractor/spankbang.py` needs to be manually replaced with the Cookie header. Note that setting the user-agent will cause a 403 error, even after replacing `'Cookie': 'country=US'`. Additionally, the cookie must be added to the `self._download_json()` method. Also, the cookie needs to be renewed after it expires. Details [in this comment](https://github.com/yt-dlp/yt-dlp/issues/6545#issuecomment-1609300876). > In my case, `'Cookie': 'country=US'` in `_real_extract` of `extractor/spankbang.py` needs to be manually replaced with the Cookie header. 
Note that setting the user-agent will cause a 403 error, even after replacing `'Cookie': 'country=US'`. Additionally, the cookie must be added to the `self._download_json()` method. Also, the cookie needs to be renewed after it expires. can you go into greater detail please? > > In my case, `'Cookie': 'country=US'` in `_real_extract` of `extractor/spankbang.py` needs to be manually replaced with the Cookie header. Note that setting the user-agent will cause a 403 error, even after replacing `'Cookie': 'country=US'`. Additionally, the cookie must be added to the `self._download_json()` method. Also, the cookie needs to be renewed after it expires. > > can you go into greater detail please? Simply right-click on video web page - >"Inspect" -> "Network" tab -> click upper left "Clear" button -> Ctrl+R Reload page -> click current page url item(normally first item or it has blue icon on left side) -> right panel -> "Headers" tab -> "Requests Headers" section), copy **Cookie**'s value(and copy **User-Agent**'s value later) and paste into the below **4 Lines to replace:** in the file `extractor/spankbang.py` ( you can find parent folder of `extractor/spankbang.py` with command `python3 -c "import yt_dlp; print(yt_dlp.__path__[0])"` ). -> save the edited file -> rerun yt-dlp should no more 403 error. **4 Lines to edit:** [1] Add this line `MY_COOKIE = ` on top of line `class SpankBangIE(InfoExtractor):` with your copied cookie value, you only need to edit this line in future when renew cookie, e.g.: ``` MY_COOKIE = 'paste your copied cookie value, surrounded with single quotes. No extra space' class SpankBangIE(InfoExtractor): ``` [2] `url, playlist_id, headers={'Cookie': 'country=US; mobile=on'})` edit it to: `url, playlist_id, headers={'Cookie': MY_COOKIE})` and: [3] `video_id, headers={'Cookie': 'country=US'})` edit it to: `video_id, headers={'Cookie': MY_COOKIE})` [4] Then under `self._download_json` need add cookie too, e.g.: ``` }), headers={ 'Referer': url, 'X-Requested-With': 'XMLHttpRequest', }) ``` edit it to: ``` }), headers={ 'Cookie': MY_COOKIE, 'Referer': url, 'X-Requested-With': 'XMLHttpRequest', }) ``` Note that the **spaces before the lines** need typing space instead of tab. And ensure total spaces same as original. Note that the cookie needs renewal after it expires or the IP changes. You may need clear and reload page if first attempt of cookie not working. Ensure copy the complete cookie value. Update yt-dlp may replace this code and need redo. You also need add your copied latest **User-Agent**'s value(same steps as get **Cookie**'s value. It probably updated when web browser update) to your command, e.g. `--add-headers 'User-Agent:paste your copied user-agent value, surrounded with single quotes'`: `yt-dlp --add-headers 'User-Agent:Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36' ...` Note that you need to use user agent get from the **same web browser** of the cookie above. 
[UPDATE 2024]: `extractor/spankbang.py` needs to disable .m3u8 and allow only .mp4 if you encounter `HTTP Error 502: Bad Gateway`: ``` ext = determine_ext(f_url) ''' # Add this to start disable m3u8 if format_id.startswith('m3u8') or ext == 'm3u8': formats.extend(self._extract_m3u8_formats( f_url, video_id, 'mp4', entry_protocol='m3u8_native', m3u8_id='hls', fatal=False)) elif format_id.startswith('mpd') or ext == 'mpd': formats.extend(self._extract_mpd_formats( f_url, video_id, mpd_id='dash', fatal=False)) elif ext == 'mp4' or f.get('width') or f.get('height'): ''' # Add this to end disable if ext == 'mp4' or f.get('width') or f.get('height'): # New Added ``` Tried the above and still getting 403 forbidden. Verified formatting and complete cookie value. The extractor needs to be fixed so that it's not hardcoding a cookie header into the request. It should check if the user has passed the necessary cookies (via `--cookies` or `--cookies-from-browser`), and if not, then set cookie(s) to the cookiejar before the request(s) > The extractor needs to be fixed so that it's not hardcoding a cookie header into the request. It should check if the user has passed the necessary cookies (via `--cookies` or `--cookies-from-browser`), and if not, then set cookie(s) to the cookiejar before the request(s) I found one of your previous suggestions of adding --legacy-server-connect to my config file and it seems to working through my testing so far. > adding --legacy-server-connect to my config file don't do this. only use this option when needed > > adding --legacy-server-connect to my config file > > don't do this. only use this option when needed good call. created separate config and batch to call that option if needed. Confirmed. Still happening on latest. Not happening on any other site. > In my case, `'Cookie': 'country=US'` in `_real_extract` of `extractor/spankbang.py` needs to be manually replaced with the Cookie header. Note that setting the user-agent will cause a 403 error, even after replacing `'Cookie': 'country=US'`. Additionally, the cookie must be added to the `self._download_json()` method. Also, the cookie needs to be renewed after it expires. This work but cookie expire really fast. Need to renew about after 10-20 downloads not sure if its possible to add this import to the extractor, but using this package bypasses cloudflares 403 page and returns the real page source `pip install cloudscraper` ```python import cloudscraper url = "https://spankbang.com/5icow/video/skylarvox08" scraper = cloudscraper.create_scraper() content = scraper.get(url).content print(content) # bytes #or print(content.decode('utf-8')) ``` #7595, once completed, should fix this Anyone got a work around, I've tried almost all the answers ? I'm using the last version of yt-dlp with python 3.11.4, youtube is working fine, but spankbang is not got like op " 8iidg: Unable to download webpage: HTTP Error 403: Forbidden (caused by <HTTPError 403: 'Forbidden'>); please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U" One workaround I found is working perfectly. Just change the URL from https://spankbang.com/5icow/video/skylarvox08 to https://spankbang.party/5icow/video/skylarvox08 Domain part from spankbang.com to spankbang.party works perfectly for all URLs I tested. > One workaround I found is working perfectly. 
> > Just change the URL from https://spankbang.com/5icow/video/skylarvox08 to https://spankbang.party/5icow/video/skylarvox08 > > Domain part from spankbang.com to spankbang.party works perfectly for all URLs I tested. This will only allow 720p downloads @cosify The above URL has 4K resolution but maybe it's a yt-dlp issue that it only gets up to 720p. You can check by getting the HTML content of the URL and searching for **var stream_data = {** Isn't the HTML5 format `4` found by the generic extractor 4320x2160 (actually 2x2160x2160 since this video is 3D)? If the SB extractor is tweaked to recognise `.party` and to use that root domain for its `stream JSON` retrieval: ```console $ yt-dlp -v -F 'https://spankbang.party/5icow/video/skylarvox08' [debug] Command-line config: ['-v', '-F', 'https://spankbang.party/5icow/video/skylarvox08'] [debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8 [debug] yt-dlp version [email protected] [812cdfa06] (source) [debug] Lazy loading extractors is disabled [debug] Git HEAD: de4cf77ec [debug] Python 3.9.16 (CPython i686 32bit) - Linux-4.4.0-210-generic-i686-with-glibc2.23 (OpenSSL 1.1.1v 1 Aug 2023, glibc 2.23) [debug] exe versions: ffmpeg 4.3, ffprobe 4.3 [debug] Optional libraries: Cryptodome-3.11.0, certifi-2019.11.28, secretstorage-3.2.0, sqlite3-2.6.0 [debug] Proxy map: {} [debug] Loaded 1851 extractors [SpankBang] Extracting URL: https://spankbang.party/5icow/video/skylarvox08 [SpankBang] 5icow: Downloading webpage [SpankBang] 5icow: Downloading stream JSON [SpankBang] 5icow: Downloading m3u8 information [SpankBang] 5icow: Downloading m3u8 information [SpankBang] 5icow: Downloading m3u8 information [SpankBang] 5icow: Downloading m3u8 information [SpankBang] 5icow: Downloading m3u8 information [SpankBang] 5icow: Downloading m3u8 information [debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id [info] Available formats for 5icow: ID EXT RESOLUTION FPS │ FILESIZE TBR PROTO │ VCODEC ACODEC ────────────────────────────────────────────────────────────────────────────── 240p mp4 240p │ https │ unknown unknown hls-231-0 mp4 480x240 30 │ ~ 60.42MiB 231k m3u8 │ avc1.42c01e mp4a.40.2 hls-231-1 mp4 480x240 30 │ ~ 60.42MiB 231k m3u8 │ avc1.42c01e mp4a.40.2 480p mp4 480p │ https │ unknown unknown hls-1201-0 mp4 960x480 30 │ ~314.06MiB 1202k m3u8 │ avc1.4d401f mp4a.40.2 hls-1201-1 mp4 960x480 30 │ ~314.06MiB 1202k m3u8 │ avc1.4d401f mp4a.40.2 720p mp4 720p │ https │ unknown unknown hls-2172-0 mp4 1440x720 30 │ ~567.77MiB 2172k m3u8 │ avc1.640020 mp4a.40.2 hls-2172-1 mp4 1440x720 30 │ ~567.77MiB 2172k m3u8 │ avc1.640020 mp4a.40.2 1080p mp4 1080p │ https │ unknown unknown hls-3390-0 mp4 2160x1080 30 │ ~886.07MiB 3390k m3u8 │ avc1.64002a mp4a.40.2 hls-3390-1 mp4 2160x1080 30 │ ~886.07MiB 3390k m3u8 │ avc1.64002a mp4a.40.2 4k mp4 2160p │ https │ unknown unknown hls-5543-0 mp4 4320x2160 30 │ ~ 1.41GiB 5543k m3u8 │ avc1.640034 mp4a.40.2 hls-5543-1 mp4 4320x2160 30 │ ~ 1.41GiB 5543k m3u8 │ avc1.640034 mp4a.40.2 $ ``` @dirkf any plans to submit a PR? I would assume that it would be within scope to automatically use spankbang.party's server, unless that's an unofficial mirror, which I highly doubt seems to work with the party server but it doesn't work with playlist links I've noticed yt-dlp using TLSv1_2 while the web browser uses TLSv1_3. 
You can **temporarily** add TLSv1_3 in `yt_dlp/networking/_helper.py` : _(Note that this affects all websites, so you should revert these changes if other websites stop working.)_ ``` #context.minimum_version = ssl.TLSVersion.TLSv1_2 context.minimum_version = ssl.TLSVersion.TLSv1_3 ``` Similar issue [#25437](https://github.com/ytdl-org/youtube-dl/issues/25437) I've got the same issue. Logs: ``` yt-dlp.exe -v https://spankbang.com/7ihal/video/perfect+body+strip+tease+3+seductive+cam+show+dance+shows+off+everything+she+s+got [debug] Command-line config: ['-v', '--proxy', 'socks5://127.0.0.1:9999', 'https://spankbang.com/7ihal/video/perfect+body+strip+tease+3+seductive+cam+show+dance+shows+off+everything+she+s+got'] [debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8 [debug] yt-dlp version [email protected] [088add956] (win_exe) [debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.22621-SP0 (OpenSSL 1.1.1k 25 Mar 2021) [debug] exe versions: none [debug] Optional libraries: Cryptodome-3.19.0, brotli-1.1.0, certifi-2023.07.22, mutagen-1.47.0, sqlite3-3.35.5, websockets-11.0.3 [debug] Proxy map: {'all': 'socks5://127.0.0.1:9999'} [debug] Loaded 1886 extractors [SpankBang] Extracting URL: https://spankbang.com/7ihal/video/perfect+body+strip+tease+3+seductive+cam+show+dance+shows+off+everything+she+s+got [SpankBang] 7ihal: Downloading webpage ERROR: [SpankBang] 7ihal: Unable to download webpage: HTTP Error 403: Forbidden (caused by <HTTPError 403: Forbidden>); please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U File "yt_dlp\extractor\common.py", line 715, in extract File "yt_dlp\extractor\spankbang.py", line 74, in _real_extract File "yt_dlp\extractor\common.py", line 1118, in _download_webpage File "yt_dlp\extractor\common.py", line 1069, in download_content File "yt_dlp\extractor\common.py", line 903, in _download_webpage_handle File "yt_dlp\extractor\common.py", line 860, in _request_webpage File "yt_dlp\networking\_urllib.py", line 410, in _send File "urllib\request.py", line 531, in open File "urllib\request.py", line 640, in http_response File "urllib\request.py", line 569, in error File "urllib\request.py", line 502, in _call_chain File "urllib\request.py", line 649, in http_error_default urllib.error.HTTPError: HTTP Error 403: Forbidden The above exception was the direct cause of the following exception: Traceback (most recent call last): File "yt_dlp\YoutubeDL.py", line 4051, in urlopen File "yt_dlp\networking\common.py", line 114, in send File "yt_dlp\networking\_helper.py", line 204, in wrapper File "yt_dlp\networking\common.py", line 325, in send File "yt_dlp\networking\_urllib.py", line 415, in _send yt_dlp.networking.exceptions.HTTPError: HTTP Error 403: Forbidden The above exception was the direct cause of the following exception: Traceback (most recent call last): File "yt_dlp\extractor\common.py", line 847, in _request_webpage File "yt_dlp\YoutubeDL.py", line 4070, in urlopen yt_dlp.networking.exceptions._CompatHTTPError: HTTP Error 403: Forbidden ``` > I've noticed yt-dlp using TLSv1_2 while the web browser uses TLSv1_3. 
> > You can **temporarily** add TLSv1_3 in `yt_dlp/networking/_helper.py` (note that this affects all requests, of course): > > ``` > > #context.minimum_version = ssl.TLSVersion.TLSv1_2 > context.minimum_version = ssl.TLSVersion.TLSv1_3 > ``` > > Similar issue [#25437](https://github.com/ytdl-org/youtube-dl/issues/25437) This worked for me. I think this library (_helper.py) is too global to change that value for the whole project, but it would be nice if specific extractors/downloaders could set that value to workaround this issue. > I would assume that it would be within scope to automatically use spankbang.party's server, unless that's an unofficial mirror, which I highly doubt Actually I'm not sure. Spankbang.party looks kinda dodgy, is it possible that it's an unofficial mirror? The .party and .com domains are both registered via NameCheap from `Capital Region` (Reykjavik, apparently) with the same authoritative domain servers in ns.cloudflare.com. I wouldn't worry. > The .party and .com domains are both registered via NameCheap from Capital Region (Reykjavik, apparently) that doesn't necessarily mean anything, that's just namecheap's whois privacy thing ![Screenshot 2023-10-09 at 15-53-29 Domain](https://github.com/yt-dlp/yt-dlp/assets/76261416/8fa11cef-1187-4e02-b259-c939aea31688) ![Screenshot 2023-10-09 at 15-53-40 Domain](https://github.com/yt-dlp/yt-dlp/assets/76261416/92f847a0-5b64-4c98-b8c6-942ee00456e0) TRY => https://github.com/0xUndetectable/Spankbang_scraper/releases/tag/v0.1 I found a weird discrepency, that my windows 10 desktop on newer version would get the 403 error as expected in this thread. I tried bringing in my browser cookies as well as `--legacy-server-connect` option and it didn't change from a 403 error. ``` PS E:\> yt-dlp --version 2023.12.30 PS E:\> yt-dlp --verbose -S "res:1080" --add-metadata --no-check-certificates -N 4 --cookies cookies.txt https://spankbang.com/3o9ie/playlist/swaglord [debug] Command-line config: ['--verbose', '-S', 'res:1080', '--add-metadata', '--no-check-certificates', '-N', '4', '--cookies', 'cookies.txt', 'https://spankbang.com/3o9ie/playlist/swaglord'] [debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8 [debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [f10589e34] (win_exe) [debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1k 25 Mar 2021) [debug] exe versions: ffmpeg 6.1.1-essentials_build-www.gyan.dev (setts), ffprobe 6.1.1-essentials_build-www.gyan.dev [debug] Optional libraries: Cryptodome-3.19.1, brotli-1.1.0, certifi-2023.11.17, mutagen-1.47.0, requests-2.31.0, sqlite3-3.35.5, urllib3-2.1.0, websockets-12.0 [debug] Proxy map: {} [debug] Request Handlers: urllib, requests, websockets [debug] Loaded 1798 extractors [SpankBangPlaylist] Extracting URL: https://spankbang.com/3o9ie/playlist/swaglord [SpankBangPlaylist] 3o9ie: Downloading webpage ERROR: [SpankBangPlaylist] 3o9ie: Unable to download webpage: HTTP Error 403: Forbidden (caused by <HTTPError 403: Forbidden>); please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. 
Confirm you are on the latest version using yt-dlp -U File "yt_dlp\extractor\common.py", line 718, in extract File "yt_dlp\extractor\spankbang.py", line 181, in _real_extract File "yt_dlp\extractor\common.py", line 1121, in _download_webpage File "yt_dlp\extractor\common.py", line 1072, in download_content File "yt_dlp\extractor\common.py", line 906, in _download_webpage_handle File "yt_dlp\extractor\common.py", line 863, in _request_webpage File "yt_dlp\YoutubeDL.py", line 4082, in urlopen File "yt_dlp\networking\common.py", line 114, in send File "yt_dlp\networking\_helper.py", line 204, in wrapper File "yt_dlp\networking\common.py", line 325, in send File "yt_dlp\networking\_requests.py", line 343, in _send yt_dlp.networking.exceptions.HTTPError: HTTP Error 403: Forbidden The above exception was the direct cause of the following exception: Traceback (most recent call last): File "yt_dlp\extractor\common.py", line 850, in _request_webpage File "yt_dlp\YoutubeDL.py", line 4114, in urlopen yt_dlp.networking.exceptions._CompatHTTPError: HTTP Error 403: Forbidden PS E:\> yt-dlp --verbose -S "res:1080" --add-metadata --no-check-certificates -N 4 --legacy-server-connect https://spankbang.com/3o9ie/playlist/swaglord [debug] Command-line config: ['--verbose', '-S', 'res:1080', '--add-metadata', '--no-check-certificates', '-N', '4', '--legacy-server-connect', 'https://spankbang.com/3o9ie/playlist/swaglord'] [debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8 [debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [f10589e34] (win_exe) [debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1k 25 Mar 2021) [debug] exe versions: ffmpeg 6.1.1-essentials_build-www.gyan.dev (setts), ffprobe 6.1.1-essentials_build-www.gyan.dev [debug] Optional libraries: Cryptodome-3.19.1, brotli-1.1.0, certifi-2023.11.17, mutagen-1.47.0, requests-2.31.0, sqlite3-3.35.5, urllib3-2.1.0, websockets-12.0 [debug] Proxy map: {} [debug] Request Handlers: urllib, requests, websockets [debug] Loaded 1798 extractors [SpankBangPlaylist] Extracting URL: https://spankbang.com/3o9ie/playlist/swaglord [SpankBangPlaylist] 3o9ie: Downloading webpage ERROR: [SpankBangPlaylist] 3o9ie: Unable to download webpage: HTTP Error 403: Forbidden (caused by <HTTPError 403: Forbidden>); please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. 
Confirm you are on the latest version using yt-dlp -U File "yt_dlp\extractor\common.py", line 718, in extract File "yt_dlp\extractor\spankbang.py", line 181, in _real_extract File "yt_dlp\extractor\common.py", line 1121, in _download_webpage File "yt_dlp\extractor\common.py", line 1072, in download_content File "yt_dlp\extractor\common.py", line 906, in _download_webpage_handle File "yt_dlp\extractor\common.py", line 863, in _request_webpage File "yt_dlp\YoutubeDL.py", line 4082, in urlopen File "yt_dlp\networking\common.py", line 114, in send File "yt_dlp\networking\_helper.py", line 204, in wrapper File "yt_dlp\networking\common.py", line 325, in send File "yt_dlp\networking\_requests.py", line 343, in _send yt_dlp.networking.exceptions.HTTPError: HTTP Error 403: Forbidden The above exception was the direct cause of the following exception: Traceback (most recent call last): File "yt_dlp\extractor\common.py", line 850, in _request_webpage File "yt_dlp\YoutubeDL.py", line 4114, in urlopen yt_dlp.networking.exceptions._CompatHTTPError: HTTP Error 403: Forbidden ``` However, weirdly the older version on my Centos 7 server was able to download the playlist and continue without issue using the `--legacy-server-connect` option. Below is a partial log as I was downloading the 4th video in the playlist. ``` $ yt-dlp --version 2022.07.18 $ yt-dlp --verbose -S "res:1080" --add-metadata --no-check-certificates -N 4 --legacy-server-connect https://spankbang.com/3o9ie/playlist/swaglord-N 4 --legacy-server-connect https://spankbang.com/3o9ie/playlist/s [debug] Command-line config: ['--verbose', '-S', 'res:1080', '--add-metadata', '--no-check-certificates', '-N', '4', '--legacy-server-connect', 'https://spankbang.com/3o9ie/playlist/swaglord'] DeprecationWarning: Support for Python version 3.6 has been deprecated. See https://github.com/yt-dlp/yt-dlp/issues/3764 for more details. You will no longer receive updates on this version! Please update to Python 3.7 or above [debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out UTF-8, error UTF-8, screen UTF-8 [debug] yt-dlp version 2022.07.18 [135f05e] [debug] Python 3.6.8 (CPython 64bit) - Linux-3.10.0-1160.99.1.el7.x86_64-x86_64-with-centos-7.9.2009-Core (glibc 2.3.4) [debug] Checking exe version: ffmpeg -bsfs [debug] Checking exe version: ffprobe -bsfs [debug] exe versions: ffmpeg 2.8.15 (fdk,needs_adtstoasc), ffprobe 2.8.15, rtmpdump 2.4 [debug] Optional libraries: Cryptodome-3.17, brotli-1.0.9, certifi-2021.05.30, mutagen-1.45.1, sqlite3-2.6.0, websockets-9.1 [debug] Proxy map: {} [debug] [SpankBangPlaylist] Extracting URL: https://spankbang.com/3o9ie/playlist/swaglord [SpankBangPlaylist] 3o9ie: Downloading webpage WARNING: [SpankBangPlaylist] unable to extract playlist title; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. 
Confirm you are on the latest version using yt-dlp -U [download] Downloading playlist: 3o9ie [SpankBangPlaylist] Playlist 3o9ie: Downloading 75 videos of 75 [download] Downloading video 1 of 75 [debug] [SpankBang] Extracting URL: https://spankbang.com/3o9ie-jvumw7/playlist/swaglord [SpankBang] jvumw7: Downloading webpage [SpankBang] jvumw7: Downloading stream JSON [SpankBang] jvumw7: Downloading m3u8 information [SpankBang] jvumw7: Downloading m3u8 information [SpankBang] jvumw7: Downloading m3u8 information [debug] Sort order given by user: res:1080 [debug] Formats sorted by: hasvid, ie_pref, res:1080(1080.0), lang, quality, fps, hdr:12(7), vcodec:vp9.2(10), acodec, filesize, fs_approx, tbr, vbr, abr, asr, proto, vext, aext, hasaud, source, id [debug] Default format spec: bestvideo*+bestaudio/best [info] jvumw7: Downloading 1 format(s): hls-869-1 [debug] Invoking hlsnative downloader on "https://hls-uranus.sb-cd.com/hls/5/0/5006742-480p.mp4/index-v1-a1.m3u8?_tid=5006742&d=6&m=6&secure=8mD5I3Hy9Vepse69EaBHoA,1708574985" [hlsnative] Downloading m3u8 manifest [hlsnative] Total fragments: 111 [download] Destination: Cam-Whore with puffy Nipples [jvumw7].mp4 WARNING: The download speed shown is only of one thread. This is a known issue and patches are welcome [download] 100% of 70.00MiB in 00:19 [debug] ffprobe command line: ffprobe -hide_banner -show_format -show_streams -print_format json 'file:Cam-Whore with puffy Nipples [jvumw7].mp4' [FixupM3u8] Fixing MPEG-TS in MP4 container of "Cam-Whore with puffy Nipples [jvumw7].mp4" [debug] ffmpeg command line: ffmpeg -y -loglevel repeat+info -i 'file:Cam-Whore with puffy Nipples [jvumw7].mp4' -map 0 -dn -ignore_unknown -c copy -f mp4 -bsf:a aac_adtstoasc -movflags +faststart 'file:Cam-Whore with puffy Nipples [jvumw7].temp.mp4' [Metadata] Adding metadata to "Cam-Whore with puffy Nipples [jvumw7].mp4" [debug] ffmpeg command line: ffmpeg -y -loglevel repeat+info -i 'file:Cam-Whore with puffy Nipples [jvumw7].mp4' -map 0 -dn -ignore_unknown -c copy -write_id3v1 1 -metadata 'title=Cam-Whore with puffy Nipples' -metadata date=20190214 -metadata 'description= Girl Teasing and toying' -metadata 'synopsis= Girl Teasing and toying' -metadata purl=https://spankbang.com/3o9ie-jvumw7/playlist/swaglord -metadata comment=https://spankbang.com/3o9ie-jvumw7/playlist/swaglord -metadata artist=maxi-moll -movflags +faststart 'file:Cam-Whore with puffy Nipples [jvumw7].temp.mp4' [download] Downloading video 2 of 75 [debug] [SpankBang] Extracting URL: https://spankbang.com/3o9ie-jvumw7/playlist/swaglord [SpankBang] jvumw7: Downloading webpage [SpankBang] jvumw7: Downloading stream JSON [SpankBang] jvumw7: Downloading m3u8 information [SpankBang] jvumw7: Downloading m3u8 information [SpankBang] jvumw7: Downloading m3u8 information [debug] Sort order given by user: res:1080 [debug] Formats sorted by: hasvid, ie_pref, res:1080(1080.0), lang, quality, fps, hdr:12(7), vcodec:vp9.2(10), acodec, filesize, fs_approx, tbr, vbr, abr, asr, proto, vext, aext, hasaud, source, id [debug] Default format spec: bestvideo*+bestaudio/best [info] jvumw7: Downloading 1 format(s): hls-869-1 [debug] Invoking hlsnative downloader on "https://hls-uranus.sb-cd.com/hls/5/0/5006742-480p.mp4/index-v1-a1.m3u8?_tid=5006742&d=6&m=6&secure=8mD5I3Hy9Vepse69EaBHoA,1708574985" [download] Cam-Whore with puffy Nipples [jvumw7].mp4 has already been downloaded [download] 100% of 68.08MiB [Metadata] Adding metadata to "Cam-Whore with puffy Nipples [jvumw7].mp4" [debug] ffmpeg command line: ffmpeg -y 
-loglevel repeat+info -i 'file:Cam-Whore with puffy Nipples [jvumw7].mp4' -map 0 -dn -ignore_unknown -c copy -write_id3v1 1 -metadata 'title=Cam-Whore with puffy Nipples' -metadata date=20190214 -metadata 'description= Girl Teasing and toying' -metadata 'synopsis= Girl Teasing and toying' -metadata purl=https://spankbang.com/3o9ie-jvumw7/playlist/swaglord -metadata comment=https://spankbang.com/3o9ie-jvumw7/playlist/swaglord -metadata artist=maxi-moll -movflags +faststart 'file:Cam-Whore with puffy Nipples [jvumw7].temp.mp4' [download] Downloading video 3 of 75 [debug] [SpankBang] Extracting URL: https://spankbang.com/3o9ie-jvumw7/playlist/swaglord [SpankBang] jvumw7: Downloading webpage [SpankBang] jvumw7: Downloading stream JSON [SpankBang] jvumw7: Downloading m3u8 information [SpankBang] jvumw7: Downloading m3u8 information [SpankBang] jvumw7: Downloading m3u8 information [debug] Sort order given by user: res:1080 [debug] Formats sorted by: hasvid, ie_pref, res:1080(1080.0), lang, quality, fps, hdr:12(7), vcodec:vp9.2(10), acodec, filesize, fs_approx, tbr, vbr, abr, asr, proto, vext, aext, hasaud, source, id [debug] Default format spec: bestvideo*+bestaudio/best [info] jvumw7: Downloading 1 format(s): hls-869-1 [debug] Invoking hlsnative downloader on "https://hls-uranus.sb-cd.com/hls/5/0/5006742-480p.mp4/index-v1-a1.m3u8?_tid=5006742&d=6&m=6&secure=8mD5I3Hy9Vepse69EaBHoA,1708574985" [download] Cam-Whore with puffy Nipples [jvumw7].mp4 has already been downloaded [download] 100% of 68.08MiB [Metadata] Adding metadata to "Cam-Whore with puffy Nipples [jvumw7].mp4" [debug] ffmpeg command line: ffmpeg -y -loglevel repeat+info -i 'file:Cam-Whore with puffy Nipples [jvumw7].mp4' -map 0 -dn -ignore_unknown -c copy -write_id3v1 1 -metadata 'title=Cam-Whore with puffy Nipples' -metadata date=20190214 -metadata 'description= Girl Teasing and toying' -metadata 'synopsis= Girl Teasing and toying' -metadata purl=https://spankbang.com/3o9ie-jvumw7/playlist/swaglord -metadata comment=https://spankbang.com/3o9ie-jvumw7/playlist/swaglord -metadata artist=maxi-moll -movflags +faststart 'file:Cam-Whore with puffy Nipples [jvumw7].temp.mp4' [download] Downloading video 4 of 75 [debug] [SpankBang] Extracting URL: https://spankbang.com/3o9ie-f51l7z/playlist/swaglord [SpankBang] f51l7z: Downloading webpage [SpankBang] f51l7z: Downloading stream JSON [SpankBang] f51l7z: Downloading m3u8 information [SpankBang] f51l7z: Downloading m3u8 information [SpankBang] f51l7z: Downloading m3u8 information [debug] Sort order given by user: res:1080 [debug] Formats sorted by: hasvid, ie_pref, res:1080(1080.0), lang, quality, fps, hdr:12(7), vcodec:vp9.2(10), acodec, filesize, fs_approx, tbr, vbr, abr, asr, proto, vext, aext, hasaud, source, id [debug] Default format spec: bestvideo*+bestaudio/best [info] f51l7z: Downloading 1 format(s): hls-746-1 [debug] Invoking hlsnative downloader on "https://hls-uranus.sb-cd.com/hls/1/3/13570233-480p.mp4/index-v1-a1.m3u8?_tid=13570233&d=1&m=44&secure=ihJPwH8nkrSViJeMRAfLVg,1708561358" [hlsnative] Downloading m3u8 manifest [hlsnative] Total fragments: 161 [download] Destination: Verababy mirror [f51l7z].mp4 WARNING: The download speed shown is only of one thread. 
This is a known issue and patches are welcome [download] 100% of 87.84MiB in 02:59 [debug] ffprobe command line: ffprobe -hide_banner -show_format -show_streams -print_format json 'file:Verababy mirror [f51l7z].mp4' [FixupM3u8] Fixing MPEG-TS in MP4 container of "Verababy mirror [f51l7z].mp4" [debug] ffmpeg command line: ffmpeg -y -loglevel repeat+info -i 'file:Verababy mirror [f51l7z].mp4' -map 0 -dn -ignore_unknown -c copy -f mp4 -bsf:a aac_adtstoasc -movflags +faststart 'file:Verababy mirror [f51l7z].temp.mp4' [Metadata] Adding metadata to "Verababy mirror [f51l7z].mp4" [debug] ffmpeg command line: ffmpeg -y -loglevel repeat+info -i 'file:Verababy mirror [f51l7z].mp4' -map 0 -dn -ignore_unknown -c copy -write_id3v1 1 -metadata 'title=Verababy mirror' -metadata date=20230412 -metadata 'description=Watch Verababy mirror on SpankBang now! - Anal, Solo Masturbation, Solo Porn - SpankBang ' -metadata 'synopsis=Watch Verababy mirror on SpankBang now! - Anal, Solo Masturbation, Solo Porn - SpankBang ' -metadata purl=https://spankbang.com/3o9ie-f51l7z/playlist/swaglord -metadata comment=https://spankbang.com/3o9ie-f51l7z/playlist/swaglord -metadata artist=zenuasyter -movflags +faststart 'file:Verababy mirror [f51l7z].temp.mp4' ``` > I've noticed yt-dlp using TLSv1_2 while the web browser uses TLSv1_3. > > You can **temporarily** add TLSv1_3 in `yt_dlp/networking/_helper.py` (note that this affects all requests, of course): > > ``` > > #context.minimum_version = ssl.TLSVersion.TLSv1_2 > context.minimum_version = ssl.TLSVersion.TLSv1_3 > ``` > > Similar issue [#25437](https://github.com/ytdl-org/youtube-dl/issues/25437) How do I find this file? I installed yt-dlp via pip. > I found a weird discrepency, that my windows 10 desktop on newer version would get the 403 error as expected in this thread. I tried bringing in my browser cookies as well as `--legacy-server-connect` option and it didn't change from a 403 error. 
Piggy backing on this, i noticed that Ubunutu can run the command without issue Windows ``` yt-dlp -vU --legacy-server-connect https://spankbang.com/8if5y/video/fucking+my+class+slut+hmv [debug] Command-line config: ['-vU', '--legacy-server-connect', 'https://spankbang.com/8if5y/video/fucking+my+class+slut+hmv'] [debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8 [debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [615a84447] (win_exe) [debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1k 25 Mar 2021) [debug] exe versions: ffmpeg 6.1.1-full_build-www.gyan.dev (setts), ffprobe 6.1.1-full_build-www.gyan.dev [debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.02.02, mutagen-1.47.0, requests-2.31.0, sqlite3-3.35.5, urllib3-2.2.1, websockets-12.0 [debug] Proxy map: {} [debug] Request Handlers: urllib, requests, websockets [debug] Loaded 1803 extractors [debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest Latest version: [email protected] from yt-dlp/yt-dlp yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp) [SpankBang] Extracting URL: https://spankbang.com/8if5y/video/fucking+my+class+slut+hmv [SpankBang] 8if5y: Downloading webpage ERROR: [SpankBang] 8if5y: Unable to download webpage: HTTP Error 403: Forbidden (caused by <HTTPError 403: Forbidden>) File "yt_dlp\extractor\common.py", line 732, in extract File "yt_dlp\extractor\spankbang.py", line 74, in _real_extract File "yt_dlp\extractor\common.py", line 1135, in _download_webpage File "yt_dlp\extractor\common.py", line 1086, in download_content File "yt_dlp\extractor\common.py", line 920, in _download_webpage_handle File "yt_dlp\extractor\common.py", line 877, in _request_webpage File "yt_dlp\extractor\common.py", line 864, in _request_webpage File "yt_dlp\YoutubeDL.py", line 4101, in urlopen File "yt_dlp\networking\common.py", line 115, in send File "yt_dlp\networking\_helper.py", line 204, in wrapper File "yt_dlp\networking\common.py", line 326, in send File "yt_dlp\networking\_requests.py", line 351, in _send yt_dlp.networking.exceptions.HTTPError: HTTP Error 403: Forbidden ``` Ubunutu 20.04.6 LTS ``` mnt/Data$ yt-dlp -vU --legacy-server-connect https://spankbang.com/8if5y/video/fucking+my+class+slut+hmv [debug] Command-line config: ['-vU', '--legacy-server-connect', 'https://spankbang.com/8if5y/video/fucking+my+class+slut+hmv'] [debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8 [debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [615a84447] (zip) [debug] Python 3.8.10 (CPython x86_64 64bit) - Linux-5.15.0-101-generic-x86_64-with-glibc2.29 (OpenSSL 1.1.1f 31 Mar 2020, glibc 2.31) [debug] exe versions: ffmpeg 4.2.7, ffprobe 4.2.7 [debug] Optional libraries: Cryptodome-3.6.1, brotli-1.0.7, certifi-2022.12.07, mutagen-1.44.0, requests-2.22.0, secretstorage-2.3.1, sqlite3-3.31.1, urllib3-1.25.8, websockets-10.4 [debug] Proxy map: {} [debug] Request Handlers: urllib [debug] Loaded 1803 extractors [debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest Latest version: [email protected] from yt-dlp/yt-dlp yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp) [SpankBang] Extracting URL: https://spankbang.com/8if5y/video/fucking+my+class+slut+hmv [SpankBang] 8if5y: Downloading webpage [SpankBang] 8if5y: Downloading stream JSON [SpankBang] 8if5y: Downloading m3u8 information WARNING: [SpankBang] Failed to download 
m3u8 information: The read operation timed out [SpankBang] 8if5y: Downloading m3u8 information [SpankBang] 8if5y: Downloading m3u8 information [SpankBang] 8if5y: Downloading m3u8 information [SpankBang] 8if5y: Downloading m3u8 information [debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id [debug] Default format spec: bestvideo*+bestaudio/best [info] 8if5y: Downloading 1 format(s): hls-5860 [debug] Invoking hlsnative downloader on "https://hls-uranus.sb-cd.com/hls/1/4/14296390-1080p.mp4/index-v1-a1.m3u8?_tid=14296390&d=1&m=41&secure=DWsnC7bmvE7mpz-NqIR9UA,1711309529" ``` Not sure if this points to a Windows specific bug or not Not Windows-specific; affects multiple Linux clients here (Debian, mostly, but also embedded Linux systems like NAS). FWIW, it's trivial to patch the spankbang extractor to recognize party URLs; this allows it to download from this mirror with full metadata. PR created. yt_dlp/extractor/spankbang.py ``` diff 20c20 < (?:[^/]+\.)?spankbang\.com/ --- > (?:[^/]+\.)?spankbang\.(?:com|party)/ 114c114,115 < --- > stream_domain = re.search(r'https?://(?:[^/]+\.)?(spankbang\.(?:com|party))/', url).group(1) > stream_url = 'https://' + stream_domain + '/api/videos/stream' 116c117 < 'https://spankbang.com/api/videos/stream', video_id, --- > stream_url, video_id, 166c167 < _VALID_URL = r'https?://(?:[^/]+\.)?spankbang\.com/(?P<id>[\da-z]+)/playlist/(?P<display_id>[^/]+)' --- > _VALID_URL = r'https?://(?:[^/]+\.)?spankbang\.(?:com|party)/(?P<id>[\da-z]+)/playlist/(?P<display_id>[^/]+)' ``` It would have been just the two matches, but the metadata request is currently hardcoded to use 'spankbang.com'; this modifies it to grab the domain from the request URL. Bit of ugly regex'ing there; feel free to modify to better suit project idioms. `--impersonate Edge:Windows` This seems to work for me. > `--impersonate Edge:Windows` This seems to work for me. It works, but it's really slow to download. Any tips? > > `--impersonate Edge:Windows` This seems to work for me. > > It works, but it's really slow to download. Any tips? How slow? The page download takes a little longer, but the actual file download once the video link is identified is just as fast. IMO I'd rather have consistent good connections and results than a page download 1 second faster. I'm getting these errors on the latest version too and I'm not able to use the impersonate workaround, as I'm running this on my Synology NAS. ``` yt-dlp --list-impersonate-targets [info] Available impersonate targets Client OS Source --------------------------------------- Chrome - curl_cffi (not available) Edge - curl_cffi (not available) Safari - curl_cffi (not available) ``` > I'm getting these errors on the latest version too and I'm not able to use the impersonate workaround, as I'm running this on my Synology NAS. > > ``` > yt-dlp --list-impersonate-targets > [info] Available impersonate targets > Client OS Source > --------------------------------------- > Chrome - curl_cffi (not available) > Edge - curl_cffi (not available) > Safari - curl_cffi (not available) > ``` I was able to use `yt-dlp --legacy-server-connect --verbose -S "res:1080" --add-metadata --no-check-certificates -N 4 --impersonate edge` without specifying the OS, so give that one a try. > > I'm getting these errors on the latest version too and I'm not able to use the impersonate workaround, as I'm running this on my Synology NAS. 
> > ``` > > yt-dlp --list-impersonate-targets > > [info] Available impersonate targets > > Client OS Source > > --------------------------------------- > > Chrome - curl_cffi (not available) > > Edge - curl_cffi (not available) > > Safari - curl_cffi (not available) > > ``` > > I was able to use `yt-dlp --legacy-server-connect --verbose -S "res:1080" --add-metadata --no-check-certificates -N 4 --impersonate edge` without specifying the OS, so give that one a try. Can you download from Spankbang normally? Or anyone else, if yes, can you share your settings and configuration? > > > I'm getting these errors on the latest version too and I'm not able to use the impersonate workaround, as I'm running this on my Synology NAS. > > > ``` > > > yt-dlp --list-impersonate-targets > > > [info] Available impersonate targets > > > Client OS Source > > > --------------------------------------- > > > Chrome - curl_cffi (not available) > > > Edge - curl_cffi (not available) > > > Safari - curl_cffi (not available) > > > ``` > > > > > > I was able to use `yt-dlp --legacy-server-connect --verbose -S "res:1080" --add-metadata --no-check-certificates -N 4 --impersonate edge` without specifying the OS, so give that one a try. > > Can you download from Spankbang normally? Or anyone else, if yes, can you share your settings and configuration? Just check the impersonation section in the README, from there you just need to install using `pip install "yt-dlp[default,curl-cffi]"`. then the impersonation methods will be available When I use `--impersonate Edge:Windows` it makes some progress but all downloads die with a very small percentage of the completion. Highest I've reached is about 27%. Is anyone else running into this? the patch for this should be as simple as: ```diff diff --git a/yt_dlp/extractor/spankbang.py b/yt_dlp/extractor/spankbang.py index 6805a72de..05f0bb146 100644 --- a/yt_dlp/extractor/spankbang.py +++ b/yt_dlp/extractor/spankbang.py @@ -71,9 +71,11 @@ class SpankBangIE(InfoExtractor): def _real_extract(self, url): mobj = self._match_valid_url(url) video_id = mobj.group('id') or mobj.group('id_2') + country = self.get_param('geo_bypass_country') or 'US' + self._set_cookie('.spankbang.com', 'country', country.upper()) webpage = self._download_webpage( url.replace(f'/{video_id}/embed', f'/{video_id}/video'), - video_id, headers={'Cookie': 'country=US'}) + video_id, impersonate=True) if re.search(r'<[^>]+\b(?:id|class)=["\']video_removed', webpage): raise ExtractorError( ``` someone just needs to PR it > the patch for this should be as simple as: > > ```diff > diff --git a/yt_dlp/extractor/spankbang.py b/yt_dlp/extractor/spankbang.py > index 6805a72de..05f0bb146 100644 > --- a/yt_dlp/extractor/spankbang.py > +++ b/yt_dlp/extractor/spankbang.py > @@ -71,9 +71,11 @@ class SpankBangIE(InfoExtractor): > def _real_extract(self, url): > mobj = self._match_valid_url(url) > video_id = mobj.group('id') or mobj.group('id_2') > + country = self.get_param('geo_bypass_country') or 'US' > + self._set_cookie('.spankbang.com', 'country', country.upper()) > webpage = self._download_webpage( > url.replace(f'/{video_id}/embed', f'/{video_id}/video'), > - video_id, headers={'Cookie': 'country=US'}) > + video_id, impersonate=True) > > if re.search(r'<[^>]+\b(?:id|class)=["\']video_removed', webpage): > raise ExtractorError( > ``` > > someone just needs to PR it This sounds promising. Needs someone with the source code ready to recompile and test this change when opening the PR. 
I'm not currently set up for that, but I'll do it at some point if nobody else is ready to jump in.
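For anyone who wants to prototype the cookie handling discussed above before a PR lands, here is a minimal sketch of the idea, assuming it would sit inside `SpankBangIE._real_extract` before the webpage request; the placement and naming are mine, not merged code:

```python
# Hypothetical sketch of the suggestion above -- not the actual patch.
# Only set a default 'country' cookie when the user has not already
# supplied one via --cookies / --cookies-from-browser.
if not self._get_cookies('https://spankbang.com/').get('country'):
    country = self.get_param('geo_bypass_country') or 'US'
    self._set_cookie('.spankbang.com', 'country', country.upper())

# The page is then requested without a hardcoded Cookie header
# (and with impersonation where available, as in the diff quoted above).
webpage = self._download_webpage(
    url.replace(f'/{video_id}/embed', f'/{video_id}/video'), video_id)
```

That keeps user-provided cookies authoritative while still giving a sensible default for everyone else.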
1,731,609,440,000
null
Bug Report
[ "yt_dlp/extractor/spankbang.py:SpankBangIE._real_extract" ]
[]
1
yt-dlp/yt-dlp
yt-dlp__yt-dlp-11534
f9d98509a898737c12977b2e2117277bada2c196
diff --git a/yt_dlp/extractor/ctvnews.py b/yt_dlp/extractor/ctvnews.py index 08d76d303b04..c3ddcdbee4ba 100644 --- a/yt_dlp/extractor/ctvnews.py +++ b/yt_dlp/extractor/ctvnews.py @@ -1,11 +1,24 @@ +import json import re +import urllib.parse from .common import InfoExtractor -from ..utils import orderedSet +from .ninecninemedia import NineCNineMediaIE +from ..utils import extract_attributes, orderedSet +from ..utils.traversal import find_element, traverse_obj class CTVNewsIE(InfoExtractor): - _VALID_URL = r'https?://(?:.+?\.)?ctvnews\.ca/(?:video\?(?:clip|playlist|bin)Id=|.*?)(?P<id>[0-9.]+)(?:$|[#?&])' + _BASE_REGEX = r'https?://(?:[^.]+\.)?ctvnews\.ca/' + _VIDEO_ID_RE = r'(?P<id>\d{5,})' + _PLAYLIST_ID_RE = r'(?P<id>\d\.\d{5,})' + _VALID_URL = [ + rf'{_BASE_REGEX}video/c{_VIDEO_ID_RE}', + rf'{_BASE_REGEX}video(?:-gallery)?/?\?clipId={_VIDEO_ID_RE}', + rf'{_BASE_REGEX}video/?\?(?:playlist|bin)Id={_PLAYLIST_ID_RE}', + rf'{_BASE_REGEX}(?!video/)[^?#]*?{_PLAYLIST_ID_RE}/?(?:$|[?#])', + rf'{_BASE_REGEX}(?!video/)[^?#]+\?binId={_PLAYLIST_ID_RE}', + ] _TESTS = [{ 'url': 'http://www.ctvnews.ca/video?clipId=901995', 'md5': 'b608f466c7fa24b9666c6439d766ab7e', @@ -17,13 +30,32 @@ class CTVNewsIE(InfoExtractor): 'timestamp': 1467286284, 'upload_date': '20160630', 'categories': [], + 'season_number': 0, + 'season': 'Season 0', 'tags': [], - 'season_id': 57981, + 'series': 'CTV News National | Archive | Stories 2', + 'season_id': '57981', + 'thumbnail': r're:https?://.*\.jpg$', 'duration': 764.631, - 'series': 'CTV News National story', - 'thumbnail': r're:^https?://.*\.jpg$', - 'season': 'Season 0', + }, + }, { + 'url': 'https://barrie.ctvnews.ca/video/c3030933-here_s-what_s-making-news-for-nov--15?binId=1272429', + 'md5': '8b8c2b33c5c1803e3c26bc74ff8694d5', + 'info_dict': { + 'id': '3030933', + 'ext': 'flv', + 'title': 'Here’s what’s making news for Nov. 15', + 'description': 'Here are the top stories we’re working on for CTV News at 11 for Nov. 
15', + 'thumbnail': 'http://images2.9c9media.com/image_asset/2021_2_22_a602e68e-1514-410e-a67a-e1f7cccbacab_png_2000x1125.jpg', + 'season_id': '58104', 'season_number': 0, + 'tags': [], + 'season': 'Season 0', + 'categories': [], + 'series': 'CTV News Barrie', + 'upload_date': '20241116', + 'duration': 42.943, + 'timestamp': 1731722452, }, }, { 'url': 'http://www.ctvnews.ca/video?playlistId=1.2966224', @@ -46,6 +78,65 @@ class CTVNewsIE(InfoExtractor): 'id': '1.5736957', }, 'playlist_mincount': 6, + }, { + 'url': 'https://www.ctvnews.ca/business/respondents-to-bank-of-canada-questionnaire-largely-oppose-creating-a-digital-loonie-1.6665797', + 'md5': '24bc4b88cdc17d8c3fc01dfc228ab72c', + 'info_dict': { + 'id': '2695026', + 'ext': 'flv', + 'season_id': '89852', + 'series': 'From CTV News Channel', + 'description': 'md5:796a985a23cacc7e1e2fafefd94afd0a', + 'season': '2023', + 'title': 'Bank of Canada asks public about digital currency', + 'categories': [], + 'tags': [], + 'upload_date': '20230526', + 'season_number': 2023, + 'thumbnail': 'http://images2.9c9media.com/image_asset/2019_3_28_35f5afc3-10f6-4d92-b194-8b9a86f55c6a_png_1920x1080.jpg', + 'timestamp': 1685105157, + 'duration': 253.553, + }, + }, { + 'url': 'https://stox.ctvnews.ca/video-gallery?clipId=582589', + 'md5': '135cc592df607d29dddc931f1b756ae2', + 'info_dict': { + 'id': '582589', + 'ext': 'flv', + 'categories': [], + 'timestamp': 1427906183, + 'season_number': 0, + 'duration': 125.559, + 'thumbnail': 'http://images2.9c9media.com/image_asset/2019_3_28_35f5afc3-10f6-4d92-b194-8b9a86f55c6a_png_1920x1080.jpg', + 'series': 'CTV News Stox', + 'description': 'CTV original footage of the rise and fall of the Berlin Wall.', + 'title': 'Berlin Wall', + 'season_id': '63817', + 'season': 'Season 0', + 'tags': [], + 'upload_date': '20150401', + }, + }, { + 'url': 'https://ottawa.ctvnews.ca/features/regional-contact/regional-contact-archive?binId=1.1164587#3023759', + 'md5': 'a14c0603557decc6531260791c23cc5e', + 'info_dict': { + 'id': '3023759', + 'ext': 'flv', + 'season_number': 2024, + 'timestamp': 1731798000, + 'season': '2024', + 'episode': 'Episode 125', + 'description': 'CTV News Ottawa at Six', + 'duration': 2712.076, + 'episode_number': 125, + 'upload_date': '20241116', + 'title': 'CTV News Ottawa at Six for Saturday, November 16, 2024', + 'thumbnail': 'http://images2.9c9media.com/image_asset/2019_3_28_35f5afc3-10f6-4d92-b194-8b9a86f55c6a_png_1920x1080.jpg', + 'categories': [], + 'tags': [], + 'series': 'CTV News Ottawa at Six', + 'season_id': '92667', + }, }, { 'url': 'http://www.ctvnews.ca/1.810401', 'only_matching': True, @@ -57,29 +148,35 @@ class CTVNewsIE(InfoExtractor): 'only_matching': True, }] + def _ninecninemedia_url_result(self, clip_id): + return self.url_result(f'9c9media:ctvnews_web:{clip_id}', NineCNineMediaIE, clip_id) + def _real_extract(self, url): page_id = self._match_id(url) - def ninecninemedia_url_result(clip_id): - return { - '_type': 'url_transparent', - 'id': clip_id, - 'url': f'9c9media:ctvnews_web:{clip_id}', - 'ie_key': 'NineCNineMedia', - } + if mobj := re.fullmatch(self._VIDEO_ID_RE, urllib.parse.urlparse(url).fragment): + page_id = mobj.group('id') + + if re.fullmatch(self._VIDEO_ID_RE, page_id): + return self._ninecninemedia_url_result(page_id) + + webpage = self._download_webpage(f'https://www.ctvnews.ca/{page_id}', page_id, query={ + 'ot': 'example.AjaxPageLayout.ot', + 'maxItemsPerPage': 1000000, + }) + entries = [self._ninecninemedia_url_result(clip_id) + for clip_id in 
orderedSet(re.findall(r'clip\.id\s*=\s*(\d+);', webpage))] + if not entries: + webpage = self._download_webpage(url, page_id) + if 'getAuthStates("' in webpage: + entries = [self._ninecninemedia_url_result(clip_id) for clip_id in + self._search_regex(r'getAuthStates\("([\d+,]+)"', webpage, 'clip ids').split(',')] + else: + entries = [ + self._ninecninemedia_url_result(clip_id) for clip_id in + traverse_obj(webpage, ( + {find_element(tag='jasper-player-container', html=True)}, + {extract_attributes}, 'axis-ids', {json.loads}, ..., 'axisId')) + ] - if page_id.isdigit(): - return ninecninemedia_url_result(page_id) - else: - webpage = self._download_webpage(f'http://www.ctvnews.ca/{page_id}', page_id, query={ - 'ot': 'example.AjaxPageLayout.ot', - 'maxItemsPerPage': 1000000, - }) - entries = [ninecninemedia_url_result(clip_id) for clip_id in orderedSet( - re.findall(r'clip\.id\s*=\s*(\d+);', webpage))] - if not entries: - webpage = self._download_webpage(url, page_id) - if 'getAuthStates("' in webpage: - entries = [ninecninemedia_url_result(clip_id) for clip_id in - self._search_regex(r'getAuthStates\("([\d+,]+)"', webpage, 'clip ids').split(',')] - return self.playlist_result(entries, page_id) + return self.playlist_result(entries, page_id)
[CTVNews] Does not find video on page ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE - [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field ### Checklist - [X] I'm reporting that yt-dlp is broken on a **supported** site - [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels)) - [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details - [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command) - [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates - [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue) - [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required ### Region canada ### Provide a description that is worded well enough to be understood URL: https://www.ctvnews.ca/business/respondents-to-bank-of-canada-questionnaire-largely-oppose-creating-a-digital-loonie-1.6665797 Downloads a play list but finds zero items (videos) No actual error, just no resulting video. ### Provide verbose output that clearly demonstrates the problem - [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`) - [X] If using API, add `'verbose': True` to `YoutubeDL` params instead - [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below ### Complete Verbose Output ```shell >yt-dlp.py -vU "https://www.ctvnews.ca/business/respondents-to-bank-of-canada-questionnaire-largely-oppose-creating-a-digital-loonie-1.6665797 " [debug] Command-line config: ['-vU', 'https://www.ctvnews.ca/business/respondents-to-bank-of-canada-questionnaire-largely-oppose-creating-a-digital-loonie-1.6665797 '] [debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8 [debug] yt-dlp version [email protected] from yt-dlp/yt-dlp-nightly-builds [6a9c7a2b5] (zip) [debug] Python 3.10.5 (CPython AMD64 64bit) - Windows-10-10.0.14393-SP0 (OpenSSL 1.1.1n 15 Mar 2022) [debug] exe versions: ffmpeg 6.1-full_build-www.gyan.dev (setts), ffprobe 4.3.2-2021-02-02-full_build-www.gyan.dev, rtmpdump 2.4 [debug] Optional libraries: sqlite3-3.37.2 [debug] Proxy map: {} [debug] Request Handlers: urllib [debug] Loaded 1792 extractors [debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest Latest version: [email protected] from yt-dlp/yt-dlp-nightly-builds yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp-nightly-builds) [CTVNews] Extracting URL: https://www.ctvnews.ca/business/respondents-to-bank-of-canada-questionnaire-largely-oppose-creating-a-digital-loonie-1.6665797 [CTVNews] 1.6665797: Downloading webpage [CTVNews] 1.6665797: Downloading webpage [download] Downloading playlist: 1.6665797 [CTVNews] Playlist 1.6665797: Downloading 0 items [download] Finished downloading playlist: 1.6665797 ```
This patch gets the problem video. ```diff --- old/yt_dlp/extractor/ctvnews.py +++ new/yt_dlp/extractor/ctvnews.py if 'getAuthStates("' in webpage: entries = [ninecninemedia_url_result(clip_id) for clip_id in self._search_regex(r'getAuthStates\("([\d+,]+)"', webpage, 'clip ids').split(',')] + else: + entries = [ninecninemedia_url_result(clip_id) for clip_id in orderedSet( + re.findall(r'axisId&#34;:&#34;(\d+)', webpage))] return self.playlist_result(entries, page_id) ``` It's not clear that `orderedSet()` is necessary: it's used in the original line 63 but not in the newer l.68 shown above.
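On the `orderedSet()` question at the end: it only de-duplicates while preserving first-seen order, which is useful because the same clip id can appear more than once in the page markup. A quick illustration (the IDs are just examples), assuming a yt-dlp checkout is importable:

```python
from yt_dlp.utils import orderedSet

# Duplicate clip ids collapse to one entry each, in their original order,
# so the resulting playlist does not contain repeated videos.
clip_ids = ['2695026', '2695026', '3030933']
print(orderedSet(clip_ids))  # ['2695026', '3030933']
```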
1,731,543,182,000
null
Bug Report
[ "yt_dlp/extractor/ctvnews.py:CTVNewsIE._real_extract" ]
[ "yt_dlp/extractor/ctvnews.py:CTVNewsIE._ninecninemedia_url_result" ]
1
yt-dlp/yt-dlp
yt-dlp__yt-dlp-11530
f2a4983df7a64c4e93b56f79dbd16a781bd90206
diff --git a/yt_dlp/extractor/patreon.py b/yt_dlp/extractor/patreon.py index 4d668cd37dc0..6bdeaf15710d 100644 --- a/yt_dlp/extractor/patreon.py +++ b/yt_dlp/extractor/patreon.py @@ -16,10 +16,10 @@ parse_iso8601, smuggle_url, str_or_none, - traverse_obj, url_or_none, urljoin, ) +from ..utils.traversal import traverse_obj, value class PatreonBaseIE(InfoExtractor): @@ -252,6 +252,27 @@ class PatreonIE(PatreonBaseIE): 'thumbnail': r're:^https?://.+', }, 'skip': 'Patron-only content', + }, { + # Contains a comment reply in the 'included' section + 'url': 'https://www.patreon.com/posts/114721679', + 'info_dict': { + 'id': '114721679', + 'ext': 'mp4', + 'upload_date': '20241025', + 'uploader': 'Japanalysis', + 'like_count': int, + 'thumbnail': r're:^https?://.+', + 'comment_count': int, + 'title': 'Karasawa Part 2', + 'description': 'Part 2 of this video https://www.youtube.com/watch?v=Azms2-VTASk', + 'uploader_url': 'https://www.patreon.com/japanalysis', + 'uploader_id': '80504268', + 'channel_url': 'https://www.patreon.com/japanalysis', + 'channel_follower_count': int, + 'timestamp': 1729897015, + 'channel_id': '9346307', + }, + 'params': {'getcomments': True}, }] _RETURN_TYPE = 'video' @@ -404,26 +425,24 @@ def _get_comments(self, post_id): f'posts/{post_id}/comments', post_id, query=params, note=f'Downloading comments page {page}') cursor = None - for comment in traverse_obj(response, (('data', ('included', lambda _, v: v['type'] == 'comment')), ...)): + for comment in traverse_obj(response, (('data', 'included'), lambda _, v: v['type'] == 'comment' and v['id'])): count += 1 - comment_id = comment.get('id') - attributes = comment.get('attributes') or {} - if comment_id is None: - continue author_id = traverse_obj(comment, ('relationships', 'commenter', 'data', 'id')) - author_info = traverse_obj( - response, ('included', lambda _, v: v['id'] == author_id and v['type'] == 'user', 'attributes'), - get_all=False, expected_type=dict, default={}) yield { - 'id': comment_id, - 'text': attributes.get('body'), - 'timestamp': parse_iso8601(attributes.get('created')), - 'parent': traverse_obj(comment, ('relationships', 'parent', 'data', 'id'), default='root'), - 'author_is_uploader': attributes.get('is_by_creator'), + **traverse_obj(comment, { + 'id': ('id', {str_or_none}), + 'text': ('attributes', 'body', {str}), + 'timestamp': ('attributes', 'created', {parse_iso8601}), + 'parent': ('relationships', 'parent', 'data', ('id', {value('root')}), {str}, any), + 'author_is_uploader': ('attributes', 'is_by_creator', {bool}), + }), + **traverse_obj(response, ( + 'included', lambda _, v: v['id'] == author_id and v['type'] == 'user', 'attributes', { + 'author': ('full_name', {str}), + 'author_thumbnail': ('image_url', {url_or_none}), + }), get_all=False), 'author_id': author_id, - 'author': author_info.get('full_name'), - 'author_thumbnail': author_info.get('image_url'), } if count < traverse_obj(response, ('meta', 'count')):
Patreon: --write-comments is broken ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE - [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field ### Checklist - [X] I'm reporting that yt-dlp is broken on a **supported** site - [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels)) - [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details - [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command) - [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates - [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue) - [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required ### Region _No response_ ### Provide a description that is worded well enough to be understood Downloading comments from Patreon videos is broken. Note: I didn't run the update_version script, but I built yt-dlp from the current master, be3579aaf0c3b71a0a3195e1955415d5e4d6b3d8 ### Provide verbose output that clearly demonstrates the problem - [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`) - [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead - [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below ### Complete Verbose Output ```shell [debug] Command-line config: ['-vU', 'https://www.patreon.com/posts/114721679', '--write-comments'] [debug] User config "/home/mateon/.config/yt-dlp/config": ['--compat-options=no-certifi'] [debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8 [debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [197d0b03b] [debug] Lazy loading extractors is disabled [debug] Compatibility options: no-certifi [debug] Python 3.12.7 (CPython x86_64 64bit) - Linux-6.6.53-x86_64-with-glibc2.40 (OpenSSL 3.3.2 3 Sep 2024, glibc 2.40) [debug] exe versions: ffmpeg 7.1 (setts), ffprobe 7.1, rtmpdump 2.4 [debug] Optional libraries: Cryptodome-3.20.0, brotlicffi-1.1.0.0, certifi-2024.08.30, curl_cffi-0.7.2 (unsupported), mutagen-1.47.0, requests-2.32.3, secretstorage-3.3.3, sqlite3-3.46.1, urllib3-2.2.3, websockets-13.1 [debug] Proxy map: {} [debug] Request Handlers: urllib, requests, websockets [debug] Loaded 1839 extractors [debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest Latest version: [email protected] from yt-dlp/yt-dlp yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp) [patreon] Extracting URL: https://www.patreon.com/posts/114721679 [patreon] 114721679: Downloading API JSON [patreon] 114721679: Downloading m3u8 information [debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec, channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id [patreon] 114721679: Downloading comments page 1 ERROR: 'str' object has no attribute 
'get' Traceback (most recent call last): File "/nix/store/y3q4p7gjj7kz37gkiyw17vanglji0kry-yt-dlp-be3579aaf0c3b71a0a3195e1955415d5e4d6b3d8/lib/python3.12/site-packages/yt_dlp/YoutubeDL.py", line 1625, in wrapper return func(self, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/nix/store/y3q4p7gjj7kz37gkiyw17vanglji0kry-yt-dlp-be3579aaf0c3b71a0a3195e1955415d5e4d6b3d8/lib/python3.12/site-packages/yt_dlp/YoutubeDL.py", line 1781, in __extract_info return self.process_ie_result(ie_result, download, extra_info) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/nix/store/y3q4p7gjj7kz37gkiyw17vanglji0kry-yt-dlp-be3579aaf0c3b71a0a3195e1955415d5e4d6b3d8/lib/python3.12/site-packages/yt_dlp/YoutubeDL.py", line 1840, in process_ie_result ie_result = self.process_video_result(ie_result, download=download) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/nix/store/y3q4p7gjj7kz37gkiyw17vanglji0kry-yt-dlp-be3579aaf0c3b71a0a3195e1955415d5e4d6b3d8/lib/python3.12/site-packages/yt_dlp/YoutubeDL.py", line 2926, in process_video_result self.post_extract(info_dict) File "/nix/store/y3q4p7gjj7kz37gkiyw17vanglji0kry-yt-dlp-be3579aaf0c3b71a0a3195e1955415d5e4d6b3d8/lib/python3.12/site-packages/yt_dlp/YoutubeDL.py", line 3694, in post_extract actual_post_extract(info_dict or {}) File "/nix/store/y3q4p7gjj7kz37gkiyw17vanglji0kry-yt-dlp-be3579aaf0c3b71a0a3195e1955415d5e4d6b3d8/lib/python3.12/site-packages/yt_dlp/YoutubeDL.py", line 3692, in actual_post_extract info_dict.update(post_extractor()) ^^^^^^^^^^^^^^^^ File "/nix/store/y3q4p7gjj7kz37gkiyw17vanglji0kry-yt-dlp-be3579aaf0c3b71a0a3195e1955415d5e4d6b3d8/lib/python3.12/site-packages/yt_dlp/extractor/common.py", line 3734, in extractor comments.append(next(generator)) ^^^^^^^^^^^^^^^ File "/nix/store/y3q4p7gjj7kz37gkiyw17vanglji0kry-yt-dlp-be3579aaf0c3b71a0a3195e1955415d5e4d6b3d8/lib/python3.12/site-packages/yt_dlp/extractor/patreon.py", line 409, in _get_comments comment_id = comment.get('id') ^^^^^^^^^^^ AttributeError: 'str' object has no attribute 'get' ```
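For reproducing this (or verifying a fix) outside the CLI, comment extraction can also be driven from the Python API; a minimal sketch using the post from the report, assuming the post is accessible without login:

```python
import yt_dlp

# 'getcomments' makes the extractor fetch comments; the CLI's
# --write-comments flag enables the same behaviour.
with yt_dlp.YoutubeDL({'getcomments': True}) as ydl:
    info = ydl.extract_info('https://www.patreon.com/posts/114721679', download=False)

for comment in info.get('comments') or []:
    print(comment.get('id'), comment.get('author'), comment.get('text'))
```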
1,731,521,166,000
null
Bug Report
[ "yt_dlp/extractor/patreon.py:PatreonIE._get_comments" ]
[]
1
yt-dlp/yt-dlp
yt-dlp__yt-dlp-11527
a9f85670d03ab993dc589f21a9ffffcad61392d5
diff --git a/yt_dlp/extractor/archiveorg.py b/yt_dlp/extractor/archiveorg.py index f5a55efc4ff1..2849d9fd5b0d 100644 --- a/yt_dlp/extractor/archiveorg.py +++ b/yt_dlp/extractor/archiveorg.py @@ -205,6 +205,26 @@ class ArchiveOrgIE(InfoExtractor): }, }, ], + }, { + # The reviewbody is None for one of the reviews; just need to extract data without crashing + 'url': 'https://archive.org/details/gd95-04-02.sbd.11622.sbeok.shnf/gd95-04-02d1t04.shn', + 'info_dict': { + 'id': 'gd95-04-02.sbd.11622.sbeok.shnf/gd95-04-02d1t04.shn', + 'ext': 'mp3', + 'title': 'Stuck Inside of Mobile with the Memphis Blues Again', + 'creators': ['Grateful Dead'], + 'duration': 338.31, + 'track': 'Stuck Inside of Mobile with the Memphis Blues Again', + 'description': 'md5:764348a470b986f1217ffd38d6ac7b72', + 'display_id': 'gd95-04-02d1t04.shn', + 'location': 'Pyramid Arena', + 'uploader': '[email protected]', + 'album': '1995-04-02 - Pyramid Arena', + 'upload_date': '20040519', + 'track_number': 4, + 'release_date': '19950402', + 'timestamp': 1084927901, + }, }] @staticmethod @@ -335,7 +355,7 @@ def _real_extract(self, url): info['comments'].append({ 'id': review.get('review_id'), 'author': review.get('reviewer'), - 'text': str_or_none(review.get('reviewtitle'), '') + '\n\n' + review.get('reviewbody'), + 'text': join_nonempty('reviewtitle', 'reviewbody', from_dict=review, delim='\n\n'), 'timestamp': unified_timestamp(review.get('createdate')), 'parent': 'root'})
[archive.org] ERROR: can only concatenate str (not "NoneType") to str - sporadic, only on certain URLs ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE - [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field ### Checklist - [X] I'm reporting that yt-dlp is broken on a **supported** site - [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels)) - [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details - [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command) - [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates - [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue) - [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required ### Region United States ### Provide a description that is worded well enough to be understood I'm getting the error in the title, and in the verbose output, when attempting to download certain archive.org URLs, but not others. Downloading https://archive.org/details/gd95-04-02.sbd.11622.sbeok.shnf fails with the provided log output; however, downloading, say, https://archive.org/details/gd1995-04-02.nak300.holtz.91056.flac16 is successful. However, I was able to "fix" the bug by editing my local installation of `yt-dlp`. Apparently, in some cases, the "reviewbody" attribute might be missing from the review, which causes a `TypeError` when an attempt at string concatenation is made. Forcing the body to empty-string in these cases was enough to bypass the crash and allow the download to proceed. ```diff diff --git a/yt_dlp/extractor/archiveorg.py b/yt_dlp/extractor/archiveorg.py index f5a55efc4..2869e5233 100644 --- a/yt_dlp/extractor/archiveorg.py +++ b/yt_dlp/extractor/archiveorg.py @@ -335,7 +335,7 @@ def _real_extract(self, url): info['comments'].append({ 'id': review.get('review_id'), 'author': review.get('reviewer'), - 'text': str_or_none(review.get('reviewtitle'), '') + '\n\n' + review.get('reviewbody'), + 'text': str_or_none(review.get('reviewtitle'), '') + '\n\n' + str_or_none(review.get('reviewbody'), ''), 'timestamp': unified_timestamp(review.get('createdate')), 'parent': 'root'}) ``` I put "fix" in quotes, though, because I'm not familiar enough with the yt-dlp codebase as a whole to know whether this "fix" doesn't cause its own problems; I'm decent at Python and I figured I might as well take a stab at patching over the "obvious" problem, and it did work in my case. However, it might well be the case that this hack breaks other components of the archive.org extractor - for example, some sort of other advanced functionality that I'm not familiar with (that my simple download request didn't invoke), which depends on a correctly-parsed review body in order to do its job. That said, I can certainly file that PR if a maintainer indicates that the change wouldn't have unintended consequences. 
### Provide verbose output that clearly demonstrates the problem - [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`) - [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead - [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below ### Complete Verbose Output ```shell [debug] Command-line config: ['-vU', '--playlist-items', '4:5', 'https://archive.org/details/gd95-04-02.sbd.11622.sbeok.shnf'] [debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8 [debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [197d0b03b] [debug] Python 3.12.7 (CPython x86_64 64bit) - Linux-6.11.6-arch1-1-x86_64-with-glibc2.40 (OpenSSL 3.4.0 22 Oct 2024, glibc 2.40) [debug] exe versions: ffmpeg 7.1 (setts), ffprobe 7.1, rtmpdump 2.4 [debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.08.30, mutagen-1.47.0, requests-2.32.3, sqlite3-3.46.1, urllib3-1.26.20, websockets-12.0 [debug] Proxy map: {} [debug] Request Handlers: urllib, requests [debug] Loaded 1838 extractors [debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest Latest version: [email protected] from yt-dlp/yt-dlp yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp) [archive.org] Extracting URL: https://archive.org/details/gd95-04-02.sbd.11622.sbeok.shnf [archive.org] gd95-04-02.sbd.11622.sbeok.shnf: Downloading webpage [archive.org] gd95-04-02.sbd.11622.sbeok.shnf: Downloading JSON metadata ERROR: can only concatenate str (not "NoneType") to str Traceback (most recent call last): File "/usr/lib/python3.12/site-packages/yt_dlp/YoutubeDL.py", line 1625, in wrapper return func(self, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.12/site-packages/yt_dlp/YoutubeDL.py", line 1760, in __extract_info ie_result = ie.extract(url) ^^^^^^^^^^^^^^^ File "/usr/lib/python3.12/site-packages/yt_dlp/extractor/common.py", line 742, in extract ie_result = self._real_extract(url) ^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.12/site-packages/yt_dlp/extractor/archiveorg.py", line 338, in _real_extract 'text': str_or_none(review.get('reviewtitle'), '') + '\n\n' + review.get('reviewbody'), ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~ TypeError: can only concatenate str (not "NoneType") to str ```
ideally, `join_nonempty` would've been used here ```diff diff --git a/yt_dlp/extractor/archiveorg.py b/yt_dlp/extractor/archiveorg.py index f5a55efc4..52fd02acc 100644 --- a/yt_dlp/extractor/archiveorg.py +++ b/yt_dlp/extractor/archiveorg.py @@ -335,7 +335,7 @@ def _real_extract(self, url): info['comments'].append({ 'id': review.get('review_id'), 'author': review.get('reviewer'), - 'text': str_or_none(review.get('reviewtitle'), '') + '\n\n' + review.get('reviewbody'), + 'text': join_nonempty('reviewtitle', 'reviewbody', from_dict=review, delim='\n\n'), 'timestamp': unified_timestamp(review.get('createdate')), 'parent': 'root'}) ```
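For context on why `join_nonempty` is the cleaner fix: the original expression crashes the moment `reviewbody` is `None`, while `join_nonempty` drops empty parts before joining (and avoids leaving a stray trailing delimiter). A small sketch against a made-up review dict, not real archive.org data:

```python
from yt_dlp.utils import join_nonempty, str_or_none

# Made-up review item mirroring the failing case: the body is absent
review = {'review_id': '123', 'reviewer': 'someone', 'reviewtitle': 'Great show', 'reviewbody': None}

try:
    str_or_none(review.get('reviewtitle'), '') + '\n\n' + review.get('reviewbody')
except TypeError as exc:
    print(exc)  # can only concatenate str (not "NoneType") to str

# join_nonempty skips None/empty values entirely before joining
print(repr(join_nonempty('reviewtitle', 'reviewbody', from_dict=review, delim='\n\n')))
# 'Great show'
```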
1,731,452,423,000
null
Bug Report
[ "yt_dlp/extractor/archiveorg.py:ArchiveOrgIE._real_extract" ]
[]
1
yt-dlp/yt-dlp
yt-dlp__yt-dlp-11513
a9f85670d03ab993dc589f21a9ffffcad61392d5
diff --git a/yt_dlp/extractor/facebook.py b/yt_dlp/extractor/facebook.py index 2bcb5a8411f1..91e2f3489cea 100644 --- a/yt_dlp/extractor/facebook.py +++ b/yt_dlp/extractor/facebook.py @@ -563,13 +563,13 @@ def extract_from_jsmods_instances(js_data): return extract_video_data(try_get( js_data, lambda x: x['jsmods']['instances'], list) or []) - def extract_dash_manifest(video, formats): + def extract_dash_manifest(vid_data, formats, mpd_url=None): dash_manifest = traverse_obj( - video, 'dash_manifest', 'playlist', 'dash_manifest_xml_string', expected_type=str) + vid_data, 'dash_manifest', 'playlist', 'dash_manifest_xml_string', 'manifest_xml', expected_type=str) if dash_manifest: formats.extend(self._parse_mpd_formats( compat_etree_fromstring(urllib.parse.unquote_plus(dash_manifest)), - mpd_url=url_or_none(video.get('dash_manifest_url')))) + mpd_url=url_or_none(video.get('dash_manifest_url')) or mpd_url)) def process_formats(info): # Downloads with browser's User-Agent are rate limited. Working around @@ -619,9 +619,12 @@ def parse_graphql_video(video): video = video['creation_story'] video['owner'] = traverse_obj(video, ('short_form_video_context', 'video_owner')) video.update(reel_info) - fmt_data = traverse_obj(video, ('videoDeliveryLegacyFields', {dict})) or video + formats = [] q = qualities(['sd', 'hd']) + + # Legacy formats extraction + fmt_data = traverse_obj(video, ('videoDeliveryLegacyFields', {dict})) or video for key, format_id in (('playable_url', 'sd'), ('playable_url_quality_hd', 'hd'), ('playable_url_dash', ''), ('browser_native_hd_url', 'hd'), ('browser_native_sd_url', 'sd')): @@ -629,7 +632,7 @@ def parse_graphql_video(video): if not playable_url: continue if determine_ext(playable_url) == 'mpd': - formats.extend(self._extract_mpd_formats(playable_url, video_id)) + formats.extend(self._extract_mpd_formats(playable_url, video_id, fatal=False)) else: formats.append({ 'format_id': format_id, @@ -638,6 +641,28 @@ def parse_graphql_video(video): 'url': playable_url, }) extract_dash_manifest(fmt_data, formats) + + # New videoDeliveryResponse formats extraction + fmt_data = traverse_obj(video, ('videoDeliveryResponseFragment', 'videoDeliveryResponseResult')) + mpd_urls = traverse_obj(fmt_data, ('dash_manifest_urls', ..., 'manifest_url', {url_or_none})) + dash_manifests = traverse_obj(fmt_data, ('dash_manifests', lambda _, v: v['manifest_xml'])) + for idx, dash_manifest in enumerate(dash_manifests): + extract_dash_manifest(dash_manifest, formats, mpd_url=traverse_obj(mpd_urls, idx)) + if not dash_manifests: + # Only extract from MPD URLs if the manifests are not already provided + for mpd_url in mpd_urls: + formats.extend(self._extract_mpd_formats(mpd_url, video_id, fatal=False)) + for prog_fmt in traverse_obj(fmt_data, ('progressive_urls', lambda _, v: v['progressive_url'])): + format_id = traverse_obj(prog_fmt, ('metadata', 'quality', {str.lower})) + formats.append({ + 'format_id': format_id, + # sd, hd formats w/o resolution info should be deprioritized below DASH + 'quality': q(format_id) - 3, + 'url': prog_fmt['progressive_url'], + }) + for m3u8_url in traverse_obj(fmt_data, ('hls_playlist_urls', ..., 'hls_playlist_url', {url_or_none})): + formats.extend(self._extract_m3u8_formats(m3u8_url, video_id, 'mp4', fatal=False, m3u8_id='hls')) + if not formats: # Do not append false positive entry w/o any formats return
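A note on the `traverse_obj` paths used in the new `videoDeliveryResponseResult` handling above — the sketch below runs the same path syntax against a hand-written payload that only imitates the field names the patch reads (it is not captured Facebook data):

```python
from yt_dlp.utils import url_or_none
from yt_dlp.utils.traversal import traverse_obj

# Hypothetical payload shaped like videoDeliveryResponseResult
fmt_data = {
    'dash_manifest_urls': [{'manifest_url': 'https://example.com/a.mpd'}],
    'dash_manifests': [{'manifest_xml': '<MPD></MPD>'}],
    'progressive_urls': [{'progressive_url': 'https://example.com/sd.mp4',
                          'metadata': {'quality': 'SD'}}],
}

# `...` branches over every list item; `{url_or_none}` validates each hit
mpd_urls = traverse_obj(fmt_data, ('dash_manifest_urls', ..., 'manifest_url', {url_or_none}))
# the lambda keeps only entries that actually carry a manifest_xml string
dash_manifests = traverse_obj(fmt_data, ('dash_manifests', lambda _, v: v['manifest_xml']))
# `{str.lower}` maps 'SD'/'HD' to the lowercase format_id convention
quality = traverse_obj(fmt_data, ('progressive_urls', 0, 'metadata', 'quality', {str.lower}))

print(mpd_urls)             # ['https://example.com/a.mpd']
print(len(dash_manifests))  # 1
print(quality)              # 'sd'
```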
[facebook] ERROR: No video formats found (on >= 2024.11.04) ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE - [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field ### Checklist - [X] I'm reporting that yt-dlp is broken on a **supported** site - [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels)) - [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details - [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command) - [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates - [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue) - [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required ### Region US ### Provide a description that is worded well enough to be understood Please note this is not related to #11337 and is new. I had the problem reported in that issue, and it went away for about a week or so after updating to the nightly build. This problem is new in the past week. I've duplicated this on Windows and Mac, and it appears as though it's specifically related to private group videos. I've tested `--cookies-from-browser` for `chrome`, `firefox`, and `safari`, all with the same results. If needed, I can invite any developers to the group for troubleshooting, videos are SFW (youth hockey video). 
### Provide verbose output that clearly demonstrates the problem - [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`) - [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead - [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below ### Complete Verbose Output ```shell [debug] Command-line config: ['--cookies-from-browser', 'safari', 'https://www.facebook.com/1358150084/videos/7350931248365050/', '-vU'] [debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8 [debug] yt-dlp version [email protected] from yt-dlp/yt-dlp-nightly-builds [f13df591d] (pip) [debug] Python 3.12.2 (CPython arm64 64bit) - macOS-14.6.1-arm64-arm-64bit (OpenSSL 3.3.2 3 Sep 2024) [debug] exe versions: ffmpeg 7.1 (setts), ffprobe 7.1, rtmpdump 2.4 [debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, mutagen-1.47.0, requests-2.32.3, sqlite3-3.43.2, urllib3-2.2.3, websockets-13.1 [debug] Proxy map: {} [debug] Trying secondary cookie location [debug] skipping 4 bytes (unknown page header field): b'\x00\x00\x00\x00' [Cookies] Loading cookie 0/ 2[debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown page header field): b'\x00\x00\x00\x00' [Cookies] Loading cookie 0/ 2[debug] skipping 4 bytes (unknown record field 1): b'\x01\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 1): b'\x01\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown page header field): b'\x00\x00\x00\x00' [Cookies] Loading cookie 0/ 4[debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown page header field): b'\x00\x00\x00\x00' [Cookies] Loading cookie 0/ 1[debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' 
[debug] skipping 4 bytes (unknown page header field): b'\x00\x00\x00\x00' [Cookies] Loading cookie 0/ 1[debug] skipping 4 bytes (unknown record field 1): b'\x02\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown page header field): b'\x00\x00\x00\x00' [Cookies] Loading cookie 0/ 3[debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown page header field): b'\x00\x00\x00\x00' [Cookies] Loading cookie 0/ 2[debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown page header field): b'\x00\x00\x00\x00' [Cookies] Loading cookie 0/ 4[debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown page header field): b'\x00\x00\x00\x00' [Cookies] Loading cookie 0/ 3[debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown page header field): 
b'\x00\x00\x00\x00' [Cookies] Loading cookie 0/ 2[debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown page header field): b'\x00\x00\x00\x00' [Cookies] Loading cookie 0/ 1[debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown page header field): b'\x00\x00\x00\x00' [Cookies] Loading cookie 0/ 1[debug] skipping 4 bytes (unknown record field 1): b'\x02\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown page header field): b'\x00\x00\x00\x00' [Cookies] Loading cookie 0/ 2[debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown page header field): b'\x00\x00\x00\x00' [Cookies] Loading cookie 0/ 9[debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record 
field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown page header field): b'\x00\x00\x00\x00' [Cookies] Loading cookie 0/ 7[debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown page header field): b'\x00\x00\x00\x00' [Cookies] Loading cookie 0/ 1[debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown page header field): b'\x00\x00\x00\x00' [Cookies] Loading cookie 0/ 2[debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown page header field): b'\x00\x00\x00\x00' [Cookies] Loading cookie 0/ 1[debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown page header field): b'\x00\x00\x00\x00' [Cookies] Loading cookie 0/ 3[debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes 
(unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown page header field): b'\x00\x00\x00\x00' [Cookies] Loading cookie 0/ 1[debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown page header field): b'\x00\x00\x00\x00' [Cookies] Loading cookie 0/ 1[debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown page header field): b'\x00\x00\x00\x00' [Cookies] Loading cookie 0/ 1[debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown page header field): b'\x00\x00\x00\x00' [Cookies] Loading cookie 0/ 1[debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown page header field): b'\x00\x00\x00\x00' [Cookies] Loading cookie 0/ 1[debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown page header field): b'\x00\x00\x00\x00' [Cookies] Loading cookie 0/ 1[debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown page header field): b'\x00\x00\x00\x00' [Cookies] Loading cookie 0/ 3[debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown page header field): b'\x00\x00\x00\x00' [Cookies] Loading cookie 0/ 1[debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] 
skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown page header field): b'\x00\x00\x00\x00' [Cookies] Loading cookie 0/ 2[debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown page header field): b'\x00\x00\x00\x00' [Cookies] Loading cookie 0/ 8[debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown page header field): b'\x00\x00\x00\x00' [Cookies] Loading cookie 0/ 3[debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown page header field): b'\x00\x00\x00\x00' [Cookies] Loading cookie 0/ 3[debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] 
skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown page header field): b'\x00\x00\x00\x00' [Cookies] Loading cookie 0/ 1[debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown page header field): b'\x00\x00\x00\x00' [Cookies] Loading cookie 0/ 2[debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown page header field): b'\x00\x00\x00\x00' [Cookies] Loading cookie 0/ 1[debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown page header field): b'\x00\x00\x00\x00' [Cookies] Loading cookie 0/ 1[debug] skipping 4 bytes (unknown record field 1): b'\x00\x00\x00\x00' [debug] skipping 4 bytes (unknown record field 2): b'\x00\x00\x00\x00' [debug] skipping 8 bytes (unknown record field 3): b'\x00\x00\x00\x00\x00\x00\x00\x00' [debug] skipping 87 bytes (footer): b'\x00\x02\xe4R\x07\x17 \x05\x00\x00\x00Kbplist00\xd1\x01\x02_\x10\x18NSHTTPCookieAcceptPolicy\x10\x02\x08\x0b&\x00\x00\x00\x00\x00\x00\x01\x01\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00(' Extracted 82 cookies from safari [debug] Request Handlers: urllib, requests, websockets [debug] Loaded 1839 extractors [debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest Latest version: [email protected] from yt-dlp/yt-dlp-nightly-builds yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp-nightly-builds) [facebook] Extracting URL: https://www.facebook.com/1358150084/videos/7350931248365050/ [facebook] 7350931248365050: Downloading webpage ERROR: [facebook] 7350931248365050: No video formats found!; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. 
Confirm you are on the latest version using yt-dlp -U Traceback (most recent call last): File "/Users/justine/.pyenv/versions/3.12.2/lib/python3.12/site-packages/yt_dlp/YoutubeDL.py", line 1625, in wrapper return func(self, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/justine/.pyenv/versions/3.12.2/lib/python3.12/site-packages/yt_dlp/YoutubeDL.py", line 1781, in __extract_info return self.process_ie_result(ie_result, download, extra_info) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/justine/.pyenv/versions/3.12.2/lib/python3.12/site-packages/yt_dlp/YoutubeDL.py", line 1840, in process_ie_result ie_result = self.process_video_result(ie_result, download=download) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/justine/.pyenv/versions/3.12.2/lib/python3.12/site-packages/yt_dlp/YoutubeDL.py", line 2846, in process_video_result self.raise_no_formats(info_dict) File "/Users/justine/.pyenv/versions/3.12.2/lib/python3.12/site-packages/yt_dlp/YoutubeDL.py", line 1122, in raise_no_formats raise ExtractorError(msg, video_id=info['id'], ie=info['extractor'], yt_dlp.utils.ExtractorError: [facebook] 7350931248365050: No video formats found!; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U ```
The debug output during cookies extraction is a bit concerning; are you sure the facebook cookies are being successfully extracted/passed? Have you tried with `--cookies` instead? I didn't, but here's the debug output pulling cookies from chrome giving the same end result without all the cookie parsing output: ``` [debug] Command-line config: ['--cookies-from-browser', 'chrome', 'https://www.facebook.com/1358150084/videos/7350931248365050/', '-vU'] [debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8 [debug] yt-dlp version [email protected] from yt-dlp/yt-dlp-nightly-builds [f13df591d] (pip) [debug] Python 3.12.2 (CPython arm64 64bit) - macOS-14.6.1-arm64-arm-64bit (OpenSSL 3.3.2 3 Sep 2024) [debug] exe versions: ffmpeg 7.1 (setts), ffprobe 7.1, rtmpdump 2.4 [debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, mutagen-1.47.0, requests-2.32.3, sqlite3-3.43.2, urllib3-2.2.3, websockets-13.1 [debug] Proxy map: {} Extracting cookies from chrome [debug] Extracting cookies from: "/Users/justine/Library/Application Support/Google/Chrome/Default/Cookies" [debug] using find-generic-password to obtain password from OSX keychain Extracted 308 cookies from chrome [debug] cookie version breakdown: {'v10': 314, 'other': 0, 'unencrypted': 0} [debug] Request Handlers: urllib, requests, websockets [debug] Loaded 1839 extractors [debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest [debug] Downloading _update_spec from https://github.com/yt-dlp/yt-dlp-nightly-builds/releases/latest/download/_update_spec Current version: [email protected] from yt-dlp/yt-dlp-nightly-builds Latest version: [email protected] from yt-dlp/yt-dlp-nightly-builds ERROR: You installed yt-dlp with pip or using the wheel from PyPi; Use that to update [facebook] Extracting URL: https://www.facebook.com/1358150084/videos/7350931248365050/ [facebook] 7350931248365050: Downloading webpage ERROR: [facebook] 7350931248365050: No video formats found!; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U Traceback (most recent call last): File "/Users/justine/.pyenv/versions/3.12.2/lib/python3.12/site-packages/yt_dlp/YoutubeDL.py", line 1625, in wrapper return func(self, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/justine/.pyenv/versions/3.12.2/lib/python3.12/site-packages/yt_dlp/YoutubeDL.py", line 1781, in __extract_info return self.process_ie_result(ie_result, download, extra_info) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/justine/.pyenv/versions/3.12.2/lib/python3.12/site-packages/yt_dlp/YoutubeDL.py", line 1840, in process_ie_result ie_result = self.process_video_result(ie_result, download=download) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/justine/.pyenv/versions/3.12.2/lib/python3.12/site-packages/yt_dlp/YoutubeDL.py", line 2846, in process_video_result self.raise_no_formats(info_dict) File "/Users/justine/.pyenv/versions/3.12.2/lib/python3.12/site-packages/yt_dlp/YoutubeDL.py", line 1122, in raise_no_formats raise ExtractorError(msg, video_id=info['id'], ie=info['extractor'], yt_dlp.utils.ExtractorError: [facebook] 7350931248365050: No video formats found!; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. 
Confirm you are on the latest version using yt-dlp -U ``` If you want to add `--write-pages` to your command (with `--cookies-from-browser chrome`) and send me the resulting `.dump` file(s), I could take a look at what can be done (if they tell me anything at all). I'd advise not to post them publicly, since they could contain personal information (e.g. your FB username / ID / display name). You could send them to me over [discord](https://discord.gg/H5MNcFW63r) (same username) or via email: `bashonly ( a t) proton mail [d o t] com` Sent to your proton mail. I have a similar but maybe different issue. Same kind of response but with a publicly available video. https://www.facebook.com/watch/?v=1085099419908696&rdid=tfjgd4h6VuK74V0w [1085099419908696-560p-סוף שבוע טוב וגם מצחיק קצת 🤣🤣🤣🤣🤣🤣🤣🤣🤣🤣 | By ‎שלמה טל‎.info.json](https://github.com/user-attachments/files/17709270/1085099419908696-560p-.By.info.json) [fb1085099419908696.http.txt](https://github.com/user-attachments/files/17709274/fb1085099419908696.http.txt) [1085099419908696_https_-_www.facebook.com_100037646286664_videos_1085099419908696_rdid=tfjgd4h6VuK74V0w.dump.html.txt](https://github.com/user-attachments/files/17709277/1085099419908696_https_-_www.facebook.com_100037646286664_videos_1085099419908696_rdid.tfjgd4h6VuK74V0w.dump.html.txt) and this patch get the job done: ```patch diff --git a/yt_dlp/extractor/facebook.py b/yt_dlp/extractor/facebook.py index 2bcb5a841..c4fa88c05 100644 --- a/yt_dlp/extractor/facebook.py +++ b/yt_dlp/extractor/facebook.py @@ -566,6 +566,10 @@ def extract_from_jsmods_instances(js_data): def extract_dash_manifest(video, formats): dash_manifest = traverse_obj( video, 'dash_manifest', 'playlist', 'dash_manifest_xml_string', expected_type=str) + if not dash_manifest: + videoDeliveryResponseFragment = ( + 'videoDeliveryResponseFragment', 'videoDeliveryResponseResult', 'dash_manifests', 0, 'manifest_xml') + dash_manifest = traverse_obj(video, videoDeliveryResponseFragment, expected_type=str) if dash_manifest: formats.extend(self._parse_mpd_formats( compat_etree_fromstring(urllib.parse.unquote_plus(dash_manifest)), ``` Is it PR worthy, or too kludgy. Give me some feedback and I'll submit this I manually applied the fix from @refack above to my installation and can confirm my videos are downloading now
1,731,379,665,000
null
Bug Report
[ "yt_dlp/extractor/facebook.py:FacebookIE._extract_from_url" ]
[]
1
yt-dlp/yt-dlp
yt-dlp__yt-dlp-11480
be3579aaf0c3b71a0a3195e1955415d5e4d6b3d8
diff --git a/yt_dlp/extractor/rutube.py b/yt_dlp/extractor/rutube.py index 2c416811af35..abf9aec7276b 100644 --- a/yt_dlp/extractor/rutube.py +++ b/yt_dlp/extractor/rutube.py @@ -2,15 +2,18 @@ from .common import InfoExtractor from ..utils import ( + UnsupportedError, bool_or_none, determine_ext, int_or_none, + js_to_json, parse_qs, - traverse_obj, + str_or_none, try_get, unified_timestamp, url_or_none, ) +from ..utils.traversal import traverse_obj class RutubeBaseIE(InfoExtractor): @@ -19,7 +22,7 @@ def _download_api_info(self, video_id, query=None): query = {} query['format'] = 'json' return self._download_json( - f'http://rutube.ru/api/video/{video_id}/', + f'https://rutube.ru/api/video/{video_id}/', video_id, 'Downloading video JSON', 'Unable to download video JSON', query=query) @@ -61,18 +64,21 @@ def _download_api_options(self, video_id, query=None): query = {} query['format'] = 'json' return self._download_json( - f'http://rutube.ru/api/play/options/{video_id}/', + f'https://rutube.ru/api/play/options/{video_id}/', video_id, 'Downloading options JSON', 'Unable to download options JSON', headers=self.geo_verification_headers(), query=query) - def _extract_formats(self, options, video_id): + def _extract_formats_and_subtitles(self, options, video_id): formats = [] + subtitles = {} for format_id, format_url in options['video_balancer'].items(): ext = determine_ext(format_url) if ext == 'm3u8': - formats.extend(self._extract_m3u8_formats( - format_url, video_id, 'mp4', m3u8_id=format_id, fatal=False)) + fmts, subs = self._extract_m3u8_formats_and_subtitles( + format_url, video_id, 'mp4', m3u8_id=format_id, fatal=False) + formats.extend(fmts) + self._merge_subtitles(subs, target=subtitles) elif ext == 'f4m': formats.extend(self._extract_f4m_formats( format_url, video_id, f4m_id=format_id, fatal=False)) @@ -82,11 +88,19 @@ def _extract_formats(self, options, video_id): 'format_id': format_id, }) for hls_url in traverse_obj(options, ('live_streams', 'hls', ..., 'url', {url_or_none})): - formats.extend(self._extract_m3u8_formats(hls_url, video_id, ext='mp4', fatal=False)) - return formats - - def _download_and_extract_formats(self, video_id, query=None): - return self._extract_formats( + fmts, subs = self._extract_m3u8_formats_and_subtitles( + hls_url, video_id, 'mp4', fatal=False, m3u8_id='hls') + formats.extend(fmts) + self._merge_subtitles(subs, target=subtitles) + for caption in traverse_obj(options, ('captions', lambda _, v: url_or_none(v['file']))): + subtitles.setdefault(caption.get('code') or 'ru', []).append({ + 'url': caption['file'], + 'name': caption.get('langTitle'), + }) + return formats, subtitles + + def _download_and_extract_formats_and_subtitles(self, video_id, query=None): + return self._extract_formats_and_subtitles( self._download_api_options(video_id, query=query), video_id) @@ -97,8 +111,8 @@ class RutubeIE(RutubeBaseIE): _EMBED_REGEX = [r'<iframe[^>]+?src=(["\'])(?P<url>(?:https?:)?//rutube\.ru/(?:play/)?embed/[\da-z]{32}.*?)\1'] _TESTS = [{ - 'url': 'http://rutube.ru/video/3eac3b4561676c17df9132a9a1e62e3e/', - 'md5': 'e33ac625efca66aba86cbec9851f2692', + 'url': 'https://rutube.ru/video/3eac3b4561676c17df9132a9a1e62e3e/', + 'md5': '3d73fdfe5bb81b9aef139e22ef3de26a', 'info_dict': { 'id': '3eac3b4561676c17df9132a9a1e62e3e', 'ext': 'mp4', @@ -111,26 +125,25 @@ class RutubeIE(RutubeBaseIE): 'upload_date': '20131016', 'age_limit': 0, 'view_count': int, - 'thumbnail': 'http://pic.rutubelist.ru/video/d2/a0/d2a0aec998494a396deafc7ba2c82add.jpg', + 'thumbnail': 
'https://pic.rutubelist.ru/video/d2/a0/d2a0aec998494a396deafc7ba2c82add.jpg', 'categories': ['Новости и СМИ'], 'chapters': [], }, - 'expected_warnings': ['Unable to download f4m'], }, { - 'url': 'http://rutube.ru/play/embed/a10e53b86e8f349080f718582ce4c661', + 'url': 'https://rutube.ru/play/embed/a10e53b86e8f349080f718582ce4c661', 'only_matching': True, }, { - 'url': 'http://rutube.ru/embed/a10e53b86e8f349080f718582ce4c661', + 'url': 'https://rutube.ru/embed/a10e53b86e8f349080f718582ce4c661', 'only_matching': True, }, { - 'url': 'http://rutube.ru/video/3eac3b4561676c17df9132a9a1e62e3e/?pl_id=4252', + 'url': 'https://rutube.ru/video/3eac3b4561676c17df9132a9a1e62e3e/?pl_id=4252', 'only_matching': True, }, { 'url': 'https://rutube.ru/video/10b3a03fc01d5bbcc632a2f3514e8aab/?pl_type=source', 'only_matching': True, }, { 'url': 'https://rutube.ru/video/private/884fb55f07a97ab673c7d654553e0f48/?p=x2QojCumHTS3rsKHWXN8Lg', - 'md5': 'd106225f15d625538fe22971158e896f', + 'md5': '4fce7b4fcc7b1bcaa3f45eb1e1ad0dd7', 'info_dict': { 'id': '884fb55f07a97ab673c7d654553e0f48', 'ext': 'mp4', @@ -143,11 +156,10 @@ class RutubeIE(RutubeBaseIE): 'upload_date': '20221210', 'age_limit': 0, 'view_count': int, - 'thumbnail': 'http://pic.rutubelist.ru/video/f2/d4/f2d42b54be0a6e69c1c22539e3152156.jpg', + 'thumbnail': 'https://pic.rutubelist.ru/video/f2/d4/f2d42b54be0a6e69c1c22539e3152156.jpg', 'categories': ['Видеоигры'], 'chapters': [], }, - 'expected_warnings': ['Unable to download f4m'], }, { 'url': 'https://rutube.ru/video/c65b465ad0c98c89f3b25cb03dcc87c6/', 'info_dict': { @@ -156,17 +168,16 @@ class RutubeIE(RutubeBaseIE): 'chapters': 'count:4', 'categories': ['Бизнес и предпринимательство'], 'description': 'md5:252feac1305257d8c1bab215cedde75d', - 'thumbnail': 'http://pic.rutubelist.ru/video/71/8f/718f27425ea9706073eb80883dd3787b.png', + 'thumbnail': 'https://pic.rutubelist.ru/video/71/8f/718f27425ea9706073eb80883dd3787b.png', 'duration': 782, 'age_limit': 0, 'uploader_id': '23491359', 'timestamp': 1677153329, 'view_count': int, 'upload_date': '20230223', - 'title': 'Бизнес с нуля: найм сотрудников. Интервью с директором строительной компании', + 'title': 'Бизнес с нуля: найм сотрудников. Интервью с директором строительной компании #1', 'uploader': 'Стас Быков', }, - 'expected_warnings': ['Unable to download f4m'], }, { 'url': 'https://rutube.ru/live/video/c58f502c7bb34a8fcdd976b221fca292/', 'info_dict': { @@ -174,7 +185,7 @@ class RutubeIE(RutubeBaseIE): 'ext': 'mp4', 'categories': ['Телепередачи'], 'description': '', - 'thumbnail': 'http://pic.rutubelist.ru/video/14/19/14190807c0c48b40361aca93ad0867c7.jpg', + 'thumbnail': 'https://pic.rutubelist.ru/video/14/19/14190807c0c48b40361aca93ad0867c7.jpg', 'live_status': 'is_live', 'age_limit': 0, 'uploader_id': '23460655', @@ -184,6 +195,24 @@ class RutubeIE(RutubeBaseIE): 'title': r're:Первый канал. 
Прямой эфир \d{4}-\d{2}-\d{2} \d{2}:\d{2}$', 'uploader': 'Первый канал', }, + }, { + 'url': 'https://rutube.ru/play/embed/03a9cb54bac3376af4c5cb0f18444e01/', + 'info_dict': { + 'id': '03a9cb54bac3376af4c5cb0f18444e01', + 'ext': 'mp4', + 'age_limit': 0, + 'description': '', + 'title': 'Церемония начала торгов акциями ПАО «ЕвроТранс»', + 'chapters': [], + 'upload_date': '20240829', + 'duration': 293, + 'uploader': 'MOEX - Московская биржа', + 'timestamp': 1724946628, + 'thumbnail': 'https://pic.rutubelist.ru/video/2e/24/2e241fddb459baf0fa54acfca44874f4.jpg', + 'view_count': int, + 'uploader_id': '38420507', + 'categories': ['Интервью'], + }, }, { 'url': 'https://rutube.ru/video/5ab908fccfac5bb43ef2b1e4182256b0/', 'only_matching': True, @@ -192,40 +221,46 @@ class RutubeIE(RutubeBaseIE): 'only_matching': True, }] - @classmethod - def suitable(cls, url): - return False if RutubePlaylistIE.suitable(url) else super().suitable(url) - def _real_extract(self, url): video_id = self._match_id(url) query = parse_qs(url) info = self._download_and_extract_info(video_id, query) - info['formats'] = self._download_and_extract_formats(video_id, query) - return info + formats, subtitles = self._download_and_extract_formats_and_subtitles(video_id, query) + return { + **info, + 'formats': formats, + 'subtitles': subtitles, + } class RutubeEmbedIE(RutubeBaseIE): IE_NAME = 'rutube:embed' IE_DESC = 'Rutube embedded videos' - _VALID_URL = r'https?://rutube\.ru/(?:video|play)/embed/(?P<id>[0-9]+)' + _VALID_URL = r'https?://rutube\.ru/(?:video|play)/embed/(?P<id>[0-9]+)(?:[?#/]|$)' _TESTS = [{ - 'url': 'http://rutube.ru/video/embed/6722881?vk_puid37=&vk_puid38=', + 'url': 'https://rutube.ru/video/embed/6722881?vk_puid37=&vk_puid38=', 'info_dict': { 'id': 'a10e53b86e8f349080f718582ce4c661', 'ext': 'mp4', 'timestamp': 1387830582, 'upload_date': '20131223', 'uploader_id': '297833', - 'description': 'Видео группы ★http://vk.com/foxkidsreset★ музей Fox Kids и Jetix<br/><br/> восстановлено и сделано в шикоформате subziro89 http://vk.com/subziro89', 'uploader': 'subziro89 ILya', 'title': 'Мистический городок Эйри в Индиан 5 серия озвучка subziro89', + 'age_limit': 0, + 'duration': 1395, + 'chapters': [], + 'description': 'md5:a5acea57bbc3ccdc3cacd1f11a014b5b', + 'view_count': int, + 'thumbnail': 'https://pic.rutubelist.ru/video/d3/03/d3031f4670a6e6170d88fb3607948418.jpg', + 'categories': ['Сериалы'], }, 'params': { 'skip_download': True, }, }, { - 'url': 'http://rutube.ru/play/embed/8083783', + 'url': 'https://rutube.ru/play/embed/8083783', 'only_matching': True, }, { # private video @@ -240,11 +275,12 @@ def _real_extract(self, url): query = parse_qs(url) options = self._download_api_options(embed_id, query) video_id = options['effective_video'] - formats = self._extract_formats(options, video_id) + formats, subtitles = self._extract_formats_and_subtitles(options, video_id) info = self._download_and_extract_info(video_id, query) info.update({ 'extractor_key': 'Rutube', 'formats': formats, + 'subtitles': subtitles, }) return info @@ -295,14 +331,14 @@ class RutubeTagsIE(RutubePlaylistBaseIE): IE_DESC = 'Rutube tags' _VALID_URL = r'https?://rutube\.ru/tags/video/(?P<id>\d+)' _TESTS = [{ - 'url': 'http://rutube.ru/tags/video/1800/', + 'url': 'https://rutube.ru/tags/video/1800/', 'info_dict': { 'id': '1800', }, 'playlist_mincount': 68, }] - _PAGE_TEMPLATE = 'http://rutube.ru/api/tags/video/%s/?page=%s&format=json' + _PAGE_TEMPLATE = 'https://rutube.ru/api/tags/video/%s/?page=%s&format=json' class 
RutubeMovieIE(RutubePlaylistBaseIE): @@ -310,8 +346,8 @@ class RutubeMovieIE(RutubePlaylistBaseIE): IE_DESC = 'Rutube movies' _VALID_URL = r'https?://rutube\.ru/metainfo/tv/(?P<id>\d+)' - _MOVIE_TEMPLATE = 'http://rutube.ru/api/metainfo/tv/%s/?format=json' - _PAGE_TEMPLATE = 'http://rutube.ru/api/metainfo/tv/%s/video?page=%s&format=json' + _MOVIE_TEMPLATE = 'https://rutube.ru/api/metainfo/tv/%s/?format=json' + _PAGE_TEMPLATE = 'https://rutube.ru/api/metainfo/tv/%s/video?page=%s&format=json' def _real_extract(self, url): movie_id = self._match_id(url) @@ -327,62 +363,82 @@ class RutubePersonIE(RutubePlaylistBaseIE): IE_DESC = 'Rutube person videos' _VALID_URL = r'https?://rutube\.ru/video/person/(?P<id>\d+)' _TESTS = [{ - 'url': 'http://rutube.ru/video/person/313878/', + 'url': 'https://rutube.ru/video/person/313878/', 'info_dict': { 'id': '313878', }, - 'playlist_mincount': 37, + 'playlist_mincount': 36, }] - _PAGE_TEMPLATE = 'http://rutube.ru/api/video/person/%s/?page=%s&format=json' + _PAGE_TEMPLATE = 'https://rutube.ru/api/video/person/%s/?page=%s&format=json' class RutubePlaylistIE(RutubePlaylistBaseIE): IE_NAME = 'rutube:playlist' IE_DESC = 'Rutube playlists' - _VALID_URL = r'https?://rutube\.ru/(?:video|(?:play/)?embed)/[\da-z]{32}/\?.*?\bpl_id=(?P<id>\d+)' + _VALID_URL = r'https?://rutube\.ru/plst/(?P<id>\d+)' _TESTS = [{ - 'url': 'https://rutube.ru/video/cecd58ed7d531fc0f3d795d51cee9026/?pl_id=3097&pl_type=tag', + 'url': 'https://rutube.ru/plst/308547/', 'info_dict': { - 'id': '3097', + 'id': '308547', }, - 'playlist_count': 27, - }, { - 'url': 'https://rutube.ru/video/10b3a03fc01d5bbcc632a2f3514e8aab/?pl_id=4252&pl_type=source', - 'only_matching': True, + 'playlist_mincount': 22, }] - - _PAGE_TEMPLATE = 'http://rutube.ru/api/playlist/%s/%s/?page=%s&format=json' - - @classmethod - def suitable(cls, url): - from ..utils import int_or_none, parse_qs - - if not super().suitable(url): - return False - params = parse_qs(url) - return params.get('pl_type', [None])[0] and int_or_none(params.get('pl_id', [None])[0]) - - def _next_page_url(self, page_num, playlist_id, item_kind): - return self._PAGE_TEMPLATE % (item_kind, playlist_id, page_num) - - def _real_extract(self, url): - qs = parse_qs(url) - playlist_kind = qs['pl_type'][0] - playlist_id = qs['pl_id'][0] - return self._extract_playlist(playlist_id, item_kind=playlist_kind) + _PAGE_TEMPLATE = 'https://rutube.ru/api/playlist/custom/%s/videos?page=%s&format=json' class RutubeChannelIE(RutubePlaylistBaseIE): IE_NAME = 'rutube:channel' IE_DESC = 'Rutube channel' - _VALID_URL = r'https?://rutube\.ru/channel/(?P<id>\d+)/videos' + _VALID_URL = r'https?://rutube\.ru/(?:channel/(?P<id>\d+)|u/(?P<slug>\w+))(?:/(?P<section>videos|shorts|playlists))?' 
_TESTS = [{ 'url': 'https://rutube.ru/channel/639184/videos/', 'info_dict': { - 'id': '639184', + 'id': '639184_videos', + }, + 'playlist_mincount': 129, + }, { + 'url': 'https://rutube.ru/channel/25902603/shorts/', + 'info_dict': { + 'id': '25902603_shorts', }, - 'playlist_mincount': 133, + 'playlist_mincount': 277, + }, { + 'url': 'https://rutube.ru/channel/25902603/', + 'info_dict': { + 'id': '25902603', + }, + 'playlist_mincount': 406, + }, { + 'url': 'https://rutube.ru/u/rutube/videos/', + 'info_dict': { + 'id': '23704195_videos', + }, + 'playlist_mincount': 113, }] - _PAGE_TEMPLATE = 'http://rutube.ru/api/video/person/%s/?page=%s&format=json' + _PAGE_TEMPLATE = 'https://rutube.ru/api/video/person/%s/?page=%s&format=json&origin__type=%s' + + def _next_page_url(self, page_num, playlist_id, section): + origin_type = { + 'videos': 'rtb,rst,ifrm,rspa', + 'shorts': 'rshorts', + None: '', + }.get(section) + return self._PAGE_TEMPLATE % (playlist_id, page_num, origin_type) + + def _real_extract(self, url): + playlist_id, slug, section = self._match_valid_url(url).group('id', 'slug', 'section') + if section == 'playlists': + raise UnsupportedError(url) + if slug: + webpage = self._download_webpage(url, slug) + redux_state = self._search_json( + r'window\.reduxState\s*=', webpage, 'redux state', slug, transform_source=js_to_json) + playlist_id = traverse_obj(redux_state, ( + 'api', 'queries', lambda k, _: k.startswith('channelIdBySlug'), + 'data', 'channel_id', {int}, {str_or_none}, any)) + playlist = self._extract_playlist(playlist_id, section=section) + if section: + playlist['id'] = f'{playlist_id}_{section}' + return playlist
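The subtitles portion of this patch is easy to exercise in isolation. Below is a sketch driven by a hand-written `options` payload that only imitates the `captions` shape the code traverses (field names taken from the patch, values invented):

```python
from yt_dlp.utils import url_or_none
from yt_dlp.utils.traversal import traverse_obj

options = {
    'captions': [
        {'code': 'ru', 'langTitle': 'Русский', 'file': 'https://example.com/ru.vtt'},
        {'code': 'en', 'langTitle': 'English', 'file': 'https://example.com/en.vtt'},
        {'code': 'de', 'langTitle': 'Deutsch', 'file': 'not a url'},  # dropped by url_or_none
    ],
}

subtitles = {}
for caption in traverse_obj(options, ('captions', lambda _, v: url_or_none(v['file']))):
    subtitles.setdefault(caption.get('code') or 'ru', []).append({
        'url': caption['file'],
        'name': caption.get('langTitle'),
    })

print(subtitles)
# {'ru': [{'url': 'https://example.com/ru.vtt', 'name': 'Русский'}],
#  'en': [{'url': 'https://example.com/en.vtt', 'name': 'English'}]}
```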
[rutube] Connection to bl.rutube.ru timed out => No video formats found! ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE - [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field ### Checklist - [X] I'm reporting that yt-dlp is broken on a **supported** site - [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels)) - [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details - [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command) - [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates - [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue) - [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required ### Region Russia ### Provide a description that is worded well enough to be understood Current yt-dlp doesn't download any videos from rutube.ru at all. The errors are `Connection to bl.rutube.ru timed out. (connect timeout=20.0)` followed by `No video formats found!`. With and without VPN. Also tried `--user-agent "Mozilla/5.0 (X11; Linux x86_64; rv:131.0) Gecko/20100101 Firefox/131.0" --cookies-from-browser firefox` with no help. But in Firefox video plays. 
### Provide verbose output that clearly demonstrates the problem - [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`) - [X] If using API, add `'verbose': True` to `YoutubeDL` params instead - [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below ### Complete Verbose Output ```shell [debug] Command-line config: ['https://rutube.ru/video/d144161b680dbc3a864c1587632b18f7/', '-vU'] [debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8 [debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [197d0b03b] (zip) [debug] Python 3.12.6 (CPython x86_64 64bit) - Linux-6.11.2-amd64-x86_64-with-glibc2.40 (OpenSSL 3.3.2 3 Sep 2024, glibc 2.40) [debug] exe versions: ffmpeg 7.0.2-3 (setts), ffprobe 7.0.2-3, phantomjs 2.1.1 [debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.08.30, mutagen-1.47.0, requests-2.32.3, sqlite3-3.46.1, urllib3-2.0.7, websockets-10.4 [debug] Proxy map: {} [debug] Request Handlers: urllib, requests [debug] Loaded 1838 extractors [debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest Latest version: [email protected] from yt-dlp/yt-dlp yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp) [rutube] Extracting URL: https://rutube.ru/video/d144161b680dbc3a864c1587632b18f7/ [rutube] d144161b680dbc3a864c1587632b18f7: Downloading video JSON [rutube] d144161b680dbc3a864c1587632b18f7: Downloading options JSON [rutube] d144161b680dbc3a864c1587632b18f7: Downloading m3u8 information WARNING: [rutube] Failed to download m3u8 information: (<urllib3.connection.HTTPConnection object at 0x7fca20f0ccb0>, 'Connection to bl.rutube.ru timed out. (connect timeout=20.0)') [rutube] d144161b680dbc3a864c1587632b18f7: Downloading m3u8 information WARNING: [rutube] Failed to download m3u8 information: (<urllib3.connection.HTTPConnection object at 0x7fca20f0d460>, 'Connection to bl.rutube.ru timed out. (connect timeout=20.0)') ERROR: [rutube] d144161b680dbc3a864c1587632b18f7: No video formats found!; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U Traceback (most recent call last): File "/usr/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 1625, in wrapper return func(self, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 1781, in __extract_info return self.process_ie_result(ie_result, download, extra_info) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 1840, in process_ie_result ie_result = self.process_video_result(ie_result, download=download) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 2846, in process_video_result self.raise_no_formats(info_dict) File "/usr/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 1122, in raise_no_formats raise ExtractorError(msg, video_id=info['id'], ie=info['extractor'], yt_dlp.utils.ExtractorError: [rutube] d144161b680dbc3a864c1587632b18f7: No video formats found!; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U ```
Exactly same behavior for both master and nightly: ``` [...] Latest version: [email protected] from yt-dlp/yt-dlp-master-builds yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp-master-builds) [...] WARNING: [rutube] Failed to download m3u8 information: (<urllib3.connection.HTTPConnection object at 0x7fbde7180410>, 'Connection to bl.rutube.ru timed out. (connect timeout=20.0)') [...] ERROR: [rutube] d144161b680dbc3a864c1587632b18f7: No video formats found! ``` ``` [...] Latest version: [email protected] from yt-dlp/yt-dlp-nightly-builds yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp-nightly-builds) [...] WARNING: [rutube] Failed to download m3u8 information: (<urllib3.connection.HTTPConnection object at 0x7fbde7180410>, 'Connection to bl.rutube.ru timed out. (connect timeout=20.0)') [...] ERROR: [rutube] d144161b680dbc3a864c1587632b18f7: No video formats found! ``` The Firefox's Developer Tools are stating the browser does connect successfully to bl.rutube.ru: ``` GET https://bl.rutube.ru/route/d144161b680dbc3a864c1587632b18f7.m3u8?... HTTP/2 200 server: QRATOR date: Fri, 08 Nov 2024 08:05:52 GMT content-type: text/html; charset=utf-8 temperature: {'short': 2, 'long': 4} dest-server: river-2-301 client-ip: X.X.X.X x-frontend-config-version: 3.8.2 access-control-allow-origin: * access-control-allow-methods: GET, HEAD access-control-allow-headers: Range access-control-expose-headers: Range, Content-Length, Accept-Ranges, Date access-control-allow-credentials: true content-encoding: br X-Firefox-Spdy: h2 ``` Seems like QRATOR (rutube's anti-ddos service of choice) somehow detects yt-dlp's requests and block them? Just tried to "wget" the m3u8 link directly from console and it works too. So it allows any user-agent, even wget. The problem must be somewhere inside the extractor or the urllib3: ``` $ wget "https://bl.rutube.ru/route/d144161b680dbc3a864c1587632b18f7.m3u8?..." Resolving bl.rutube.ru (bl.rutube.ru)... 178.248.232.241 Connecting to bl.rutube.ru (bl.rutube.ru)|178.248.232.241|:443... connected. HTTP request sent, awaiting response... 200 OK Length: 1642 (1.6K) [text/html] => correct m3u8 file. ``` Seems like this fixes it: ```diff diff --git a/yt_dlp/extractor/rutube.py b/yt_dlp/extractor/rutube.py index 2c416811a..c77e0bf8a 100644 --- a/yt_dlp/extractor/rutube.py +++ b/yt_dlp/extractor/rutube.py @@ -61,7 +61,7 @@ def _download_api_options(self, video_id, query=None): query = {} query['format'] = 'json' return self._download_json( - f'http://rutube.ru/api/play/options/{video_id}/', + f'https://rutube.ru/api/play/options/{video_id}/', video_id, 'Downloading options JSON', 'Unable to download options JSON', headers=self.geo_verification_headers(), query=query) ``` Yep, can confirm this patch does the job. 
It works with it: ``` [debug] Command-line config: ['https://rutube.ru/video/d144161b680dbc3a864c1587632b18f7/', '-vU'] [debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8 [debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [197d0b03b] (linux_exe) [debug] Python 3.12.6 (CPython x86_64 64bit) - Linux-6.11.2-amd64-x86_64-with-glibc2.40 (OpenSSL 3.3.2 3 Sep 2024, glibc 2.40) [debug] exe versions: ffmpeg 7.0.2-3 (setts), ffprobe 7.0.2-3, phantomjs broken [debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, mutagen-1.47.0, requests-2.32.3, sqlite3-3.46.1, urllib3-2.2.3, websockets-13.1 [debug] Proxy map: {} [debug] Request Handlers: urllib, requests, websockets [debug] Loaded 1839 extractors [debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest Latest version: [email protected] from yt-dlp/yt-dlp yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp) [rutube] Extracting URL: https://rutube.ru/video/d144161b680dbc3a864c1587632b18f7/ [rutube] d144161b680dbc3a864c1587632b18f7: Downloading video JSON [rutube] d144161b680dbc3a864c1587632b18f7: Downloading options JSON [rutube] d144161b680dbc3a864c1587632b18f7: Downloading m3u8 information [rutube] d144161b680dbc3a864c1587632b18f7: Downloading m3u8 information [debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec, channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id [debug] Default format spec: bestvideo*+bestaudio/best [info] d144161b680dbc3a864c1587632b18f7: Downloading 1 format(s): m3u8-406-1 [debug] Invoking hlsnative downloader on "https://river-1.rutube.ru/hls-vod/QJgc4k_gFwQXtJK3H3QXaA/1731661417/1944/rutube-ds-origin-118-1/ebb02cbd26b04bf5a83e50c038e49b90.mp4.m3u8?i=1280x720_406" [hlsnative] Downloading m3u8 manifest [hlsnative] Total fragments: 45 [download] Destination: yt-dlp, the best youtube-dl alternative with more features [d144161b680dbc3a864c1587632b18f7].mp4 [download] 100% of 12.26MiB in 00:00:08 at 1.50MiB/s ``` Additional info: patch is working. Without the patch it's not possible to download any video. Hello! Could you create a pull request? I have the same problem. If you can't do it, I can try to do it. I will create a PR and merge it later today I confirm that this works from UK. There are several other http: URLs used in the extractor that appear to be unaffected. Maybe also adopt patch from #11415 ? Maybe change ALL "http" to "https" in extractor?
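For anyone verifying the fix outside yt-dlp, the change above is only a scheme switch on the options request. A minimal standalone sketch of that request (the URL pattern and `format=json` query are taken from the patch, the video id is the one from this report, and the exact response layout is not guaranteed):

```python
import requests

# Hypothetical standalone check of the endpoint the patch moves to HTTPS
# (mirrors RutubeBaseIE._download_api_options after the fix).
video_id = "d144161b680dbc3a864c1587632b18f7"
resp = requests.get(
    f"https://rutube.ru/api/play/options/{video_id}/",
    params={"format": "json"},
    timeout=20,
)
resp.raise_for_status()
options = resp.json()
# The options JSON should point at the bl.rutube.ru m3u8 balancer used for formats.
print(sorted(options))
```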
1,731,086,610,000
null
Bug Report
[ "yt_dlp/extractor/rutube.py:RutubeBaseIE._download_api_info", "yt_dlp/extractor/rutube.py:RutubeBaseIE._download_api_options", "yt_dlp/extractor/rutube.py:RutubeBaseIE._extract_formats", "yt_dlp/extractor/rutube.py:RutubeBaseIE._download_and_extract_formats", "yt_dlp/extractor/rutube.py:RutubeIE.suitable", "yt_dlp/extractor/rutube.py:RutubeIE._real_extract", "yt_dlp/extractor/rutube.py:RutubeEmbedIE._real_extract", "yt_dlp/extractor/rutube.py:RutubePlaylistIE.suitable", "yt_dlp/extractor/rutube.py:RutubePlaylistIE._next_page_url", "yt_dlp/extractor/rutube.py:RutubePlaylistIE._real_extract" ]
[ "yt_dlp/extractor/rutube.py:RutubeBaseIE._extract_formats_and_subtitles", "yt_dlp/extractor/rutube.py:RutubeBaseIE._download_and_extract_formats_and_subtitles", "yt_dlp/extractor/rutube.py:RutubeChannelIE._next_page_url", "yt_dlp/extractor/rutube.py:RutubeChannelIE._real_extract" ]
10
yt-dlp/yt-dlp
yt-dlp__yt-dlp-11478
be3579aaf0c3b71a0a3195e1955415d5e4d6b3d8
diff --git a/yt_dlp/extractor/cloudflarestream.py b/yt_dlp/extractor/cloudflarestream.py index 8a409461a8bc..9e9e89a801fa 100644 --- a/yt_dlp/extractor/cloudflarestream.py +++ b/yt_dlp/extractor/cloudflarestream.py @@ -8,7 +8,7 @@ class CloudflareStreamIE(InfoExtractor): _DOMAIN_RE = r'(?:cloudflarestream\.com|(?:videodelivery|bytehighway)\.net)' _EMBED_RE = rf'(?:embed\.|{_SUBDOMAIN_RE}){_DOMAIN_RE}/embed/[^/?#]+\.js\?(?:[^#]+&)?video=' _ID_RE = r'[\da-f]{32}|eyJ[\w-]+\.[\w-]+\.[\w-]+' - _VALID_URL = rf'https?://(?:{_SUBDOMAIN_RE}{_DOMAIN_RE}/|{_EMBED_RE})(?P<id>{_ID_RE})' + _VALID_URL = rf'https?://(?:{_SUBDOMAIN_RE}(?P<domain>{_DOMAIN_RE})/|{_EMBED_RE})(?P<id>{_ID_RE})' _EMBED_REGEX = [ rf'<script[^>]+\bsrc=(["\'])(?P<url>(?:https?:)?//{_EMBED_RE}(?:{_ID_RE})(?:(?!\1).)*)\1', rf'<iframe[^>]+\bsrc=["\'](?P<url>https?://{_SUBDOMAIN_RE}{_DOMAIN_RE}/[\da-f]{{32}})', @@ -19,7 +19,7 @@ class CloudflareStreamIE(InfoExtractor): 'id': '31c9291ab41fac05471db4e73aa11717', 'ext': 'mp4', 'title': '31c9291ab41fac05471db4e73aa11717', - 'thumbnail': 'https://videodelivery.net/31c9291ab41fac05471db4e73aa11717/thumbnails/thumbnail.jpg', + 'thumbnail': 'https://cloudflarestream.com/31c9291ab41fac05471db4e73aa11717/thumbnails/thumbnail.jpg', }, 'params': { 'skip_download': 'm3u8', @@ -30,7 +30,7 @@ class CloudflareStreamIE(InfoExtractor): 'id': '0e8e040aec776862e1d632a699edf59e', 'ext': 'mp4', 'title': '0e8e040aec776862e1d632a699edf59e', - 'thumbnail': 'https://videodelivery.net/0e8e040aec776862e1d632a699edf59e/thumbnails/thumbnail.jpg', + 'thumbnail': 'https://cloudflarestream.com/0e8e040aec776862e1d632a699edf59e/thumbnails/thumbnail.jpg', }, }, { 'url': 'https://watch.cloudflarestream.com/9df17203414fd1db3e3ed74abbe936c1', @@ -54,7 +54,7 @@ class CloudflareStreamIE(InfoExtractor): 'id': 'eaef9dea5159cf968be84241b5cedfe7', 'ext': 'mp4', 'title': 'eaef9dea5159cf968be84241b5cedfe7', - 'thumbnail': 'https://videodelivery.net/eaef9dea5159cf968be84241b5cedfe7/thumbnails/thumbnail.jpg', + 'thumbnail': 'https://cloudflarestream.com/eaef9dea5159cf968be84241b5cedfe7/thumbnails/thumbnail.jpg', }, 'params': { 'skip_download': 'm3u8', @@ -62,8 +62,9 @@ class CloudflareStreamIE(InfoExtractor): }] def _real_extract(self, url): - video_id = self._match_id(url) - domain = 'bytehighway.net' if 'bytehighway.net/' in url else 'videodelivery.net' + video_id, domain = self._match_valid_url(url).group('id', 'domain') + if domain != 'bytehighway.net': + domain = 'cloudflarestream.com' base_url = f'https://{domain}/{video_id}/' if '.' in video_id: video_id = self._parse_json(base64.urlsafe_b64decode(
CloudFlareStream "No video formats found!" ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE - [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field ### Checklist - [X] I'm reporting that yt-dlp is broken on a **supported** site - [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels)) - [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details - [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command) - [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates - [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue) - [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required ### Region Spain ### Used URL https://watch.cloudflarestream.com/e8766bfd7771e53b5dc26c76b650d91a/ ### Provide a description that is worded well enough to be understood Can't download a public CloudFlareStream video. I get CERTIFICATE_VERIFY_FAILED warnings and then an error stating "No video formats found!" Expected result: A video download from the provided link. ### Provide verbose output that clearly demonstrates the problem - [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`) - [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead - [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below ### Complete Verbose Output ```shell [debug] Command-line config: ['-vU', 'https://watch.cloudflarestream.com/e8766bfd7771e53b5dc26c76b650d91a/'] [debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8 [debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [197d0b03b] (win_exe) [debug] Python 3.10.11 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 1.1.1t 7 Feb 2023) [debug] exe versions: ffmpeg 5.1.2-essentials_build-www.gyan.dev (setts), ffprobe 5.1.2-essentials_build-www.gyan.dev [debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.40.1, urllib3-2.2.3, websockets-13.1 [debug] Proxy map: {} [debug] Request Handlers: urllib, requests, websockets, curl_cffi [debug] Loaded 1838 extractors [debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest Latest version: [email protected] from yt-dlp/yt-dlp yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp) [CloudflareStream] Extracting URL: https://watch.cloudflarestream.com/e8766bfd7771e53b5dc26c76b650d91a/ [CloudflareStream] e8766bfd7771e53b5dc26c76b650d91a: Downloading m3u8 information WARNING: [CloudflareStream] Failed to download m3u8 information: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1007) [CloudflareStream] e8766bfd7771e53b5dc26c76b650d91a: Downloading MPD manifest WARNING: 
[CloudflareStream] Failed to download MPD manifest: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1007) ERROR: [CloudflareStream] e8766bfd7771e53b5dc26c76b650d91a: No video formats found!; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U Traceback (most recent call last): File "yt_dlp\YoutubeDL.py", line 1625, in wrapper File "yt_dlp\YoutubeDL.py", line 1781, in __extract_info File "yt_dlp\YoutubeDL.py", line 1840, in process_ie_result File "yt_dlp\YoutubeDL.py", line 2846, in process_video_result File "yt_dlp\YoutubeDL.py", line 1122, in raise_no_formats yt_dlp.utils.ExtractorError: [CloudflareStream] e8766bfd7771e53b5dc26c76b650d91a: No video formats found!; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U ```
i still meet this issue too I am able to manually download the video.mpd file with https://videodelivery.net/eaef9dea5159cf968be84241b5cedfe7/manifest/video.mpd So I'm not sure what's going wrong, maybe the extractor is malforming the url? When running the command with "--no-check-certificate" I get a 404 error when trying to fetch m3u8 and mpd files: ```shell [debug] Command-line config: ['-vU', 'https://watch.cloudflarestream.com/e8766bfd7771e53b5dc26c76b650d91a/', '--no-check-certificate'] [debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8 [debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [197d0b03b] (win_exe) [debug] Python 3.10.11 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 1.1.1t 7 Feb 2023) [debug] exe versions: ffmpeg 5.1.2-essentials_build-www.gyan.dev (setts), ffprobe 5.1.2-essentials_build-www.gyan.dev [debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.40.1, urllib3-2.2.3, websockets-13.1 [debug] Proxy map: {} [debug] Request Handlers: urllib, requests, websockets, curl_cffi [debug] Loaded 1838 extractors [debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest Latest version: [email protected] from yt-dlp/yt-dlp yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp) [CloudflareStream] Extracting URL: https://watch.cloudflarestream.com/e8766bfd7771e53b5dc26c76b650d91a/ [CloudflareStream] e8766bfd7771e53b5dc26c76b650d91a: Downloading m3u8 information WARNING: [CloudflareStream] Failed to download m3u8 information: HTTP Error 404: Not Found [CloudflareStream] e8766bfd7771e53b5dc26c76b650d91a: Downloading MPD manifest WARNING: [CloudflareStream] Failed to download MPD manifest: HTTP Error 404: Not Found ERROR: [CloudflareStream] e8766bfd7771e53b5dc26c76b650d91a: No video formats found!; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U Traceback (most recent call last): File "yt_dlp\YoutubeDL.py", line 1625, in wrapper File "yt_dlp\YoutubeDL.py", line 1781, in __extract_info File "yt_dlp\YoutubeDL.py", line 1840, in process_ie_result File "yt_dlp\YoutubeDL.py", line 2846, in process_video_result File "yt_dlp\YoutubeDL.py", line 1122, in raise_no_formats yt_dlp.utils.ExtractorError: [CloudflareStream] e8766bfd7771e53b5dc26c76b650d91a: No video formats found!; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. 
Confirm you are on the latest version using yt-dlp -U ``` Playing around with Postman and GET requests to videodelivery.net don't go through but they do for cloudflarestream.com GET https://videodelivery.net/eaef9dea5159cf968be84241b5cedfe7/manifest/video.mpd ``` <html> <body>Object not found</body> </html> ``` GET https://cloudflarestream.com/eaef9dea5159cf968be84241b5cedfe7/manifest/video.mpd ``` <?xml version="1.0" encoding="UTF-8"?> <MPD xmlns="urn:mpeg:dash:schema:mpd:2011" profiles="urn:mpeg:dash:profile:isoff-live:2011" type="static" mediaPresentationDuration="PT38S" minBufferTime="PT8S"> <Period id="0"> <AdaptationSet id="800962650" mimeType="audio/mp4" segmentAlignment="true" lang="original"> <Representation id="449134446" audioSamplingRate="44100" bandwidth="142554" codecs="mp4a.40.2"> <AudioChannelConfiguration schemeIdUri="urn:mpeg:dash:23003:3:audio_channel_configuration:2011" value="1"></AudioChannelConfiguration> <SegmentTemplate duration="172695" initialization="../../eaef9dea5159cf968be84241b5cedfe7/audio/142/init.mp4?p=eyJ0eXBlIjoiaW5pdCIsInZpZGVvSUQiOiJlYWVmOWRlYTUxNTljZjk2OGJlODQyNDFiNWNlZGZlNyIsIm93bmVySUQiOjY2MzQxNTgsImNyZWF0b3JJRCI6InVwcmlkZSIsInVzZVZPRE9URkUiOmZhbHNlLCJmcm9tTWV6emFuaW5lIjpmYWxzZSwic3RvcmFnZVByb3ZpZGVyIjoyLCJ0cmFjayI6ImFjZjUxZDAwYTlkNmNiODNmNGNhNzI1ZDZiOTM2MjI3IiwicmVuZGl0aW9uIjoiNDQ5MTM0NDQ2IiwibXV4aW5nIjoiNDk5ODMzNjE1In0&amp;s=L3jChcK8L8Kgwo0zwrTDhU_DpncjwpzCqMO0esOCC8O5wonCvRTCohMOQsOQwpBMAg" media="../../eaef9dea5159cf968be84241b5cedfe7/audio/142/seg_$Number$.mp4?p=eyJ0eXBlIjoic2VnbWVudCIsInZpZGVvSUQiOiJlYWVmOWRlYTUxNTljZjk2OGJlODQyNDFiNWNlZGZlNyIsIm93bmVySUQiOjY2MzQxNTgsImNyZWF0b3JJRCI6InVwcmlkZSIsInNlZ21lbnREdXJhdGlvblNlY3MiOjMuOTE1OTgzMTU1MTY2ODI4NSwidXNlVk9ET1RGRSI6ZmFsc2UsImZyb21NZXp6YW5pbmUiOmZhbHNlLCJzdG9yYWdlUHJvdmlkZXIiOjIsInRyYWNrIjoiYWNmNTFkMDBhOWQ2Y2I4M2Y0Y2E3MjVkNmI5MzYyMjciLCJyZW5kaXRpb24iOiI0NDkxMzQ0NDYiLCJtdXhpbmciOiI0OTk4MzM2MTUifQ&amp;s=XW0iRcO7w4zCvsO5wqbCrxo0TMO3w5bDgcKhb8Oaw7dtXMKFScKfwpHCt8OBwpJaOcOv" startNumber="1" timescale="44100"></SegmentTemplate> </Representation> </AdaptationSet> <AdaptationSet id="386409604" mimeType="video/mp4" segmentAlignment="true" > <Representation id="449134449" bandwidth="405430" codecs="avc1.42c015" frameRate="30/1" height="240" width="426"> <SegmentTemplate duration="120000" initialization="../../eaef9dea5159cf968be84241b5cedfe7/video/240/init.mp4?p=eyJ0eXBlIjoiaW5pdCIsInZpZGVvSUQiOiJlYWVmOWRlYTUxNTljZjk2OGJlODQyNDFiNWNlZGZlNyIsIm93bmVySUQiOjY2MzQxNTgsImNyZWF0b3JJRCI6InVwcmlkZSIsInVzZVZPRE9URkUiOmZhbHNlLCJmcm9tTWV6emFuaW5lIjpmYWxzZSwic3RvcmFnZVByb3ZpZGVyIjoyLCJ0cmFjayI6IjI5ODBlNzIyOTQ5NjkzOGQ4OGQyODEwZmRjNWM5ZWQ5IiwicmVuZGl0aW9uIjoiNDQ5MTM0NDQ5IiwibXV4aW5nIjoiNDk5ODMzNjE4In0&amp;s=w6XCjsKzw7fCmsKKwo_DoAvDq0LCniYHa39Tw6JEw6BUwojDiENvDMO9wqw7ccOwwpM" media="../../eaef9dea5159cf968be84241b5cedfe7/video/240/seg_$Number$.mp4?p=eyJ0eXBlIjoic2VnbWVudCIsInZpZGVvSUQiOiJlYWVmOWRlYTUxNTljZjk2OGJlODQyNDFiNWNlZGZlNyIsIm93bmVySUQiOjY2MzQxNTgsImNyZWF0b3JJRCI6InVwcmlkZSIsInNlZ21lbnREdXJhdGlvblNlY3MiOjQsInVzZVZPRE9URkUiOmZhbHNlLCJmcm9tTWV6emFuaW5lIjpmYWxzZSwic3RvcmFnZVByb3ZpZGVyIjoyLCJ0cmFjayI6IjI5ODBlNzIyOTQ5NjkzOGQ4OGQyODEwZmRjNWM5ZWQ5IiwicmVuZGl0aW9uIjoiNDQ5MTM0NDQ5IiwibXV4aW5nIjoiNDk5ODMzNjE4In0&amp;s=bcOaIsOHPV_Cu8KiwqppCzkvw70Uwp8XHTbDqztcY8KVfkg6wqIHS8Ktw54" startNumber="1" timescale="30000"></SegmentTemplate> </Representation> <Representation id="449134457" bandwidth="680674" codecs="avc1.4d401e" frameRate="30/1" height="360" width="640"> <SegmentTemplate duration="120000" 
initialization="../../eaef9dea5159cf968be84241b5cedfe7/video/360/init.mp4?p=eyJ0eXBlIjoiaW5pdCIsInZpZGVvSUQiOiJlYWVmOWRlYTUxNTljZjk2OGJlODQyNDFiNWNlZGZlNyIsIm93bmVySUQiOjY2MzQxNTgsImNyZWF0b3JJRCI6InVwcmlkZSIsInVzZVZPRE9URkUiOmZhbHNlLCJmcm9tTWV6emFuaW5lIjpmYWxzZSwic3RvcmFnZVByb3ZpZGVyIjoyLCJ0cmFjayI6IjI5ODBlNzIyOTQ5NjkzOGQ4OGQyODEwZmRjNWM5ZWQ5IiwicmVuZGl0aW9uIjoiNDQ5MTM0NDU3IiwibXV4aW5nIjoiNDk5ODMzNjI2In0&amp;s=w5_DpDHCncOVwpVtw4HDnMOOwqfDl8Oyw6PDrMO1w5Y5PmlSJnUxFyHCq8KHwpbDqMO0w4M" media="../../eaef9dea5159cf968be84241b5cedfe7/video/360/seg_$Number$.mp4?p=eyJ0eXBlIjoic2VnbWVudCIsInZpZGVvSUQiOiJlYWVmOWRlYTUxNTljZjk2OGJlODQyNDFiNWNlZGZlNyIsIm93bmVySUQiOjY2MzQxNTgsImNyZWF0b3JJRCI6InVwcmlkZSIsInNlZ21lbnREdXJhdGlvblNlY3MiOjQsInVzZVZPRE9URkUiOmZhbHNlLCJmcm9tTWV6emFuaW5lIjpmYWxzZSwic3RvcmFnZVByb3ZpZGVyIjoyLCJ0cmFjayI6IjI5ODBlNzIyOTQ5NjkzOGQ4OGQyODEwZmRjNWM5ZWQ5IiwicmVuZGl0aW9uIjoiNDQ5MTM0NDU3IiwibXV4aW5nIjoiNDk5ODMzNjI2In0&amp;s=wp07RcKIN8ORB8Oww5fCkB02AMO2dEEgwqpRAhVcZMOywqbCnxhuwqDCqHrDrg" startNumber="1" timescale="30000"></SegmentTemplate> </Representation> <Representation id="449134468" bandwidth="1113178" codecs="avc1.4d401f" frameRate="30/1" height="480" width="854"> <SegmentTemplate duration="120000" initialization="../../eaef9dea5159cf968be84241b5cedfe7/video/480/init.mp4?p=eyJ0eXBlIjoiaW5pdCIsInZpZGVvSUQiOiJlYWVmOWRlYTUxNTljZjk2OGJlODQyNDFiNWNlZGZlNyIsIm93bmVySUQiOjY2MzQxNTgsImNyZWF0b3JJRCI6InVwcmlkZSIsInVzZVZPRE9URkUiOmZhbHNlLCJmcm9tTWV6emFuaW5lIjpmYWxzZSwic3RvcmFnZVByb3ZpZGVyIjoyLCJ0cmFjayI6IjI5ODBlNzIyOTQ5NjkzOGQ4OGQyODEwZmRjNWM5ZWQ5IiwicmVuZGl0aW9uIjoiNDQ5MTM0NDY4IiwibXV4aW5nIjoiNDk5ODMzNjM3In0&amp;s=w4LCtMKUw5InVcKkRcKJw6dNw55_USATD8KCw5zDi0rCpMOyWjXDqsKAwqDCnQzDrMKd" media="../../eaef9dea5159cf968be84241b5cedfe7/video/480/seg_$Number$.mp4?p=eyJ0eXBlIjoic2VnbWVudCIsInZpZGVvSUQiOiJlYWVmOWRlYTUxNTljZjk2OGJlODQyNDFiNWNlZGZlNyIsIm93bmVySUQiOjY2MzQxNTgsImNyZWF0b3JJRCI6InVwcmlkZSIsInNlZ21lbnREdXJhdGlvblNlY3MiOjQsInVzZVZPRE9URkUiOmZhbHNlLCJmcm9tTWV6emFuaW5lIjpmYWxzZSwic3RvcmFnZVByb3ZpZGVyIjoyLCJ0cmFjayI6IjI5ODBlNzIyOTQ5NjkzOGQ4OGQyODEwZmRjNWM5ZWQ5IiwicmVuZGl0aW9uIjoiNDQ5MTM0NDY4IiwibXV4aW5nIjoiNDk5ODMzNjM3In0&amp;s=SBvCiTjDkMKfw7YTcsOeJcOhw6ltbEtMVXTCv8KGBBTCkFwUKsOyVFl2Bg" startNumber="1" timescale="30000"></SegmentTemplate> </Representation> <Representation id="449134513" bandwidth="2380128" codecs="avc1.4d401f" frameRate="30/1" height="720" width="1280"> <SegmentTemplate duration="120000" initialization="../../eaef9dea5159cf968be84241b5cedfe7/video/720/init.mp4?p=eyJ0eXBlIjoiaW5pdCIsInZpZGVvSUQiOiJlYWVmOWRlYTUxNTljZjk2OGJlODQyNDFiNWNlZGZlNyIsIm93bmVySUQiOjY2MzQxNTgsImNyZWF0b3JJRCI6InVwcmlkZSIsInVzZVZPRE9URkUiOmZhbHNlLCJmcm9tTWV6emFuaW5lIjpmYWxzZSwic3RvcmFnZVByb3ZpZGVyIjoyLCJ0cmFjayI6IjI5ODBlNzIyOTQ5NjkzOGQ4OGQyODEwZmRjNWM5ZWQ5IiwicmVuZGl0aW9uIjoiNDQ5MTM0NTEzIiwibXV4aW5nIjoiNDk5ODMzNjgyIn0&amp;s=XcOPGjHClhTDn8KcBMKXw5HCo8ONIcKuQMKqbSp8wpLCnMOZXMODw6zDqMKew4_CoEpu" media="../../eaef9dea5159cf968be84241b5cedfe7/video/720/seg_$Number$.mp4?p=eyJ0eXBlIjoic2VnbWVudCIsInZpZGVvSUQiOiJlYWVmOWRlYTUxNTljZjk2OGJlODQyNDFiNWNlZGZlNyIsIm93bmVySUQiOjY2MzQxNTgsImNyZWF0b3JJRCI6InVwcmlkZSIsInNlZ21lbnREdXJhdGlvblNlY3MiOjQsInVzZVZPRE9URkUiOmZhbHNlLCJmcm9tTWV6emFuaW5lIjpmYWxzZSwic3RvcmFnZVByb3ZpZGVyIjoyLCJ0cmFjayI6IjI5ODBlNzIyOTQ5NjkzOGQ4OGQyODEwZmRjNWM5ZWQ5IiwicmVuZGl0aW9uIjoiNDQ5MTM0NTEzIiwibXV4aW5nIjoiNDk5ODMzNjgyIn0&amp;s=SMOOEMOcw5HDq8KTw67CqUMhYzVUGWobQXVvK8OYLMKdw4TDk8KMw5_DnV5ow48" startNumber="1" timescale="30000"></SegmentTemplate> </Representation> </AdaptationSet> </Period> </MPD> ```
1,731,071,079,000
null
Bug Report
[ "yt_dlp/extractor/cloudflarestream.py:CloudflareStreamIE._real_extract" ]
[]
1
yt-dlp/yt-dlp
yt-dlp__yt-dlp-11472
282e19db827f0951c783ac946429f662bcf2200c
diff --git a/yt_dlp/extractor/adobepass.py b/yt_dlp/extractor/adobepass.py index 7cc15ec7b6f2..f1b87792713f 100644 --- a/yt_dlp/extractor/adobepass.py +++ b/yt_dlp/extractor/adobepass.py @@ -1362,7 +1362,7 @@ class AdobePassIE(InfoExtractor): # XXX: Conventionally, base classes should en def _download_webpage_handle(self, *args, **kwargs): headers = self.geo_verification_headers() - headers.update(kwargs.get('headers', {})) + headers.update(kwargs.get('headers') or {}) kwargs['headers'] = headers return super()._download_webpage_handle( *args, **kwargs)
[NBC]/[adobepass] ERROR: 'NoneType' object is not iterable ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE - [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field ### Checklist - [X] I'm reporting that yt-dlp is broken on a **supported** site - [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels)) - [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details - [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command) - [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates - [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue) - [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required ### Region United States ### Provide a description that is worded well enough to be understood Unable to pull Law and Order SVU or any NBC shows. Getting ERROR: 'NoneType' object is not iterable. I also tried cookies-from-browser but the application still returns to use ap-mso credentials. ### Provide verbose output that clearly demonstrates the problem - [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`) - [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead - [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below ### Complete Verbose Output ```shell [debug] Command-line config: ['https://www.nbc.com/law-and-order-special-victims-unit/video/economics-of-shame/9000392650', '--ap-mso', 'Verizon', '--ap-username', 'PRIVATE', '--ap-password', 'PRIVATE', '-vU'] [debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8 [debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [197d0b03b] (win_exe) [debug] Python 3.10.11 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 1.1.1t 7 Feb 2023) [debug] exe versions: ffmpeg N-93302-g147ef1d947, ffprobe N-93302-g147ef1d947 [debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.40.1, urllib3-2.2.3, websockets-13.1 [debug] Proxy map: {} [debug] Request Handlers: urllib, requests, websockets, curl_cffi [debug] Loaded 1838 extractors [debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest Latest version: [email protected] from yt-dlp/yt-dlp yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp) [NBC] Extracting URL: https://www.nbc.com/law-and-order-special-victims-unit/video/economics-of-shame/9000392650 [NBC] 9000392650: Downloading JSON metadata [NBC] 9000392650: Downloading JSON metadata ERROR: 'NoneType' object is not iterable Traceback (most recent call last): File "yt_dlp\YoutubeDL.py", line 1625, in wrapper File "yt_dlp\YoutubeDL.py", line 1760, in __extract_info File "yt_dlp\extractor\common.py", line 742, in extract File 
"yt_dlp\extractor\nbc.py", line 212, in _real_extract File "yt_dlp\extractor\adobepass.py", line 1449, in _extract_mvpd_auth File "yt_dlp\extractor\adobepass.py", line 1365, in _download_webpage_handle TypeError: 'NoneType' object is not iterable ```
Regression introduced in dcfeea4dd5e5686821350baa6c7767a011944867 This should be the fix: ```diff diff --git a/yt_dlp/extractor/adobepass.py b/yt_dlp/extractor/adobepass.py index 7cc15ec7b..f1b877927 100644 --- a/yt_dlp/extractor/adobepass.py +++ b/yt_dlp/extractor/adobepass.py @@ -1362,7 +1362,7 @@ class AdobePassIE(InfoExtractor): # XXX: Conventionally, base classes should en def _download_webpage_handle(self, *args, **kwargs): headers = self.geo_verification_headers() - headers.update(kwargs.get('headers', {})) + headers.update(kwargs.get('headers') or {}) kwargs['headers'] = headers return super()._download_webpage_handle( *args, **kwargs) ```
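The fix looks tiny because the bug is pure `dict.get` semantics: the default is only used when the key is absent, not when it is present with a `None` value, which is how `_download_webpage_handle` gets called here. A minimal illustration:

```python
kwargs = {"headers": None}  # how the keyword argument arrives in the failing call

headers = {"X-Forwarded-For": "0.0.0.0"}  # stand-in for geo_verification_headers()
try:
    headers.update(kwargs.get("headers", {}))  # default ignored: key exists, value is None
except TypeError as exc:
    print(exc)  # 'NoneType' object is not iterable

headers.update(kwargs.get("headers") or {})    # patched form: falls back on any falsy value
print(headers)
```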
1,730,927,829,000
null
Bug Report
[ "yt_dlp/extractor/adobepass.py:AdobePassIE._download_webpage_handle" ]
[]
1
yt-dlp/yt-dlp
yt-dlp__yt-dlp-11466
282e19db827f0951c783ac946429f662bcf2200c
diff --git a/yt_dlp/extractor/goplay.py b/yt_dlp/extractor/goplay.py index dfe5afe63514..32300f75c2f5 100644 --- a/yt_dlp/extractor/goplay.py +++ b/yt_dlp/extractor/goplay.py @@ -5,56 +5,63 @@ import hmac import json import os +import re +import urllib.parse from .common import InfoExtractor from ..utils import ( ExtractorError, + int_or_none, + js_to_json, + remove_end, traverse_obj, - unescapeHTML, ) class GoPlayIE(InfoExtractor): - _VALID_URL = r'https?://(www\.)?goplay\.be/video/([^/]+/[^/]+/|)(?P<display_id>[^/#]+)' + _VALID_URL = r'https?://(www\.)?goplay\.be/video/([^/?#]+/[^/?#]+/|)(?P<id>[^/#]+)' _NETRC_MACHINE = 'goplay' _TESTS = [{ - 'url': 'https://www.goplay.be/video/de-container-cup/de-container-cup-s3/de-container-cup-s3-aflevering-2#autoplay', + 'url': 'https://www.goplay.be/video/de-slimste-mens-ter-wereld/de-slimste-mens-ter-wereld-s22/de-slimste-mens-ter-wereld-s22-aflevering-1', 'info_dict': { - 'id': '9c4214b8-e55d-4e4b-a446-f015f6c6f811', + 'id': '2baa4560-87a0-421b-bffc-359914e3c387', 'ext': 'mp4', - 'title': 'S3 - Aflevering 2', - 'series': 'De Container Cup', - 'season': 'Season 3', - 'season_number': 3, - 'episode': 'Episode 2', - 'episode_number': 2, + 'title': 'S22 - Aflevering 1', + 'description': r're:In aflevering 1 nemen Daan Alferink, Tess Elst en Xander De Rycke .{66}', + 'series': 'De Slimste Mens ter Wereld', + 'episode': 'Episode 1', + 'season_number': 22, + 'episode_number': 1, + 'season': 'Season 22', }, + 'params': {'skip_download': True}, 'skip': 'This video is only available for registered users', }, { - 'url': 'https://www.goplay.be/video/a-family-for-thr-holidays-s1-aflevering-1#autoplay', + 'url': 'https://www.goplay.be/video/1917', 'info_dict': { - 'id': '74e3ed07-748c-49e4-85a0-393a93337dbf', + 'id': '40cac41d-8d29-4ef5-aa11-75047b9f0907', 'ext': 'mp4', - 'title': 'A Family for the Holidays', + 'title': '1917', + 'description': r're:Op het hoogtepunt van de Eerste Wereldoorlog krijgen twee jonge .{94}', }, + 'params': {'skip_download': True}, 'skip': 'This video is only available for registered users', }, { 'url': 'https://www.goplay.be/video/de-mol/de-mol-s11/de-mol-s11-aflevering-1#autoplay', 'info_dict': { - 'id': '03eb8f2f-153e-41cb-9805-0d3a29dab656', + 'id': 'ecb79672-92b9-4cd9-a0d7-e2f0250681ee', 'ext': 'mp4', 'title': 'S11 - Aflevering 1', + 'description': r're:Tien kandidaten beginnen aan hun verovering van Amerika en ontmoeten .{102}', 'episode': 'Episode 1', 'series': 'De Mol', 'season_number': 11, 'episode_number': 1, 'season': 'Season 11', }, - 'params': { - 'skip_download': True, - }, + 'params': {'skip_download': True}, 'skip': 'This video is only available for registered users', }] @@ -69,27 +76,42 @@ def _real_initialize(self): if not self._id_token: raise self.raise_login_required(method='password') + def _find_json(self, s): + return self._search_json( + r'\w+\s*:\s*', s, 'next js data', None, contains_pattern=r'\[(?s:.+)\]', default=None) + def _real_extract(self, url): - url, display_id = self._match_valid_url(url).group(0, 'display_id') + display_id = self._match_id(url) webpage = self._download_webpage(url, display_id) - video_data_json = self._html_search_regex(r'<div\s+data-hero="([^"]+)"', webpage, 'video_data') - video_data = self._parse_json(unescapeHTML(video_data_json), display_id).get('data') - - movie = video_data.get('movie') - if movie: - video_id = movie['videoUuid'] - info_dict = { - 'title': movie.get('title'), - } - else: - episode = traverse_obj(video_data, ('playlists', ..., 'episodes', lambda _, v: 
v['pageInfo']['url'] == url), get_all=False) - video_id = episode['videoUuid'] - info_dict = { - 'title': episode.get('episodeTitle'), - 'series': traverse_obj(episode, ('program', 'title')), - 'season_number': episode.get('seasonNumber'), - 'episode_number': episode.get('episodeNumber'), - } + + nextjs_data = traverse_obj( + re.findall(r'<script[^>]*>\s*self\.__next_f\.push\(\s*(\[.+?\])\s*\);?\s*</script>', webpage), + (..., {js_to_json}, {json.loads}, ..., {self._find_json}, ...)) + meta = traverse_obj(nextjs_data, ( + ..., lambda _, v: v['meta']['path'] == urllib.parse.urlparse(url).path, 'meta', any)) + + video_id = meta['uuid'] + info_dict = traverse_obj(meta, { + 'title': ('title', {str}), + 'description': ('description', {str.strip}), + }) + + if traverse_obj(meta, ('program', 'subtype')) != 'movie': + for season_data in traverse_obj(nextjs_data, (..., 'children', ..., 'playlists', ...)): + episode_data = traverse_obj( + season_data, ('videos', lambda _, v: v['videoId'] == video_id, any)) + if not episode_data: + continue + + episode_title = traverse_obj( + episode_data, 'contextualTitle', 'episodeTitle', expected_type=str) + info_dict.update({ + 'title': episode_title or info_dict.get('title'), + 'series': remove_end(info_dict.get('title'), f' - {episode_title}'), + 'season_number': traverse_obj(season_data, ('season', {int_or_none})), + 'episode_number': traverse_obj(episode_data, ('episodeNumber', {int_or_none})), + }) + break api = self._download_json( f'https://api.goplay.be/web/v1/videos/long-form/{video_id}',
[GoPlay] ERROR: [GoPlay] Unable to extract video_data ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE - [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field ### Checklist - [X] I'm reporting that yt-dlp is broken on a **supported** site - [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels)) - [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details - [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command) - [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates - [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue) - [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required ### Region Belgium ### Provide a description that is worded well enough to be understood I cannot download a video from Goplay.be. ERROR: [GoPlay] Unable to extract video_data Thank you in advance. ### Provide verbose output that clearly demonstrates the problem - [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`) - [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead - [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below ### Complete Verbose Output ```shell [debug] Command-line config: ['-vU', 'https://www.goplay.be/video/nonkels/nonkels-s2/nonkels-2-s2-aflevering-4', '--username', 'PRIVATE', '--password', 'PRIVATE'] [debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8 [debug] yt-dlp version [email protected] from yt-dlp/yt-dlp-master-builds [41be32e78] (win_exe) [debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1k 25 Mar 2021) [debug] exe versions: none [debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.07.04, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.35.5, urllib3-2.2.2, websockets-13.0 [debug] Proxy map: {} [debug] Request Handlers: urllib, requests, websockets, curl_cffi [debug] Loaded 1831 extractors [debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-master-builds/releases/latest Latest version: [email protected] from yt-dlp/yt-dlp-master-builds yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp-master-builds) [GoPlay] Logging in [GoPlay] Authenticating username [GoPlay] Authenticating password [GoPlay] Extracting URL: https://www.goplay.be/video/nonkels/nonkels-s2/nonkels-2-s2-aflevering-4 [GoPlay] nonkels-2-s2-aflevering-4: Downloading webpage ERROR: [GoPlay] Unable to extract video_data; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. 
Confirm you are on the latest version using yt-dlp -U File "yt_dlp\extractor\common.py", line 740, in extract File "yt_dlp\extractor\goplay.py", line 75, in _real_extract File "yt_dlp\extractor\common.py", line 1369, in _html_search_regex File "yt_dlp\extractor\common.py", line 1333, in _search_regex ```
Do **not** download the above spam links, they are malware > Do not download these spam links, they are malware The response was quick, so I didn't download them. Thx :-) Some more information: the video has a lot of advertising at the beginning and in the middle. > The video is protected by DRM Then it won't be downloadable anyway. Does downloading non-DRM videos from this site still work? > > The video is protected by DRM > > Then it won't be downloadable anyway. Does downloading non-DRM videos from this site still work? After some research, the video is actually NOT protected by DRM. So the error must come from something else. I tried 2 other videos and got the same error: "ERROR: [GoPlay] Unable to extract video_data"...
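For context, the patch for this record stops looking for the old `data-hero` attribute and instead collects the Next.js flight data pushed via `self.__next_f.push(...)`, then picks the entry whose `meta.path` matches the page URL. A rough, standard-library-only sketch of that idea (illustrative only; the real extractor uses yt-dlp's `js_to_json`, `_search_json` and `traverse_obj` helpers and handles more edge cases):

```python
import json
import re
import urllib.parse

def find_goplay_meta(webpage: str, url: str):
    """Loose approximation of the patched GoPlayIE flow, for illustration."""
    path = urllib.parse.urlparse(url).path
    chunks = re.findall(
        r'<script[^>]*>\s*self\.__next_f\.push\(\s*(\[.+?\])\s*\);?\s*</script>',
        webpage)
    for chunk in chunks:
        try:
            payload = json.loads(chunk)  # the real extractor runs js_to_json first
        except json.JSONDecodeError:
            continue
        # Walk the nested payload looking for a dict whose meta.path matches the page.
        stack = [payload]
        while stack:
            node = stack.pop()
            if isinstance(node, dict):
                meta = node.get('meta')
                if isinstance(meta, dict) and meta.get('path') == path:
                    return meta  # expected to carry 'uuid', 'title', 'description', ...
                stack.extend(node.values())
            elif isinstance(node, list):
                stack.extend(node)
    return None
```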
1,730,837,991,000
null
Bug Report
[ "yt_dlp/extractor/goplay.py:GoPlayIE._real_extract" ]
[ "yt_dlp/extractor/goplay.py:GoPlayIE._find_json" ]
1
gaogaotiantian/viztracer
gaogaotiantian__viztracer-528
2ed22b5b16dc232f966235a6a89fa678515a50a4
diff --git a/src/viztracer/main.py b/src/viztracer/main.py index 7cbf972c..eb996124 100644 --- a/src/viztracer/main.py +++ b/src/viztracer/main.py @@ -676,7 +676,7 @@ def exit_routine(self) -> None: self.save() if self.options.open: # pragma: no cover import subprocess - subprocess.run(["vizviewer", "--once", os.path.abspath(self.ofile)]) + subprocess.run([sys.executable, "-m", "viztracer.viewer", "--once", os.path.abspath(self.ofile)]) def main():
Cannot import name 'viewer_main' from 'viztracer' in 1.0.0 ### Phenomenon: I've been using viztracer through the viztracer plugin in vscode, but after upgrading to 1.0.0 ,viztracer doesn't work. ### Error message: ```powershell C:\ProgramData\anaconda3\python.exe -m viztracer --ignore_frozen --open --log_print --quiet -u -- c:\...\something.py Traceback (most recent call last): File "C:\ProgramData\anaconda3\Scripts\vizviewer-script.py", line 6, in <module> from viztracer import viewer_main ImportError: cannot import name 'viewer_main' from 'viztracer' (C:\Users\29267\AppData\Roaming\Python\Python311\site-packages\viztracer\__init__.py) ``` ### What I tried: 1. downgraded to 0.17.1 : **works fine** 2. upgraded to 1.0.0 : **bugs still there**
You have multiple versions of viztracer installed. The `vizviewer` that viztracer tried to use is a different version: `viztracer` is from conda, but it seems `vizviewer` was picked up from your system Python. This is still partially my fault, though; `viztracer` should always use a matching version of `vizviewer`. For now you can either upgrade both versions or avoid the `--open` option, and just run `vizviewer` in the same environment as viztracer, which should work.
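The patch sidesteps that mismatch by launching the viewer through the interpreter that is running the tracer rather than whatever `vizviewer` script is first on PATH. A small sketch of the difference (the report filename is made up):

```python
import os
import subprocess
import sys

report = os.path.abspath("result.json")  # hypothetical trace output

# Old behaviour: uses whichever vizviewer is first on PATH, which may belong
# to a different Python environment (and a different viztracer version).
# subprocess.run(["vizviewer", "--once", report])

# Patched behaviour: always runs the viewer module of the current interpreter,
# so viztracer and its viewer are guaranteed to come from the same install.
subprocess.run([sys.executable, "-m", "viztracer.viewer", "--once", report])
```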
1,733,202,811,000
null
Bug Report
[ "src/viztracer/main.py:VizUI.exit_routine" ]
[]
1
locustio/locust
locustio__locust-2976
a8510a466dd358a5d2956079cf10f25dc9beb380
diff --git a/locust/runners.py b/locust/runners.py index 9552d519c7..a4165cfa40 100644 --- a/locust/runners.py +++ b/locust/runners.py @@ -1025,7 +1025,9 @@ def client_listener(self) -> NoReturn: # if abs(time() - msg.data["time"]) > 5.0: # warnings.warn("The worker node's clock seem to be out of sync. For the statistics to be correct the different locust servers need to have synchronized clocks.") elif msg.type == "locustfile": - if msg.data["version"][0:4] == __version__[0:4]: + if not msg.data["version"]: + logger.error("A very old worker version requested locustfile. This probably won't work.") + elif msg.data["version"][0:4] == __version__[0:4]: logger.debug( f"A worker ({msg.node_id}) running a different patch version ({msg.data['version']}) connected, master version is {__version__}" )
master crash with different version worker ### Prerequisites - [X] I am using [the latest version of Locust](https://github.com/locustio/locust/releases/) - [X] I am reporting a bug, not asking a question ### Description I ran distributed locust with master node locust version 2.32.2 and some worker node locust version 2.25.0 (python3.8 default version). The master node crash with the following message ``` ➜ load-test locust -f locust.py --master [2024-11-09 14:41:07,519] nasa33/INFO/locust.main: Starting Locust 2.32.2 [2024-11-09 14:41:07,524] nasa33/INFO/locust.main: Starting web interface at http://0.0.0.0:8089 Traceback (most recent call last): File "src/gevent/greenlet.py", line 906, in gevent._gevent_cgreenlet.Greenlet.run File "/home/uniform64/.local/lib/python3.10/site-packages/locust/runners.py", line 1030, in client_listener if msg.data["version"][0:4] == __version__[0:4]: TypeError: 'NoneType' object is not subscriptable 2024-11-09T06:41:13Z <Greenlet at 0x7f254a980cc0: <bound method MasterRunner.client_listener of <locust.runners.MasterRunner object at 0x7f254a963100>>> failed with TypeError [2024-11-09 14:41:13,652] nasa33/CRITICAL/locust.runners: Unhandled exception in greenlet: <Greenlet at 0x7f254a980cc0: <bound method MasterRunner.client_listener of <locust.runners.MasterRunner object at 0x7f254a963100>>> Traceback (most recent call last): File "src/gevent/greenlet.py", line 906, in gevent._gevent_cgreenlet.Greenlet.run File "/home/uniform64/.local/lib/python3.10/site-packages/locust/runners.py", line 1030, in client_listener if msg.data["version"][0:4] == __version__[0:4]: TypeError: 'NoneType' object is not subscriptable ``` when I use the following command on worker node. ``` ~/.local/bin/locust -f - --worker --master-host 172.16.0.33 --processes -1 ``` ### Command line locust -f locust.py --master ### Locustfile contents ```python3 import random import string from locust import HttpUser, task def generate_random_string(length): return "".join(random.choices(string.ascii_lowercase+string.digits, k=length)) def generate_random_bytes(length): return random.randbytes(length) class SimpleClient(HttpUser): @task def upload(self): # random generate a index and some data (both string) index = generate_random_string(random.randint(10, 20)) data = generate_random_bytes(random.randint(100, 200)) self.client.post("/upload", headers={"Index": index}, data=data) ``` ### Python version 3.10 ### Locust version 2.32.2 ### Operating system ubuntu22.04
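The traceback is ordinary Python: slicing `None` raises `TypeError`, and a very old worker reports `version: None` in its `locustfile` request. The patch simply guards that case before comparing version prefixes; a minimal reproduction of the logic:

```python
__version__ = "2.32.2"  # master's version, as in locust.runners

def handle_locustfile_request(worker_version):
    # Simplified version of the patched branch in MasterRunner.client_listener
    if not worker_version:
        print("A very old worker version requested locustfile. This probably won't work.")
    elif worker_version[0:4] == __version__[0:4]:
        print("worker runs a compatible patch version")
    else:
        print(f"worker version {worker_version} differs from master {__version__}")

handle_locustfile_request(None)      # previously: None[0:4] -> TypeError, crashed greenlet
handle_locustfile_request("2.25.0")  # old worker that does send a version
handle_locustfile_request("2.32.1")  # same 2.32 series as the master
```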
1,731,139,675,000
null
Bug Report
[ "locust/runners.py:MasterRunner.client_listener" ]
[]
1
ranaroussi/yfinance
ranaroussi__yfinance-2173
3ac85397cbaee4b28baea8e900e1de6e7b2fbe52
diff --git a/yfinance/base.py b/yfinance/base.py index 81733ba9..c3150759 100644 --- a/yfinance/base.py +++ b/yfinance/base.py @@ -30,7 +30,7 @@ import pandas as pd import requests -from . import utils, cache, Search +from . import utils, cache from .data import YfData from .exceptions import YFEarningsDateMissing from .scrapers.analysis import Analysis @@ -534,19 +534,45 @@ def get_isin(self, proxy=None) -> Optional[str]: self._isin = data.split(search_str)[1].split('"')[0].split('|')[0] return self._isin - def get_news(self, proxy=None) -> list: + def get_news(self, count=10, tab="news", proxy=None) -> list: + """Allowed options for tab: "news", "all", "press releases""" if self._news: return self._news - search = Search( - query=self.ticker, - news_count=10, - session=self.session, - proxy=proxy, - raise_errors=True - ) - self._news = search.news + logger = utils.get_yf_logger() + + tab_queryrefs = { + "all": "newsAll", + "news": "latestNews", + "press releases": "pressRelease", + } + + query_ref = tab_queryrefs.get(tab.lower()) + if not query_ref: + raise ValueError(f"Invalid tab name '{tab}'. Choose from: {', '.join(tab_queryrefs.keys())}") + + url = f"{_ROOT_URL_}/xhr/ncp?queryRef={query_ref}&serviceKey=ncp_fin" + payload = { + "serviceConfig": { + "snippetCount": count, + "s": [self.ticker] + } + } + + data = self._data.post(url, body=payload, proxy=proxy) + if data is None or "Will be right back" in data.text: + raise RuntimeError("*** YAHOO! FINANCE IS CURRENTLY DOWN! ***\n" + "Our engineers are working quickly to resolve " + "the issue. Thank you for your patience.") + try: + data = data.json() + except _json.JSONDecodeError: + logger.error(f"{self.ticker}: Failed to retrieve the news and received faulty response instead.") + data = {} + + news = data.get("data", {}).get("tickerStream", {}).get("stream", []) + self._news = [article for article in news if not article.get('ad', [])] return self._news @utils.log_indent_decorator
Any way to get more news? `ticker.news` seems to return 8 to 10 news articles. However, Yahoo Finance can offer many more than 8 to 10 news articles per ticker: https://finance.yahoo.com/quote/MSFT/news/ (keep scrolling down). Is there a way to get more than 8 to 10 news articles with yfinance?
Someone began working on a solution but abandoned it: #1949
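Going by the patched `get_news` signature above, usage would look roughly like the following; `count` and `tab` come straight from the diff, while how many items Yahoo actually returns is up to the backend:

```python
import yfinance as yf

ticker = yf.Ticker("MSFT")

# Valid tabs per the patch: "news", "all", "press releases".
articles = ticker.get_news(count=50, tab="all")
print(len(articles))
if articles:
    print(articles[0])  # raw article dict from Yahoo's tickerStream payload
```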
1,733,699,514,000
null
Feature Request
[ "yfinance/base.py:TickerBase.get_news" ]
[]
1
ranaroussi/yfinance
ranaroussi__yfinance-2122
f05f99c2b8101576911b35cbd3129afb04fb140d
diff --git a/yfinance/utils.py b/yfinance/utils.py index 0968f9d1..ebc8b99a 100644 --- a/yfinance/utils.py +++ b/yfinance/utils.py @@ -613,7 +613,7 @@ def fix_Yahoo_returning_live_separate(quotes, interval, tz_exchange, repair=Fals # - exception is volume, *slightly* greater on final row (and matches website) if dt1.date() == dt2.date(): # Last two rows are on same day. Drop second-to-last row - quotes = quotes.drop(quotes.index[n - 2]) + quotes = _pd.concat([quotes.iloc[:-2], quotes.iloc[-1:]]) else: if interval == "1wk": last_rows_same_interval = dt1.year == dt2.year and dt1.week == dt2.week
0.2.42 and onwards fails to pull most recent trading days data for ASX stocks ### Describe bug Pulling stock data using versions 0.2.42 and onwards fails to pull the last trading days data for ASX stocks. This could be related to timezones but the issue doesn't exist in 0.2.41. ### Simple code that reproduces your problem `stock_data_daily = yf.download('CSL.AX', period='1y', interval='1d')` ### Debug log DEBUG Entering download() DEBUG:yfinance:Entering download() DEBUG Disabling multithreading because DEBUG logging enabled DEBUG:yfinance: Disabling multithreading because DEBUG logging enabled DEBUG Entering history() DEBUG:yfinance: Entering history() DEBUG Entering history() DEBUG:yfinance: Entering history() DEBUG CSL.AX: Yahoo GET parameters: {'range': '1y', 'interval': '1d', 'includePrePost': False, 'events': 'div,splits,capitalGains'} DEBUG:yfinance: CSL.AX: Yahoo GET parameters: {'range': '1y', 'interval': '1d', 'includePrePost': False, 'events': 'div,splits,capitalGains'} DEBUG Entering get() DEBUG:yfinance: Entering get() DEBUG url=https://query2.finance.yahoo.com/v8/finance/chart/CSL.AX DEBUG:yfinance: url=https://query2.finance.yahoo.com/v8/finance/chart/CSL.AX DEBUG params={'range': '1y', 'interval': '1d', 'includePrePost': False, 'events': 'div,splits,capitalGains'} DEBUG:yfinance: params={'range': '1y', 'interval': '1d', 'includePrePost': False, 'events': 'div,splits,capitalGains'} DEBUG Entering _get_cookie_and_crumb() DEBUG:yfinance: Entering _get_cookie_and_crumb() DEBUG cookie_mode = 'basic' DEBUG:yfinance: cookie_mode = 'basic' DEBUG Entering _get_cookie_and_crumb_basic() DEBUG:yfinance: Entering _get_cookie_and_crumb_basic() DEBUG reusing cookie DEBUG:yfinance: reusing cookie DEBUG reusing crumb DEBUG:yfinance: reusing crumb DEBUG Exiting _get_cookie_and_crumb_basic() DEBUG:yfinance: Exiting _get_cookie_and_crumb_basic() DEBUG Exiting _get_cookie_and_crumb() DEBUG:yfinance: Exiting _get_cookie_and_crumb() DEBUG response code=200 DEBUG:yfinance: response code=200 DEBUG Exiting get() DEBUG:yfinance: Exiting get() DEBUG CSL.AX: yfinance received OHLC data: 2023-11-07 23:00:00 -> 2024-11-08 05:10:12 DEBUG:yfinance: CSL.AX: yfinance received OHLC data: 2023-11-07 23:00:00 -> 2024-11-08 05:10:12 DEBUG CSL.AX: OHLC after cleaning: 2023-11-08 10:00:00+11:00 -> 2024-11-08 16:10:12+11:00 DEBUG:yfinance: CSL.AX: OHLC after cleaning: 2023-11-08 10:00:00+11:00 -> 2024-11-08 16:10:12+11:00 DEBUG CSL.AX: OHLC after combining events: 2023-11-08 00:00:00+11:00 -> 2024-11-08 00:00:00+11:00 DEBUG:yfinance: CSL.AX: OHLC after combining events: 2023-11-08 00:00:00+11:00 -> 2024-11-08 00:00:00+11:00 DEBUG CSL.AX: yfinance returning OHLC: 2023-11-08 00:00:00+11:00 -> 2024-11-07 00:00:00+11:00 DEBUG:yfinance: CSL.AX: yfinance returning OHLC: 2023-11-08 00:00:00+11:00 -> 2024-11-07 00:00:00+11:00 DEBUG Exiting history() DEBUG:yfinance: Exiting history() DEBUG Exiting history() DEBUG:yfinance: Exiting history() DEBUG Exiting download() DEBUG:yfinance:Exiting download() ### Bad data proof _No response_ ### `yfinance` version >= 0.2.42 ### Python version _No response_ ### Operating system _No response_
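A plausible reading of the one-line patch above: `DataFrame.drop(label)` removes every row whose index equals that label, so if the final daily candle and the separate "live" row end up with the same timestamp, `quotes.drop(quotes.index[n - 2])` deletes both and the latest trading day vanishes, while positional slicing removes only the intended row. A small illustration of the two behaviours (toy data, not real Yahoo output):

```python
import pandas as pd

# Two rows sharing one index label, like a daily candle plus a "live" row.
idx = pd.to_datetime(["2024-11-07", "2024-11-08", "2024-11-08"])
quotes = pd.DataFrame({"Close": [100.0, 101.0, 101.5]}, index=idx)
n = len(quotes)

# Old approach: drops *every* row carrying that label, losing the last day entirely.
dropped = quotes.drop(quotes.index[n - 2])
print(len(dropped))  # 1

# Patched approach: keeps everything except the positional second-to-last row.
fixed = pd.concat([quotes.iloc[:-2], quotes.iloc[-1:]])
print(len(fixed))    # 2
```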
1,731,237,392,000
null
Bug Report
[ "yfinance/utils.py:fix_Yahoo_returning_live_separate" ]
[]
1
scipy/scipy
scipy__scipy-22106
15d6284e5a0f3333394ca4498eb56bce14a6245b
diff --git a/scipy/sparse/_construct.py b/scipy/sparse/_construct.py index 0326c9963f0b..f483976badb7 100644 --- a/scipy/sparse/_construct.py +++ b/scipy/sparse/_construct.py @@ -349,7 +349,7 @@ def eye_array(m, n=None, *, k=0, dtype=float, format=None): Parameters ---------- - m : int or tuple of ints + m : int Number of rows requested. n : int, optional Number of columns. Default: `m`.
DOC: sparse: `sparse.eye_array` does not accept `tuple[int, int]` as the docs say that it should ### Describe your issue. `scipy.sparse.eye_array` does not accept `m: tuple[int, int]` as the docs suggest it should: https://github.com/scipy/scipy/blob/964f0bb6701dc17b51b842382ced0fa2ee318377/scipy/sparse/_construct.py#L350-L353 This is the case with at least `1.14.1` and `1.15.0rc1` ### Reproducing Code Example ```python from scipy.sparse import eye_array eye_array((1, 1)) ``` ### Error message ```shell Traceback (most recent call last): File "<python-input-2>", line 1, in <module> eye_array((1, 1)) ~~~~~~~~~^^^^^^^^ File "/home/joren/.pyenv/versions/3.13.1/lib/python3.13/site-packages/scipy/sparse/_construct.py", line 377, in eye_array return _eye(m, n, k, dtype, format) File "/home/joren/.pyenv/versions/3.13.1/lib/python3.13/site-packages/scipy/sparse/_construct.py", line 394, in _eye m, n = int(m), int(n) ~~~^^^ TypeError: int() argument must be a string, a bytes-like object or a real number, not 'tuple' ``` ### SciPy/NumPy/Python version and system information ```shell 1.15.0rc1 2.2.0 sys.version_info(major=3, minor=13, micro=1, releaselevel='final', serial=0) Build Dependencies: blas: detection method: pkgconfig found: true include directory: /opt/_internal/cpython-3.13.0/lib/python3.13/site-packages/scipy_openblas32/include lib directory: /opt/_internal/cpython-3.13.0/lib/python3.13/site-packages/scipy_openblas32/lib name: scipy-openblas openblas configuration: OpenBLAS 0.3.28 DYNAMIC_ARCH NO_AFFINITY Haswell MAX_THREADS=64 pc file directory: /project version: 0.3.28 lapack: detection method: pkgconfig found: true include directory: /opt/_internal/cpython-3.13.0/lib/python3.13/site-packages/scipy_openblas32/include lib directory: /opt/_internal/cpython-3.13.0/lib/python3.13/site-packages/scipy_openblas32/lib name: scipy-openblas openblas configuration: OpenBLAS 0.3.28 DYNAMIC_ARCH NO_AFFINITY Haswell MAX_THREADS=64 pc file directory: /project version: 0.3.28 pybind11: detection method: config-tool include directory: unknown name: pybind11 version: 2.13.6 Compilers: c: commands: cc linker: ld.bfd name: gcc version: 10.2.1 c++: commands: c++ linker: ld.bfd name: gcc version: 10.2.1 cython: commands: cython linker: cython name: cython version: 3.0.11 fortran: commands: gfortran linker: ld.bfd name: gcc version: 10.2.1 pythran: include directory: ../../tmp/pip-build-env-fa6gfmf0/overlay/lib/python3.13/site-packages/pythran version: 0.17.0 Machine Information: build: cpu: x86_64 endian: little family: x86_64 system: linux cross-compiled: false host: cpu: x86_64 endian: little family: x86_64 system: linux Python Information: path: /opt/python/cp313-cp313/bin/python version: '3.13' ```
Thank you for pointing this out!! We should be using the [array_api specification](https://data-apis.org/array-api/latest/API_specification) for the [`eye` function](https://data-apis.org/array-api/latest/API_specification/generated/array_api.eye.html). That should also align us with the numpy interface. The function name is not `eye`, but we intend to rename it after the spmatrix functions are deprecated and removed. Luckily it looks like the code does the right thing. The docs are not what we want. So we need to change the docs by removing ` or tuple of ints`. And it would be good to backport the change to the maintenance branch (I believe it is far too late to update the 1.14.1 docs). > Thank you for pointing this out!! We should be using the [array_api specification](https://data-apis.org/array-api/latest/API_specification) for the [`eye` function](https://data-apis.org/array-api/latest/API_specification/generated/array_api.eye.html). That should also align us with the numpy interface. The function name is not `eye`, but we intend to rename it after the spmatrix functions are deprecated and removed. > > Luckily it looks like the code does the right thing. The docs are not what we want. So we need to change the docs by removing ` or tuple of ints`. > > And it would be good to backport the change to the maintenance branch (I believe it is far too late to update the 1.14.1 docs). I like that; it's easier to annotate that way :)
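In other words, the code already matches the array API / NumPy-style `eye`: separate integer arguments, no shape tuple. A quick illustration against recent SciPy:

```python
from scipy.sparse import eye_array

# Separate integer arguments, as the corrected docstring describes.
print(eye_array(3).toarray())
print(eye_array(3, 4, k=1, format="csr").toarray())

# A shape tuple is rejected, which is exactly what the report shows:
# eye_array((3, 4))  ->  TypeError: int() argument must be ... not 'tuple'
```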
1,734,439,741,000
null
Bug Report
[ "scipy/sparse/_construct.py:eye_array" ]
[]
1
scipy/scipy
scipy__scipy-22103
caa7e2ab245a808a1c55a20fb5d5b49daf8bad93
diff --git a/scipy/stats/_stats_py.py b/scipy/stats/_stats_py.py index de7be104289b..71ae19acabc2 100644 --- a/scipy/stats/_stats_py.py +++ b/scipy/stats/_stats_py.py @@ -4298,7 +4298,7 @@ def pearsonr(x, y, *, alternative='two-sided', method=None, axis=0): Axis along which to perform the calculation. Default is 0. If None, ravel both arrays before performing the calculation. - .. versionadded:: 1.13.0 + .. versionadded:: 1.14.0 alternative : {'two-sided', 'greater', 'less'}, optional Defines the alternative hypothesis. Default is 'two-sided'. The following options are available:
DOC: stats.pearsonr: incorrect `versionadded` for `axis` param ### Issue with current documentation: Regarding the documentation of function scipy.stats.pearsonr. Typo in the version reference. The axis option is not in v1.13.0. It first appears in v1.14.0 ### Idea or request for content: Correct the version reference in the docstring. 1.13.0 --> 1.14.0 ### Additional context (e.g. screenshots, GIFs) ``` def pearsonr(x, y, *, alternative='two-sided', method=None, axis=0): r""" Pearson correlation coefficient and p-value for testing non-correlation. The Pearson correlation coefficient [1]_ measures the linear relationship between two datasets. Like other correlation coefficients, this one varies between -1 and +1 with 0 implying no correlation. Correlations of -1 or +1 imply an exact linear relationship. Positive correlations imply that as x increases, so does y. Negative correlations imply that as x increases, y decreases. This function also performs a test of the null hypothesis that the distributions underlying the samples are uncorrelated and normally distributed. (See Kowalski [3]_ for a discussion of the effects of non-normality of the input on the distribution of the correlation coefficient.) The p-value roughly indicates the probability of an uncorrelated system producing datasets that have a Pearson correlation at least as extreme as the one computed from these datasets. Parameters ---------- x : array_like Input array. y : array_like Input array. axis : int or None, default Axis along which to perform the calculation. Default is 0. If None, ravel both arrays before performing the calculation. .. versionadded:: 1.13.0 ```
Thanks @biopzhang, agreed that this is a typo. Would you like to submit a PR to fix this?
1,734,406,832,000
null
Bug Report
[ "scipy/stats/_stats_py.py:pearsonr" ]
[]
1
scipy/scipy
scipy__scipy-22052
7f03fbaf30c400ff4bb14020f7f284ec2703c4d1
diff --git a/scipy/sparse/linalg/_dsolve/linsolve.py b/scipy/sparse/linalg/_dsolve/linsolve.py index d1ab77883163..560cb75bbf99 100644 --- a/scipy/sparse/linalg/_dsolve/linsolve.py +++ b/scipy/sparse/linalg/_dsolve/linsolve.py @@ -371,6 +371,10 @@ def splu(A, permc_spec=None, diag_pivot_thresh=None, Notes ----- + When a real array is factorized and the returned SuperLU object's ``solve()`` method + is used with complex arguments an error is generated. Instead, cast the initial + array to complex and then factorize. + This function uses the SuperLU library. References @@ -468,6 +472,10 @@ def spilu(A, drop_tol=None, fill_factor=None, drop_rule=None, permc_spec=None, Notes ----- + When a real array is factorized and the returned SuperLU object's ``solve()`` method + is used with complex arguments an error is generated. Instead, cast the initial + array to complex and then factorize. + To improve the better approximation to the inverse, you may need to increase `fill_factor` AND decrease `drop_tol`.
sparse LU decomposition does not solve with complex right-hand side The `solve` method of the sparse LU-decomposition `splu` or `spilu` throws a `TypeError` if called with a `numpy.array` of type `numpy.complex`. I am actually using `spilu` for preconditioning a gmres-solver required to perform a linear solve in a non-symmetric generalized eigenvalue problem. The eigenvectors are complex and hence the right-hand side for the linear solve can be complex. ### Reproducing code example: ``` import numpy as np from scipy.sparse import csr_matrix import scipy.sparse.linalg as sp_sparse_la A = csr_matrix([[2.,-1.],[-1.,2.]]) n = A.shape[0] v_real = np.random.randn(n) v_cmplx = np.random.randn(n) + 1.0J * np.random.randn(n) luA = sp_sparse_la.splu(A) x_real = luA.solve(v_real) x_cmplx = luA.solve(v_cmplx) ``` ### Error message: ``` Traceback (most recent call last): File "dump.py", line 20, in <module> x_cmplx = luA.solve(v_cmplx) TypeError: Cannot cast array data from dtype('complex128') to dtype('float64') according to the rule 'safe' ``` ### Scipy/Numpy/Python version information: ``` ('1.0.0', '1.13.3', sys.version_info(major=2, minor=7, micro=12, releaselevel='final', serial=0)) ```
if you cast your A matrix as complex, then it works in both cases. So probably when the LHS is real it selects a real-typed solver and complains. Thank you, you are right. Maybe some comments regarding this issue should be added in the documentation. Good first issue, depending on familiarity with the math. Hi I'm working on it I'll try to do it by the end of next week A note such as the following in the docstring of `splu` and `spilu` would close this issue ````When a real array is factorized and the returned SuperLU object ``solve()`` method is used with complex arguments an error is generated. Instead cast the initial matrix to complex and then factorize.```` Hi @j-bowhay, thanks for the comment, I am a first-time contributor to scipy, I would like to start from this issue We don't assign issues to specific people but please feel free to have a go
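A minimal sketch of the workaround described above (cast the matrix to complex before factorizing); this is illustrative only, since the eventual patch is docs-only:

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

# Factorize a complex-typed copy up front so solve() accepts complex vectors.
A = csc_matrix([[2., -1.], [-1., 2.]], dtype=complex)
lu = splu(A)

b = np.random.randn(2) + 1.0j * np.random.randn(2)
x = lu.solve(b)  # no TypeError: the factorization itself is complex128
```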
1,733,917,709,000
null
Bug Report
[ "scipy/sparse/linalg/_dsolve/linsolve.py:splu", "scipy/sparse/linalg/_dsolve/linsolve.py:spilu" ]
[]
2
DS4SD/docling
DS4SD__docling-528
c830b92b2e043ea63d216f65b3f9d88d2a8c33f7
diff --git a/docling/backend/msword_backend.py b/docling/backend/msword_backend.py index 05508712..bab956a7 100644 --- a/docling/backend/msword_backend.py +++ b/docling/backend/msword_backend.py @@ -133,7 +133,6 @@ def get_level(self) -> int: def walk_linear(self, body, docx_obj, doc) -> DoclingDocument: for element in body: tag_name = etree.QName(element).localname - # Check for Inline Images (blip elements) namespaces = { "a": "http://schemas.openxmlformats.org/drawingml/2006/main", @@ -153,6 +152,7 @@ def walk_linear(self, body, docx_obj, doc) -> DoclingDocument: self.handle_pictures(element, docx_obj, drawing_blip, doc) # Check for Text elif tag_name in ["p"]: + # "tcPr", "sectPr" self.handle_text_elements(element, docx_obj, doc) else: _log.debug(f"Ignoring element in DOCX with tag: {tag_name}") @@ -219,7 +219,6 @@ def handle_text_elements(self, element, docx_obj, doc): if paragraph.text is None: return text = paragraph.text.strip() - # if len(text)==0 # keep empty paragraphs, they seperate adjacent lists! # Common styles for bullet and numbered lists. # "List Bullet", "List Number", "List Paragraph" @@ -291,9 +290,7 @@ def handle_text_elements(self, element, docx_obj, doc): def add_header(self, element, docx_obj, doc, curr_name, curr_level, text: str): level = self.get_level() if isinstance(curr_level, int): - if curr_level > level: - # add invisible group for i in range(level, curr_level): self.parents[i] = doc.add_group( @@ -301,9 +298,7 @@ def add_header(self, element, docx_obj, doc, curr_name, curr_level, text: str): label=GroupLabel.SECTION, name=f"header-{i}", ) - elif curr_level < level: - # remove the tail for key, val in self.parents.items(): if key >= curr_level: @@ -314,7 +309,6 @@ def add_header(self, element, docx_obj, doc, curr_name, curr_level, text: str): text=text, level=curr_level, ) - else: self.parents[self.level] = doc.add_heading( parent=self.parents[self.level - 1], @@ -346,7 +340,7 @@ def add_listitem( label=GroupLabel.LIST, name="list", parent=self.parents[level - 1] ) - # TODO: Set marker and enumerated arguments if this is an enumeration element. + # Set marker and enumerated arguments if this is an enumeration element. self.listIter += 1 if is_numbered: enum_marker = str(self.listIter) + "." @@ -365,8 +359,8 @@ def add_listitem( self.level_at_new_list + self.prev_indent() + 1, self.level_at_new_list + ilevel + 1, ): - # TODO: determine if this is an unordered list or an ordered list. - # Set GroupLabel.ORDERED_LIST when it fits. + # Determine if this is an unordered list or an ordered list. + # Set GroupLabel.ORDERED_LIST when it fits. self.listIter = 0 if is_numbered: self.parents[i] = doc.add_group( @@ -467,6 +461,19 @@ def get_rowspan(cell): row_span = get_rowspan(cell) col_span = get_colspan(cell) + cell_text = cell.text + # In case cell doesn't return text via docx library: + if len(cell_text) == 0: + cell_xml = cell._element + + texts = [""] + for elem in cell_xml.iter(): + if elem.tag.endswith("t"): # <w:t> tags that contain text + if elem.text: + texts.append(elem.text) + # Join the collected text + cell_text = " ".join(texts).strip() + # Find the next available column in the grid while table_grid[row_idx][col_idx] is not None: col_idx += 1 @@ -477,15 +484,15 @@ def get_rowspan(cell): table_grid[row_idx + i][col_idx + j] = "" cell = TableCell( - text=cell.text, + text=cell_text, row_span=row_span, col_span=col_span, start_row_offset_idx=row_idx, end_row_offset_idx=row_idx + row_span, start_col_offset_idx=col_idx, end_col_offset_idx=col_idx + col_span, - col_header=False, # col_header, - row_header=False, # ((not col_header) and html_cell.name=='th') + col_header=False, + row_header=False, ) data.table_cells.append(cell)
What is the meaning of `missing-text`? ### Question When exporting docx documents as text, I always seem to get some `missing-text` in the output. I was not able to find this string in the project repository, `python-docx`, or documentation. Snippet: ```py doc_converter = DocumentConverter(allowed_formats=[InputFormat.DOCX]) conv_res = doc_converter.convert(input_doc_path) print(conv_res.document.export_to_text()) ``` Output: ```py <missing-text> <missing-text> <missing-text> <missing-text> <missing-text> <missing-text> <missing-text> <missing-text> <missing-text> <missing-text> <missing-text> <missing-text> <missing-text> <missing-text> <missing-text> ``` Documents: - Complete failure, all text is "missing-text": [doc.docx](https://github.com/user-attachments/files/17983955/doc.docx) - Partial failure, only some of the text is "missing-text": [doc2.docx](https://github.com/user-attachments/files/17983962/doc2.docx) Both documents are public. What causes `missing-text`? What should be my mental model for it when processing documents? Thanks!
@Belval, thanks for sharing with sample documents, I will check this!
1,733,475,107,000
null
Bug Report
[ "docling/backend/msword_backend.py:MsWordDocumentBackend.handle_tables" ]
[]
1
DS4SD/docling
DS4SD__docling-472
cc46c938b66b2d24f601acc9646782dc83326e1f
diff --git a/docling/models/tesseract_ocr_cli_model.py b/docling/models/tesseract_ocr_cli_model.py index 9a50eee0..a6b2f7fb 100644 --- a/docling/models/tesseract_ocr_cli_model.py +++ b/docling/models/tesseract_ocr_cli_model.py @@ -1,3 +1,4 @@ +import csv import io import logging import tempfile @@ -95,7 +96,7 @@ def _run_tesseract(self, ifilename: str): # _log.info(decoded_data) # Read the TSV file generated by Tesseract - df = pd.read_csv(io.StringIO(decoded_data), sep="\t") + df = pd.read_csv(io.StringIO(decoded_data), quoting=csv.QUOTE_NONE, sep="\t") # Display the dataframe (optional) # _log.info("df: ", df.head())
pandas.errors.ParserError: Error tokenizing data. C error: EOF inside string starting at row 656 ### Bug Trying to convert a PDF I get the following error, the same options works on other PDFs. **Seems related to `pandas.read_csv()` on the TSV output of Tesseract.** ``` Encountered an error during conversion of document b137be2685712845d8afee55fe6327d2901290f9a852a25b3f7b19010df64e10: Traceback (most recent call last): File ".../docling/pipeline/base_pipeline.py", line 149, in _build_document for p in pipeline_pages: # Must exhaust! ^^^^^^^^^^^^^^ File ".../docling/pipeline/base_pipeline.py", line 116, in _apply_on_pages yield from page_batch File ".../docling/models/page_assemble_model.py", line 59, in __call__ for page in page_batch: ^^^^^^^^^^ File ".../docling/models/table_structure_model.py", line 93, in __call__ for page in page_batch: ^^^^^^^^^^ File ".../docling/models/layout_model.py", line 281, in __call__ for page in page_batch: ^^^^^^^^^^ File ".../docling/models/tesseract_ocr_cli_model.py", line 140, in __call__ df = self._run_tesseract(fname) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File ".../docling/models/tesseract_ocr_cli_model.py", line 98, in _run_tesseract df = pd.read_csv(io.StringIO(decoded_data), sep="\t") ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File ".../pandas/io/parsers/readers.py", line 1026, in read_csv return _read(filepath_or_buffer, kwds) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File ".../pandas/io/parsers/readers.py", line 626, in _read return parser.read(nrows) ^^^^^^^^^^^^^^^^^^ File ".../pandas/io/parsers/readers.py", line 1923, in read ) = self._engine.read( # type: ignore[attr-defined] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File ".../pandas/io/parsers/c_parser_wrapper.py", line 234, in read chunks = self._reader.read_low_memory(nrows) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "parsers.pyx", line 838, in pandas._libs.parsers.TextReader.read_low_memory File "parsers.pyx", line 905, in pandas._libs.parsers.TextReader._read_rows File "parsers.pyx", line 874, in pandas._libs.parsers.TextReader._tokenize_rows File "parsers.pyx", line 891, in pandas._libs.parsers.TextReader._check_tokenize_status File "parsers.pyx", line 2061, in pandas._libs.parsers.raise_parser_error pandas.errors.ParserError: Error tokenizing data. C error: EOF inside string starting at row 656 ``` ### Steps to reproduce ``` ocr_options = TesseractCliOcrOptions(force_full_page_ocr=True) pipeline_options = PdfPipelineOptions() pipeline_options.do_ocr = True pipeline_options.do_table_structure = True pipeline_options.table_structure_options.do_cell_matching = True pipeline_options.ocr_options = ocr_options converter = DocumentConverter( format_options={ InputFormat.PDF: PdfFormatOption( pipeline_options=pipeline_options, ) } ) conv_res = converter.convert(Path(my_pdf_path)) ``` ### Docling version ``` Docling version: 2.5.2 Docling Core version: 2.4.0 Docling IBM Models version: 2.0.3 Docling Parse version: 2.0.4 ``` ### Python version `Python 3.12.7`
1,732,897,993,000
null
Bug Report
[ "docling/models/tesseract_ocr_cli_model.py:TesseractOcrCliModel._run_tesseract" ]
[]
1
DS4SD/docling
DS4SD__docling-442
6666d9ec070650df35a8b156643a78c32dcfefb5
diff --git a/docling/backend/msword_backend.py b/docling/backend/msword_backend.py index 496bdb7b..05508712 100644 --- a/docling/backend/msword_backend.py +++ b/docling/backend/msword_backend.py @@ -507,18 +507,19 @@ def get_docx_image(element, drawing_blip): image_data = get_docx_image(element, drawing_blip) image_bytes = BytesIO(image_data) + level = self.get_level() # Open the BytesIO object with PIL to create an Image try: pil_image = Image.open(image_bytes) doc.add_picture( - parent=self.parents[self.level], + parent=self.parents[level - 1], image=ImageRef.from_pil(image=pil_image, dpi=72), caption=None, ) except (UnidentifiedImageError, OSError) as e: _log.warning("Warning: image cannot be loaded by Pillow") doc.add_picture( - parent=self.parents[self.level], + parent=self.parents[level - 1], caption=None, ) return
Image location in Word Document is wrong ### Bug The image placeholder in parsed docx documents is wrong. An incorrect index is used resulting in a wrong location for images in downstream export formats like markdown. ### Steps to reproduce Parsing a simple .docx with docling [image_within_text.docx](https://github.com/user-attachments/files/17919742/image_within_text.docx) ### Docling version Docling version: 2.7.0 Docling Core version: 2.4.1 Docling IBM Models version: 2.0.6 Docling Parse version: 2.1.2 ### Python version 3.12.4 <!-- ⚠️ ATTENTION: When sharing screenshots, attachments, or other data make sure not to include any sensitive information. -->
1,732,630,531,000
null
Bug Report
[ "docling/backend/msword_backend.py:MsWordDocumentBackend.handle_pictures" ]
[]
1
DS4SD/docling
DS4SD__docling-375
e6f89d520fc4608dfee9a7c9357e974681d02372
diff --git a/docling/cli/main.py b/docling/cli/main.py index c95128ac..a2a86bf4 100644 --- a/docling/cli/main.py +++ b/docling/cli/main.py @@ -1,6 +1,7 @@ import importlib import json import logging +import re import time import warnings from enum import Enum @@ -129,6 +130,12 @@ def export_documents( ) +def _split_list(raw: Optional[str]) -> Optional[List[str]]: + if raw is None: + return None + return re.split(r"[;,]", raw) + + @app.command(no_args_is_help=True) def convert( input_sources: Annotated[ @@ -163,6 +170,13 @@ def convert( ocr_engine: Annotated[ OcrEngine, typer.Option(..., help="The OCR engine to use.") ] = OcrEngine.EASYOCR, + ocr_lang: Annotated[ + Optional[str], + typer.Option( + ..., + help="Provide a comma-separated list of languages used by the OCR engine. Note that each OCR engine has different values for the language names.", + ), + ] = None, pdf_backend: Annotated[ PdfBackend, typer.Option(..., help="The PDF backend to use.") ] = PdfBackend.DLPARSE_V1, @@ -248,6 +262,10 @@ def convert( case _: raise RuntimeError(f"Unexpected OCR engine type {ocr_engine}") + ocr_lang_list = _split_list(ocr_lang) + if ocr_lang_list is not None: + ocr_options.lang = ocr_lang_list + pipeline_options = PdfPipelineOptions( do_ocr=ocr, ocr_options=ocr_options, diff --git a/docling/datamodel/pipeline_options.py b/docling/datamodel/pipeline_options.py index 2b9d228c..6c0711cc 100644 --- a/docling/datamodel/pipeline_options.py +++ b/docling/datamodel/pipeline_options.py @@ -22,6 +22,7 @@ class TableStructureOptions(BaseModel): class OcrOptions(BaseModel): kind: str + lang: List[str] force_full_page_ocr: bool = False # If enabled a full page OCR is always applied bitmap_area_threshold: float = ( 0.05 # percentage of the area for a bitmap to processed with OCR
Specific language for easyOCR ### Question Well, this library perform pretty good tasks, but for some cases, the languages is incorrect (in my Vietnamese language) Just want to ask `How can I specific language parser to make result more concise?`
the same question? Hey, I want to work on this. Can a mentor explain me please? "Temporary use before completion of work" You can change language at docling/datamodel/pipeline_options.py ocr options class is not made yet, but you can change lang: list directly and save. class EasyOcrOptions(OcrOptions): kind: Literal["easyocr"] = "easyocr" lang: List[str] = ['ko','en']#["fr", "de", "es", "en"] Oh i see. Thank you EasyOCR does NOT support Hebrew. Any work-around to process Hebrew image files in Docling? Thank you. [https://www.jaided.ai/easyocr/](https://www.jaided.ai/easyocr/) @danhertztech Docling supports multiple OCR engines (and the possibility to also bring your own). Out-of-the-box we have already Tesseract which could cover your use case. See more here https://ds4sd.github.io/docling/installation/
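To make the workaround above concrete, a sketch of passing an explicit OCR language list through the pipeline options; the import paths and the 'vi'/'en' EasyOCR language codes are assumptions based on the snippets quoted in this thread, not verified against a specific docling release:

```python
from docling.datamodel.base_models import InputFormat
from docling.datamodel.pipeline_options import EasyOcrOptions, PdfPipelineOptions
from docling.document_converter import DocumentConverter, PdfFormatOption

# Vietnamese + English, matching the use case in the original question.
pipeline_options = PdfPipelineOptions(
    do_ocr=True,
    ocr_options=EasyOcrOptions(lang=["vi", "en"]),
)

converter = DocumentConverter(
    format_options={InputFormat.PDF: PdfFormatOption(pipeline_options=pipeline_options)}
)
result = converter.convert("scanned_document.pdf")
print(result.document.export_to_markdown())
```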
1,732,010,628,000
null
Feature Request
[ "docling/cli/main.py:convert" ]
[ "docling/cli/main.py:_split_list" ]
1
DS4SD/docling
DS4SD__docling-322
2c0c439a4417d87aa712964acadb8618ea96ee65
diff --git a/docling/models/ds_glm_model.py b/docling/models/ds_glm_model.py index e63bad3a..0a066bfa 100644 --- a/docling/models/ds_glm_model.py +++ b/docling/models/ds_glm_model.py @@ -43,7 +43,8 @@ class GlmModel: def __init__(self, options: GlmOptions): self.options = options - load_pretrained_nlp_models() + if self.options.model_names != "": + load_pretrained_nlp_models() self.model = init_nlp_model(model_names=self.options.model_names) def _to_legacy_document(self, conv_res) -> DsDocument:
Unable to run. ### Bug <!-- Describe the buggy behavior you have observed. --> PS C:\Users\genco> & C:/ProgramData/anaconda3/envs/docling/python.exe c:/Users/genco/OneDrive/Documents/marker_new/docling_convertor_testing.py Fetching 9 files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 9/9 [00:00<?, ?it/s] Traceback (most recent call last): File "c:\Users\genco\OneDrive\Documents\marker_new\docling_convertor_testing.py", line 5, in <module> result = converter.convert(source) File "C:\ProgramData\anaconda3\envs\docling\lib\site-packages\pydantic\validate_call_decorator.py", line 60, in wrapper_function return validate_call_wrapper(*args, **kwargs) File "C:\ProgramData\anaconda3\envs\docling\lib\site-packages\pydantic\_internal\_validate_call.py", line 96, in __call__ res = self.__pydantic_validator__.validate_python(pydantic_core.ArgsKwargs(args, kwargs)) File "C:\ProgramData\anaconda3\envs\docling\lib\site-packages\docling\document_converter.py", line 161, in convert return next(all_res) File "C:\ProgramData\anaconda3\envs\docling\lib\site-packages\docling\document_converter.py", line 180, in convert_all for conv_res in conv_res_iter: File "C:\ProgramData\anaconda3\envs\docling\lib\site-packages\docling\document_converter.py", line 211, in _convert for item in map( File "C:\ProgramData\anaconda3\envs\docling\lib\site-packages\docling\document_converter.py", line 255, in _process_document conv_res = self._execute_pipeline(in_doc, raises_on_error=raises_on_error) File "C:\ProgramData\anaconda3\envs\docling\lib\site-packages\docling\document_converter.py", line 263, in _execute_pipeline pipeline = self._get_pipeline(in_doc.format) File "C:\ProgramData\anaconda3\envs\docling\lib\site-packages\docling\document_converter.py", line 244, in _get_pipeline self.initialized_pipelines[pipeline_class] = pipeline_class( File "C:\ProgramData\anaconda3\envs\docling\lib\site-packages\docling\pipeline\standard_pdf_pipeline.py", line 54, in __init__ self.glm_model = GlmModel(options=GlmOptions()) File "C:\ProgramData\anaconda3\envs\docling\lib\site-packages\docling\models\ds_glm_model.py", line 46, in __init__ load_pretrained_nlp_models() File "C:\ProgramData\anaconda3\envs\docling\lib\site-packages\deepsearch_glm\utils\load_pretrained_models.py", line 120, in load_pretrained_nlp_models done, data = download_items(downloads) File "C:\ProgramData\anaconda3\envs\docling\lib\site-packages\deepsearch_glm\utils\load_pretrained_models.py", line 50, in download_items with target.open("wb") as fw: File "C:\ProgramData\anaconda3\envs\docling\lib\pathlib.py", line 1119, in open return self._accessor.open(self, mode, buffering, encoding, errors, PermissionError: [Errno 13] Permission denied: 'C:\\ProgramData\\anaconda3\\envs\\docling\\lib\\site-packages\\deepsearch_glm\\resources\\models\\crf\\part-of-speech\\crf_pos_model_en.bin' ### Steps to reproduce <!-- Describe the sequence of steps for reproducing the bug. --> run code: from docling.document_converter import DocumentConverter source = "https://arxiv.org/pdf/2408.09869" # PDF path or URL converter = DocumentConverter() result = converter.convert(source) print(result.document.export_to_markdown()) # output: "### Docling Technical Report[...]" ### Docling version <!-- Copy the output of `docling --version`. --> latest version. ### Python version <!-- Copy the output of `python --version`. --> 3.10.15 <!-- ⚠️ ATTENTION: When sharing screenshots, attachments, or other data make sure not to include any sensitive information. -->
@ashunaveed Can you please tell us the exact version. There should be no need to download `crf_pos_model_en.bin`. Please run, ``` docling --version ``` We suspect that you have by chance an older version, but we want to be 100% sure. I'm trying to run Docling on a server without internet connection so I have downloaded the layout model and tableformer from Hugging Face and then I try to run with custom artifact path as per your documentation: ``` pipeline_options = PdfPipelineOptions(artifacts_path=artifacts_path) doc_converter = DocumentConverter( format_options={ InputFormat.PDF: PdfFormatOption(pipeline_options=pipeline_options) } ) ``` But I get an error similar to the OP (though for me the problem is timeout due to connection error). I have tried with these versions: Docling version: 2.5.1 Docling Core version: 2.3.2 Docling IBM Models version: 2.0.3 Docling Parse version: 2.0.3 and an older version: Docling version: 2.3.1 Docling Core version: 2.3.1 Docling IBM Models version: 2.0.3 Docling Parse version: 2.0.2 And it tries to download the glm files in both versions. I'm mostly curious to understand if the GLM files are needed as your answer above indicates that, at least crf_pos_model_en.bin, shouldn't be needed at all.
1,731,480,648,000
null
Bug Report
[ "docling/models/ds_glm_model.py:GlmModel.__init__" ]
[]
1
DS4SD/docling
DS4SD__docling-307
1239ade2750349d13d4e865d88449b232bbad944
diff --git a/docling/backend/mspowerpoint_backend.py b/docling/backend/mspowerpoint_backend.py index cbec761c..b71cd859 100644 --- a/docling/backend/mspowerpoint_backend.py +++ b/docling/backend/mspowerpoint_backend.py @@ -358,41 +358,36 @@ def walk_linear(self, pptx_obj, doc) -> DoclingDocument: size = Size(width=slide_width, height=slide_height) parent_page = doc.add_page(page_no=slide_ind + 1, size=size) - # parent_page = doc.add_page(page_no=slide_ind, size=size, hash=hash) - - # Loop through each shape in the slide - for shape in slide.shapes: + def handle_shapes(shape, parent_slide, slide_ind, doc): + handle_groups(shape, parent_slide, slide_ind, doc) if shape.has_table: # Handle Tables self.handle_tables(shape, parent_slide, slide_ind, doc) - if shape.shape_type == MSO_SHAPE_TYPE.PICTURE: - # Handle Tables + # Handle Pictures self.handle_pictures(shape, parent_slide, slide_ind, doc) - # If shape doesn't have any text, move on to the next shape if not hasattr(shape, "text"): - continue + return if shape.text is None: - continue + return if len(shape.text.strip()) == 0: - continue + return if not shape.has_text_frame: - _log.warn("Warning: shape has text but not text_frame") - continue - - # if shape.is_placeholder: - # Handle Titles (Headers) and Subtitles - # Check if the shape is a placeholder (titles are placeholders) - # self.handle_title(shape, parent_slide, slide_ind, doc) - # self.handle_text_elements(shape, parent_slide, slide_ind, doc) - # else: - + _log.warning("Warning: shape has text but not text_frame") + return # Handle other text elements, including lists (bullet lists, numbered lists) self.handle_text_elements(shape, parent_slide, slide_ind, doc) + return + + def handle_groups(shape, parent_slide, slide_ind, doc): + if shape.shape_type == MSO_SHAPE_TYPE.GROUP: + for groupedshape in shape.shapes: + handle_shapes(groupedshape, parent_slide, slide_ind, doc) - # figures... - # doc.add_figure(data=BaseFigureData(), parent=self.parents[self.level], caption=None) + # Loop through each shape in the slide + for shape in slide.shapes: + handle_shapes(shape, parent_slide, slide_ind, doc) return doc
In a specific PowerPoint, an issue with missing text occurred during parsing. ### Bug <!-- In a specific PowerPoint, an issue with missing text occurred during parsing. --> ... [specific PowerPoint] [powerpoint_sample.pptx](https://github.com/user-attachments/files/17694015/powerpoint_sample.pptx) ... ### Python version docling 2.4.0 Python version: 3.12.7 ...
@Crespo522 I'm working on the fix, in short - we need to handle grouped elements correctly.
1,731,333,112,000
null
Bug Report
[ "docling/backend/mspowerpoint_backend.py:MsPowerpointDocumentBackend.walk_linear" ]
[]
1
DS4SD/docling
DS4SD__docling-302
97f214efddcf66f0734a95c17c08936f6111d113
diff --git a/docling/backend/html_backend.py b/docling/backend/html_backend.py index 7d14c2eb..9cd1e29b 100644 --- a/docling/backend/html_backend.py +++ b/docling/backend/html_backend.py @@ -120,6 +120,8 @@ def analyse_element(self, element, idx, doc): self.handle_header(element, idx, doc) elif element.name in ["p"]: self.handle_paragraph(element, idx, doc) + elif element.name in ["pre"]: + self.handle_code(element, idx, doc) elif element.name in ["ul", "ol"]: self.handle_list(element, idx, doc) elif element.name in ["li"]: @@ -205,6 +207,16 @@ def handle_header(self, element, idx, doc): level=hlevel, ) + def handle_code(self, element, idx, doc): + """Handles monospace code snippets (pre).""" + if element.text is None: + return + text = element.text.strip() + label = DocItemLabel.CODE + if len(text) == 0: + return + doc.add_text(parent=self.parents[self.level], label=label, text=text) + def handle_paragraph(self, element, idx, doc): """Handles paragraph tags (p).""" if element.text is None:
Unable to extract code block in HTML page When I try to extract the content from a webpage using ```docling```, I found it cannot extract **code blocks** in the webpage. # Reproduce steps HTML URL: https://requests.readthedocs.io/en/latest/user/quickstart/ ```python from docling.document_converter import DocumentConverter converter = DocumentConverter() result = converter.convert('https://requests.readthedocs.io/en/latest/user/quickstart/') print(result.document.export_to_markdown()) ```` The code blocks in the following picture cannot be extracted in the result markdown: <img width="754" alt="image" src="https://github.com/user-attachments/assets/0175ddce-7516-4786-bdc7-95c3e830fad2"> The result markdown of this part in the above picture is : ```markdown ## Make a Request¶ Making a request with Requests is very simple. Begin by importing the Requests module: Now, let’s try to get a webpage. For this example, let’s get GitHub’s public timeline: Now, we have a Response object called r. We can get all the information we need from this object. Requests’ simple API means that all forms of HTTP request are as obvious. For example, this is how you make an HTTP POST request: Nice, right? What about the other HTTP request types: PUT, DELETE, HEAD and OPTIONS? These are all just as simple: That’s all well and good, but it’s also only the start of what Requests can do. ```
1,731,328,071,000
null
Bug Report
[ "docling/backend/html_backend.py:HTMLDocumentBackend.analyse_element" ]
[ "docling/backend/html_backend.py:HTMLDocumentBackend.handle_code" ]
1
certbot/certbot
certbot__certbot-10043
0e225dcba293441e7b8d420c9a210480f8c707d8
diff --git a/tools/finish_release.py b/tools/finish_release.py index 958d7672bc..56b92d2a1d 100755 --- a/tools/finish_release.py +++ b/tools/finish_release.py @@ -111,7 +111,7 @@ def get_snap_revisions(snap, channel, version): print('Getting revision numbers for', snap, version) cmd = ['snapcraft', 'status', snap] process = subprocess.run(cmd, check=True, stdout=subprocess.PIPE, universal_newlines=True) - pattern = f'^\s+{channel}\s+{version}\s+(\d+)\s*' + pattern = f'^\\s+{channel}\\s+{version}\\s+(\\d+)\\s*' revisions = re.findall(pattern, process.stdout, re.MULTILINE) assert len(revisions) == SNAP_ARCH_COUNT, f'Unexpected number of snaps found for {channel} {snap} {version} (expected {SNAP_ARCH_COUNT}, found {len(revisions)})' return revisions
Fix regex in finish_release.py ``` (venv) certbot [3.0.0] » python3 tools/finish_release.py certbot/tools/finish_release.py:114: SyntaxWarning: invalid escape sequence '\s' pattern = f'^\s+{channel}\s+{version}\s+(\d+)\s*' certbot/tools/finish_release.py:114: SyntaxWarning: invalid escape sequence '\s' pattern = f'^\s+{channel}\s+{version}\s+(\d+)\s*' certbot/tools/finish_release.py:114: SyntaxWarning: invalid escape sequence '\s' pattern = f'^\s+{channel}\s+{version}\s+(\d+)\s*' ```
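The committed fix above doubles the backslashes inside the f-string; an equivalent way to avoid the SyntaxWarning, shown here only as an illustration rather than what the patch does, is a raw f-string:

```python
import re

channel, version = "stable", "3.0.0"

# Raw f-string: backslashes reach the regex engine untouched, no SyntaxWarning.
pattern = rf'^\s+{channel}\s+{version}\s+(\d+)\s*'
revisions = re.findall(pattern, "  stable  3.0.0  1234 ", re.MULTILINE)
assert revisions == ["1234"]
```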
1,730,849,552,000
null
Bug Report
[ "tools/finish_release.py:get_snap_revisions" ]
[]
1
vitalik/django-ninja
vitalik__django-ninja-1349
97ef2914a7fffd058a311394a25af1fe489df722
diff --git a/ninja/responses.py b/ninja/responses.py index babd366e..6a0fd4ca 100644 --- a/ninja/responses.py +++ b/ninja/responses.py @@ -1,10 +1,11 @@ from enum import Enum -from ipaddress import IPv4Address, IPv6Address +from ipaddress import IPv4Address, IPv4Network, IPv6Address, IPv6Network from typing import Any, FrozenSet from django.core.serializers.json import DjangoJSONEncoder from django.http import JsonResponse from pydantic import BaseModel +from pydantic_core import Url __all__ = [ "NinjaJSONEncoder", @@ -21,7 +22,9 @@ class NinjaJSONEncoder(DjangoJSONEncoder): def default(self, o: Any) -> Any: if isinstance(o, BaseModel): return o.model_dump() - if isinstance(o, (IPv4Address, IPv6Address)): + if isinstance(o, Url): + return str(o) + if isinstance(o, (IPv4Address, IPv4Network, IPv6Address, IPv6Network)): return str(o) if isinstance(o, Enum): return str(o)
[BUG] Object of type Url is not JSON serializable **Describe the bug** django-ninja = "^1.3.0" Using `HttpUrl` (or, I suspect, any *Url class) for a schema used in a response results in json serialization error. This is the same type of issue as #717. ```pytb Traceback (most recent call last): File "/home/adys/.cache/pypoetry/virtualenvs/fabrile-backend-fklT6p4J-py3.10/lib/python3.10/site-packages/ninja/operation.py", line 121, in run return self._result_to_response(request, result, temporal_response) File "/home/adys/.cache/pypoetry/virtualenvs/fabrile-backend-fklT6p4J-py3.10/lib/python3.10/site-packages/ninja/operation.py", line 278, in _result_to_response return self.api.create_response( File "/home/adys/.cache/pypoetry/virtualenvs/fabrile-backend-fklT6p4J-py3.10/lib/python3.10/site-packages/ninja/main.py", line 453, in create_response content = self.renderer.render(request, data, response_status=status) File "/home/adys/.cache/pypoetry/virtualenvs/fabrile-backend-fklT6p4J-py3.10/lib/python3.10/site-packages/ninja/renderers.py", line 25, in render return json.dumps(data, cls=self.encoder_class, **self.json_dumps_params) File "/usr/lib/python3.10/json/__init__.py", line 238, in dumps **kw).encode(obj) File "/usr/lib/python3.10/json/encoder.py", line 199, in encode chunks = self.iterencode(o, _one_shot=True) File "/usr/lib/python3.10/json/encoder.py", line 257, in iterencode return _iterencode(o, 0) File "/home/adys/.cache/pypoetry/virtualenvs/fabrile-backend-fklT6p4J-py3.10/lib/python3.10/site-packages/ninja/responses.py", line 28, in default return super().default(o) File "/home/adys/.cache/pypoetry/virtualenvs/fabrile-backend-fklT6p4J-py3.10/lib/python3.10/site-packages/django/core/serializers/json.py", line 106, in default return super().default(o) File "/usr/lib/python3.10/json/encoder.py", line 179, in default raise TypeError(f'Object of type {o.__class__.__name__} ' TypeError: Object of type Url is not JSON serializable ```
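Until the encoder handles `Url` natively (as the patch above does), one possible workaround is a custom renderer that extends `NinjaJSONEncoder`; the `JSONRenderer` / `encoder_class` names are inferred from the traceback above and from django-ninja's custom-renderer mechanism, so treat them as assumptions:

```python
from typing import Any

from ninja import NinjaAPI
from ninja.renderers import JSONRenderer
from ninja.responses import NinjaJSONEncoder
from pydantic_core import Url


class UrlAwareJSONEncoder(NinjaJSONEncoder):
    def default(self, o: Any) -> Any:
        # Serialize pydantic-core Url objects as plain strings.
        if isinstance(o, Url):
            return str(o)
        return super().default(o)


class UrlAwareJSONRenderer(JSONRenderer):
    encoder_class = UrlAwareJSONEncoder


api = NinjaAPI(renderer=UrlAwareJSONRenderer())
```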
1,733,135,333,000
null
Bug Report
[ "ninja/responses.py:NinjaJSONEncoder.default" ]
[]
1
pandas-dev/pandas
pandas-dev__pandas-60577
b0192c70610a9db593968374ea60d189daaaccc7
diff --git a/pandas/io/sql.py b/pandas/io/sql.py index 3c0c5cc64c24c..5652d7fab0c7c 100644 --- a/pandas/io/sql.py +++ b/pandas/io/sql.py @@ -241,7 +241,7 @@ def read_sql_table( # pyright: ignore[reportOverlappingOverload] schema=..., index_col: str | list[str] | None = ..., coerce_float=..., - parse_dates: list[str] | dict[str, str] | None = ..., + parse_dates: list[str] | dict[str, str] | dict[str, dict[str, Any]] | None = ..., columns: list[str] | None = ..., chunksize: None = ..., dtype_backend: DtypeBackend | lib.NoDefault = ..., @@ -255,7 +255,7 @@ def read_sql_table( schema=..., index_col: str | list[str] | None = ..., coerce_float=..., - parse_dates: list[str] | dict[str, str] | None = ..., + parse_dates: list[str] | dict[str, str] | dict[str, dict[str, Any]] | None = ..., columns: list[str] | None = ..., chunksize: int = ..., dtype_backend: DtypeBackend | lib.NoDefault = ..., @@ -268,7 +268,7 @@ def read_sql_table( schema: str | None = None, index_col: str | list[str] | None = None, coerce_float: bool = True, - parse_dates: list[str] | dict[str, str] | None = None, + parse_dates: list[str] | dict[str, str] | dict[str, dict[str, Any]] | None = None, columns: list[str] | None = None, chunksize: int | None = None, dtype_backend: DtypeBackend | lib.NoDefault = lib.no_default, @@ -372,7 +372,7 @@ def read_sql_query( # pyright: ignore[reportOverlappingOverload] index_col: str | list[str] | None = ..., coerce_float=..., params: list[Any] | Mapping[str, Any] | None = ..., - parse_dates: list[str] | dict[str, str] | None = ..., + parse_dates: list[str] | dict[str, str] | dict[str, dict[str, Any]] | None = ..., chunksize: None = ..., dtype: DtypeArg | None = ..., dtype_backend: DtypeBackend | lib.NoDefault = ..., @@ -386,7 +386,7 @@ def read_sql_query( index_col: str | list[str] | None = ..., coerce_float=..., params: list[Any] | Mapping[str, Any] | None = ..., - parse_dates: list[str] | dict[str, str] | None = ..., + parse_dates: list[str] | dict[str, str] | dict[str, dict[str, Any]] | None = ..., chunksize: int = ..., dtype: DtypeArg | None = ..., dtype_backend: DtypeBackend | lib.NoDefault = ..., @@ -399,7 +399,7 @@ def read_sql_query( index_col: str | list[str] | None = None, coerce_float: bool = True, params: list[Any] | Mapping[str, Any] | None = None, - parse_dates: list[str] | dict[str, str] | None = None, + parse_dates: list[str] | dict[str, str] | dict[str, dict[str, Any]] | None = None, chunksize: int | None = None, dtype: DtypeArg | None = None, dtype_backend: DtypeBackend | lib.NoDefault = lib.no_default,
BUG: Type Annotation Inconsistency in read_sql_* Functions ### Pandas version checks - [X] I have checked that this issue has not already been reported. - [X] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas. - [X] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas. ### Reproducible Example ```python import pandas as pd import sqlite3 date_params = {"date_col": {"utc": True}} with sqlite3.connect("blah") as con: # Fails type check. df = pd.read_sql_query("SELECT * FROM tablename", con, parse_dates=date_params) print(df) ``` ### Issue Description The pandas type annotations for the `parse_dates` argument in `read_sql_table()` and `read_sql_query()` is overly restrictive. It incorrectly causes type checkers to complain when using the `parse_dates` argument to pass keyword arguments to `to_datetime()` as documented [here](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_sql_query.html). To solve this problem, the annotated type for `parse_date` just needs to be switched from `parse_dates: list[str] | dict[str, str] | None` to `list[str] | dict[str, str] | dict[str, dict[str, Any]] | None`. This problem is not always visible because the corresponding `pandas-stubs` already does this. The inconsistency appears however in some type checkers when additional stubs are not available or configured though. To illustrate, take the provided (valid) example and run `pyright` on it (with no arguments). It will output the following. ``` (bug_venv)$ pyright example.py /home/user/Code/pandas_bug/example.py /home/user/Code/pandas_bug/example.py:8:10 - error: No overloads for "read_sql_query" match the provided arguments (reportCallIssue) /home/user/Code/pandas_bug/example.py:8:72 - error: Argument of type "dict[str, dict[str, bool]]" cannot be assigned to parameter "parse_dates" of type "list[str] |dict[str, str] | None" in function "read_sql_query" Type "dict[str, dict[str, bool]]" is not assignable to type "list[str] | dict[str, str] | None" "dict[str, dict[str, bool]]" is not assignable to "list[str]" "dict[str, dict[str, bool]]" is not assignable to "dict[str, str]" Type parameter "_VT@dict" is invariant, but "dict[str, bool]" is not the same as "str" Consider switching from "dict" to "Mapping" which is covariant in the value type "dict[str, dict[str, bool]]" is not assignable to "None" (reportArgumentType) 2 errors, 0 warnings, 0 informations ``` I am more than happy to submit a pull request for this is desired, but thought it best to put in this issue first in case I am missing something. ### Expected Behavior import pandas as pd import sqlite3 date_params = {"date_col": {"utc": True}} with sqlite3.connect("blah") as con: # Type checks correctly df = pd.read_sql_query("SELECT * FROM tablename", con, parse_dates=date_params) print(df) ### Installed Versions <details> INSTALLED VERSIONS ------------------ commit : 0691c5cf90477d3503834d983f69350f250a6ff7 python : 3.12.6 python-bits : 64 OS : Linux OS-release : 6.11.2-arch1-1 Version : #1 SMP PREEMPT_DYNAMIC Fri, 04 Oct 2024 21:51:11 +0000 machine : x86_64 processor : byteorder : little LC_ALL : None LANG : en_US.UTF-8 LOCALE : en_US.UTF-8 pandas : 2.2.3 numpy : 2.1.2 pytz : 2024.2 dateutil : 2.9.0.post0 pip : 24.2 Cython : None sphinx : None IPython : None adbc-driver-postgresql: None adbc-driver-sqlite : None bs4 : None blosc : None bottleneck : None dataframe-api-compat : None fastparquet : None fsspec : None html5lib : None hypothesis : None gcsfs : None jinja2 : None lxml.etree : None matplotlib : None numba : None numexpr : None odfpy : None openpyxl : None pandas_gbq : None psycopg2 : None pymysql : None pyarrow : None pyreadstat : None pytest : None python-calamine : None pyxlsb : None s3fs : None scipy : None sqlalchemy : None tables : None tabulate : None xarray : None xlrd : None xlsxwriter : None zstandard : None tzdata : 2024.2 qtpy : None pyqt5 : None </details>
Thanks for the report! > This problem is not always visible because the corresponding `pandas-stubs` already does this. The inconsistency appears however in some type checkers when additional stubs are not available or configured though. It seems to me this is not appropriate. PEP 561 makes this quite clear I think: > Package maintainers who wish to support type checking of their code MUST add a marker file named py.typed to their package supporting typing. Since pandas does not have a `py.typed` file, its type-hints should not be considered public. I only mention this to say that I think pandas should not be obligated to spend unnecessary effort in order to support third parties that use its internal type-hints. Of course, in cases where the change would benefit pandas internal typing (as is the case here I believe), PRs are welcome!
1,734,286,166,000
null
Bug Report
[ "pandas/io/sql.py:read_sql_table", "pandas/io/sql.py:read_sql_query" ]
[]
2
pandas-dev/pandas
pandas-dev__pandas-60543
659eecf22a2e4c4a8f023c655a75a7135614a409
diff --git a/pandas/core/dtypes/common.py b/pandas/core/dtypes/common.py index 6fa21d9410187..b0c8ec1ffc083 100644 --- a/pandas/core/dtypes/common.py +++ b/pandas/core/dtypes/common.py @@ -430,7 +430,7 @@ def is_period_dtype(arr_or_dtype) -> bool: Check whether an array-like or dtype is of the Period dtype. .. deprecated:: 2.2.0 - Use isinstance(dtype, pd.Period) instead. + Use isinstance(dtype, pd.PeriodDtype) instead. Parameters ----------
DOC: Incorrect deprecation example for `is_period_dtype` ### Pandas version checks - [X] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/) ### Location of the documentation https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.api.types.is_period_dtype.html#pandas.api.types.is_period_dtype ### Documentation problem Suggests the user use `isinstance(dtype, pd.Period)` instead, when they really need to use `Use isinstance(dtype, pd.PeriodDtype)` ### Suggested fix for documentation Update message to reference correct class
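A short illustration of why the wording matters (assuming a period-dtype Series; this is not part of the one-line docs patch):

```python
import pandas as pd

ser = pd.Series(pd.period_range("2024-01", periods=3, freq="M"))

isinstance(ser.dtype, pd.PeriodDtype)  # True  -> what the message should suggest
isinstance(ser.dtype, pd.Period)       # False -> pd.Period is the scalar type, not the dtype
```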
1,733,944,385,000
null
Bug Report
[ "pandas/core/dtypes/common.py:is_period_dtype" ]
[]
1
pandas-dev/pandas
pandas-dev__pandas-60526
8a286fa16f3160e939b192cbe8e218992a84e6fc
diff --git a/pandas/core/computation/expressions.py b/pandas/core/computation/expressions.py index e2acd9a2c97c2..a2c3a706ae29c 100644 --- a/pandas/core/computation/expressions.py +++ b/pandas/core/computation/expressions.py @@ -65,23 +65,23 @@ def set_numexpr_threads(n=None) -> None: ne.set_num_threads(n) -def _evaluate_standard(op, op_str, a, b): +def _evaluate_standard(op, op_str, left_op, right_op): """ Standard evaluation. """ if _TEST_MODE: _store_test_result(False) - return op(a, b) + return op(left_op, right_op) -def _can_use_numexpr(op, op_str, a, b, dtype_check) -> bool: - """return a boolean if we WILL be using numexpr""" +def _can_use_numexpr(op, op_str, left_op, right_op, dtype_check) -> bool: + """return left_op boolean if we WILL be using numexpr""" if op_str is not None: # required min elements (otherwise we are adding overhead) - if a.size > _MIN_ELEMENTS: + if left_op.size > _MIN_ELEMENTS: # check for dtype compatibility dtypes: set[str] = set() - for o in [a, b]: + for o in [left_op, right_op]: # ndarray and Series Case if hasattr(o, "dtype"): dtypes |= {o.dtype.name} @@ -93,22 +93,22 @@ def _can_use_numexpr(op, op_str, a, b, dtype_check) -> bool: return False -def _evaluate_numexpr(op, op_str, a, b): +def _evaluate_numexpr(op, op_str, left_op, right_op): result = None - if _can_use_numexpr(op, op_str, a, b, "evaluate"): + if _can_use_numexpr(op, op_str, left_op, right_op, "evaluate"): is_reversed = op.__name__.strip("_").startswith("r") if is_reversed: # we were originally called by a reversed op method - a, b = b, a + left_op, right_op = right_op, left_op - a_value = a - b_value = b + left_value = left_op + right_value = right_op try: result = ne.evaluate( - f"a_value {op_str} b_value", - local_dict={"a_value": a_value, "b_value": b_value}, + f"left_value {op_str} right_value", + local_dict={"left_value": left_value, "right_value": right_op}, casting="safe", ) except TypeError: @@ -116,20 +116,20 @@ def _evaluate_numexpr(op, op_str, a, b): # (https://github.com/pydata/numexpr/issues/379) pass except NotImplementedError: - if _bool_arith_fallback(op_str, a, b): + if _bool_arith_fallback(op_str, left_op, right_op): pass else: raise if is_reversed: # reverse order to original for fallback - a, b = b, a + left_op, right_op = right_op, left_op if _TEST_MODE: _store_test_result(result is not None) if result is None: - result = _evaluate_standard(op, op_str, a, b) + result = _evaluate_standard(op, op_str, left_op, right_op) return result @@ -170,24 +170,24 @@ def _evaluate_numexpr(op, op_str, a, b): } -def _where_standard(cond, a, b): +def _where_standard(cond, left_op, right_op): # Caller is responsible for extracting ndarray if necessary - return np.where(cond, a, b) + return np.where(cond, left_op, right_op) -def _where_numexpr(cond, a, b): +def _where_numexpr(cond, left_op, right_op): # Caller is responsible for extracting ndarray if necessary result = None - if _can_use_numexpr(None, "where", a, b, "where"): + if _can_use_numexpr(None, "where", left_op, right_op, "where"): result = ne.evaluate( "where(cond_value, a_value, b_value)", - local_dict={"cond_value": cond, "a_value": a, "b_value": b}, + local_dict={"cond_value": cond, "a_value": left_op, "b_value": right_op}, casting="safe", ) if result is None: - result = _where_standard(cond, a, b) + result = _where_standard(cond, left_op, right_op) return result @@ -206,13 +206,13 @@ def _has_bool_dtype(x): _BOOL_OP_UNSUPPORTED = {"+": "|", "*": "&", "-": "^"} -def _bool_arith_fallback(op_str, a, b) -> bool: +def _bool_arith_fallback(op_str, left_op, right_op) -> bool: """ Check if we should fallback to the python `_evaluate_standard` in case of an unsupported operation by numexpr, which is the case for some boolean ops. """ - if _has_bool_dtype(a) and _has_bool_dtype(b): + if _has_bool_dtype(left_op) and _has_bool_dtype(right_op): if op_str in _BOOL_OP_UNSUPPORTED: warnings.warn( f"evaluating in Python space because the {op_str!r} " @@ -224,15 +224,15 @@ def _bool_arith_fallback(op_str, a, b) -> bool: return False -def evaluate(op, a, b, use_numexpr: bool = True): +def evaluate(op, left_op, right_op, use_numexpr: bool = True): """ - Evaluate and return the expression of the op on a and b. + Evaluate and return the expression of the op on left_op and right_op. Parameters ---------- op : the actual operand - a : left operand - b : right operand + left_op : left operand + right_op : right operand use_numexpr : bool, default True Whether to try to use numexpr. """ @@ -240,24 +240,24 @@ def evaluate(op, a, b, use_numexpr: bool = True): if op_str is not None: if use_numexpr: # error: "None" not callable - return _evaluate(op, op_str, a, b) # type: ignore[misc] - return _evaluate_standard(op, op_str, a, b) + return _evaluate(op, op_str, left_op, right_op) # type: ignore[misc] + return _evaluate_standard(op, op_str, left_op, right_op) -def where(cond, a, b, use_numexpr: bool = True): +def where(cond, left_op, right_op, use_numexpr: bool = True): """ - Evaluate the where condition cond on a and b. + Evaluate the where condition cond on left_op and right_op. Parameters ---------- cond : np.ndarray[bool] - a : return if cond is True - b : return if cond is False + left_op : return if cond is True + right_op : return if cond is False use_numexpr : bool, default True Whether to try to use numexpr. """ assert _where is not None - return _where(cond, a, b) if use_numexpr else _where_standard(cond, a, b) + return _where(cond, left_op, right_op) if use_numexpr else _where_standard(cond, left_op, right_op) def set_test_mode(v: bool = True) -> None:
DOC: Update variables a and b to names consistent with comment documentation ### Pandas version checks - [X] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/) ### Location of the documentation https://github.com/pandas-dev/pandas/blob/main/pandas/core/computation/expressions.py ### Documentation problem Lines 234 and 235 explain what a and b are in detail (left and right operands), but there are many of those same variables earlier in the file, making it harder to understand what they represent. ### Suggested fix for documentation Assuming a and b represent right and left operands throughout each function, change these variable names to right_op and left_op instead throughout all functions to have more descriptive variable names
1,733,658,054,000
null
Feature Request
[ "pandas/core/computation/expressions.py:_evaluate_standard", "pandas/core/computation/expressions.py:_can_use_numexpr", "pandas/core/computation/expressions.py:_evaluate_numexpr", "pandas/core/computation/expressions.py:_where_standard", "pandas/core/computation/expressions.py:_where_numexpr", "pandas/core/computation/expressions.py:_bool_arith_fallback", "pandas/core/computation/expressions.py:evaluate", "pandas/core/computation/expressions.py:where" ]
[]
8
pandas-dev/pandas
pandas-dev__pandas-60518
8a286fa16f3160e939b192cbe8e218992a84e6fc
diff --git a/pandas/core/computation/pytables.py b/pandas/core/computation/pytables.py index fe7e27f537b01..4a75acce46632 100644 --- a/pandas/core/computation/pytables.py +++ b/pandas/core/computation/pytables.py @@ -205,7 +205,7 @@ def generate(self, v) -> str: val = v.tostring(self.encoding) return f"({self.lhs} {self.op} {val})" - def convert_value(self, v) -> TermValue: + def convert_value(self, conv_val) -> TermValue: """ convert the expression that is in the term to something that is accepted by pytables @@ -219,44 +219,44 @@ def stringify(value): kind = ensure_decoded(self.kind) meta = ensure_decoded(self.meta) if kind == "datetime" or (kind and kind.startswith("datetime64")): - if isinstance(v, (int, float)): - v = stringify(v) - v = ensure_decoded(v) - v = Timestamp(v).as_unit("ns") - if v.tz is not None: - v = v.tz_convert("UTC") - return TermValue(v, v._value, kind) + if isinstance(conv_val, (int, float)): + conv_val = stringify(conv_val) + conv_val = ensure_decoded(conv_val) + conv_val = Timestamp(conv_val).as_unit("ns") + if conv_val.tz is not None: + conv_val = conv_val.tz_convert("UTC") + return TermValue(conv_val, conv_val._value, kind) elif kind in ("timedelta64", "timedelta"): - if isinstance(v, str): - v = Timedelta(v) + if isinstance(conv_val, str): + conv_val = Timedelta(conv_val) else: - v = Timedelta(v, unit="s") - v = v.as_unit("ns")._value - return TermValue(int(v), v, kind) + conv_val = Timedelta(conv_val, unit="s") + conv_val = conv_val.as_unit("ns")._value + return TermValue(int(conv_val), conv_val, kind) elif meta == "category": metadata = extract_array(self.metadata, extract_numpy=True) result: npt.NDArray[np.intp] | np.intp | int - if v not in metadata: + if conv_val not in metadata: result = -1 else: - result = metadata.searchsorted(v, side="left") + result = metadata.searchsorted(conv_val, side="left") return TermValue(result, result, "integer") elif kind == "integer": try: - v_dec = Decimal(v) + v_dec = Decimal(conv_val) except InvalidOperation: # GH 54186 # convert v to float to raise float's ValueError - float(v) + float(conv_val) else: - v = int(v_dec.to_integral_exact(rounding="ROUND_HALF_EVEN")) - return TermValue(v, v, kind) + conv_val = int(v_dec.to_integral_exact(rounding="ROUND_HALF_EVEN")) + return TermValue(conv_val, conv_val, kind) elif kind == "float": - v = float(v) - return TermValue(v, v, kind) + conv_val = float(conv_val) + return TermValue(conv_val, conv_val, kind) elif kind == "bool": - if isinstance(v, str): - v = v.strip().lower() not in [ + if isinstance(conv_val, str): + conv_val = conv_val.strip().lower() not in [ "false", "f", "no", @@ -268,13 +268,13 @@ def stringify(value): "", ] else: - v = bool(v) - return TermValue(v, v, kind) - elif isinstance(v, str): + conv_val = bool(conv_val) + return TermValue(conv_val, conv_val, kind) + elif isinstance(conv_val, str): # string quoting - return TermValue(v, stringify(v), "string") + return TermValue(conv_val, stringify(conv_val), "string") else: - raise TypeError(f"Cannot compare {v} of type {type(v)} to {kind} column") + raise TypeError(f"Cannot compare {conv_val} of type {type(conv_val)} to {kind} column") def convert_values(self) -> None: pass
DOC: Convert v to conv_val in function for pytables.py ### Pandas version checks - [X] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/) ### Location of the documentation pandas\pandas\core\computation\pytables.py ### Documentation problem Many instances of just v in this function. Wanted to clarify throughout ### Suggested fix for documentation Change v to conv_val
1,733,558,382,000
null
Feature Request
[ "pandas/core/computation/pytables.py:BinOp.convert_value" ]
[]
1
pandas-dev/pandas
pandas-dev__pandas-60512
659eecf22a2e4c4a8f023c655a75a7135614a409
diff --git a/pandas/core/generic.py b/pandas/core/generic.py index d1aa20501b060..de7fb3682fb4f 100644 --- a/pandas/core/generic.py +++ b/pandas/core/generic.py @@ -665,7 +665,7 @@ def size(self) -> int: See Also -------- - ndarray.size : Number of elements in the array. + numpy.ndarray.size : Number of elements in the array. Examples --------
DOC: methods in the See Also section of pandas.DataFrame.size are not hyperlinks ### Pandas version checks - [X] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/) ### Location of the documentation https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.size.html ### Documentation problem In the See Also section the `ndarray.size` method is listed, but it is not a hyperlink, so the reader cannot navigate to it with ease and has to look it up instead. ### Suggested fix for documentation Add numpy.ndarray.size in the docstring.
take
1,733,537,109,000
null
Bug Report
[ "pandas/core/generic.py:NDFrame.size" ]
[]
1
pandas-dev/pandas
pandas-dev__pandas-60461
a4fc97e92ed938260728e3f6c2b92df5ffb57b7f
diff --git a/pandas/core/dtypes/cast.py b/pandas/core/dtypes/cast.py index 137a49c4487f6..02b9291da9b31 100644 --- a/pandas/core/dtypes/cast.py +++ b/pandas/core/dtypes/cast.py @@ -87,8 +87,8 @@ if TYPE_CHECKING: from collections.abc import ( + Collection, Sequence, - Sized, ) from pandas._typing import ( @@ -1581,7 +1581,7 @@ def _maybe_box_and_unbox_datetimelike(value: Scalar, dtype: DtypeObj): return _maybe_unbox_datetimelike(value, dtype) -def construct_1d_object_array_from_listlike(values: Sized) -> np.ndarray: +def construct_1d_object_array_from_listlike(values: Collection) -> np.ndarray: """ Transform any list-like object in a 1-dimensional numpy array of object dtype. @@ -1599,11 +1599,9 @@ def construct_1d_object_array_from_listlike(values: Sized) -> np.ndarray: ------- 1-dimensional numpy array of dtype object """ - # numpy will try to interpret nested lists as further dimensions, hence - # making a 1D array that contains list-likes is a bit tricky: - result = np.empty(len(values), dtype="object") - result[:] = values - return result + # numpy will try to interpret nested lists as further dimensions in np.array(), + # hence explicitly making a 1D array using np.fromiter + return np.fromiter(values, dtype="object", count=len(values)) def maybe_cast_to_integer_array(arr: list | np.ndarray, dtype: np.dtype) -> np.ndarray:
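The patch swaps the `np.empty` plus slice-assignment construction for `np.fromiter`. A standalone sketch of the behaviour the comment in the diff refers to, using plain NumPy; the sample values are illustrative, and object-dtype support in `np.fromiter` needs NumPy >= 1.23:

```python
import numpy as np

values = [(1, 2), (3, 4)]  # list-likes that should stay whole, one per element

# np.array() interprets the nesting as an extra dimension:
nested = np.array(values, dtype="object")
print(nested.shape)   # (2, 2) -- the tuples were split into a second axis

# np.fromiter builds a true 1-D object array, one element per list-like:
flat = np.fromiter(values, dtype="object", count=len(values))
print(flat.shape)     # (2,)
print(flat[0])        # (1, 2)
```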
PERF: Melt 2x slower when future.infer_string option enabled ### Pandas version checks - [X] I have checked that this issue has not already been reported. - [X] I have confirmed this issue exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas. - [X] I have confirmed this issue exists on the main branch of pandas. ### Reproducible Example ``` import pandas as pd import numpy as np # This configuration option makes this code slow pd.options.future.infer_string = True # Define dimensions n_rows = 10000 n_cols = 10000 # Generate random IDs for the rows ids = [f"string_id_{i}" for i in range(1, n_rows + 1)] # Generate a random sparse matrix with 10% non-NaN values data = np.random.choice([np.nan, 1], size=(n_rows, n_cols), p=[0.9, 0.1]) # Create a DataFrame from the sparse matrix and add the 'Id' column df = pd.DataFrame(data, columns=[f"column_name_{i}" for i in range(1, n_cols + 1)]) df.insert(0, 'Id', ids) # Melt the DataFrame df_melted = df.melt(id_vars=['Id'], var_name='Column', value_name='Value') # Display the first few rows of the melted DataFrame df_melted.head() ``` ### Installed Versions ``` INSTALLED VERSIONS ------------------ commit : d9cdd2ee5a58015ef6f4d15c7226110c9aab8140 python : 3.12.5.final.0 python-bits : 64 OS : Darwin OS-release : 23.6.0 Version : Darwin Kernel Version 23.6.0: Mon Jul 29 21:14:30 PDT 2024; root:xnu-10063.141.2~1/RELEASE_ARM64_T6000 machine : arm64 processor : arm byteorder : little LC_ALL : None LANG : pl_PL.UTF-8 LOCALE : pl_PL.UTF-8 pandas : 2.2.2 numpy : 2.1.0 pytz : 2024.1 dateutil : 2.9.0.post0 setuptools : 73.0.1 pip : 24.1.2 Cython : None pytest : 8.3.2 hypothesis : None sphinx : None blosc : None feather : None xlsxwriter : None lxml.etree : None html5lib : None pymysql : None psycopg2 : None jinja2 : 3.1.4 IPython : 8.26.0 pandas_datareader : None adbc-driver-postgresql: None adbc-driver-sqlite : None bs4 : 4.12.3 bottleneck : None dataframe-api-compat : None fastparquet : None fsspec : 2024.6.1 gcsfs : None matplotlib : None numba : None numexpr : None odfpy : None openpyxl : None pandas_gbq : None pyarrow : 17.0.0 pyreadstat : None python-calamine : None pyxlsb : None s3fs : 2024.6.1 scipy : None sqlalchemy : None tables : None tabulate : None xarray : None xlrd : None zstandard : None tzdata : 2024.1 qtpy : None pyqt5 : None ``` ### Prior Performance This code with `pd.options.future.infer_string = False` runs in: `5.23 s ± 1.35 s per loop (mean ± std. dev. of 7 runs, 1 loop each)` Memory consumption is around 14 GB. Enabling `pd.options.future.infer_string = True` makes it 2 times slower: `10.6 s ± 40.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)` Also memory consumption is bigger with peak around 25GB.
@maver1ck Thanks for the report! On main (and on my laptop), I see: ``` In [20]: pd.options.future.infer_string = False In [21]: df = pd.DataFrame(data, columns=[f"column_name_{i}" for i in range(1, n_cols + 1)]) In [22]: df.insert(0, 'Id', ids) In [23]: %timeit df_melted = df.melt(id_vars=['Id'], var_name='Column', value_name='Value') 6.25 s ± 944 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) In [24]: pd.options.future.infer_string = True In [25]: df = pd.DataFrame(data, columns=[f"column_name_{i}" for i in range(1, n_cols + 1)]) In [26]: df.insert(0, 'Id', ids) In [27]: %timeit df.melt(id_vars=['Id'], var_name='Column', value_name='Value') 3.55 s ± 169 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) ``` So for me it is actually two times faster (didn't check memory usage though) And testing with release pandas 2.2.2, I indeed see that it is slower with `pd.options.future.infer_string = True`. So it seems we have fixed something in the meantime. The same problem exists in Pandas 2.2.3. So my understanding is that this will be fixed in 3.0 ? @jorisvandenbossche is that correct ?
1,733,057,561,000
null
Performance Issue
[ "pandas/core/dtypes/cast.py:construct_1d_object_array_from_listlike" ]
[]
1
pandas-dev/pandas
pandas-dev__pandas-60457
844b3191bd45b95cbaae341048bf7f367f086f2f
diff --git a/pandas/core/generic.py b/pandas/core/generic.py index a6be17a654aa7..3a48cc8a66076 100644 --- a/pandas/core/generic.py +++ b/pandas/core/generic.py @@ -3878,6 +3878,14 @@ def to_csv( >>> import os # doctest: +SKIP >>> os.makedirs("folder/subfolder", exist_ok=True) # doctest: +SKIP >>> df.to_csv("folder/subfolder/out.csv") # doctest: +SKIP + + Format floats to two decimal places: + + >>> df.to_csv("out1.csv", float_format="%.2f") # doctest: +SKIP + + Format floats using scientific notation: + + >>> df.to_csv("out2.csv", float_format="{{:.2e}}".format) # doctest: +SKIP """ df = self if isinstance(self, ABCDataFrame) else self.to_frame()
DOC: Add examples for float_format in to_csv documentation ### Pandas version checks - [X] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/) ### Location of the documentation https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_csv.html ### Documentation problem The float_format parameter in to_csv is explained but lacks examples. Users might struggle to understand how to apply this parameter effectively without concrete examples in the documentation. ### Suggested fix for documentation I suggest adding examples for float_format to make the documentation more beginner-friendly. Examples could include: ``` # Format floats to two decimal places df.to_csv("example1.csv", float_format="%.2f") # Use scientific notation df.to_csv("example2.csv", float_format="{:.2e}".format) ```
take
1,733,028,703,000
null
Feature Request
[ "pandas/core/generic.py:NDFrame.to_csv" ]
[]
1
pandas-dev/pandas
pandas-dev__pandas-60415
98f7e4deeff26a5ef993ee27104387a1a6e0d3d3
diff --git a/pandas/core/generic.py b/pandas/core/generic.py index 039bdf9c36ee7..a6be17a654aa7 100644 --- a/pandas/core/generic.py +++ b/pandas/core/generic.py @@ -838,7 +838,7 @@ def pop(self, item: Hashable) -> Series | Any: return result @final - def squeeze(self, axis: Axis | None = None): + def squeeze(self, axis: Axis | None = None) -> Scalar | Series | DataFrame: """ Squeeze 1 dimensional axis objects into scalars.
DOC: Missing type hint for squeeze method ### Pandas version checks - [X] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/) ### Location of the documentation https://github.com/pandas-dev/pandas/blob/main/pandas/core/generic.py ### Documentation problem The squeeze method is missing a type hint. ### Suggested fix for documentation Adding a type hint to the squeeze method to be consistent with the rest of the code.
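The suggested `Scalar | Series | DataFrame` annotation reflects that the return type of `squeeze` depends on the shape of the object being squeezed. A quick illustration; the frame below is arbitrary:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})

print(type(df.squeeze()))                  # 2x2 frame: nothing to squeeze -> DataFrame
print(type(df[["a"]].squeeze()))           # single column -> Series
print(type(df.loc[[0], ["a"]].squeeze()))  # single cell -> scalar (numpy.int64)
```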
Can confirm, specifically this line: https://github.com/pandas-dev/pandas/blob/1c986d6213904fd7d9acc5622dc91d029d3f1218/pandas/core/generic.py#L841
1,732,555,390,000
null
Feature Request
[ "pandas/core/generic.py:NDFrame.squeeze" ]
[]
1
pandas-dev/pandas
pandas-dev__pandas-60398
e62fcb15a70dfb6f4c408cf801f83b216578335b
diff --git a/pandas/core/series.py b/pandas/core/series.py index 35b576da87ed7..4fa8b86fa4c16 100644 --- a/pandas/core/series.py +++ b/pandas/core/series.py @@ -567,7 +567,7 @@ def __arrow_c_stream__(self, requested_schema=None): Export the pandas Series as an Arrow C stream PyCapsule. This relies on pyarrow to convert the pandas Series to the Arrow - format (and follows the default behaviour of ``pyarrow.Array.from_pandas`` + format (and follows the default behavior of ``pyarrow.Array.from_pandas`` in its handling of the index, i.e. to ignore it). This conversion is not necessarily zero-copy. @@ -2226,7 +2226,7 @@ def drop_duplicates( 5 hippo Name: animal, dtype: object - With the 'keep' parameter, the selection behaviour of duplicated values + With the 'keep' parameter, the selection behavior of duplicated values can be changed. The value 'first' keeps the first occurrence for each set of duplicated entries. The default value of keep is 'first'. @@ -3451,7 +3451,7 @@ def sort_values( 4 5.0 dtype: float64 - Sort values ascending order (default behaviour) + Sort values ascending order (default behavior) >>> s.sort_values(ascending=True) 1 1.0 @@ -4098,7 +4098,7 @@ def swaplevel( In the following example, we will swap the levels of the indices. Here, we will swap the levels column-wise, but levels can be swapped row-wise - in a similar manner. Note that column-wise is the default behaviour. + in a similar manner. Note that column-wise is the default behavior. By not supplying any arguments for i and j, we swap the last and second to last indices.
DOC: Fix docstring typo ### Pandas version checks - [X] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/) ### Location of the documentation https://github.com/pandas-dev/pandas/blob/main/pandas/core/series.py ### Documentation problem The docstring for the __arrow_c_stream__ method in the Series class uses the word "behaviour". ### Suggested fix for documentation Suggested to rewrite as "behavior", which is the American English spelling, to maintain consistency with the rest of the Pandas codebase.
take
1,732,301,626,000
null
Bug Report
[ "pandas/core/series.py:Series.__arrow_c_stream__", "pandas/core/series.py:Series.drop_duplicates", "pandas/core/series.py:Series.sort_values", "pandas/core/series.py:Series.swaplevel" ]
[]
4
pandas-dev/pandas
pandas-dev__pandas-60310
61f800d7b69efa632c5f93b4be4b1e4154c698d7
diff --git a/pandas/core/frame.py b/pandas/core/frame.py index b35e2c8497fb7..34eb198b4b4da 100644 --- a/pandas/core/frame.py +++ b/pandas/core/frame.py @@ -2115,8 +2115,8 @@ def from_records( """ Convert structured or record ndarray to DataFrame. - Creates a DataFrame object from a structured ndarray, sequence of - tuples or dicts, or DataFrame. + Creates a DataFrame object from a structured ndarray, or sequence of + tuples or dicts. Parameters ----------
DOC: Dataframe.from_records should not say that passing in a DataFrame for data is allowed ### Pandas version checks - [X] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/) ### Location of the documentation https://pandas.pydata.org/docs/dev/reference/api/pandas.DataFrame.from_records.html#pandas.DataFrame.from_records ### Documentation problem The first text in the docstring says (emphasis at the end is mine) > Convert structured or record ndarray to DataFrame. > > Creates a DataFrame object from a structured ndarray, sequence of > tuples or dicts, or **DataFrame**. However, starting in 2.1.0, passing in a DataFrame has been deprecated. In 2.1.0 it would raise a FutureWarning; in main it will raise a TyperError. The documentation between 2.1.0 and main appear to have been updated to remove text in the Parameters section of the docstring that still said a DataFrame could be passed in for data, but the text in the initial section of the docstring was not. ### Suggested fix for documentation Change the initial docstring text to be: > Convert structured or record ndarray to DataFrame. > > Creates a DataFrame object from a structured ndarray or sequence of > tuples or dicts.
Thanks for the report, PRs to fix are welcome! take
1,731,578,353,000
null
Bug Report
[ "pandas/core/frame.py:DataFrame.from_records" ]
[]
1
pandas-dev/pandas
pandas-dev__pandas-60277
4fcee0e431135bf6fa97440d4d7e17a96630fe6e
diff --git a/pandas/core/generic.py b/pandas/core/generic.py index 35014674565ff..3a83a3997f881 100644 --- a/pandas/core/generic.py +++ b/pandas/core/generic.py @@ -2211,8 +2211,9 @@ def to_excel( via the options ``io.excel.xlsx.writer`` or ``io.excel.xlsm.writer``. - merge_cells : bool, default True - Write MultiIndex and Hierarchical Rows as merged cells. + merge_cells : bool or 'columns', default False + If True, write MultiIndex index and columns as merged cells. + If 'columns', merge MultiIndex column cells only. {encoding_parameter} inf_rep : str, default 'inf' Representation for infinity (there is no native representation for
DOC: Document merge_cells="columns" in to_excel https://pandas.pydata.org/docs/dev/reference/api/pandas.DataFrame.to_excel.html The `merge_cells` argument can also take `"columns"` due to #35384. This should be added to the docstring.
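A hedged usage sketch of the option being documented. It assumes a pandas build that already includes the `merge_cells="columns"` support from PR #35384 and an Excel engine such as openpyxl installed; the file and column names are made up:

```python
import pandas as pd

# MultiIndex columns so there is a header to merge.
columns = pd.MultiIndex.from_product([["A", "B"], ["x", "y"]])
df = pd.DataFrame([[1, 2, 3, 4], [5, 6, 7, 8]], columns=columns)

# True merges MultiIndex index *and* column cells; "columns" merges only the
# column header cells, per the docstring change above.
df.to_excel("merged_columns.xlsx", merge_cells="columns")
```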
take
1,731,306,243,000
null
Feature Request
[ "pandas/core/generic.py:NDFrame.to_excel" ]
[]
1
pandas-dev/pandas
pandas-dev__pandas-60247
5f23aced2f97f2ed481deda4eaeeb049d6c7debe
diff --git a/pandas/core/generic.py b/pandas/core/generic.py index 7c2cc5d33a5db..56031f20faa16 100644 --- a/pandas/core/generic.py +++ b/pandas/core/generic.py @@ -7668,8 +7668,12 @@ def interpolate( * 'linear': Ignore the index and treat the values as equally spaced. This is the only method supported on MultiIndexes. * 'time': Works on daily and higher resolution data to interpolate - given length of interval. - * 'index', 'values': use the actual numerical values of the index. + given length of interval. This interpolates values based on + time interval between observations. + * 'index': The interpolation uses the numerical values + of the DataFrame's index to linearly calculate missing values. + * 'values': Interpolation based on the numerical values + in the DataFrame, treating them as equally spaced along the index. * 'nearest', 'zero', 'slinear', 'quadratic', 'cubic', 'barycentric', 'polynomial': Passed to `scipy.interpolate.interp1d`, whereas 'spline' is passed to
DOC: Improve documentation df.interpolate() for methods ‘time’, ‘index’ and ‘values’ ### Pandas version checks - [X] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/) ### Location of the documentation https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.interpolate.html ### Documentation problem It is not possible to understand what exactly the method `interpolate` does from reading the documentation. See e.g. this SE post for more details https://stackoverflow.com/questions/65511992/pandas-interpolation-type-when-method-index ### Suggested fix for documentation Rewrite doctstring and documentation page for the method
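A small example that makes the distinction drawn by the rewritten docstring concrete; the series is arbitrary:

```python
import numpy as np
import pandas as pd

# Unevenly spaced numeric index with one missing value.
s = pd.Series([0.0, np.nan, 10.0], index=[0, 1, 10])

print(s.interpolate(method="linear"))
# 'linear' ignores the index and treats the points as equally spaced -> fills 5.0

print(s.interpolate(method="index"))
# 'index' uses the index values 0, 1, 10 -> fills 1.0
```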
Thanks for the report, agreed this could use clarification. PRs to improve are welcome! take
1,731,082,540,000
null
Feature Request
[ "pandas/core/generic.py:NDFrame.interpolate" ]
[]
1
pandas-dev/pandas
pandas-dev__pandas-60187
dbeeb1f05bca199b3c1aed979e6ae72074a82243
diff --git a/pandas/core/series.py b/pandas/core/series.py index fe2bb0b5aa5c3..d83d9715878f8 100644 --- a/pandas/core/series.py +++ b/pandas/core/series.py @@ -2482,6 +2482,7 @@ def round(self, decimals: int = 0, *args, **kwargs) -> Series: -------- numpy.around : Round values of an np.array. DataFrame.round : Round values of a DataFrame. + Series.dt.round : Round values of data to the specified freq. Notes -----
DOC: Distinguish between Series.round and Series.dt.round ### Pandas version checks - [X] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/) ### Location of the documentation https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.round.html#pandas.Series.round ### Documentation problem When using Series.round, it does not work on date data. ### Suggested fix for documentation Adding Series.dt.round in the "See also" section would make it more convenient for users to find the relevant documentation. https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.dt.round.html
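A short illustration of why the cross-reference helps: the frequency-based rounding the reporter was looking for lives on the `.dt` accessor. The values are arbitrary:

```python
import pandas as pd

# Numeric data: Series.round rounds to a number of decimal places.
print(pd.Series([1.234, 2.567]).round(1))   # 1.2, 2.6

# Datetime data: rounding to a frequency goes through Series.dt.round.
ts = pd.Series(pd.to_datetime(["2024-01-01 12:10:00", "2024-01-01 12:59:59"]))
print(ts.dt.round("h"))                     # 12:00:00, 13:00:00
```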
I think it worth changing, can I take it? take
1,730,742,670,000
null
Feature Request
[ "pandas/core/series.py:Series.round" ]
[]
1
huggingface/accelerate
huggingface__accelerate-3279
cb8b7c637a8588668c52bd306f9b2828f69d9585
diff --git a/src/accelerate/utils/modeling.py b/src/accelerate/utils/modeling.py index 5f88e54e3c9..806f930acaa 100644 --- a/src/accelerate/utils/modeling.py +++ b/src/accelerate/utils/modeling.py @@ -1101,6 +1101,7 @@ def _init_infer_auto_device_map( special_dtypes: Optional[Dict[str, Union[str, torch.device]]] = None, ) -> Tuple[ List[Union[int, str]], + Dict[Union[int, str], Union[int, str]], List[Union[int, str]], List[int], Dict[str, int], @@ -1147,6 +1148,7 @@ def _init_infer_auto_device_map( return ( devices, + max_memory, main_devices, gpus, module_sizes, @@ -1356,6 +1358,7 @@ def infer_auto_device_map( # Initialize the variables ( devices, + max_memory, main_devices, gpus, module_sizes,
Calling infer_auto_device_map() with max_memory=None throws an error in version 1.2.0 ### System Info ```Shell accelerate==1.2.0 ``` ### Reproduction Bug is from this commit: https://github.com/huggingface/accelerate/commit/d7b1b368e9f484a18636a71600566b757d5cf87e `max_memory` initialization was moved into `_init_infer_auto_device_map`, which does not return the `max_memory` value. So if max_memory=None is passed to `infer_auto_device_map` (the default value), then it will still be None at line 1415: https://github.com/huggingface/accelerate/blob/cb8b7c637a8588668c52bd306f9b2828f69d9585/src/accelerate/utils/modeling.py#L1415 Leading to error: TypeError: 'NoneType' object is not subscriptable ### Expected behavior max_memory=None when passed to `infer_auto_device_map` does not throw an error.
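A minimal sketch of the call path the report describes; the toy model is only a stand-in. On an affected accelerate 1.2.0 install the last call raises the quoted TypeError, while on fixed versions it simply returns a device map:

```python
import torch.nn as nn
from accelerate import infer_auto_device_map

model = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 2))

# max_memory is left at its default of None; in accelerate 1.2.0 the value was
# no longer returned by _init_infer_auto_device_map, so it reached the
# max_memory[device] lookup as None and failed with
# "TypeError: 'NoneType' object is not subscriptable".
device_map = infer_auto_device_map(model)
print(device_map)
```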
@Nech-C Sorry for the oversight. I will fix it ASAP. Thanks for pointing it out!
1,733,630,086,000
null
Bug Report
[ "src/accelerate/utils/modeling.py:_init_infer_auto_device_map", "src/accelerate/utils/modeling.py:infer_auto_device_map" ]
[]
2
huggingface/accelerate
huggingface__accelerate-3261
29be4788629b772a3b722076e433b5b3b5c85da3
diff --git a/examples/by_feature/megatron_lm_gpt_pretraining.py b/examples/by_feature/megatron_lm_gpt_pretraining.py index 18488ec41e2..c9d4787ed83 100644 --- a/examples/by_feature/megatron_lm_gpt_pretraining.py +++ b/examples/by_feature/megatron_lm_gpt_pretraining.py @@ -252,7 +252,7 @@ def main(): if args.with_tracking: accelerator_log_kwargs["log_with"] = args.report_to - accelerator_log_kwargs["logging_dir"] = args.output_dir + accelerator_log_kwargs["project_dir"] = args.output_dir accelerator = Accelerator(gradient_accumulation_steps=args.gradient_accumulation_steps, **accelerator_log_kwargs)
[BUG] Accelerator.__init__() got an unexpected keyword argument 'logging_dir' ### System Info ```Shell accelerate version: main python version: 3.11 torch version: 2.4 numpy version: 1.26.4 ``` ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`) - [ ] My own task or dataset (give details below) ### Reproduction When I run the accelerate/examples/megatron_lm_gpt_pretraining.py file: accelerate launch --config_file megatron_gpt_pretraining.py \ --config_name "gpt2-large" \ --tokenizer_name "gpt2-large" \ --dataset_name wikitext \ --dataset_config_name wikitext-2-raw-v1 \ --block_size 1024 \ --learning_rate 5e-5 \ --per_device_train_batch_size 24 \ --per_device_eval_batch_size 24 \ --num_train_epochs 5 \ --with_tracking \ --report_to "wandb" \ --output_dir "awesome_model" ### Expected behavior Normal training, but I found that on line 255 of megatron_lm_gpt_pretraining.py an unsupported parameter 'logging_dir' is passed to the __init__ method of the Accelerator class.
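A hedged sketch of the corrected call, following the maintainer's suggestion in the follow-up below: `Accelerator` accepts `project_dir`, not `logging_dir`. The values stand in for the script's `args`:

```python
from accelerate import Accelerator

output_dir = "awesome_model"      # stand-in for args.output_dir
gradient_accumulation_steps = 1   # stand-in for args.gradient_accumulation_steps

accelerator_log_kwargs = {}
# 'logging_dir' is not an Accelerator argument; the project directory used for
# tracking output is passed as 'project_dir'.
accelerator_log_kwargs["project_dir"] = output_dir

accelerator = Accelerator(
    gradient_accumulation_steps=gradient_accumulation_steps,
    **accelerator_log_kwargs,
)
```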
Thanks for pointing this out. I think it should be `project_dir` instead. Are you interested in submitting a PR to fix this? For clarity, the file is at `https://github.com/huggingface/accelerate/blob/main/examples/by_feature/megatron_lm_gpt_pretraining.py` :) of course
1,732,582,927,000
null
Bug Report
[ "examples/by_feature/megatron_lm_gpt_pretraining.py:main" ]
[]
1
huggingface/trl
huggingface__trl-2433
9ff79a65e3d1c28b7ee8bc0912b2fbdceb3dbeec
diff --git a/trl/trainer/rloo_trainer.py b/trl/trainer/rloo_trainer.py index 106426073f..f2e3eb9674 100644 --- a/trl/trainer/rloo_trainer.py +++ b/trl/trainer/rloo_trainer.py @@ -279,7 +279,7 @@ def repeat_generator(): # trainer state initialization self.state.global_step = 0 self.state.episode = 0 - self.state.max_steps = args.num_total_batches * args.num_mini_batches + self.state.max_steps = (args.num_total_batches * args.num_mini_batches) // 2 self.state.num_train_epochs = args.total_episodes / self.train_dataset_len # Compute absolute values for logging, eval, and save if given as ratio if args.logging_steps is not None:
RLOO Trainer Stopping After 1 Epoch ### System Info - Platform: Linux-3.10.0-693.11.6.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.9.5 - PyTorch version: 2.4.0 - CUDA device(s): not available - Transformers version: 4.46.2 - Accelerate version: 1.1.1 - Accelerate config: not found - Datasets version: 3.1.0 - HF Hub version: 0.26.2 - TRL version: 0.13.0.dev0 - bitsandbytes version: not installed - DeepSpeed version: 0.15.4 - Diffusers version: not installed - Liger-Kernel version: not installed - LLM-Blender version: not installed - OpenAI version: 1.54.4 - PEFT version: not installed ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder - [ ] My own task or dataset (give details below) ### Reproduction While reproducing RLOO using a multi-GPU setup with official [script](https://huggingface.co/docs/trl/en/rloo_trainer#benchmark-experiments), training consistently halts midway, regardless of whether it's set for 1,000 or 1 million episodes. An example wandb [run](https://wandb.ai/omerveyselcagatan/huggingface/runs/zdftqdx5?nw=nwuseromerveyselcagatan) that ended with 1954 steps, whereas it should 3908. ### Expected behavior Should have run for 3908, or possible step miscalculation. ### Checklist - [X] I have checked that my issue isn't already filed (see [open issues](https://github.com/huggingface/trl/issues?q=is%3Aissue)) - [X] I have included my system information - [X] Any code provided is minimal, complete, and reproducible ([more on MREs](https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/creating-and-highlighting-code-blocks)) - [X] Any code provided is properly formatted in code blocks, (no screenshot, [more on code blocks](https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/creating-and-highlighting-code-blocks)) - [X] Any traceback provided is complete
1,733,253,459,000
null
Bug Report
[ "trl/trainer/rloo_trainer.py:RLOOTrainer.train" ]
[]
1
huggingface/trl
huggingface__trl-2417
9c5388b69e0842f76edc46a2ff9d0b51e1db4337
diff --git a/trl/trainer/online_dpo_trainer.py b/trl/trainer/online_dpo_trainer.py index 7830d3fe64..56edd22be5 100644 --- a/trl/trainer/online_dpo_trainer.py +++ b/trl/trainer/online_dpo_trainer.py @@ -284,7 +284,10 @@ def __init__( self.reward_model = prepare_deepspeed( self.reward_model, args.per_device_train_batch_size, args.fp16, args.bf16 ) - self.ref_model = prepare_deepspeed(self.ref_model, args.per_device_train_batch_size, args.fp16, args.bf16) + if self.ref_model is not None: + self.ref_model = prepare_deepspeed( + self.ref_model, args.per_device_train_batch_size, args.fp16, args.bf16 + ) else: if self.ref_model is not None: self.ref_model = self.ref_model.to(self.accelerator.device)
Online DPO Meets Error When Using Deepspeed for Speed Up. ### System Info !pip install git+https://github.com/huggingface/trl.git ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder - [ ] My own task or dataset (give details below) ### Reproduction !ACCELERATE_LOG_LEVEL=info accelerate launch --config_file multi_gpu.yaml \ online_dpo.py \ --model_name_or_path mistralai/Mistral-7B-v0.1 \ --reward_model_path Ray2333/GRM-Llama3.2-3B-rewardmodel-ft \ --dataset_name nvidia/HelpSteer2 \ --learning_rate 5.0e-6 \ --output_dir pythia-1b-tldr-online-dpo \ --per_device_train_batch_size 16 \ --gradient_accumulation_steps 8 \ --warmup_ratio 0.1 \ --missing_eos_penalty 1.0 \ --use_peft Traceback (most recent call last): File "/home/ec2-user/SageMaker/Zhichao/UNA_online/UNA_peft/una_peft.py", line 356, in <module> [2024-11-28 16:59:10,071] [INFO] [config.py:999:print] DeepSpeedEngine configuration: trainer = OnlineDPOTrainer( File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/transformers/utils/deprecation.py", line 165, in wrapped_func return func(*args, **kwargs) Traceback (most recent call last): File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/trl/trainer/online_dpo_trainer.py", line 286, in __init__ File "/home/ec2-user/SageMaker/Zhichao/UNA_online/UNA_peft/una_peft.py", line 356, in <module> self.ref_model = prepare_deepspeed(self.ref_model, args.per_device_train_batch_size, args.fp16, args.bf16) File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/trl/trainer/utils.py", line 1212, in prepare_deepspeed trainer = OnlineDPOTrainer( File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/transformers/utils/deprecation.py", line 165, in wrapped_func return func(*args, **kwargs) File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/trl/trainer/online_dpo_trainer.py", line 286, in __init__ model, *_ = deepspeed.initialize(model=model, config=config_kwargs) File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/deepspeed/__init__.py", line 139, in initialize assert model is not None, "deepspeed.initialize requires a model" AssertionErrorself.ref_model = prepare_deepspeed(self.ref_model, args.per_device_train_batch_size, args.fp16, args.bf16): deepspeed.initialize requires a model File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/trl/trainer/utils.py", line 1212, in prepare_deepspeed ### Expected behavior It should be able to run. ### Checklist - [X] I have checked that my issue isn't already filed (see [open issues](https://github.com/huggingface/trl/issues?q=is%3Aissue)) - [X] I have included my system information - [X] Any code provided is minimal, complete, and reproducible ([more on MREs](https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/creating-and-highlighting-code-blocks)) - [X] Any code provided is properly formatted in code blocks, (no screenshot, [more on code blocks](https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/creating-and-highlighting-code-blocks)) - [X] Any traceback provided is complete
Sorry, I use "deepspeed_zero2.yaml" and it should be !ACCELERATE_LOG_LEVEL=info accelerate launch --config_file deepspeed_zero2.yaml online_dpo.py --model_name_or_path mistralai/Mistral-7B-v0.1 --reward_model_path Ray2333/GRM-Llama3.2-3B-rewardmodel-ft --dataset_name nvidia/HelpSteer2 --learning_rate 5.0e-6 --output_dir pythia-1b-tldr-online-dpo --per_device_train_batch_size 16 --gradient_accumulation_steps 8 --warmup_ratio 0.1 --missing_eos_penalty 1.0 --use_peft Thanks for reporting. Please share your system info (`trl env`) /home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/transformers/utils/hub.py:128: FutureWarning: Using `TRANSFORMERS_CACHE` is deprecated and will be removed in v5 of Transformers. Use `HF_HOME` instead. warnings.warn( Copy-paste the following information when reporting an issue: - Platform: Linux-5.10.228-219.884.amzn2.x86_64-x86_64-with-glibc2.26 - Python version: 3.10.14 - PyTorch version: 2.2.2 - CUDA device(s): NVIDIA A100-SXM4-80GB, NVIDIA A100-SXM4-80GB, NVIDIA A100-SXM4-80GB, NVIDIA A100-SXM4-80GB, NVIDIA A100-SXM4-80GB, NVIDIA A100-SXM4-80GB, NVIDIA A100-SXM4-80GB, NVIDIA A100-SXM4-80GB - Transformers version: 4.46.3 - Accelerate version: 0.34.2 - Accelerate config: not found - Datasets version: 3.1.0 - HF Hub version: 0.26.2 - TRL version: 0.13.0.dev0 - bitsandbytes version: 0.44.1 - DeepSpeed version: 0.16.0 - Diffusers version: not installed - Liger-Kernel version: not installed - LLM-Blender version: not installed - OpenAI version: not installed - PEFT version: 0.13.2
1,732,904,159,000
null
Bug Report
[ "trl/trainer/online_dpo_trainer.py:OnlineDPOTrainer.__init__" ]
[]
1
huggingface/trl
huggingface__trl-2332
74e20cbbbcbac7ac8d426df09eda5f310c637def
diff --git a/trl/trainer/dpo_trainer.py b/trl/trainer/dpo_trainer.py index b563cab2f5..0c9883387a 100644 --- a/trl/trainer/dpo_trainer.py +++ b/trl/trainer/dpo_trainer.py @@ -1086,10 +1086,10 @@ def concatenated_forward(self, model: nn.Module, batch: Dict[str, Union[List, to # Get the first column idx that is all zeros and remove every column after that empty_cols = torch.sum(attention_mask, dim=0) == 0 - first_empty_col = torch.nonzero(empty_cols)[0].item() if empty_cols.any() else attention_mask.size(1) + 1 - input_ids = input_ids[:, : first_empty_col - 1] - attention_mask = attention_mask[:, : first_empty_col - 1] - loss_mask = loss_mask[:, : first_empty_col - 1] + first_empty_col = torch.nonzero(empty_cols)[0].item() if empty_cols.any() else attention_mask.size(1) + input_ids = input_ids[:, : first_empty_col] + attention_mask = attention_mask[:, : first_empty_col] + loss_mask = loss_mask[:, : first_empty_col] # Truncate right if self.args.max_length is not None:
Wrong tensor index for roll and truncate in DPOTrainer's concatenated_forward(). ### System Info it is a tensor index error ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder - [ ] My own task or dataset (give details below) ### Reproduction ```python # Get the first column idx that is all zeros and remove every column after that empty_cols = torch.sum(attention_mask, dim=0) == 0 first_empty_col = torch.nonzero(empty_cols)[0].item() if empty_cols.any() else attention_mask.size(1) + 1 input_ids = input_ids[:, : first_empty_col - 1] attention_mask = attention_mask[:, : first_empty_col - 1] loss_mask = loss_mask[:, : first_empty_col - 1] ``` ### Expected behavior _torch.nonzero_ returns the indices (starting from 0) of the non-zero elements, so there is no need to subtract 1 from _first_empty_col_. The correct code should be: ```python empty_cols = torch.sum(attention_mask, dim=0) == 0 first_empty_col = torch.nonzero(empty_cols)[0].item() if empty_cols.any() else attention_mask.size(1) input_ids = input_ids[:, : first_empty_col] attention_mask = attention_mask[:, : first_empty_col] loss_mask = loss_mask[:, : first_empty_col] ```
Good catch! Thanks! Do you mind opening a PR to fix that?
1,730,897,529,000
null
Bug Report
[ "trl/trainer/dpo_trainer.py:DPOTrainer.concatenated_forward" ]
[]
1
huggingface/trl
huggingface__trl-2325
74e20cbbbcbac7ac8d426df09eda5f310c637def
diff --git a/trl/trainer/rloo_trainer.py b/trl/trainer/rloo_trainer.py index 7bbd39264d..e33899f5d9 100644 --- a/trl/trainer/rloo_trainer.py +++ b/trl/trainer/rloo_trainer.py @@ -263,7 +263,6 @@ def repeat_generator(): approxkl_stats = torch.zeros(stats_shape, device=device) pg_clipfrac_stats = torch.zeros(stats_shape, device=device) pg_loss_stats = torch.zeros(stats_shape, device=device) - vf_loss_stats = torch.zeros(stats_shape, device=device) vf_clipfrac_stats = torch.zeros(stats_shape, device=device) entropy_stats = torch.zeros(stats_shape, device=device) ratio_stats = torch.zeros(stats_shape, device=device) @@ -441,7 +440,6 @@ def repeat_generator(): ratio_stats[ppo_epoch_idx, minibatch_idx, gradient_accumulation_idx] = new_ratio.mean() gradient_accumulation_idx += 1 minibatch_idx += 1 - self.state.global_step += 1 # del everything and empty cache # fmt: off del ( @@ -467,7 +465,6 @@ def repeat_generator(): metrics["policy/approxkl_avg"] = self.accelerator.gather(approxkl_stats).mean().item() metrics["policy/clipfrac_avg"] = self.accelerator.gather(pg_clipfrac_stats).mean().item() metrics["loss/policy_avg"] = self.accelerator.gather(pg_loss_stats).mean().item() - metrics["loss/value_avg"] = self.accelerator.gather(vf_loss_stats).mean().item() metrics["val/clipfrac_avg"] = self.accelerator.gather(vf_clipfrac_stats).mean().item() metrics["policy/entropy_avg"] = self.accelerator.gather(entropy_stats).mean().item() metrics["val/ratio"] = self.accelerator.gather(ratio_stats).mean().item() @@ -475,12 +472,12 @@ def repeat_generator(): metrics["val/num_eos_tokens"] = (responses == processing_class.eos_token_id).sum().item() metrics["lr"] = self.lr_scheduler.get_last_lr()[0] metrics["episode"] = self.state.episode - self.state.epoch = self.state.episode / self.train_dataset_len # used by self.log - self.state.global_step += 1 + self.state.epoch = self.state.episode / (args.rloo_k * self.train_dataset_len) # used by self.log self.log(metrics) del kl, mean_kl, mean_entropy, scores self.lr_scheduler.step() + self.state.global_step += 1 self.control = self.callback_handler.on_step_end(args, self.state, self.control) if self.control.should_save: self._save_checkpoint(model, trial=None)
Several problems in RLOOTrainer ### System Info main ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder - [ ] My own task or dataset (give details below) ### Reproduction 1. metrics["loss/value_avg"] = self.accelerator.gather(vf_loss_stats).mean().item(): this metric is useless since we don't use a value function in RLOO. 2. self.state.epoch = self.state.episode / self.train_dataset_len # used by self.log: this miscalculates the epoch, since the true epoch count is self.state.episode / (args.rloo_k * self.train_dataset_len), because every instruction is repeated args.rloo_k times. 3. Multiple self.state.global_step += 1 statements will cause the saving process to go wrong. ### Expected behavior These issues to be fixed.
1,730,747,016,000
null
Bug Report
[ "trl/trainer/rloo_trainer.py:RLOOTrainer.train" ]
[]
1
sympy/sympy
sympy__sympy-27301
a7719e719c0b43ec1dbb964b01b57c4f3783be8d
diff --git a/sympy/plotting/plot.py b/sympy/plotting/plot.py index 63da0440dabb..50029392a1ac 100644 --- a/sympy/plotting/plot.py +++ b/sympy/plotting/plot.py @@ -301,8 +301,8 @@ def plot(*args, show=True, **kwargs): :external:meth:`~matplotlib.axes.Axes.fill_between` method. adaptive : bool, optional - The default value is set to ``True``. Set adaptive to ``False`` - and specify ``n`` if uniform sampling is required. + The default value for the ``adaptive`` parameter is now ``False``. + To enable adaptive sampling, set ``adaptive=True`` and specify ``n`` if uniform sampling is required. The plotting uses an adaptive algorithm which samples recursively to accurately plot. The adaptive algorithm uses a @@ -377,14 +377,14 @@ def plot(*args, show=True, **kwargs): [0]: cartesian line: x**2 for x over (-6.0, 6.0) [1]: cartesian line: x for x over (-5.0, 5.0) - No adaptive sampling. + No adaptive sampling by default. If adaptive sampling is required, set ``adaptive=True``. .. plot:: :context: close-figs :format: doctest :include-source: True - >>> plot(x**2, adaptive=False, n=400) + >>> plot(x**2, adaptive=True, n=400) Plot object containing: [0]: cartesian line: x**2 for x over (-10.0, 10.0)
DOC: outdated information about adaptive sampling in plot() function I have recently learned (https://github.com/mgeier/python-audio/issues/4) that SymPy doesn't use adaptive sampling by default anymore. Therefore, this documentation is outdated: https://github.com/sympy/sympy/blob/a7719e719c0b43ec1dbb964b01b57c4f3783be8d/sympy/plotting/plot.py#L304-L305 https://github.com/sympy/sympy/blob/a7719e719c0b43ec1dbb964b01b57c4f3783be8d/sympy/plotting/plot.py#L380-L389
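A quick sketch of making the sampling strategy explicit instead of relying on the changed default. This assumes a recent SymPy release where the `n` keyword is accepted, and note that `adaptive=True` may additionally require the optional `adaptive` package in current releases:

```python
from sympy import symbols, plot

x = symbols("x")

# Uniform sampling, stated explicitly (matches the current default behaviour):
p = plot(x**2, adaptive=False, n=400, show=False)

# Opting back in to adaptive sampling (may need the optional 'adaptive' module):
# p = plot(x**2, adaptive=True, show=False)
```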
1,732,293,434,000
null
Bug Report
[ "sympy/plotting/plot.py:plot" ]
[]
1
SYSTRAN/faster-whisper
SYSTRAN__faster-whisper-1198
b568faec40eef1fee88f8aeb27ac3f9d6e006ba4
diff --git a/faster_whisper/vad.py b/faster_whisper/vad.py index 9605931c..1f7d2057 100644 --- a/faster_whisper/vad.py +++ b/faster_whisper/vad.py @@ -260,8 +260,9 @@ def __init__(self, encoder_path, decoder_path): ) from e opts = onnxruntime.SessionOptions() - opts.inter_op_num_threads = 0 - opts.intra_op_num_threads = 0 + opts.inter_op_num_threads = 1 + opts.intra_op_num_threads = 1 + opts.enable_cpu_mem_arena = False opts.log_severity_level = 4 self.encoder_session = onnxruntime.InferenceSession( @@ -301,7 +302,16 @@ def __call__( batched_audio = batched_audio.reshape(-1, num_samples + context_size_samples) - encoder_output = self.encoder_session.run(None, {"input": batched_audio})[0] + encoder_batch_size = 10000 + num_segments = batched_audio.shape[0] + encoder_outputs = [] + for i in range(0, num_segments, encoder_batch_size): + encoder_output = self.encoder_session.run( + None, {"input": batched_audio[i : i + encoder_batch_size]} + )[0] + encoder_outputs.append(encoder_output) + + encoder_output = np.concatenate(encoder_outputs, axis=0) encoder_output = encoder_output.reshape(batch_size, -1, 128) decoder_outputs = []
OOM when using VAD Hi, does somebody else experience issues with memory consumption when transcribing audio files containing a lot of speech (~ 4 hours long)? I am running the latest version of faster-whisper in a Kubernetes pod on a g4dn AWS instance. The server has 4 cores, 1 GPU, and 16GB RAM, but the pod is limited to 2 cores. The base image is `pytorch/pytorch:2.5.1-cuda12.4-cudnn9-runtime` and as per [this pinned issue](https://github.com/SYSTRAN/faster-whisper/issues/1086) the installed versions should be compatible: - python 3.11 - torch 2.5.1+cu124 - ctranslate2 4.5.0 - cuda 12.4 - cudnn 9.1.0.7 The process gets killed during the transcription phase when VAD is enabled. I tried the solution [described here](https://github.com/snakers4/silero-vad/issues/356), but it doesn't help. See the logs attached. Anyone has any idea what could be the cause of the OOM? [libraries.txt](https://github.com/user-attachments/files/18039471/libraries.txt) [logs on sigkill.txt](https://github.com/user-attachments/files/18039459/logs.on.sigkill.txt)
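The fix above caps how many VAD segments reach the ONNX encoder per call. A standalone sketch of that chunking pattern, with a toy function standing in for the ONNX session; the names and the 10000 chunk size mirror the patch, the rest is illustrative:

```python
import numpy as np

def run_in_chunks(run_fn, batched_input: np.ndarray, chunk_size: int = 10000) -> np.ndarray:
    """Run `run_fn` over fixed-size slices of the first axis and concatenate.

    Feeding the whole (num_segments, ...) array to the encoder at once makes
    peak memory proportional to the audio length; slicing bounds it by
    `chunk_size` segments per call.
    """
    outputs = []
    for start in range(0, batched_input.shape[0], chunk_size):
        outputs.append(run_fn(batched_input[start : start + chunk_size]))
    return np.concatenate(outputs, axis=0)

# Toy stand-in for the encoder session call:
fake_encoder = lambda chunk: chunk * 2.0
result = run_in_chunks(fake_encoder, np.ones((25000, 4), dtype=np.float32))
print(result.shape)  # (25000, 4)
```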
1,733,855,723,000
null
Performance Issue
[ "faster_whisper/vad.py:SileroVADModel.__init__", "faster_whisper/vad.py:SileroVADModel.__call__" ]
[]
2
SYSTRAN/faster-whisper
SYSTRAN__faster-whisper-1157
bcd8ce0fc72d1fa4e42bdf5fd34d5d17bae680c2
diff --git a/faster_whisper/transcribe.py b/faster_whisper/transcribe.py index 067527f1..763d64ac 100644 --- a/faster_whisper/transcribe.py +++ b/faster_whisper/transcribe.py @@ -1699,12 +1699,14 @@ def find_alignment( # array([0.]) # This results in crashes when we lookup jump_times with float, like # IndexError: arrays used as indices must be of integer (or boolean) type - return [] + return_list.append([]) + continue word_boundaries = np.pad( np.cumsum([len(t) for t in word_tokens[:-1]]), (1, 0) ) if len(word_boundaries) <= 1: - return [] + return_list.append([]) + continue jumps = np.pad(np.diff(text_indices), (1, 0), constant_values=1).astype( bool @@ -1884,11 +1886,9 @@ def merge_punctuations(alignment: List[dict], prepended: str, appended: str) -> if previous["word"].startswith(" ") and previous["word"].strip() in prepended: # prepend it to the following word following["word"] = previous["word"] + following["word"] - if "tokens" in alignment[0].keys(): - following["tokens"] = previous["tokens"] + following["tokens"] - previous["tokens"] = [] + following["tokens"] = previous["tokens"] + following["tokens"] previous["word"] = "" - + previous["tokens"] = [] else: j = i i -= 1 @@ -1902,11 +1902,9 @@ def merge_punctuations(alignment: List[dict], prepended: str, appended: str) -> if not previous["word"].endswith(" ") and following["word"] in appended: # append it to the previous word previous["word"] = previous["word"] + following["word"] - if "tokens" in alignment[0].keys(): - previous["tokens"] = previous["tokens"] + following["tokens"] - following["tokens"] = [] + previous["tokens"] = previous["tokens"] + following["tokens"] following["word"] = "" - + following["tokens"] = [] else: i = j j += 1
IndexError: list index out of range in add_word_timestamps function Hi, I found a rare condition: with a specific wav file, a specific language and prompt, when I try to transcribe with word_timestamps=True, there is a list index out of range error in the add_word_timestamps function: ``` File "/usr/local/src/transcriber/lib/python3.11/site-packages/faster_whisper/transcribe.py", line 1574, in add_word_timestamps median_duration, max_duration = median_max_durations[segment_idx] ~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^ IndexError: list index out of range ``` It seems the median_max_durations list has fewer elements than the segments list. I'm using the large-v3-turbo model with these transcribe settings: ``` segments, _ = asr_model.transcribe(audio_to_analize, language="fr", condition_on_previous_text=False, initial_prompt="Free", task='transcribe', word_timestamps=True, suppress_tokens=[-1, 12], beam_size=5) segments = list(segments) # The transcription will actually run here. ``` As far as I can see, median_max_durations is populated from alignments, so maybe something is wrong there? If I change the language or prompt, or use another sound file, then there is no issue. Thank you
I'm aware that this error exists but I had no luck in reproducing it, can you write the exact steps to reproduce and upload the audio file? Yes. The sample python code that generates the issue: ``` import torch from faster_whisper import WhisperModel asr_model = WhisperModel("large-v3-turbo", device="cuda", compute_type="int8", download_root="./models") segments, _ = asr_model.transcribe('test.wav', language='fr', condition_on_previous_text=False, initial_prompt='Free', task='transcribe', word_timestamps=True, suppress_tokens=[-1, 12], beam_size=5) segments = list(segments) # The transcription will actually run here. ``` And the audio sample is attached. [test.zip](https://github.com/user-attachments/files/17646609/test.zip) I was not able to reproduce it on my machine or using colab Maybe python version, debian, pytorch... or something is slightly different on our setups. Anything I can do on my side to get more debug logs to see what is the issue? are you using the master branch? `median_max_durations` is initialized as an empty list, and since you are using sequential transcription, it will have a single value, The only reason that causes this error is that it is still an empty list which means the for loop in line 1565 was never executed, this will happen when `alignments` is an empty list, you need to figure why is this happening https://github.com/SYSTRAN/faster-whisper/blob/203dddb047fd2c3ed2a520fe1416467a527e0f37/faster_whisper/transcribe.py#L1561-L1595 the same here, while test whisper_streaming ```shell Traceback (most recent call last): File "C:\Users\kr.mao\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 187, in _run_module_as_main mod_name, mod_spec, code = _get_module_details(mod_name, _Error) File "C:\Users\kr.mao\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 110, in _get_module_details __import__(pkg_name) File "F:\Workspace\skills\python3\whisper_streaming\whisper_online_server.py", line 183, in <module> proc.process() File "F:\Workspace\skills\python3\whisper_streaming\whisper_online_server.py", line 162, in process o = online.process_iter() File "F:\Workspace\skills\python3\whisper_streaming\whisper_online.py", line 378, in process_iter res = self.asr.transcribe(self.audio_buffer, init_prompt=prompt) File "F:\Workspace\skills\python3\whisper_streaming\whisper_online.py", line 138, in transcribe return list(segments) File "F:\Workspace\skills\python3\whisper_streaming\venv\lib\site-packages\faster_whisper\transcribe.py", line 2016, in restore_speech_timestamps for segment in segments: File "F:\Workspace\skills\python3\whisper_streaming\venv\lib\site-packages\faster_whisper\transcribe.py", line 1256, in generate_segments self.add_word_timestamps( File "F:\Workspace\skills\python3\whisper_streaming\venv\lib\site-packages\faster_whisper\transcribe.py", line 1595, in add_word_timestamps median_duration, max_duration = median_max_durations[segment_idx] IndexError: list index out of range ``` faster_whisper ***version.py*** ```python """Version information.""" __version__ = "1.1.0rc0" ``` This problem is still non-reproducible regardless of all methods provided, it will not be solved without reproduction, someone who has the problem needs to create a colab notebook to reproduce it and if they weren't able to reproduce it on colab then they need to isolate where the problem is caused in their environment, without that there is nothing that can be done > This problem is still non-reproducible regardless of all methods provided, it will not be solved 
without reproduction, someone who has the problem needs to create a colab notebook to reproduce it and if they weren't able to reproduce it on colab then they need to isolate where the problem is caused in their environment, without that there is nothing that can be done https://gist.github.com/OliveSerg/cc6c409126567a40c94eb94339a13bae Was able to reproduce it on Colab with the following files [test.zip](https://github.com/user-attachments/files/17818786/test.zip). Was not able to reproduce with @formater's test file though. Files are just a French bible verse from LibriVox and a [youtube](https://youtube.com/shorts/O32nnjAmpeM?si=vDHhKdbgV27r1n8b) short. Used `ctranslate2==4.4.0` because of [1806](https://github.com/OpenNMT/CTranslate2/issues/1806). Error occurs only when `compute_type="int8"` or `int8_float16`, `task="translate"`, and `word_timestamps=True`. No further debugging with the parameters were done aside for replacing these 3. @MahmoudAshraf97 Maybe related to such weird output (that's from prebug [193 ](https://github.com/SYSTRAN/faster-whisper/tree/3d1de60ef3ce7d34f7c0ae6547f8a616aa060ac2)revision of faster-whisper): ``` { "id": 279, "seek": 132430, "start": 1542.84, "end": 1545.14, "text": " Nuðarr你可以 það hverðesskj af april", "tokens": [51225, 13612, 23436, 289, 81, 42766, 43219, 64, 23436, 276, 331, 23436, 442, 74, 73, 3238, 10992, 388, 51350], "temperature": 1.0, "avg_logprob": -4.741359252929687, "compression_ratio": 1.335164835164835, "no_speech_prob": 0.12347412109375, "words": [ {"start": 1542.84, "end": 1542.84, "word": "af", "probability": 0.002758026123046875}, {"start": 1542.84, "end": 1542.84, "word": "aprilð", "probability": 0.057145535945892334}, {"start": 1542.84, "end": 1542.84, "word": "jævîr", "probability": 0.1567896842956543}, {"start": 1542.84, "end": 1542.84, "word": "til", "probability": 0.0018939971923828125}, {"start": 1542.84, "end": 1542.84, "word": "det", "probability": 0.0033779144287109375}, {"start": 1542.84, "end": 1543.44, "word": "bældat", "probability": 0.11750292778015137}, {"start": 1543.44, "end": 1544.36, "word": "brilliant", "probability": 7.152557373046875e-07}, {"start": 1544.36, "end": 1545.14, "word": "með", "probability": 0.2783784866333008} ] }, { "id": 280, "seek": 132430, "start": 1541.32, "end": 1543.04, "text": "ð jævîr til det bældat brilliant með", "tokens": [51350, 23436, 361, 7303, 85, 7517, 81, 8440, 1141, 272, 7303, 348, 267, 10248, 385, 23436, 51436], "temperature": 1.0, "avg_logprob": -4.741359252929687, "compression_ratio": 1.335164835164835, "no_speech_prob": 0.12347412109375, "words": [] }, { "id": 281, "seek": 135430, "start": 1545.14, "end": 1546.3, "text": " Duð ena porgna prákankenin.", "tokens": [50364, 5153, 23436, 465, 64, 1515, 70, 629, 582, 842, 5225, 2653, 259, 13, 50431], "temperature": 1.0, "avg_logprob": -4.655551255031784, "compression_ratio": 1.3051771117166213, "no_speech_prob": 0.036651611328125, "words": [ {"start": 1545.14, "end": 1545.36, "word": "Duð", "probability": 0.051422119140625}, {"start": 1545.36, "end": 1545.36, "word": "ena", "probability": 0.010187149047851562}, {"start": 1545.36, "end": 1545.44, "word": "porgna", "probability": 0.004482746124267578}, {"start": 1545.44, "end": 1546.3, "word": "prákankenin.", "probability": 0.04590331315994263} ] } ``` > https://gist.github.com/OliveSerg/cc6c409126567a40c94eb94339a13bae > > Was able to reproduce it on Colab with the following files [test.zip](https://github.com/user-attachments/files/17818786/test.zip). 
Was not able to reproduce with @formater's test file though. Files are just a French bible verse from LibriVox and a [youtube](https://youtube.com/shorts/O32nnjAmpeM?si=vDHhKdbgV27r1n8b) short. > > Used `ctranslate2==4.4.0` because of [1806](https://github.com/OpenNMT/CTranslate2/issues/1806). > > Error occurs only when `compute_type="int8"` or `int8_float16`, `task="translate"`, and `word_timestamps=True`. No further debugging with the parameters were done aside for replacing these 3. I managed to reproduce it consistently on colab, I also reproduced it on my machine but not consistently, the reason for inconsistency is that it needs the exact encoder input and generated tokens to reproduce, and using `int8` does not guarantee that at least on my hardware(RTX 3070 Ti) so I have to try transcribing several times to reproduce. What causes the issue is that some segments produce a single timestamp token with no text tokens and that's it, `find_alignment` function returned an empty list when no words were found which was fine before #856 , but after it, we're expecting `find_alignment` to return a list of lists which happens as long as there are text tokens, but in the edge case where it doesn't it returned a single list and ignores the rest of the loop over other segments in the batch, hence returning less alignments than segments causing the `list index out of range` error I'll open a PR to solve the problem soon
1,732,098,639,000
null
Bug Report
[ "faster_whisper/transcribe.py:WhisperModel.find_alignment", "faster_whisper/transcribe.py:merge_punctuations" ]
[]
2
SYSTRAN/faster-whisper
SYSTRAN__faster-whisper-1141
85e61ea11173dce3f10ce05e4b4bc1a2939d9e4e
diff --git a/faster_whisper/transcribe.py b/faster_whisper/transcribe.py index 6d18a173..80e5d92c 100644 --- a/faster_whisper/transcribe.py +++ b/faster_whisper/transcribe.py @@ -174,6 +174,9 @@ def forward(self, features, chunks_metadata, **forward_params): compression_ratio=get_compression_ratio( self.tokenizer.decode(subsegment["tokens"]) ), + seek=int( + chunk_metadata["start_time"] * self.model.frames_per_second + ), ) for subsegment in subsegments ] @@ -496,7 +499,7 @@ def _batched_segments_generator( for segment in result: seg_idx += 1 yield Segment( - seek=int(result[-1]["end"] * self.model.frames_per_second), + seek=segment["seek"], id=seg_idx, text=segment["text"], start=round(segment["start"], 3), @@ -1318,7 +1321,7 @@ def next_words_segment(segments: List[dict]) -> Optional[dict]: yield Segment( id=idx, - seek=seek, + seek=previous_seek, start=segment["start"], end=segment["end"], text=text, @@ -1585,7 +1588,7 @@ def add_word_timestamps( for segment_idx, segment in enumerate(segments): word_index = 0 - time_offset = segment[0]["start"] + time_offset = segment[0]["seek"] / self.frames_per_second median_duration, max_duration = median_max_durations[segment_idx] for subsegment_idx, subsegment in enumerate(segment): saved_tokens = 0
Some segments are shifted by 1 second after PR #856 Appreciate your hard work. --- audio (2 minutes): [01.aac.zip](https://github.com/user-attachments/files/17751633/01.aac.zip) The correct SRT result (using commit fbcf58b, which is before the huge PR #856): [01.old.srt.zip](https://github.com/user-attachments/files/17751733/01.old.srt.zip) The wrong SRT result (using latest commit 85e61ea): [01.new.srt.zip](https://github.com/user-attachments/files/17751755/01.new.srt.zip) --- I am **not** using the batch version ```python model = faster_whisper.WhisperModel( model_size_or_path='large-v2', device='cuda', cpu_threads=4, ) model.transcribe( audio=audio, language=None, task='transcribe', vad_filter=False, initial_prompt=None, word_timestamps=True, repetition_penalty=1.0, ) ``` script from this project https://github.com/heimoshuiyu/whisper-fastapi --- ![image](https://github.com/user-attachments/assets/d679ce1b-8771-4310-aadd-ef5c8ab7f886) some segments on the left (wrong) have a 1 second mismatch (shifted +1s) compared to the right (correct) --- I also tested on the commit of PR #856 (eb839023), which gives a worse result. SRT: [01.eb839023.srt.zip](https://github.com/user-attachments/files/17752205/01.eb839023.srt.zip) ![image](https://github.com/user-attachments/assets/cd958dfb-fb15-4b08-ac4a-0e66f7cd0f3d) left: commit eb839023 PR #856 middle: latest commit 85e61ea right: commit fbcf58b
1,731,607,572,000
null
Bug Report
[ "faster_whisper/transcribe.py:BatchedInferencePipeline.forward", "faster_whisper/transcribe.py:BatchedInferencePipeline._batched_segments_generator", "faster_whisper/transcribe.py:WhisperModel.generate_segments", "faster_whisper/transcribe.py:WhisperModel.add_word_timestamps" ]
[]
4
mlflow/mlflow
mlflow__mlflow-13821
15dbca59de6974d1ed9ce1e801edefd86b6a87ef
diff --git a/mlflow/models/model.py b/mlflow/models/model.py index 2326c3df57402..7ae1fbede42db 100644 --- a/mlflow/models/model.py +++ b/mlflow/models/model.py @@ -1116,9 +1116,20 @@ def update_model_requirements( def _validate_langchain_model(model): - from mlflow.langchain import _validate_and_prepare_lc_model_or_path + from langchain_core.runnables.base import Runnable - return _validate_and_prepare_lc_model_or_path(model, None) + from mlflow.models.utils import _validate_and_get_model_code_path + + if isinstance(model, str): + return _validate_and_get_model_code_path(model, None) + + if not isinstance(model, Runnable): + raise MlflowException.invalid_parameter_value( + "Model must be a Langchain Runnable type or path to a Langchain model, " + f"got {type(model)}" + ) + + return model def _validate_llama_index_model(model):
[BUG] MLflow langchain does not support logging RunnableWithMessageHistory ### Issues Policy acknowledgement - [X] I have read and agree to submit bug reports in accordance with the [issues policy](https://www.github.com/mlflow/mlflow/blob/master/ISSUE_POLICY.md) ### Where did you encounter this bug? Databricks ### Willingness to contribute No. I cannot contribute a bug fix at this time. ### MLflow version - Client: 2.16.2 ### System information - **OS Platform and Distribution**: Linux (5.4.0-1135-azure-fips) - **Python version**: 3.11.0 ### Describe the problem I am trying to log a Langchain chain for conversational RAG with memory using Langchains RunnableWithMessageHistory. However, I get an error that says that this flavor is not supported. Is there a workaround for this? ### Tracking information <!-- PLEASE KEEP BACKTICKS AND CHECK PREVIEW --> ```shell System information: Linux #142+fips1-Ubuntu SMP Tue Jul 30 21:00:25 UTC 2024 Python version: 3.11.0rc1 MLflow version: 2.16.2 MLflow module location: /local_disk0/.ephemeral_nfs/envs/pythonEnv-ada40d50-865d-42f8-8ded-3ea7c11f55a6/lib/python3.11/site-packages/mlflow/__init__.py Tracking URI: databricks Registry URI: databricks-uc Databricks runtime version: 15.4 MLflow environment variables: MLFLOW_CONDA_HOME: /databricks/conda MLFLOW_DEPLOYMENTS_TARGET: databricks MLFLOW_GATEWAY_URI: databricks MLFLOW_PYTHON_EXECUTABLE: /databricks/spark/scripts/mlflow_python.sh MLFLOW_REGISTRY_URI: databricks-uc MLFLOW_TRACKING_URI: databricks MLflow dependencies: Flask: 2.2.5 Jinja2: 3.1.2 aiohttp: 3.10.5 alembic: 1.13.3 azure-storage-file-datalake: 12.14.0 boto3: 1.34.39 botocore: 1.34.39 docker: 7.1.0 fastapi: 0.115.0 google-cloud-storage: 2.10.0 graphene: 3.3 gunicorn: 20.1.0 kubernetes: 31.0.0 langchain: 0.3.0 markdown: 3.4.1 matplotlib: 3.7.2 mlflow-skinny: 2.16.2 numpy: 1.23.5 pandas: 1.5.3 pyarrow: 14.0.1 pydantic: 2.9.2 scikit-learn: 1.3.0 scipy: 1.11.1 sqlalchemy: 2.0.35 tiktoken: 0.7.0 uvicorn: 0.30.6 virtualenv: 20.24.2 watchfiles: 0.24.0 ``` ### Code to reproduce issue <!-- PLEASE KEEP BACKTICKS AND CHECK PREVIEW --> ``` from langchain_openai import AzureChatOpenAI from langchain.chains import create_history_aware_retriever, create_retrieval_chain from langchain.chains.combine_documents import create_stuff_documents_chain from langchain_core.chat_history import BaseChatMessageHistory from langchain_community.chat_message_histories import ChatMessageHistory from langchain_core.runnables.history import RunnableWithMessageHistory # fill with details llm = AzureChatOpenAI() vector_search_as_retriever = DatabricksVectorSearch().as_retriever() contextualize_q_prompt = ChatPromptTemplate.from_messages( [ ("system", contextualize_q_system_prompt), MessagesPlaceholder("chat_history"), ("human", "{input}"), ] ) history_aware_retriever = create_history_aware_retriever( llm, vector_search_as_retriever, contextualize_q_prompt ) qa_prompt = ChatPromptTemplate.from_messages( [ ("system", system_prompt), MessagesPlaceholder("chat_history"), ("human", "{input}"), ] ) question_answer_chain = create_stuff_documents_chain(llm, qa_prompt) rag_chain = create_retrieval_chain(history_aware_retriever, question_answer_chain) def get_session_history(session_id: str) -> BaseChatMessageHistory: if session_id not in store: store[session_id] = ChatMessageHistory() return store[session_id] conversational_rag_chain = RunnableWithMessageHistory( rag_chain, get_session_history, input_messages_key="input", history_messages_key="chat_history", output_messages_key="answer", ) 
# Error with mlflow.start_run(run_name="test"): mlflow.set_tag("type", "chain") logged_chain_info = mlflow.langchain.log_model( lc_model=conversational_rag_chain, artifact_path="chain" ) ``` ### Stack trace <!-- PLEASE KEEP BACKTICKS AND CHECK PREVIEW --> ``` MlflowException: MLflow langchain flavor only supports subclasses of (<class 'langchain.chains.base.Chain'>, <class 'langchain.agents.agent.AgentExecutor'>, <class 'langchain_core.retrievers.BaseRetriever'>, <class 'langchain_core.language_models.chat_models.SimpleChatModel'>, <class 'langchain_core.prompts.chat.ChatPromptTemplate'>, <class 'langchain_core.runnables.passthrough.RunnablePassthrough'>, <class 'langchain_core.runnables.base.RunnableLambda'>, <class 'langchain_core.runnables.base.RunnableParallel'>, <class 'langchain_core.runnables.base.RunnableSequence'>, <class 'langchain_core.runnables.branch.RunnableBranch'>, <class 'langchain_core.runnables.passthrough.RunnableAssign'>, <class 'langchain_core.runnables.base.RunnableBinding'>), found RunnableWithMessageHistory. File <command-2576690084880631>, line 5 3 with mlflow.start_run(run_name=f"dbdemos_rag_azure"): 4 mlflow.set_tag("type", "chain") ----> 5 logged_chain_info = mlflow.langchain.log_model( 6 lc_model=conversational_rag_chain, # Chain code file e.g., /path/to/the/chain.py 7 model_config='rag_multi_chain_config.yaml', # Chain configuration 8 artifact_path="chain" 9 ) 11 # Test the chain locally 12 chain = mlflow.langchain.load_model(logged_chain_info.model_uri) File /local_disk0/.ephemeral_nfs/envs/pythonEnv-ada40d50-865d-42f8-8ded-3ea7c11f55a6/lib/python3.11/site-packages/mlflow/tracing/provider.py:268, in trace_disabled.<locals>.wrapper(*args, **kwargs) 266 disable() 267 try: --> 268 is_func_called, result = True, f(*args, **kwargs) 269 finally: 270 enable() File /local_disk0/.ephemeral_nfs/envs/pythonEnv-ada40d50-865d-42f8-8ded-3ea7c11f55a6/lib/python3.11/site-packages/mlflow/langchain/__init__.py:549, in log_model(lc_model, artifact_path, conda_env, code_paths, registered_model_name, signature, input_example, await_registration_for, pip_requirements, extra_pip_requirements, metadata, loader_fn, persist_dir, example_no_conversion, run_id, model_config, streamable) 403 @experimental 404 @format_docstring(LOG_MODEL_PARAM_DOCS.format(package_name=FLAVOR_NAME)) 405 @docstring_version_compatibility_warning(FLAVOR_NAME) (...) 424 streamable=None, 425 ): 426 """ 427 Log a LangChain model as an MLflow artifact for the current run. 428 (...) 547 metadata of the logged model. 
548 """ --> 549 return Model.log( 550 artifact_path=artifact_path, 551 flavor=mlflow.langchain, 552 registered_model_name=registered_model_name, 553 lc_model=lc_model, 554 conda_env=conda_env, 555 code_paths=code_paths, 556 signature=signature, 557 input_example=input_example, 558 await_registration_for=await_registration_for, 559 pip_requirements=pip_requirements, 560 extra_pip_requirements=extra_pip_requirements, 561 metadata=metadata, 562 loader_fn=loader_fn, 563 persist_dir=persist_dir, 564 example_no_conversion=example_no_conversion, 565 run_id=run_id, 566 model_config=model_config, 567 streamable=streamable, 568 ) File /local_disk0/.ephemeral_nfs/envs/pythonEnv-ada40d50-865d-42f8-8ded-3ea7c11f55a6/lib/python3.11/site-packages/mlflow/models/model.py:725, in Model.log(cls, artifact_path, flavor, registered_model_name, await_registration_for, metadata, run_id, resources, **kwargs) 721 run_id = mlflow.tracking.fluent._get_or_start_run().info.run_id 722 mlflow_model = cls( 723 artifact_path=artifact_path, run_id=run_id, metadata=metadata, resources=resources 724 ) --> 725 flavor.save_model(path=local_path, mlflow_model=mlflow_model, **kwargs) 726 # `save_model` calls `load_model` to infer the model requirements, which may result in 727 # __pycache__ directories being created in the model directory. 728 for pycache in Path(local_path).rglob("__pycache__"): File /local_disk0/.ephemeral_nfs/envs/pythonEnv-ada40d50-865d-42f8-8ded-3ea7c11f55a6/lib/python3.11/site-packages/mlflow/tracing/provider.py:272, in trace_disabled.<locals>.wrapper(*args, **kwargs) 270 enable() 271 else: --> 272 is_func_called, result = True, f(*args, **kwargs) 273 # We should only catch the exception from disable() and enable() 274 # and let other exceptions propagate. 275 except MlflowTracingException as e: File /local_disk0/.ephemeral_nfs/envs/pythonEnv-ada40d50-865d-42f8-8ded-3ea7c11f55a6/lib/python3.11/site-packages/mlflow/langchain/__init__.py:262, in save_model(lc_model, path, conda_env, code_paths, mlflow_model, signature, input_example, pip_requirements, extra_pip_requirements, metadata, loader_fn, persist_dir, example_no_conversion, model_config, streamable) 259 import langchain 260 from langchain.schema import BaseRetriever --> 262 lc_model_or_path = _validate_and_prepare_lc_model_or_path(lc_model, loader_fn, temp_dir) 264 _validate_env_arguments(conda_env, pip_requirements, extra_pip_requirements) 266 path = os.path.abspath(path) File /local_disk0/.ephemeral_nfs/envs/pythonEnv-ada40d50-865d-42f8-8ded-3ea7c11f55a6/lib/python3.11/site-packages/mlflow/langchain/utils/__init__.py:293, in _validate_and_prepare_lc_model_or_path(lc_model, loader_fn, temp_dir) 290 return _validate_and_get_model_code_path(lc_model, temp_dir) 292 if not isinstance(lc_model, supported_lc_types()): --> 293 raise mlflow.MlflowException.invalid_parameter_value( 294 get_unsupported_model_message(type(lc_model).__name__) 295 ) 297 _SUPPORTED_LLMS = _get_supported_llms() 298 if isinstance(lc_model, langchain.chains.llm.LLMChain) and not any( 299 isinstance(lc_model.llm, supported_llm) for supported_llm in _SUPPORTED_LLMS 300 ): ``` ### Other info / logs <!-- PLEASE KEEP BACKTICKS AND CHECK PREVIEW --> ``` REPLACE_ME ``` ### What component(s) does this bug affect? 
- [ ] `area/artifacts`: Artifact stores and artifact logging - [ ] `area/build`: Build and test infrastructure for MLflow - [ ] `area/deployments`: MLflow Deployments client APIs, server, and third-party Deployments integrations - [ ] `area/docs`: MLflow documentation pages - [ ] `area/examples`: Example code - [ ] `area/model-registry`: Model Registry service, APIs, and the fluent client calls for Model Registry - [X] `area/models`: MLmodel format, model serialization/deserialization, flavors - [ ] `area/recipes`: Recipes, Recipe APIs, Recipe configs, Recipe Templates - [ ] `area/projects`: MLproject format, project running backends - [ ] `area/scoring`: MLflow Model server, model deployment tools, Spark UDFs - [ ] `area/server-infra`: MLflow Tracking server backend - [ ] `area/tracking`: Tracking Service, tracking client APIs, autologging ### What interface(s) does this bug affect? - [ ] `area/uiux`: Front-end, user experience, plotting, JavaScript, JavaScript dev server - [ ] `area/docker`: Docker use across MLflow's components, such as MLflow Projects and MLflow Models - [ ] `area/sqlalchemy`: Use of SQLAlchemy in the Tracking Service or Model Registry - [ ] `area/windows`: Windows support ### What language(s) does this bug affect? - [ ] `language/r`: R APIs and clients - [ ] `language/java`: Java APIs and clients - [ ] `language/new`: Proposals for new client languages ### What integration(s) does this bug affect? - [ ] `integrations/azure`: Azure and Azure ML integrations - [ ] `integrations/sagemaker`: SageMaker integrations - [ ] `integrations/databricks`: Databricks integrations
@VarunUllanat The workaround is to use `models from code` for saving the langchain model https://mlflow.org/docs/latest/models.html#models-from-code. This will be the recommended way for saving langchain models. Thanks for the response, when I set that: `mlflow.models.set_model(model=conversational_rag_chain)` I get the following error: ``` MlflowException Traceback (most recent call last) File <command-832405214942020>, line 1 ----> 1 mlflow.models.set_model(model=conversational_rag_chain) File /local_disk0/.ephemeral_nfs/envs/pythonEnv-b9c2956b-9a79-418a-8f13-d08539e9b4d1/lib/python3.11/site-packages/mlflow/models/model.py:1068, in set_model(model) 1065 except Exception: 1066 pass -> 1068 raise mlflow.MlflowException(SET_MODEL_ERROR) MlflowException: Model should either be an instance of PyFuncModel, Langchain type, or LlamaIndex index. ``` For clarity the `type(conversational_rag_chain)` is `langchain_core.runnables.history.RunnableWithMessageHistory` and not `langchain_core.runnables.base.RunnableSequence` like a normal chain. Is the former not supported by mlflow? Could you try `pip install git+https://github.com/serena-ruan/mlflow.git@langchain` then retry and see if it works? <!-- assign-maintainer --> @mlflow/mlflow-team Please assign a maintainer and start triaging this issue. @serena-ruan your fix works (when will it be implemented?). Also, how would the mlflow model logging work for this with two arguments in the invoke method? ``` {"input": "What is langchain?"}, {"configurable": {"session_id": "123"}}) ``` @tkernr Unfortunately your input example requires us to support dictionary as params, this is on our roadmap and will be supported in the next release, please stay tuned. BTW I think the change is merged, could you run with latest MLflow version? Sorry for the confusion, seems the fix isn't included, let me file a PR and include it in the next release
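For reference, the models-from-code flow the maintainers point to logs the chain by file path instead of by object; with the fix in the patch above applied, `mlflow.models.set_model` also accepts any Langchain `Runnable`. A rough sketch, assuming the chain is built in its own `chain.py` (file name and trivial stand-in chain are illustrative, not prescribed by MLflow):

```python
# chain.py -- a minimal models-from-code file.
# Build the chain here (the RunnableWithMessageHistory from the report would go
# in place of this trivial runnable), then register it with set_model.
import mlflow
from langchain_core.runnables import RunnableLambda

chain = RunnableLambda(lambda x: x)  # stand-in for conversational_rag_chain
mlflow.models.set_model(chain)

# driver script -- log the file path rather than the in-memory object:
# with mlflow.start_run(run_name="test"):
#     mlflow.langchain.log_model(lc_model="chain.py", artifact_path="chain")
```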
1,731,987,688,000
null
Bug Report
[ "mlflow/models/model.py:_validate_langchain_model" ]
[]
1
jax-ml/jax
jax-ml__jax-25487
c73f3060997ac3b1c6de4f075111b684ea20b6ac
diff --git a/jax/_src/random.py b/jax/_src/random.py index 13c4ab4dbce4..12aa5b93efbf 100644 --- a/jax/_src/random.py +++ b/jax/_src/random.py @@ -291,15 +291,18 @@ def split(key: ArrayLike, num: int | tuple[int, ...] = 2) -> Array: return _return_prng_keys(wrapped, _split(typed_key, num)) -def _key_impl(keys: Array) -> str | PRNGSpec: +def _key_impl(keys: Array) -> PRNGImpl: assert jnp.issubdtype(keys.dtype, dtypes.prng_key) keys_dtype = typing.cast(prng.KeyTy, keys.dtype) - impl = keys_dtype._impl + return keys_dtype._impl + +def _key_spec(keys: Array) -> str | PRNGSpec: + impl = _key_impl(keys) return impl.name if impl.name in prng.prngs else PRNGSpec(impl) def key_impl(keys: ArrayLike) -> str | PRNGSpec: typed_keys, _ = _check_prng_key("key_impl", keys, allow_batched=True) - return _key_impl(typed_keys) + return _key_spec(typed_keys) def _key_data(keys: Array) -> Array:
`jax.random.beta` 3 orders of magnitude slower from 0.4.36 on GPU ### Description My code runs substantially slower from one month ago, and I figued out a key bottleneck: sampling from beta distribution has gotten around 1000 times slower on GPU. On Colab, I run the following code on different versions of jax ``` @jax.jit def sample_beta(rng_key): return jax.random.beta(key=rng_key, a=1, b=1, shape=(1000, 2)) seed = jrand.PRNGKey(1) sample_beta(seed) %timeit sample_beta(seed) ``` * Time take on version 0.4.35: **0.784ms** * Time take on version 0.4.36: **351ms** * Time take on version 0.4.37: **354ms** ![image](https://github.com/user-attachments/assets/e7dd4600-192c-4308-b4d4-52eb7ca762a3) ![image](https://github.com/user-attachments/assets/0e8f6952-50b2-4ba5-8fda-6b3533a94b86) ![image](https://github.com/user-attachments/assets/ce088de7-89e4-466a-b331-3a79cdf77ae2) ### System info (python version, jaxlib version, accelerator, etc.) jax: 0.4.36 jaxlib: 0.4.36 numpy: 1.26.4 python: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0] device info: Tesla T4-1, 1 local devices" process_count: 1 platform: uname_result(system='Linux', node='d36852658d94', release='6.1.85+', version='#1 SMP PREEMPT_DYNAMIC Thu Jun 27 21:05:47 UTC 2024', machine='x86_64') $ nvidia-smi Fri Dec 13 13:13:50 2024 +---------------------------------------------------------------------------------------+ | NVIDIA-SMI 535.104.05 Driver Version: 535.104.05 CUDA Version: 12.2 | |-----------------------------------------+----------------------+----------------------+ | GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |=========================================+======================+======================| | 0 Tesla T4 Off | 00000000:00:04.0 Off | 0 | | N/A 72C P0 31W / 70W | 109MiB / 15360MiB | 0% Default | | | | N/A | +-----------------------------------------+----------------------+----------------------+ +---------------------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=======================================================================================| +---------------------------------------------------------------------------------------+
I can reproduce this, but I'm not totally sure where this would be coming from. Perhaps @jakevdp or @froystig could take a look re: recent changes to PRNGs? My bisection points to https://github.com/jax-ml/jax/pull/24593
1,734,133,002,000
null
Performance Issue
[ "jax/_src/random.py:_key_impl", "jax/_src/random.py:key_impl" ]
[ "jax/_src/random.py:_key_spec" ]
2
jax-ml/jax
jax-ml__jax-24733
4b4fb9dae9eb7e2740d70de5b4a610f979530382
diff --git a/jax/_src/numpy/reductions.py b/jax/_src/numpy/reductions.py index fa8d73361e2b..be1e55675079 100644 --- a/jax/_src/numpy/reductions.py +++ b/jax/_src/numpy/reductions.py @@ -2360,7 +2360,8 @@ def _quantile(a: Array, q: Array, axis: int | tuple[int, ...] | None, index[axis] = high high_value = a[tuple(index)] else: - a = _where(any(lax_internal._isnan(a), axis=axis, keepdims=True), np.nan, a) + with jax.debug_nans(False): + a = _where(any(lax_internal._isnan(a), axis=axis, keepdims=True), np.nan, a) a = lax.sort(a, dimension=axis) n = lax.convert_element_type(a_shape[axis], lax_internal._dtype(q)) q = lax.mul(q, n - 1)
median FloatingPointError: invalid value (nan) encountered in jit(convert_element_type) ### Description Hello, I got this error in jnp.median when I set JAX_DISABLE_JIT=True and JAX_DEBUG_NANS=True. ``` Traceback (most recent call last): File "/data1/home/hhu17/zyl/PINE/H2+/3/test.py", line 29, in <module> c = jnp.median(b) ^^^^^^^^^^^^^ File "/data1/home/hhu17/.conda/envs/jax/lib/python3.11/site-packages/jax/_src/numpy/reductions.py", line 2517, in median return quantile(a, 0.5, axis=axis, out=out, overwrite_input=overwrite_input, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/data1/home/hhu17/.conda/envs/jax/lib/python3.11/site-packages/jax/_src/numpy/reductions.py", line 2172, in quantile return _quantile(lax_internal.asarray(a), lax_internal.asarray(q), axis, method, keepdims, False) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/data1/home/hhu17/.conda/envs/jax/lib/python3.11/site-packages/jax/_src/numpy/reductions.py", line 2302, in _quantile a = _where(any(lax_internal._isnan(a), axis=axis, keepdims=True), np.nan, a) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/data1/home/hhu17/.conda/envs/jax/lib/python3.11/site-packages/jax/_src/numpy/util.py", line 426, in _where x, y = promote_dtypes(x, y) ^^^^^^^^^^^^^^^^^^^^ File "/data1/home/hhu17/.conda/envs/jax/lib/python3.11/site-packages/jax/_src/numpy/util.py", line 259, in promote_dtypes return [lax._convert_element_type(x, to_dtype, weak_type) for x in args] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/data1/home/hhu17/.conda/envs/jax/lib/python3.11/site-packages/jax/_src/numpy/util.py", line 259, in <listcomp> return [lax._convert_element_type(x, to_dtype, weak_type) for x in args] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/data1/home/hhu17/.conda/envs/jax/lib/python3.11/site-packages/jax/_src/lax/lax.py", line 587, in _convert_element_type return convert_element_type_p.bind( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/data1/home/hhu17/.conda/envs/jax/lib/python3.11/site-packages/jax/_src/lax/lax.py", line 2981, in _convert_element_type_bind operand = core.Primitive.bind(convert_element_type_p, operand, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/data1/home/hhu17/.conda/envs/jax/lib/python3.11/site-packages/jax/_src/core.py", line 438, in bind return self.bind_with_trace(find_top_trace(args), args, params) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/data1/home/hhu17/.conda/envs/jax/lib/python3.11/site-packages/jax/_src/core.py", line 442, in bind_with_trace out = trace.process_primitive(self, map(trace.full_raise, args), params) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/data1/home/hhu17/.conda/envs/jax/lib/python3.11/site-packages/jax/_src/core.py", line 955, in process_primitive return primitive.impl(*tracers, **params) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/data1/home/hhu17/.conda/envs/jax/lib/python3.11/site-packages/jax/_src/dispatch.py", line 91, in apply_primitive outs = fun(*args) ^^^^^^^^^^ FloatingPointError: invalid value (nan) encountered in jit(convert_element_type). Because jax_config.debug_nans.value and/or config.jax_debug_infs is set, the de-optimized function (i.e., the function as if the `jit` decorator were removed) was called in an attempt to get a more precise error message. However, the de-optimized function did not produce invalid values during its execution. 
This behavior can result from `jit` optimizations causing the invalid value to be produced. It may also arise from having nan/inf constants as outputs, like `jax.jit(lambda ...: jax.numpy.nan)(...)`. ``` Following is the minimal code to reproduce the error. ``` import jax.numpy as jnp import jax key = jax.random.PRNGKey(12) a = jax.random.normal(key, 128) b = jnp.array(a) c = jnp.median(b) jit_median = jax.jit(jnp.median) c = jit_median(b) print(c) ``` Any help would be greatly appreciated! ### System info (python version, jaxlib version, accelerator, etc.) jax: 0.4.35 jaxlib: 0.4.34 numpy: 2.1.1 python: 3.11.9 (main, Apr 19 2024, 16:48:06) [GCC 11.2.0] device info: cpu-1, 1 local devices" process_count: 1 platform: uname_result(system='Linux', node='login4', release='3.10.0-957.el7.x86_64', version='#1 SMP Mon Dec 7 11:30:56 UTC 2020', machine='x86_64')
Looks like it's coming from the NaN introduced on this line: https://github.com/jax-ml/jax/blob/4b4fb9dae9eb7e2740d70de5b4a610f979530382/jax/_src/numpy/reductions.py#L2363 @jakevdp Can I tag you here since you wrote the implementation for _quantile?
1,730,849,863,000
null
Bug Report
[ "jax/_src/numpy/reductions.py:_quantile" ]
[]
1
jax-ml/jax
jax-ml__jax-24717
34b4787e2eff9edbd8eca242a74f1c165388b871
diff --git a/jax/_src/scipy/stats/_core.py b/jax/_src/scipy/stats/_core.py index 08d1c0b6b538..f7b28d3ac301 100644 --- a/jax/_src/scipy/stats/_core.py +++ b/jax/_src/scipy/stats/_core.py @@ -198,13 +198,12 @@ def rankdata( return jnp.apply_along_axis(rankdata, axis, a, method) arr = jnp.ravel(a) - sorter = jnp.argsort(arr) + arr, sorter = jax.lax.sort_key_val(arr, jnp.arange(len(arr))) inv = invert_permutation(sorter) if method == "ordinal": return inv + 1 - arr = arr[sorter] - obs = jnp.insert(arr[1:] != arr[:-1], 0, True) + obs = jnp.concatenate([jnp.array([True]), arr[1:] != arr[:-1]]) dense = obs.cumsum()[inv] if method == "dense": return dense
scipy.stats.rankdata causes constant folding warning for method='dense' but not method='ordinal' ### Description [`scipy.stats.rankdata`](https://jax.readthedocs.io/en/latest/_autosummary/jax.scipy.stats.rankdata.html) causes a constant folding warning for `method='dense'` but not `method='ordinal'`: ``` $ py -c "import jax; jax.scipy.stats.rankdata(jax.numpy.zeros(10**7), 'ordinal')" $ py -c "import jax; jax.scipy.stats.rankdata(jax.numpy.zeros(10**7), 'dense')" 2024-11-04 20:21:27.997499: E external/xla/xla/service/slow_operation_alarm.cc:65] Constant folding an instruction is taking > 1s: %reduce-window.6 = s32[625000,16]{0,1} reduce-window(s32[625000,16]{0,1} %constant.174, s32[] %constant.17), window={size=1x16 pad=0_0x15_0}, to_apply=%region_5.113 This isn't necessarily a bug; constant-folding is inherently a trade-off between compilation time and speed at runtime. XLA has some guards that attempt to keep constant folding from taking too long, but fundamentally you'll always be able to come up with an input program that takes a long time. If you'd like to file a bug, run with envvar XLA_FLAGS=--xla_dump_to=/tmp/foo and attach the results. 2024-11-04 20:21:33.721446: E external/xla/xla/service/slow_operation_alarm.cc:133] The operation took 6.728977s Constant folding an instruction is taking > 1s: %reduce-window.6 = s32[625000,16]{0,1} reduce-window(s32[625000,16]{0,1} %constant.174, s32[] %constant.17), window={size=1x16 pad=0_0x15_0}, to_apply=%region_5.113 This isn't necessarily a bug; constant-folding is inherently a trade-off between compilation time and speed at runtime. XLA has some guards that attempt to keep constant folding from taking too long, but fundamentally you'll always be able to come up with an input program that takes a long time. If you'd like to file a bug, run with envvar XLA_FLAGS=--xla_dump_to=/tmp/foo and attach the results. ``` Looking at the code for `rankdata`, the culprit might be one of the 3 lines of code starting [here](https://github.com/jax-ml/jax/blob/ab47d4687f647de3aa145a9a782fb7b4aaf92af4/jax/_src/scipy/stats/_core.py#L206). XLA dump [here](https://www.dropbox.com/scl/fo/rruuywlngh1r03hj9c2r1/AM-ym1pWfIUhkHA2hOiQNko?rlkey=2xxwdrmssgfyk7yz61xrt1t7d&st=h6yp3a8x&dl=0). ### System info (python version, jaxlib version, accelerator, etc.) jax: 0.4.35 jaxlib: 0.4.34 numpy: 1.26.4 python: 3.12.7 (main, Oct 1 2024, 02:05:46) [Clang 15.0.0 (clang-1500.3.9.4)] device info: cpu-1, 1 local devices" process_count: 1 platform: uname_result(system='Darwin', node='Carloss-MacBook-Pro-2.local', release='23.6.0', version='Darwin Kernel Version 23.6.0: Mon Jul 29 21:14:46 PDT 2024; root:xnu-10063.141.2~1/RELEASE_ARM64_T6031', machine='arm64')
1,730,812,512,000
null
Performance Issue
[ "jax/_src/scipy/stats/_core.py:rankdata" ]
[]
1
phidatahq/phidata
phidatahq__phidata-1589
2c18b480f349eee62e16a794a250ed8549558cb1
diff --git a/phi/document/chunking/recursive.py b/phi/document/chunking/recursive.py index 662a9218c..47c552294 100644 --- a/phi/document/chunking/recursive.py +++ b/phi/document/chunking/recursive.py @@ -38,6 +38,7 @@ def chunk(self, document: Document) -> List[Document]: chunk_id = None if document.id: chunk_id = f"{document.id}_{chunk_number}" + chunk_number += 1 meta_data["chunk_size"] = len(chunk) chunks.append(Document(id=chunk_id, name=document.name, meta_data=meta_data, content=chunk))
Duplicate key value violates unique constraint with recursive chunking When using `RecursiveChunking` with large files, errors like this happen: ``` ERROR Error with batch starting at index 0: (psycopg.errors.UniqueViolation) duplicate key value violates unique constraint "recipes_agentic_recursive_chunking_pkey" DETAIL: Key (id)=(relativity_1) already exists. ```
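The failure mode is easy to reproduce in isolation: if the chunk counter is never advanced inside the loop, every chunk of a document receives the same `{document_id}_{n}` id and the store's primary-key constraint trips on the second insert, which is exactly what the one-line patch above addresses. A minimal sketch with hypothetical names (not the phidata implementation):

```python
def make_chunk_ids(document_id: str, chunks: list[str]) -> list[str]:
    ids = []
    chunk_number = 1
    for chunk in chunks:
        ids.append(f"{document_id}_{chunk_number}")
        chunk_number += 1  # without this increment every id collapses to "<doc>_1"
    return ids

print(make_chunk_ids("relativity", ["a", "b", "c"]))
# ['relativity_1', 'relativity_2', 'relativity_3']
```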
1,734,420,482,000
null
Bug Report
[ "phi/document/chunking/recursive.py:RecursiveChunking.chunk" ]
[]
1
phidatahq/phidata
phidatahq__phidata-1583
54f7a22970f66c32409607e2f1e3474a7a11a395
diff --git a/phi/memory/agent.py b/phi/memory/agent.py index 6bfd6c185..5f3a7dea1 100644 --- a/phi/memory/agent.py +++ b/phi/memory/agent.py @@ -1,5 +1,6 @@ from enum import Enum from typing import Dict, List, Any, Optional, Tuple +from copy import deepcopy from pydantic import BaseModel, ConfigDict @@ -357,8 +358,22 @@ def clear(self) -> None: self.summary = None self.memories = None - def deep_copy(self, *, update: Optional[Dict[str, Any]] = None) -> "AgentMemory": - new_memory = self.model_copy(deep=True, update=update) - # clear the new memory to remove any references to the old memory - new_memory.clear() - return new_memory + def deep_copy(self): + # Create a shallow copy of the object + copied_obj = self.__class__(**self.model_dump()) + + # Manually deepcopy fields that are known to be safe + for field_name, field_value in self.__dict__.items(): + if field_name not in ["db", "classifier", "manager", "summarizer"]: + try: + setattr(copied_obj, field_name, deepcopy(field_value)) + except Exception as e: + logger.warning(f"Failed to deepcopy field: {field_name} - {e}") + setattr(copied_obj, field_name, field_value) + + copied_obj.db = self.db + copied_obj.classifier = self.classifier + copied_obj.manager = self.manager + copied_obj.summarizer = self.summarizer + + return copied_obj
Agents with memory dont work in playground Repro Steps ``` memory_db = SqliteMemoryDb(table_name="memories", db_file="tmp/agents.db") agent = Agent( name="my_agent", agent_id="my_agent", model=models["gpt-4o"], debug_mode=True, memory=AgentMemory( db=memory_db, create_user_memories=True, create_session_summary=True, classifier=MemoryClassifier( model=models["gpt-4o-mini"], ), summarizer=MemorySummarizer( model=models["gpt-4o-mini"], ), manager=MemoryManager( model=models["gpt-4o-mini"], ), ), storage=agent_storage, ) # This works agent.print_response( "Who am i?", stream=True, ) ``` With playground, fails to `deepcopy` in the `router.py` ``` File "phi/playground/router.py", line 269, in agent_run new_agent_instance = agent.deep_copy(update={"session_id": body.session_id}) File "phi/agent/agent.py", line 277, in deep_copy fields_for_new_agent[field_name] = self._deep_copy_field(field_name, field_value) File "phi/agent/agent.py", line 294, in _deep_copy_field return field_value.deep_copy() File "phi/memory/agent.py", line 361, in deep_copy new_memory = self.model_copy(deep=True, update=update) File ".venv/lib/python3.9/site-packages/pydantic/main.py", line 337, in model_copy copied = self.__deepcopy__() if deep else self.__copy__() File ".venv/lib/python3.9/site-packages/pydantic/main.py", line 805, in __deepcopy__ _object_setattr(m, '__dict__', deepcopy(self.__dict__, memo=memo)) File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/copy.py", line 146, in deepcopy y = copier(x, memo) File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/copy.py", line 230, in _deepcopy_dict y[deepcopy(key, memo)] = deepcopy(value, memo) File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/copy.py", line 172, in deepcopy y = _reconstruct(x, memo, *rv) File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/copy.py", line 270, in _reconstruct state = deepcopy(state, memo) File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/copy.py", line 146, in deepcopy y = copier(x, memo) File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/copy.py", line 230, in _deepcopy_dict y[deepcopy(key, memo)] = deepcopy(value, memo) File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/copy.py", line 172, in deepcopy y = _reconstruct(x, memo, *rv) File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/copy.py", line 270, in _reconstruct state = deepcopy(state, memo) File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/copy.py", line 146, in deepcopy y = copier(x, memo) File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/copy.py", line 230, in _deepcopy_dict y[deepcopy(key, memo)] = deepcopy(value, memo) File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/copy.py", line 172, in deepcopy y = _reconstruct(x, memo, *rv) File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/copy.py", line 270, in _reconstruct state = deepcopy(state, memo) File 
"/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/copy.py", line 146, in deepcopy y = copier(x, memo) File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/copy.py", line 230, in _deepcopy_dict y[deepcopy(key, memo)] = deepcopy(value, memo) File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/copy.py", line 172, in deepcopy y = _reconstruct(x, memo, *rv) File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/copy.py", line 270, in _reconstruct state = deepcopy(state, memo) File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/copy.py", line 146, in deepcopy y = copier(x, memo) File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/copy.py", line 230, in _deepcopy_dict y[deepcopy(key, memo)] = deepcopy(value, memo) File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/copy.py", line 161, in deepcopy rv = reductor(4) ```
Hey @nikhil-pandey, did you push a fix for this in your PR? or are you still countering this issue? @manthanguptaa I have same issue ``` File "/Users/fireharp/.pyenv/versions/3.11.9/lib/python3.11/copy.py", line 161, in deepcopy rv = reductor(4) ^^^^^^^^^^^ TypeError: cannot pickle 'module' object Exception ignored in: <function SqliteMemoryDb.__del__ at 0x1086cca40> Traceback (most recent call last): File "/Users/fireharp/Prog/Playgrounds/phidata/.venv/lib/python3.11/site-packages/phi/memory/db/sqlite.py", line 192, in __del__ self.Session.remove() ^^^^^^^^^^^^ AttributeError: 'SqliteMemoryDb' object has no attribute 'Session' INFO: 127.0.0.1:59086 - "GET /v1/playground/status HTTP/1.1" 200 OK ``` @fireharp allow me some time. I will take a look at it This issue has been automatically marked as stale due to 14 days of inactivity and will now be closed.
1,734,372,194,000
null
Bug Report
[ "phi/memory/agent.py:AgentMemory.deep_copy" ]
[]
1
phidatahq/phidata
phidatahq__phidata-1582
54f7a22970f66c32409607e2f1e3474a7a11a395
diff --git a/phi/tools/function.py b/phi/tools/function.py index 24d103165..89520833e 100644 --- a/phi/tools/function.py +++ b/phi/tools/function.py @@ -175,7 +175,7 @@ def process_entrypoint(self, strict: bool = False): except Exception as e: logger.warning(f"Could not parse args for {self.name}: {e}", exc_info=True) - self.description = getdoc(self.entrypoint) + self.description = getdoc(self.entrypoint) or self.description self.parameters = parameters self.entrypoint = validate_call(self.entrypoint)
Bedrock - Claude 3.5 Sonnet not working for Multi Agent Team **When trying to run a Multi-Agent Team using Amazon Bedrock Claude 3.5 Sonnet, then I get the following error.** Traceback (most recent call last): File "/Users/RyanBlake/Desktop/Source Control/PhiData Agents/FinanceAgentTeam.py", line 34, in <module> agent_team.print_response("Summarize analyst recommendations and share the latest news for CPI Capitec", stream=True) File "/opt/homebrew/lib/python3.11/site-packages/phi/agent/agent.py", line 2765, in print_response for resp in self.run(message=message, messages=messages, stream=True, **kwargs): File "/opt/homebrew/lib/python3.11/site-packages/phi/agent/agent.py", line 1787, in _run for model_response_chunk in self.model.response_stream(messages=messages_for_model): File "/opt/homebrew/lib/python3.11/site-packages/phi/model/aws/bedrock.py", line 493, in response_stream for chunk in response: File "/opt/homebrew/lib/python3.11/site-packages/phi/model/aws/bedrock.py", line 126, in invoke_stream response = self.bedrock_runtime_client.converse_stream(**body) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/homebrew/lib/python3.11/site-packages/botocore/client.py", line 569, in _api_call return self._make_api_call(operation_name, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/homebrew/lib/python3.11/site-packages/botocore/client.py", line 980, in _make_api_call request_dict = self._convert_to_request_dict( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/homebrew/lib/python3.11/site-packages/botocore/client.py", line 1047, in _convert_to_request_dict request_dict = self._serializer.serialize_to_request( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/homebrew/lib/python3.11/site-packages/botocore/validate.py", line 381, in serialize_to_request raise ParamValidationError(report=report.generate_report()) botocore.exceptions.ParamValidationError: Parameter validation failed: Invalid length for parameter toolConfig.tools[0].toolSpec.description, value: 0, valid min length: 1 Invalid length for parameter toolConfig.tools[1].toolSpec.description, value: 0, valid min length: 1 **I used the examples is, but just swopped out the model. Here is my Python script.** from phi.agent import Agent from phi.tools.googlesearch import GoogleSearch from phi.model.aws.claude import Claude from phi.tools.yfinance import YFinanceTools web_agent = Agent( name="Web Agent", model=Claude(id="anthropic.claude-3-5-sonnet-20240620-v1:0"), tools=[GoogleSearch()], #description="You are a professional financial analyst agent that helps people find financial data.", instructions=["Always include sources"], markdown=True, ) finance_agent = Agent( name="Finance Agent", role="Get financial data", model=Claude(id="anthropic.claude-3-5-sonnet-20240620-v1:0"), tools=[YFinanceTools(stock_price=True, analyst_recommendations=True, company_info=True)], #description="You are a professional financial analyst agent that helps people find financial data.", instructions=["Use tables to display data"], show_tool_calls=True, markdown=True, ) agent_team = Agent( team=[web_agent, finance_agent], model=Claude(id="anthropic.claude-3-5-sonnet-20240620-v1:0"), instructions=["Always include sources", "Use tables to display data"], show_tool_calls=True, markdown=True, ) agent_team.print_response("Summarize analyst recommendations and share the latest news for NVDA Nvidia", stream=True)
hey @billybobpersonal, I am going to try to replicate the issue today. Allow me some time @manthanguptaa thanks. Were you able to replicate it? Or would you like me to send more info. Hey @billybobpersonal, I was able to replicate it. I am working on a fix for it
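The zero-length `toolSpec.description` that Bedrock rejects appears to come from tool entrypoints without a usable docstring: `inspect.getdoc` returns `None`, and before the patch above that value overwrote whatever description was already set. A minimal sketch of the fallback behaviour the patch introduces (hypothetical tool, not phidata code):

```python
from inspect import getdoc

def undocumented_tool(query: str) -> str:
    return f"results for {query}"

fallback_description = "Search the web for the given query."
description = getdoc(undocumented_tool) or fallback_description

print(repr(getdoc(undocumented_tool)))  # None -> would previously wipe the description
print(description)                      # with the `or` fallback the existing text is kept
```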
1,734,369,612,000
null
Bug Report
[ "phi/tools/function.py:Function.process_entrypoint" ]
[]
1
phidatahq/phidata
phidatahq__phidata-1563
8f55f8b1d3fc13d46ad840666225ff2f9885cb68
diff --git a/phi/tools/crawl4ai_tools.py b/phi/tools/crawl4ai_tools.py index a7ca95c78..172953744 100644 --- a/phi/tools/crawl4ai_tools.py +++ b/phi/tools/crawl4ai_tools.py @@ -1,9 +1,10 @@ +import asyncio from typing import Optional from phi.tools import Toolkit try: - from crawl4ai import WebCrawler + from crawl4ai import AsyncWebCrawler, CacheMode except ImportError: raise ImportError("`crawl4ai` not installed. Please install using `pip install crawl4ai`") @@ -31,21 +32,31 @@ def web_crawler(self, url: str, max_length: Optional[int] = None) -> str: if url is None: return "No URL provided" - # Create an instance of WebCrawler - crawler = WebCrawler(verbose=True) - crawler.warmup() + # Run the async crawler function synchronously + return asyncio.run(self._async_web_crawler(url, max_length)) - # Run the crawler on a URL - result = crawler.run(url=url) + async def _async_web_crawler(self, url: str, max_length: Optional[int] = None) -> str: + """ + Asynchronous method to crawl a website using AsyncWebCrawler. + + :param url: The URL to crawl. + + :return: The results of the crawling as a markdown string, or None if no result. + """ + + async with AsyncWebCrawler(thread_safe=True) as crawler: + result = await crawler.arun(url=url, cache_mode=CacheMode.BYPASS) - # Determine the length to use - length = self.max_length or max_length + # Determine the length to use + length = self.max_length or max_length + if not result.markdown: + return "No result" - # Remove spaces and truncate if length is specified - if length: - result = result.markdown[:length] - result = result.replace(" ", "") - return result + # Remove spaces and truncate if length is specified + if length: + result = result.markdown[:length] + result = result.replace(" ", "") + return result - result = result.markdown.replace(" ", "") + result = result.markdown.replace(" ", "") return result
Crawl4AI tool has error I tweaked example code from here: https://docs.phidata.com/tools/crawl4ai and used this code: ``` from phi.agent import Agent from phi.model.openai import OpenAIChat from phi.tools.crawl4ai_tools import Crawl4aiTools from dotenv import load_dotenv load_dotenv() agent = Agent( model=OpenAIChat(id="gpt-4o"), tools=[Crawl4aiTools(max_length=None)], show_tool_calls=True ) agent.print_response("Summarize me the key points of this: https://blog.google/products/gemini/google-gemini-deep-research/") ``` but I've got error: ``` (phidata-venv) PS D:\Projects\AI_testing\phidata> python .\crawl4ai_example.py ▰▰▱▱▱▱▱ Thinking...INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK" WARNING Could not run function web_crawler(url=https://blog.google/products/gemini/google-gemini-deep-research, max_length=500) ERROR 'NoneType' object is not callable Traceback (most recent call last): File "D:\Projects\AI_testing\phidata\phidata-venv\Lib\site-packages\phi\tools\function.py", line 313, in execute self.result = self.function.entrypoint(**entrypoint_args, **self.arguments) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\Projects\AI_testing\phidata\phidata-venv\Lib\site-packages\pydantic\_internal\_validate_call.py", line 38, in wrapper_function return wrapper(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\Projects\AI_testing\phidata\phidata-venv\Lib\site-packages\pydantic\_internal\_validate_call.py", line 111, in __call__ res = self.__pydantic_validator__.validate_python(pydantic_core.ArgsKwargs(args, kwargs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\Projects\AI_testing\phidata\phidata-venv\Lib\site-packages\phi\tools\crawl4ai_tools.py", line 35, in web_crawler crawler = WebCrawler(verbose=True) ^^^^^^^^^^^^^^^^^^^^^^^^ TypeError: 'NoneType' object is not callable ``` **phidata v2.7.2** and **crawl4ai v0.4.1** is used.
Hey @vanetreg, I am able to replicate this error. Allow me some time to fix this issue.
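The `'NoneType' object is not callable` crash comes from the toolkit still calling the synchronous `WebCrawler`, which is no longer usable in crawl4ai 0.4.x; the patch above switches to the async API. A minimal standalone sketch of that usage outside phidata, mirroring the calls made in the patch:

```python
import asyncio
from crawl4ai import AsyncWebCrawler

async def crawl(url: str) -> str:
    # The async crawler is used as a context manager and returns a result
    # object whose markdown attribute holds the extracted page content.
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun(url=url)
        return result.markdown or ""

if __name__ == "__main__":
    print(asyncio.run(crawl("https://example.com"))[:200])
```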
1,734,095,142,000
null
Bug Report
[ "phi/tools/crawl4ai_tools.py:Crawl4aiTools.web_crawler" ]
[ "phi/tools/crawl4ai_tools.py:Crawl4aiTools._async_web_crawler" ]
1
phidatahq/phidata
phidatahq__phidata-1562
bd734bc8528aec12d1387064ab9cac571508fc7f
diff --git a/phi/model/google/gemini.py b/phi/model/google/gemini.py index 4a11c1c43..263d3afb0 100644 --- a/phi/model/google/gemini.py +++ b/phi/model/google/gemini.py @@ -23,7 +23,7 @@ GenerateContentResponse as ResultGenerateContentResponse, ) from google.protobuf.struct_pb2 import Struct -except ImportError: +except (ModuleNotFoundError, ImportError): logger.error("`google-generativeai` not installed. Please install it using `pip install google-generativeai`") raise @@ -301,6 +301,7 @@ def format_functions(self, params: Dict[str, Any]) -> Dict[str, Any]: Dict[str, Any]: The converted parameters dictionary compatible with Gemini. """ formatted_params = {} + for key, value in params.items(): if key == "properties" and isinstance(value, dict): converted_properties = {} @@ -322,8 +323,33 @@ def format_functions(self, params: Dict[str, Any]) -> Dict[str, Any]: formatted_params[key] = converted_properties else: formatted_params[key] = value + return formatted_params + def _build_function_declaration(self, func: Function) -> FunctionDeclaration: + """ + Builds the function declaration for Gemini tool calling. + + Args: + func: An instance of the function. + + Returns: + FunctionDeclaration: The formatted function declaration. + """ + formatted_params = self.format_functions(func.parameters) + if "properties" in formatted_params and formatted_params["properties"]: + # We have parameters to add + return FunctionDeclaration( + name=func.name, + description=func.description, + parameters=formatted_params, + ) + else: + return FunctionDeclaration( + name=func.name, + description=func.description, + ) + def add_tool( self, tool: Union["Tool", "Toolkit", Callable, dict, "Function"], @@ -356,11 +382,7 @@ def add_tool( func._agent = agent func.process_entrypoint() self.functions[name] = func - function_declaration = FunctionDeclaration( - name=func.name, - description=func.description, - parameters=self.format_functions(func.parameters), - ) + function_declaration = self._build_function_declaration(func) self.function_declarations.append(function_declaration) logger.debug(f"Function {name} from {tool.name} added to model.") @@ -369,11 +391,8 @@ def add_tool( tool._agent = agent tool.process_entrypoint() self.functions[tool.name] = tool - function_declaration = FunctionDeclaration( - name=tool.name, - description=tool.description, - parameters=self.format_functions(tool.parameters), - ) + + function_declaration = self._build_function_declaration(tool) self.function_declarations.append(function_declaration) logger.debug(f"Function {tool.name} added to model.") @@ -383,11 +402,7 @@ def add_tool( if function_name not in self.functions: func = Function.from_callable(tool) self.functions[func.name] = func - function_declaration = FunctionDeclaration( - name=func.name, - description=func.description, - parameters=self.format_functions(func.parameters), - ) + function_declaration = self._build_function_declaration(func) self.function_declarations.append(function_declaration) logger.debug(f"Function '{func.name}' added to model.") except Exception as e:
ToolKit functions with no arguments cause an error when using Gemini models. phidata version: 2.7.2 **To reproduce**: Use a Gemini model and provide a toolkit with a registered method that takes no arguments. **Expected behaviour**: Model can successfully use the tool. **Actual behaviour**: The gemini library returns this error: ``` 400 * GenerateContentRequest.tools[0].function_declarations[11].parameters.properties: should be non-empty for OBJECT type grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with: status = StatusCode.INVALID_ARGUMENT details = "* GenerateContentRequest.tools[0].function_declarations[11].parameters.properties: should be non-empty for OBJECT type ``` **A minimal code to reproduce is attached** [reproduce.zip](https://github.com/user-attachments/files/18121422/reproduce.zip). Workaround: Adding a dummy parameter to the method seems to fix the issue. Can this be fixed with ToolKit.register(), or where are the model.function_declarations being setup? In the former case, adding a dummy parameter would be easy, but feels messy.
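A self-contained sketch of the failure and of the guard applied in the fix: a tool that takes no arguments produces an empty `properties` map, Gemini rejects an OBJECT schema with no properties, so the declaration should simply omit `parameters` in that case. Plain dicts stand in here for the real `FunctionDeclaration` objects:

```python
def get_current_time() -> str:
    """Return the current UTC time as an ISO string."""
    from datetime import datetime, timezone
    return datetime.now(timezone.utc).isoformat()

# What a no-argument tool's schema boils down to:
params = {"type": "OBJECT", "properties": {}}

declaration = {"name": "get_current_time", "description": get_current_time.__doc__}
if params.get("properties"):  # same guard as _build_function_declaration in the patch
    declaration["parameters"] = params

print(declaration)  # no "parameters" key, so the API no longer complains
```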
1,734,091,646,000
null
Bug Report
[ "phi/model/google/gemini.py:Gemini.add_tool" ]
[ "phi/model/google/gemini.py:Gemini._build_function_declaration" ]
1
nltk/nltk
nltk__nltk-3335
9a5622f8a5b228df9499cd03181d9f8491e39f17
diff --git a/nltk/app/wordnet_app.py b/nltk/app/wordnet_app.py index 48fe1e30f6..437eb0f755 100644 --- a/nltk/app/wordnet_app.py +++ b/nltk/app/wordnet_app.py @@ -414,7 +414,7 @@ def get_relations_data(word, synset): ), ), ) - elif synset.pos() == wn.ADJ or synset.pos == wn.ADJ_SAT: + elif synset.pos() == wn.ADJ or synset.pos() == wn.ADJ_SAT: return ( (ANTONYM, "Antonym", lemma_property(word, synset, lambda l: l.antonyms())), (SIMILAR, "Similar to", synset.similar_tos()), @@ -435,7 +435,7 @@ def get_relations_data(word, synset): ) # Derived from adjective - not supported by corpus else: - raise TypeError("Unhandles synset POS type: " + str(synset.pos())) + raise TypeError("Unhandled synset POS type: " + str(synset.pos())) html_header = """
Missing procedure call in line 417 Line 417 of the file "nltk/app/wordnet_app.py" should look like this: `elif synset.pos() == wn.ADJ or synset.pos() == wn.ADJ_SAT:` but instead looks like this: `elif synset.pos() == wn.ADJ or synset.pos == wn.ADJ_SAT:` which will generate this error (complete with spelling mistake): "Unhandles synset POS type: s"
Thanks @drewvid, would you consider correcting both spelling errors in a PR? Sure
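The reason the branch misfires silently rather than erroring is that the missing call parentheses make Python compare the bound method object itself against a string, which is always `False`, so satellite adjectives fall through to the `else` branch. A tiny sketch of the pitfall (generic class, not the NLTK code):

```python
class Synset:
    def pos(self) -> str:
        return "s"  # WordNet's tag for satellite adjectives

syn = Synset()
print(syn.pos == "s")    # False: a bound method is never equal to a string
print(syn.pos() == "s")  # True: the comparison the code intended
```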
1,729,499,882,000
null
Bug Report
[ "nltk/app/wordnet_app.py:get_relations_data" ]
[]
1
kedro-org/kedro
kedro-org__kedro-4299
84b71b1436942d70f181a083991806cf75d5cd6d
diff --git a/kedro/framework/cli/cli.py b/kedro/framework/cli/cli.py index f5917e1b87..6ad4e24e97 100644 --- a/kedro/framework/cli/cli.py +++ b/kedro/framework/cli/cli.py @@ -217,7 +217,7 @@ def global_groups(self) -> Sequence[click.MultiCommand]: combines them with the built-in ones (eventually overriding the built-in ones if they are redefined by plugins). """ - return [*load_entry_points("global"), cli, global_commands] + return [cli, *load_entry_points("global"), global_commands] @property def project_groups(self) -> Sequence[click.MultiCommand]:
`kedro --version` isn't working ## Description Reported by @noklam: since the lazy loading of Kedro subcommands was added, the `--version`/`-V` option isn't working. ## Context This bug originates in Kedro 0.19.7 -> https://github.com/kedro-org/kedro/pull/3883
> Usage: kedro [OPTIONS] COMMAND [ARGS]... > Try 'kedro -h' for help. > > Error: No such option: -v > This is the output when running `kedro -V`, `kedro -v` or `kedro --version` While investigating this issue, I think it's worth checking why CI didn't catch this error, since we have this test in place. ```python def test_print_version(self): """Check that `kedro --version` and `kedro -V` outputs contain the current package version.""" result = CliRunner().invoke(cli, ["--version"]) assert result.exit_code == 0 assert version in result.output ``` how to reproduce the error? the command works well for me @DimedS `kedro -V` or `kedro --version` as mentioned. Are you using the `main` branch? Can you copy the terminal log when you do `kedro`? I confirm `kedro -V` and `kedro --version` both give `No such option` errors with 0.19.9 I figured out what the problem is: https://github.com/kedro-org/kedro/blob/a1fae5018f35243a5e49a54a9dd3223b2c4ea743/kedro/framework/cli/cli.py#L220 Due to the changes in the lazy loading PR, I re-ordered the global commands list to consider - first the commands loaded from plugins, - then `cli` which is the group with `info` and the `version_option` decorator - and then the `global_commands` group which contains the `new` and `starter` lazy commands. So if any plugin with global commands (e.g. Kedro-Viz) is installed in your env, the `--version` option doesn't work. It works when you uninstall Kedro-Viz. Which is why it must be working in the CI and for @DimedS The solution is simply to re-order the command groups to `[cli, *load_entry_points("global"), global_commands]`, but that would mean that users can't overwrite `kedro info`, which I think is acceptable.
1,730,797,930,000
null
Bug Report
[ "kedro/framework/cli/cli.py:KedroCLI.global_groups" ]
[]
1
dask/dask
dask__dask-11608
24c492095a791696ce6611e9d2294274f4592911
diff --git a/dask/_task_spec.py b/dask/_task_spec.py index 316f1805aa6..c108bbb5b6b 100644 --- a/dask/_task_spec.py +++ b/dask/_task_spec.py @@ -799,6 +799,7 @@ def __init__( None, self.to_container, *args, + klass=self.klass, _dependencies=_dependencies, **kwargs, ) @@ -832,9 +833,9 @@ def __dask_tokenize__(self): return super().__dask_tokenize__() - @classmethod - def to_container(cls, *args, **kwargs): - return cls.klass(args) + @staticmethod + def to_container(*args, klass): + return klass(args) class List(NestedContainer):
`NestedContainer.to_container` method gets tracked individually per NestedContainer object Looking into https://github.com/dask/distributed/issues/8958, I've noticed that for each `NestedContainer` object, its bound `to_container` method is tracked individually by the GC. This accounts for ~500k of 9MM objects in my workload. It would probably be better to stop tracking these individually.
On top of that, this is very likely a self-referencing cycle, so breaking it will benefit GC in more than one way.
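The per-object cost is visible with a few lines of plain Python: every lookup of an ordinary (or class-) method materialises a fresh bound-method object that the GC tracks and that keeps its owner alive, whereas a staticmethod lookup hands back one shared, plain function, which is what the patch above switches to. A small illustration with a generic class (not the dask code):

```python
import gc

class Holder:
    def bound_style(self, *args):
        return list(args)

    @staticmethod
    def static_style(*args):
        return list(args)

holders = [Holder() for _ in range(1000)]
bound = [h.bound_style for h in holders]    # 1000 distinct, GC-tracked method objects
static = [h.static_style for h in holders]  # the same shared function object, 1000 times

print(len({id(m) for m in bound}), all(gc.is_tracked(m) for m in bound))  # 1000 True
print(len({id(f) for f in static}))                                       # 1
```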
1,734,442,914,000
null
Performance Issue
[ "dask/_task_spec.py:NestedContainer.__init__", "dask/_task_spec.py:NestedContainer.to_container" ]
[]
2
dask/dask
dask__dask-11539
5b115c4360fec6a4aa6e0edf8ad1d89a87c986dd
diff --git a/dask/array/core.py b/dask/array/core.py index 10736af6f9d..0a7ebeb1b7c 100644 --- a/dask/array/core.py +++ b/dask/array/core.py @@ -3754,9 +3754,9 @@ def from_zarr( store = zarr.storage.FSStore(url, **storage_options) else: store = url - z = zarr.open_array(store=store, read_only=True, path=component, **kwargs) + z = zarr.open_array(store=store, path=component, **kwargs) else: - z = zarr.open_array(store=url, read_only=True, path=component, **kwargs) + z = zarr.open_array(store=url, path=component, **kwargs) chunks = chunks if chunks is not None else z.chunks if name is None: name = "from-zarr-" + tokenize(z, component, storage_options, chunks, **kwargs)
Warning raised with default `from_zarr` settings **Describe the issue**: Reading a zarr array with `dask.array.from_zarr` raises a `UserWarning`, but I'm not doing anything wrong. **Minimal Complete Verifiable Example**: ```python import dask.array import zarr zarr_arr = zarr.open(shape=(6, 6, 6), store="./zeros.zarr", chunks=(3, 3, 2), mode='w') zarr_arr[:] = 0 dask_arr = dask.array.from_zarr("./zeros.zarr") ``` Raises: ``` /Users/dstansby/software/zarr/hackathon/.venv/lib/python3.12/site-packages/zarr/creation.py:614: UserWarning: ignoring keyword argument 'read_only' compressor, fill_value = _kwargs_compat(compressor, fill_value, kwargs) ``` **Anything else we need to know?**: **Environment**: - Dask version: 2024.11.2 - zarr version: 2.18.3 - Python version: 3.12 - Operating System: macOS - Install method (conda, pip, source): pip
1,732,053,620,000
null
Bug Report
[ "dask/array/core.py:from_zarr" ]
[]
1
dask/dask
dask__dask-11491
fa8fecf10a94971f2f31df57d504d25bef4dd57e
diff --git a/dask/array/core.py b/dask/array/core.py index fdf65bd24a4..3065406a922 100644 --- a/dask/array/core.py +++ b/dask/array/core.py @@ -562,7 +562,9 @@ def map_blocks( Dimensions lost by the function. new_axis : number or iterable, optional New dimensions created by the function. Note that these are applied - after ``drop_axis`` (if present). + after ``drop_axis`` (if present). The size of each chunk along this + dimension will be set to 1. Please specify ``chunks`` if the individual + chunks have a different size. enforce_ndim : bool, default False Whether to enforce at runtime that the dimensionality of the array produced by ``func`` actually matches that of the array returned by
`map_blocks()` with `new_axis` output has incorrect shape **Describe the issue**: When running `map_blocks()` with `new_axis` specified, the output shape of the dask array is not set correctly. In the below example I would expect it to be the same as the shape after computation. **Minimal Complete Verifiable Example**: ```python import dask.array as da import numpy as np def func(x): return np.stack([x, x + 0.5]) x = da.arange(6, chunks=2) x_mapped = x.map_blocks(func, new_axis=[0]) print(x_mapped.shape) # (1, 6) print(x_mapped.compute().shape) # (2, 6) ``` **Environment**: - Dask version: b7d9bf49f682de8d2ef51f4617e3da782400c290 - Python version: 3.12.3 - Operating System: macOS - Install method (conda, pip, source): source
I don't think that we can guess the output shape with a high degree of fidelity. We should probably either set all chunks to NaN or force the specification of chunks. Being able to specify the size of new output dimensions if known would be nice. e.g., in the above toy example we know the size of the new dimension is going to be `2` ahead of time. Yeah, the interesting thing for Dask is the chunk sizes, the shape is just a convenient result from that, so we would need this information.
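Until Dask can infer sizes for new dimensions, the practical fix on the user side is to pass `chunks` explicitly so the array metadata matches what the function really returns; the docstring change in the patch above points in the same direction. A sketch based on the reproducer (the `chunks` value assumes each output block is 2x2, as in this toy example):

```python
import dask.array as da
import numpy as np

def func(x):
    return np.stack([x, x + 0.5])

x = da.arange(6, chunks=2)
# Tell Dask the shape of each output block: 2 along the new axis, 2 along the old one.
x_mapped = x.map_blocks(func, new_axis=[0], chunks=(2, 2))
print(x_mapped.shape)            # (2, 6)
print(x_mapped.compute().shape)  # (2, 6)
```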
1,730,759,801,000
null
Bug Report
[ "dask/array/core.py:map_blocks" ]
[]
1
feast-dev/feast
feast-dev__feast-4727
e9cd3733f041da99bb1e84843ffe5af697085c34
diff --git a/sdk/python/feast/feature_server.py b/sdk/python/feast/feature_server.py index 26ee604e79..1f4918fe7a 100644 --- a/sdk/python/feast/feature_server.py +++ b/sdk/python/feast/feature_server.py @@ -24,6 +24,7 @@ FeastError, FeatureViewNotFoundException, ) +from feast.feast_object import FeastObject from feast.permissions.action import WRITE, AuthzedAction from feast.permissions.security_manager import assert_permissions from feast.permissions.server.rest import inject_user_details @@ -218,21 +219,25 @@ async def push(request: PushFeaturesRequest) -> None: else: store.push(**push_params) - @app.post("/write-to-online-store", dependencies=[Depends(inject_user_details)]) - def write_to_online_store(request: WriteToFeatureStoreRequest) -> None: - df = pd.DataFrame(request.df) - feature_view_name = request.feature_view_name - allow_registry_cache = request.allow_registry_cache + def _get_feast_object( + feature_view_name: str, allow_registry_cache: bool + ) -> FeastObject: try: - feature_view = store.get_stream_feature_view( # type: ignore + return store.get_stream_feature_view( # type: ignore feature_view_name, allow_registry_cache=allow_registry_cache ) except FeatureViewNotFoundException: - feature_view = store.get_feature_view( # type: ignore + return store.get_feature_view( # type: ignore feature_view_name, allow_registry_cache=allow_registry_cache ) - assert_permissions(resource=feature_view, actions=[AuthzedAction.WRITE_ONLINE]) + @app.post("/write-to-online-store", dependencies=[Depends(inject_user_details)]) + def write_to_online_store(request: WriteToFeatureStoreRequest) -> None: + df = pd.DataFrame(request.df) + feature_view_name = request.feature_view_name + allow_registry_cache = request.allow_registry_cache + resource = _get_feast_object(feature_view_name, allow_registry_cache) + assert_permissions(resource=resource, actions=[AuthzedAction.WRITE_ONLINE]) store.write_to_online_store( feature_view_name=feature_view_name, df=df, @@ -250,9 +255,8 @@ async def health(): @app.post("/materialize", dependencies=[Depends(inject_user_details)]) def materialize(request: MaterializeRequest) -> None: for feature_view in request.feature_views or []: - # TODO: receives a str for resource but isn't in the Union. is str actually allowed? assert_permissions( - resource=feature_view, # type: ignore + resource=_get_feast_object(feature_view, True), actions=[AuthzedAction.WRITE_ONLINE], ) store.materialize( @@ -264,9 +268,8 @@ def materialize(request: MaterializeRequest) -> None: @app.post("/materialize-incremental", dependencies=[Depends(inject_user_details)]) def materialize_incremental(request: MaterializeIncrementalRequest) -> None: for feature_view in request.feature_views or []: - # TODO: receives a str for resource but isn't in the Union. is str actually allowed? assert_permissions( - resource=feature_view, # type: ignore + resource=_get_feast_object(feature_view, True), actions=[AuthzedAction.WRITE_ONLINE], ) store.materialize_incremental(
Wrong permission asserts on materialize endpoints ## Expected Behavior The `assert_permissions` function expects a `resource` of type `FeastObject`. ## Current Behavior The materialization endpoints in the `feature_server` module instead receive a `str`, as in [/materialize](https://github.com/feast-dev/feast/blob/60fbc62080950549f28b9411e00926be168bea56/sdk/python/feast/feature_server.py#L256-L258) and [/materialize_incremental](https://github.com/feast-dev/feast/blob/60fbc62080950549f28b9411e00926be168bea56/sdk/python/feast/feature_server.py#L269-L271). ## Possible Solution Fetch the `FeatureView`s the way the [/write-to-online-store](https://github.com/feast-dev/feast/blob/60fbc62080950549f28b9411e00926be168bea56/sdk/python/feast/feature_server.py#L226C9-L235C14) endpoint does.
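A minimal sketch of the proposed solution, mirroring the /write-to-online-store logic in the patch above. `store` is assumed to be the `FeatureStore` instance the feature server wraps, and the `feast.errors` import path is an assumption:

```python
from feast.errors import FeatureViewNotFoundException  # assumed import path
from feast.permissions.action import AuthzedAction
from feast.permissions.security_manager import assert_permissions

def _get_feast_object(store, feature_view_name: str, allow_registry_cache: bool):
    # Resolve the name to a real FeastObject before checking permissions.
    try:
        return store.get_stream_feature_view(
            feature_view_name, allow_registry_cache=allow_registry_cache
        )
    except FeatureViewNotFoundException:
        return store.get_feature_view(
            feature_view_name, allow_registry_cache=allow_registry_cache
        )

def materialize_permission_check(store, feature_view_name: str) -> None:
    # Pass the resolved object instead of the raw string.
    assert_permissions(
        resource=_get_feast_object(store, feature_view_name, True),
        actions=[AuthzedAction.WRITE_ONLINE],
    )
```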
1,730,404,565,000
null
Bug Report
[ "sdk/python/feast/feature_server.py:get_app" ]
[]
1
python/mypy
python__mypy-18292
c4f5056d6c43db556b5215cb3c330fcde25a77cd
diff --git a/mypy/main.py b/mypy/main.py index e1c9f20400bc..d2a28a18c6a8 100644 --- a/mypy/main.py +++ b/mypy/main.py @@ -9,6 +9,7 @@ import time from collections import defaultdict from gettext import gettext +from io import TextIOWrapper from typing import IO, Any, Final, NoReturn, Sequence, TextIO from mypy import build, defaults, state, util @@ -74,6 +75,10 @@ def main( if args is None: args = sys.argv[1:] + # Write an escape sequence instead of raising an exception on encoding errors. + if isinstance(stdout, TextIOWrapper) and stdout.errors == "strict": + stdout.reconfigure(errors="backslashreplace") + fscache = FileSystemCache() sources, options = process_options(args, stdout=stdout, stderr=stderr, fscache=fscache) if clean_exit:
Error when displaying error that contains unicode characters in Windows <!-- If you're new to mypy and you're not sure whether what you're experiencing is a mypy bug, please see the "Question and Help" form instead. Please also consider: - checking our common issues page: https://mypy.readthedocs.io/en/stable/common_issues.html - searching our issue tracker: https://github.com/python/mypy/issues to see if it's already been reported - asking on gitter chat: https://gitter.im/python/typing --> **Bug Report** <!-- Note: If the problem you are reporting is about a specific library function, then the typeshed tracker is better suited for this report: https://github.com/python/typeshed/issues --> When displaying a type error about e.g. a variable that contains unicode characters, mypy crashes. **To Reproduce** 1. Make a file `file.py` containing the line `x=γ`. 2. Run `mypy.exe --show-column-numbers file.py` through flycheck (python-mypy) in Emacs **Expected Behavior** An error message like `file.py:1:5: error: Name "γ" is not defined` <!-- How did you expect your project to behave? It’s fine if you’re not sure your understanding is correct. Write down what you thought would happen. If you just expected no errors, you can delete this section. --> **Actual Behavior** It crashes and prints a stack trace: ``` File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python39_64\lib\runpy.py", line 197, in _run_module_as_main return _run_code(code, main_globals, None, File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python39_64\lib\runpy.py", line 87, in _run_code exec(code, run_globals) File "c:\Program Files (x86)\Microsoft Visual Studio\Shared\Python39_64\Scripts\mypy.exe\__main__.py", line 7, in <module> File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python39_64\lib\site-packages\mypy\__main__.py", line 15, in console_entry main(None, sys.stdout, sys.stderr) File "mypy\main.py", line 96, in main File "mypy\main.py", line 173, in run_build File "mypy\build.py", line 180, in build File "mypy\build.py", line 256, in _build File "mypy\build.py", line 2717, in dispatch File "mypy\build.py", line 3048, in process_graph File "mypy\build.py", line 3164, in process_stale_scc File "mypy\main.py", line 165, in flush_errors File "mypy\main.py", line 199, in show_messages File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python39_64\lib\encodings\cp1252.py", line 19, in encode return codecs.charmap_encode(input,self.errors,encoding_table)[0] UnicodeEncodeError: 'charmap' codec can't encode character '\u03b3' in position 33: character maps to <undefined> ``` I've fixed it locally by adding ``` sys.stdout.reconfigure(encoding='utf-8') sys.stderr.reconfigure(encoding='utf-8') ``` in `mypy/__main__.py`. It works for me, but I don't know whether it's the right thing for mypy. <!-- Did something go wrong? Is something broken, or not behaving as you expected? --> **Your Environment** Python 3.9.7, mypy 0.931 on Windows <!-- Include as many relevant details about the environment you experienced the bug in --> - Mypy version used: 0.931 - Mypy command-line flags: --show-column-numbers - Mypy configuration options from `mypy.ini` (and other config files): None, I think - Python version used: 3.9.7 - Operating system and version: Windows 11 <!-- You can freely edit this text, please remove all the lines you believe are unnecessary. -->
My 'fix' doesn't really work perfectly. Something in Windows+emacs+flycheck doesn't decode the mypy output as unicode, and what I see in Emacs is `file.py:1:5: error: Name "γ" is not defined`. But that's probably not a mypy issue. Update: I tested this with updated mypy 0.950 in Windows and Ubuntu, and couldn't reproduce by calling `mypy.exe --show-column-numbers file.py` in the command line. The issue happens only in flycheck in Emacs. I guess that flycheck's python-mypy runs in a special environment where stderr and stdout are opened as TextIO buffers with a non-utf-8 encoding. This can still happen anytime the output encoding can't represent a codepoint in the error message. For example, this can be reproduced on a unix system by running ```shell $ PYTHONIOENCODING=cp1252 mypy -c "x=γ" Traceback (most recent call last): ... File "/home/brian/Projects/open-contrib/mypy/mypy/main.py", line 230, in show_messages f.write(msg + "\n") File "/usr/lib/python3.12/encodings/cp1252.py", line 19, in encode return codecs.charmap_encode(input,self.errors,encoding_table)[0] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ UnicodeEncodeError: 'charmap' codec can't encode character '\u03b3' in position 50: character maps to <undefined> ``` If this comes up again, we could look into using a different error handler when writing the output to stdout/stderr (the current default is `"strict"`, which raises an exception). Something like `"backslashreplace"` might make sense. For now, affected environments can try setting a different encoding or error handler via `PYTHONIOENCODING` or any other method. For example: ```shell $ PYTHONIOENCODING=utf-8 mypy -c "x=γ" <string>:1: error: Name "γ" is not defined [name-defined] $ PYTHONIOENCODING=cp1252:backslashreplace mypy -c "x=γ" <string>:1: error: Name "\u03b3" is not defined [name-defined] ``` Setting backslashreplace as the error handler seems like a good idea here.
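For reference, a minimal standalone illustration of the approach the fix takes: switch the default `strict` error handler on stdout to `backslashreplace`, so unencodable characters are escaped instead of raising `UnicodeEncodeError`.

```python
import sys
from io import TextIOWrapper

# Only reconfigure real text streams that still use the default "strict" handler.
if isinstance(sys.stdout, TextIOWrapper) and sys.stdout.errors == "strict":
    sys.stdout.reconfigure(errors="backslashreplace")

# Safe to print even when stdout uses a narrow encoding such as cp1252.
print('Name "\u03b3" is not defined')
```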
1,734,121,592,000
null
Bug Report
[ "mypy/main.py:main" ]
[]
1
albumentations-team/albumentations
albumentations-team__albumentations-2183
47c24503e0636f258e2af2b18e552d52271308bf
diff --git a/albumentations/augmentations/functional.py b/albumentations/augmentations/functional.py index 52adf80df..2dc1dd07f 100644 --- a/albumentations/augmentations/functional.py +++ b/albumentations/augmentations/functional.py @@ -925,7 +925,12 @@ def add_sun_flare_overlay( overlay = img.copy() output = img.copy() + weighted_brightness = 0.0 + total_radius_length = 0.0 + for alpha, (x, y), rad3, (r_color, g_color, b_color) in circles: + weighted_brightness += alpha * rad3 + total_radius_length += rad3 cv2.circle(overlay, (x, y), rad3, (r_color, g_color, b_color), -1) output = add_weighted(overlay, alpha, output, 1 - alpha) @@ -933,7 +938,13 @@ def add_sun_flare_overlay( overlay = output.copy() num_times = src_radius // 10 - alpha = np.linspace(0.0, 1, num=num_times) + + # max_alpha is calculated using weighted_brightness and total_radii_length times 5 + # meaning the higher the alpha with larger area, the brighter the bright spot will be + # for list of alphas in range [0.05, 0.2], the max_alpha should below 1 + max_alpha = weighted_brightness / total_radius_length * 5 + alpha = np.linspace(0.0, min(max_alpha, 1.0), num=num_times) + rad = np.linspace(1, src_radius, num=num_times) for i in range(num_times):
[RandomSunFlare] Add transparency to RandomSunFlare ![image](https://github.com/user-attachments/assets/2c1a7dec-77db-4150-86df-d484da5e2e31) Sunflare obscures the object
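A small sketch of the idea in the eventual fix (see the patch above): cap the bright-spot alphas based on the weighted brightness of the flare circles, so a strong flare cannot fully obscure the image. The `(alpha, radius)` pairs below are hypothetical values for illustration:

```python
import numpy as np

circles = [(0.05, 20), (0.1, 30), (0.2, 50)]  # hypothetical (alpha, radius) pairs
weighted_brightness = sum(alpha * radius for alpha, radius in circles)
total_radius_length = sum(radius for _, radius in circles)

# Cap the maximum alpha so the overlay stays partially transparent.
max_alpha = min(weighted_brightness / total_radius_length * 5, 1.0)
alphas = np.linspace(0.0, max_alpha, num=10)
print(alphas.max() <= 1.0)  # True
```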
Can I assume explore.albumentations.ai hosts the latest commit on main? Typically yes, unless I forget to update explore.albumentations.ai. Right now it is the latest.
1,733,844,294,000
null
Feature Request
[ "albumentations/augmentations/functional.py:add_sun_flare_overlay" ]
[]
1
bridgecrewio/checkov
bridgecrewio__checkov-6826
24535627d7315014328ec034daa3362a72948d09
diff --git a/checkov/terraform/checks/resource/aws/EKSPlatformVersion.py b/checkov/terraform/checks/resource/aws/EKSPlatformVersion.py index 563798a01d0..d2011578ec6 100644 --- a/checkov/terraform/checks/resource/aws/EKSPlatformVersion.py +++ b/checkov/terraform/checks/resource/aws/EKSPlatformVersion.py @@ -24,7 +24,8 @@ def get_inspected_key(self) -> str: return "version" def get_expected_values(self) -> list[Any]: - return ["1.23", "1.24", "1.25", "1.26", "1.27", "1.28", "1.29", "1.30"] + # https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html + return ["1.24", "1.25", "1.26", "1.27", "1.28", "1.29", "1.30", "1.31"] check = EKSPlatformVersion()
Add EKS 1.31 as a supported version **Describe the issue** EKS 1.31 has been released. However `CKV_AWS_339` fails as this is not listed as a supported version. **Examples** ``` resource "aws_eks_cluster" "eks_cluster" { ... version = "1.31" ``` **Version (please complete the following information):** - Checkov Version 3.2.256 (latest) **Additional context** https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/resource/aws/EKSPlatformVersion.py seems to be where the fix is needed
@zvickery thanks for the comment, please feel free to contribute as this is the fastest way our checks could be updated :)
1,731,355,205,000
null
Feature Request
[ "checkov/terraform/checks/resource/aws/EKSPlatformVersion.py:EKSPlatformVersion.get_expected_values" ]
[]
1
spotify/luigi
spotify__luigi-3324
80549f6b6f8c143effb81f3cf4a411b6068d9e2c
diff --git a/luigi/contrib/postgres.py b/luigi/contrib/postgres.py index 719b80a4d7..19e96e8180 100644 --- a/luigi/contrib/postgres.py +++ b/luigi/contrib/postgres.py @@ -356,16 +356,15 @@ def copy(self, cursor, file): else: raise Exception('columns must consist of column strings or (column string, type string) tuples (was %r ...)' % (self.columns[0],)) - # cursor.copy_from is not available in pg8000 - if hasattr(cursor, 'copy_from'): - cursor.copy_from( - file, self.table, null=r'\\N', sep=self.column_separator, columns=column_names) + copy_sql = ( + "COPY {table} ({column_list}) FROM STDIN " + "WITH (FORMAT text, NULL '{null_string}', DELIMITER '{delimiter}')" + ).format(table=self.table, delimiter=self.column_separator, null_string=r'\\N', + column_list=", ".join(column_names)) + # cursor.copy_expert is not available in pg8000 + if hasattr(cursor, 'copy_expert'): + cursor.copy_expert(copy_sql, file) else: - copy_sql = ( - "COPY {table} ({column_list}) FROM STDIN " - "WITH (FORMAT text, NULL '{null_string}', DELIMITER '{delimiter}')" - ).format(table=self.table, delimiter=self.column_separator, null_string=r'\\N', - column_list=", ".join(column_names)) cursor.execute(copy_sql, stream=file) def run(self):
[contrib.postgres] copy_from does not accept schema.table notation in most recent psycopg2 versions <!--- We use GitHub issues mainly for tracking bugs and feature requests. Questions for how to use luigi can be sent to the mailing list. Currently, there are no strict procedures or guidelines for submitting issues. In short, please just use common sense. Common sense includes this at bare-minimum: * search for similar issues posted before creating a new issue. * Use markdown to format all code/logs. Issues which are hard to read when rendered on GitHub might be closed with a friendly reminder of this. * If applicable, reading relevant parts of the documentation. Also, add steps to reproduce the bug, if applicable. Sample code would be nice too :) For more information on how to submit valuable contributions, see https://opensource.guide/how-to-contribute/#how-to-submit-a-contribution --> ## Description I'm trying to maintain an old (2018) project that includes a lot of Luigi tasks, amongst which there are some tasks derived from [`CopyToTable`](https://github.com/spotify/luigi/blob/master/luigi/contrib/postgres.py#L299). In this project, the PostgreSQL database contains several schemas, and some data may be added to tables which are not in default `public` schema through `CopyToTable`-derived tasks. However `CopyToTable` uses the `cursor.copy_from` method, which has been recently modified in `psycopg2` API (see *e.g* https://github.com/psycopg/psycopg2/issues/1294). Hence using Luigi with a recent `psycopg2` raises an error like `psycopg2.errors.UndefinedTable: relation "schema.table" does not exist`. ## Expected behavior Taking into account the behavior change in psycopg2, considering `schema.table` notation for Postgres tables that are located in a dedicated schema. ## Minimal Working Example Let's consider the following Python module (let's call it `luigi_copytotable.py`) : ```python from luigi.contrib.postgres import CopyToTable import pandas as pd class SendToDB(CopyToTable): """Insert bike availability data into a PostgreSQL table """ host = "localhost" database = "my_db" user = "my_username" password = "my_password" columns = [('a', 'VARCHAR'), ('b', 'INT')] @property def table(self): return 'my_schema.my_table' def rows(self): df = pd.DataFrame({"a": ["foo", "bar", "wiz"], "b": [1, 2, 3]}) for idx, row in df.iterrows(): yield row.values ``` Running `luigi --local-scheduler --module luigi_copytotable SendToDB` throws: ```bash 16:04 $ luigi --local-scheduler --module luigi_copytotable SendToDB_ DEBUG: Checking if SendToDB() is complete INFO: Informed scheduler that task SendToDB__99914b932b has status PENDING INFO: Done scheduling tasks INFO: Running Worker with 1 processes DEBUG: Asking scheduler for work... 
DEBUG: Pending tasks: 1 INFO: [pid 494717] Worker Worker(salt=450412579, workers=1, host=*******, username=my_username, pid=494717) running SendToDB_() INFO: Done writing, importing at 2022-09-07 16:04:05.364381 INFO: Creating table my_schema.my_table ERROR: [pid 494717] Worker Worker(salt=450412579, workers=1, host=*******, username=my_username, pid=494717) failed SendToDB() Traceback (most recent call last): File "/home/rdelhome/.virtualenvs/jitenv/lib/python3.10/site-packages/luigi/worker.py", line 198, in run new_deps = self._run_get_new_deps() File "/home/rdelhome/.virtualenvs/jitenv/lib/python3.10/site-packages/luigi/worker.py", line 138, in _run_get_new_deps task_gen = self.task.run() File "/home/rdelhome/.virtualenvs/jitenv/lib/python3.10/site-packages/luigi/contrib/postgres.py", line 403, in run self.copy(cursor, tmp_file) File "/home/rdelhome/.virtualenvs/jitenv/lib/python3.10/site-packages/luigi/contrib/postgres.py", line 358, in copy cursor.copy_from( psycopg2.errors.UndefinedTable: relation "my_schema.my_table" does not exist DEBUG: 1 running tasks, waiting for next task to finish INFO: Informed scheduler that task SendToDB__99914b932b has status FAILED DEBUG: Asking scheduler for work... DEBUG: Done DEBUG: There are no more tasks to run at this time DEBUG: There are 1 pending tasks possibly being run by other workers DEBUG: There are 1 pending tasks unique to this worker DEBUG: There are 1 pending tasks last scheduled by this worker INFO: Worker Worker(salt=450412579, workers=1, host=*********, username=my_username, pid=494717) was stopped. Shutting down Keep-Alive thread INFO: ===== Luigi Execution Summary ===== Scheduled 1 tasks of which: * 1 failed: - 1 SendToDB() This progress looks :( because there were failed tasks ===== Luigi Execution Summary ===== ``` ## Hints for resolution As suggested in the psycopg2 issue, use `copy_expert` ? Or maybe modify the `if` predicate in https://github.com/spotify/luigi/blob/master/luigi/contrib/postgres.py#L357, to choose the `else` case if `copy_from` is not happy... (Note: As a temporary solution, I've downgraded my  `psycopg2` version to `<2.9` to make it work.) ## Related issue See on psycopg2 project: https://github.com/psycopg/psycopg2/issues/1294
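A minimal sketch of the `copy_expert`-based approach suggested above, which keeps schema-qualified tables working on psycopg2 >= 2.9 because the COPY statement is written explicitly. Connection parameters, table, and column names are the placeholders from the example:

```python
import io

import psycopg2

conn = psycopg2.connect(dbname="my_db", user="my_username", password="my_password")
buf = io.StringIO("foo\t1\nbar\t2\n")

copy_sql = (
    "COPY my_schema.my_table (a, b) FROM STDIN "
    "WITH (FORMAT text, DELIMITER E'\\t')"
)
with conn.cursor() as cur:
    # copy_expert lets us control the COPY statement, so "schema.table" works.
    cur.copy_expert(copy_sql, buf)
conn.commit()
```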
1,732,801,325,000
null
Bug Report
[ "luigi/contrib/postgres.py:CopyToTable.copy" ]
[]
1
robotframework/robotframework
robotframework__robotframework-5265
6f58c00b10bd0b755657eb2a615b9a29a063f6ce
diff --git a/src/robot/output/pyloggingconf.py b/src/robot/output/pyloggingconf.py index fdccb16329d..b2300a5ad21 100644 --- a/src/robot/output/pyloggingconf.py +++ b/src/robot/output/pyloggingconf.py @@ -36,6 +36,7 @@ def robot_handler_enabled(level): return handler = RobotHandler() old_raise = logging.raiseExceptions + old_level = root.level root.addHandler(handler) logging.raiseExceptions = False set_level(level) @@ -43,6 +44,7 @@ def robot_handler_enabled(level): yield finally: root.removeHandler(handler) + root.setLevel(old_level) logging.raiseExceptions = old_raise
`logging` module log level is not restored after execution Hi, It seems like that the robot handler is changing the root logger log level via ``set_level`` function (``robot.output.pyloggingconf``) but the original root logger level is not restored back after the end of the ``robot.running.model.TestSuite.run`` method or ``robot.run`` module. The original context manager: ```python @contextmanager def robot_handler_enabled(level): root = logging.getLogger() if any(isinstance(h, RobotHandler) for h in root.handlers): yield return handler = RobotHandler() old_raise = logging.raiseExceptions root.addHandler(handler) logging.raiseExceptions = False set_level(level) try: yield finally: root.removeHandler(handler) logging.raiseExceptions = old_raise ``` Would it be necessary to restore the log level after changing it, in case the test script or any other third-party tool has already modified it for any reason? ```python @contextmanager def robot_handler_enabled(level): root = logging.getLogger() if any(isinstance(h, RobotHandler) for h in root.handlers): yield return handler = RobotHandler() old_raise = logging.raiseExceptions * -> old_level = logging.getLevelName(root.level) root.addHandler(handler) logging.raiseExceptions = False set_level(level) try: yield finally: root.removeHandler(handler) logging.raiseExceptions = old_raise * -> set_level(old_level) ```
Restoring the old configuration sounds good to me. Interested in creating a PR? Definitely! Thank you @pekkaklarck!
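A generic, self-contained sketch of the restore-on-exit pattern proposed in the issue (not Robot Framework's actual implementation): remember the root logger level before changing it and put it back in the `finally` block.

```python
import logging
from contextlib import contextmanager

@contextmanager
def root_level_restored(level):
    root = logging.getLogger()
    old_level = root.level
    root.setLevel(level)
    try:
        yield
    finally:
        root.setLevel(old_level)

logging.getLogger().setLevel(logging.WARNING)
with root_level_restored(logging.DEBUG):
    assert logging.getLogger().level == logging.DEBUG
assert logging.getLogger().level == logging.WARNING  # level restored after the block
```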
1,731,601,814,000
null
Bug Report
[ "src/robot/output/pyloggingconf.py:robot_handler_enabled" ]
[]
1
ShishirPatil/gorilla
ShishirPatil__gorilla-754
3b240551fe7ecb57ddd2c415b40872ce17dfb784
diff --git a/berkeley-function-call-leaderboard/bfcl/model_handler/oss_model/base_oss_handler.py b/berkeley-function-call-leaderboard/bfcl/model_handler/oss_model/base_oss_handler.py index c58812641..c3fc3c8e5 100644 --- a/berkeley-function-call-leaderboard/bfcl/model_handler/oss_model/base_oss_handler.py +++ b/berkeley-function-call-leaderboard/bfcl/model_handler/oss_model/base_oss_handler.py @@ -224,7 +224,7 @@ def _multi_threaded_inference(self, test_case, include_input_log: bool, include_ if "multi_turn" in test_case["id"]: model_responses, metadata = self.inference_multi_turn_prompting(test_case, include_input_log, include_state_log) else: - model_responses, metadata = self.inference_single_turn_prompting(test_case, include_input_log, include_state_log) + model_responses, metadata = self.inference_single_turn_prompting(test_case, include_input_log) except Exception as e: print("-" * 100) print(
[BFCL] bugs in function def _multi_threaded_inference(self, test_case, include_input_log: bool, include_state_log: bool): **Describe the issue** I encountered an error while running bfcl generate. The error occurred in the file gorilla/berkeley-function-call-leaderboard/bfcl/model_handler/oss_model/base_oss_handler.py in the function def _multi_threaded_inference(self, test_case, include_input_log: bool, include_state_log: bool):. The line model_responses, metadata = self.inference_single_turn_prompting(test_case, include_input_log, include_state_log) caused a runtime error. I discovered that the function inference_single_turn_prompting only accepts the parameters test_case and include_input_log. However, the code additionally passes include_state_log, which leads to the runtime error. When I removed include_state_log, the code ran successfully. **ID datapoint** 1. Datapoint / Model Handler permalink: 2. Issue: 2. Gorilla repo commit #: **What is the issue** The function inference_single_turn_prompting does not accept include_state_log as a parameter, causing a runtime error when it is passed. **Proposed Changes** { 'previous_datapoint':[], 'updated_datapoint':[] } **Additional context** Add any other context about the problem here.
1,731,442,886,000
null
Bug Report
[ "berkeley-function-call-leaderboard/bfcl/model_handler/oss_model/base_oss_handler.py:OSSHandler._multi_threaded_inference" ]
[]
1
Netflix/metaflow
Netflix__metaflow-2141
0bc4a9683ba67eedd756a8dc777916020587d5f7
diff --git a/metaflow/cli.py b/metaflow/cli.py index 1fc6a14953..a318b84a3e 100644 --- a/metaflow/cli.py +++ b/metaflow/cli.py @@ -282,31 +282,21 @@ def dump(obj, input_path, private=None, max_value_size=None, include=None, file= else: ds_list = list(datastore_set) # get all tasks - tasks_processed = False for ds in ds_list: - if ds is not None: - tasks_processed = True - echo( - "Dumping output of run_id=*{run_id}* " - "step=*{step}* task_id=*{task_id}*".format( - run_id=ds.run_id, step=ds.step_name, task_id=ds.task_id - ), - fg="magenta", - ) - - if file is None: - echo_always( - ds.format(**kwargs), - highlight="green", - highlight_bold=False, - err=False, - ) - else: - output[ds.pathspec] = ds.to_dict(**kwargs) + echo( + "Dumping output of run_id=*{run_id}* " + "step=*{step}* task_id=*{task_id}*".format( + run_id=ds.run_id, step=ds.step_name, task_id=ds.task_id + ), + fg="magenta", + ) - if not tasks_processed: - echo(f"No task(s) found for pathspec {input_path}", fg="red") - return + if file is None: + echo_always( + ds.format(**kwargs), highlight="green", highlight_bold=False, err=False + ) + else: + output[ds.pathspec] = ds.to_dict(**kwargs) if file is not None: with open(file, "wb") as f:
BUG: Data store error - AWS batch/step execution **Environment:** metaflow version: 2.12.29 Python 3.11 (Docker Image from public.ecr.aws/docker/library/python:3.11) Running on AWS Batch **Description:** Tested with version 2.12.28 and it runs successfully, with this latest version we get: Data store error: No completed attempts of the task was found for task `MyFlow/sfn-*/_parameters/*-params`. Maybe worth mentioning that we include a json file into `MyFlow` like: ``` json_config = IncludeFile( name="my_config", required=True, help="The Configuration", default=f"./{PARAMS_JSON}", ) ```
I also got this error when running on Argo Workflows. My flow does not use `IncludeFile`, just usual parameters. I can also confirm it happens for `2.12.29` but not `2.12.28`. And another confirmation with a step on Batch: 2.12.29 displays the error, 2.12.28 does not. I also got this error on Argo Workflows. Same problematic version (`2.12.29`) and the same fix (downgrade to `2.12.28`). We are triaging. Also, for quicker resolution/response, you can always ping us on chat.metaflow.org
1,731,502,413,000
null
Bug Report
[ "metaflow/cli.py:dump" ]
[]
1
ray-project/ray
ray-project__ray-49071
f498afc76dfafcf447106471e8df33578a6293be
diff --git a/rllib/examples/rl_modules/classes/action_masking_rlm.py b/rllib/examples/rl_modules/classes/action_masking_rlm.py index 992802ebb13a..626554a6434c 100644 --- a/rllib/examples/rl_modules/classes/action_masking_rlm.py +++ b/rllib/examples/rl_modules/classes/action_masking_rlm.py @@ -1,10 +1,11 @@ import gymnasium as gym -from typing import Dict, Optional, Tuple +from typing import Dict, Optional, Tuple, Union from ray.rllib.algorithms.ppo.torch.ppo_torch_rl_module import PPOTorchRLModule from ray.rllib.core.columns import Columns from ray.rllib.core.rl_module.apis.value_function_api import ValueFunctionAPI -from ray.rllib.core.rl_module.rl_module import RLModule, RLModuleConfig +from ray.rllib.core.rl_module.default_model_config import DefaultModelConfig +from ray.rllib.core.rl_module.rl_module import RLModule from ray.rllib.utils.annotations import override from ray.rllib.utils.framework import try_import_torch from ray.rllib.utils.torch_utils import FLOAT_MIN @@ -32,9 +33,17 @@ class ActionMaskingRLModule(RLModule): """ @override(RLModule) - def __init__(self, config: RLModuleConfig): + def __init__( + self, + observation_space: Optional[gym.Space] = None, + action_space: Optional[gym.Space] = None, + inference_only: Optional[bool] = None, + learner_only: bool = False, + model_config: Optional[Union[dict, DefaultModelConfig]] = None, + catalog_class=None, + ): # If observation space is not of type `Dict` raise an error. - if not isinstance(config.observation_space, gym.spaces.dict.Dict): + if not isinstance(observation_space, gym.spaces.dict.Dict): raise ValueError( "This RLModule requires the environment to provide a " "`gym.spaces.Dict` observation space of the form: \n" @@ -46,15 +55,22 @@ def __init__(self, config: RLModuleConfig): # the action mask and the original observation space, the 'RLModule' # receives only the `"observation"` element of the space, but not the # action mask. - self.observation_space_with_mask = config.observation_space - config.observation_space = config.observation_space["observations"] + self.observation_space_with_mask = observation_space + self.observation_space = observation_space["observations"] # Keeps track if observation specs have been checked already. self._checked_observations = False # The PPORLModule, in its constructor will build networks for the original # observation space (i.e. without the action mask). - super().__init__(config) + super().__init__( + observation_space=self.observation_space, + action_space=action_space, + inference_only=inference_only, + learner_only=learner_only, + model_config=model_config, + catalog_class=catalog_class, + ) class ActionMaskingTorchRLModule(ActionMaskingRLModule, PPOTorchRLModule): @@ -100,11 +116,13 @@ def _forward_train( @override(ValueFunctionAPI) def compute_values(self, batch: Dict[str, TensorType], embeddings=None): - # Preprocess the batch to extract the `observations` to `Columns.OBS`. - action_mask, batch = self._preprocess_batch(batch) - # NOTE: Because we manipulate the batch we need to add the `action_mask` - # to the batch to access them in `_forward_train`. - batch["action_mask"] = action_mask + # Check, if the observations are still in `dict` form. + if isinstance(batch[Columns.OBS], dict): + # Preprocess the batch to extract the `observations` to `Columns.OBS`. + action_mask, batch = self._preprocess_batch(batch) + # NOTE: Because we manipulate the batch we need to add the `action_mask` + # to the batch to access them in `_forward_train`. 
+ batch["action_mask"] = action_mask # Call the super's method to compute values for GAE. return super().compute_values(batch, embeddings)
[RLlib] action_masking_example.py fails - RLModule build fails with "unexpected keyword argument 'observation_space'" ### What happened + What you expected to happen Running the `action_masking_rl_module.py` example, which is shipped with the 2.39 release, fails at RLModule instantiation. > File "C:\Users\Philipp\anaconda3\envs\py311-raynew\Lib\site-packages\ray\rllib\core\rl_module\rl_module.py", line 100, in build > module = self.module_class( > ^^^^^^^^^^^^^^^^^^ > TypeError: ActionMaskingRLModule.__init__() got an unexpected keyword argument 'observation_space' I made no changes to the file locally. I skipped the CLI arg "--enable-new-api-stack", as for PPO the new API stack is enabled by default since release 2.39. ### Versions / Dependencies python==3.11.9 ray===2.39.0 torch==2.3.1+cu118 gymnasium==1.0.0 ### Reproduction script python ray/rllib/examples/rl_modules/action_masking_rl_module.py ### Issue Severity Medium: It is a significant difficulty but I can work around it.
1,733,320,563,000
null
Bug Report
[ "rllib/examples/rl_modules/classes/action_masking_rlm.py:ActionMaskingRLModule.__init__", "rllib/examples/rl_modules/classes/action_masking_rlm.py:ActionMaskingTorchRLModule.compute_values" ]
[]
2
ray-project/ray
ray-project__ray-48891
37aa0c66110fc235762c29612b90f1c73869e6cf
diff --git a/python/ray/scripts/scripts.py b/python/ray/scripts/scripts.py index 1f26a483a7aa..eed702bb7438 100644 --- a/python/ray/scripts/scripts.py +++ b/python/ray/scripts/scripts.py @@ -622,6 +622,15 @@ def debug(address: str, verbose: bool): type=str, help="a JSON serialized dictionary mapping label name to label value.", ) [email protected]( + "--include-log-monitor", + default=None, + type=bool, + help="If set to True or left unset, a log monitor will start monitoring " + "the log files of all processes on this node and push their contents to GCS. " + "Only one log monitor should be started per physical host to avoid log " + "duplication on the driver process.", +) @add_click_logging_options @PublicAPI def start( @@ -668,6 +677,7 @@ def start( ray_debugger_external, disable_usage_stats, labels, + include_log_monitor, ): """Start Ray processes manually on the local machine.""" @@ -757,6 +767,7 @@ def start( no_monitor=no_monitor, tracing_startup_hook=tracing_startup_hook, ray_debugger_external=ray_debugger_external, + include_log_monitor=include_log_monitor, ) if ray_constants.RAY_START_HOOK in os.environ:
[Core] Logs are duplicated if multiple nodes are running on same machine ### What happened + What you expected to happen I encountered this https://github.com/ray-project/ray/issues/10392 issue when I was experimenting with ray. This issue was closed due to the inability to provide a reproducible example. ### Versions / Dependencies ray[all] 2.38.0 MacOS ### Reproduction script ```python # example.py import ray @ray.remote def foo(): print('hello') if __name__ == '__main__': ray.init() handle = foo.remote() ray.get(handle) ``` ```shell RAY_ENABLE_WINDOWS_OR_OSX_CLUSTER=1 ray start --head RAY_ENABLE_WINDOWS_OR_OSX_CLUSTER=1 ray start --address='192.168.0.196:6379' python example.py ``` Output: 24-11-08 13:54:19,817 INFO worker.py:1601 -- Connecting to existing Ray cluster at address: 192.168.0.196:6379... 2024-11-08 13:54:19,831 INFO worker.py:1777 -- Connected to Ray cluster. View the dashboard at http://127.0.0.1:8265 (foo pid=45881) hello (foo pid=45881) hello ### Issue Severity Low: It annoys or frustrates me. A workaround is at: https://github.com/intel-analytics/BigDL-2.x/pull/2799/files I mitigated this issue by calling this function after starting worker node. Of course, it has many downsides and it's not the way to go in long term. ```python def kill_redundant_log_monitors(): """ Killing redundant log_monitor.py processes. If multiple ray nodes are started on the same machine, there will be multiple ray log_monitor.py processes monitoring the same log dir. As a result, the logs will be replicated multiple times and forwarded to driver. See issue https://github.com/ray-project/ray/issues/10392 """ import psutil import subprocess log_monitor_processes = [] for proc in psutil.process_iter(["name", "cmdline"]): try: cmdline = subprocess.list2cmdline(proc.cmdline()) except (psutil.AccessDenied, psutil.NoSuchProcess): continue is_log_monitor = "log_monitor.py" in cmdline if is_log_monitor: log_monitor_processes.append(proc) if len(log_monitor_processes) > 1: for proc in log_monitor_processes[1:]: proc.kill() ```
thank you for reporting the issue!
1,732,341,280,000
null
Bug Report
[ "python/ray/scripts/scripts.py:start" ]
[]
1
ray-project/ray
ray-project__ray-48793
4b4f3c669bc71027cbae99d5b12ec750b70d96d4
diff --git a/python/ray/setup-dev.py b/python/ray/setup-dev.py index 31d722b89984..d26d377a65f5 100755 --- a/python/ray/setup-dev.py +++ b/python/ray/setup-dev.py @@ -73,9 +73,27 @@ def do_link(package, force=False, skip_list=None, local_path=None): print("You don't have write permission " f"to {package_home}, using sudo:") sudo = ["sudo"] print(f"Creating symbolic link from \n {local_home} to \n {package_home}") + + # Preserve ray/serve/generated + if package == "serve": + # Copy generated folder to a temp dir + generated_folder = os.path.join(package_home, "generated") + temp_dir = "/tmp/ray/_serve/" + if not os.path.exists(temp_dir): + os.makedirs(temp_dir) + subprocess.check_call(["cp", "-r", generated_folder, temp_dir]) + subprocess.check_call(sudo + ["rm", "-rf", package_home]) subprocess.check_call(sudo + ["ln", "-s", local_home, package_home]) + # Move generated folder to local_home + if package == "serve": + tmp_generated_folder = os.path.join(temp_dir, "generated") + package_generated_folder = os.path.join(package_home, "generated") + subprocess.check_call( + ["mv", tmp_generated_folder, package_generated_folder] + ) + if __name__ == "__main__": parser = argparse.ArgumentParser(
ray/serve/generated file is missing after running setup-dev.py ### What happened + What you expected to happen When running `python setup-dev.py`, it creates a symlink for each Python package. However, since the generated folder is not part of the repository, creating the symbolic link for the `serve` package inadvertently overwrites the folder and the generated folder can't be found anymore. ### Versions / Dependencies Latest ### Reproduction script ``` pip install -U "ray[serve] @ https://s3-us-west-2.amazonaws.com/ray-wheels/latest/ray-3.0.0.dev0-cp39-cp39-macosx_11_0_arm64.whl" python python/ray/setup-dev.py ``` ### Issue Severity Medium: It is a significant difficulty but I can work around it.
1,731,977,311,000
null
Bug Report
[ "python/ray/setup-dev.py:do_link" ]
[]
1
ray-project/ray
ray-project__ray-48790
e70b37a435122609f88e02ce3377b8dd7f780e6b
diff --git a/python/ray/serve/api.py b/python/ray/serve/api.py index 182795889d47..13b92c7fcaae 100644 --- a/python/ray/serve/api.py +++ b/python/ray/serve/api.py @@ -474,6 +474,7 @@ def _run( else: client = _private_api.serve_start( http_options={"location": "EveryNode"}, + global_logging_config=logging_config, ) # Record after Ray has been started. ServeUsageTag.API_VERSION.record("v2")
[serve] logging_config specified in `serve.run` is not propagated cluster-wide ### Description Specifying `logging_config` in `serve.run(..., logging_config={...})` does not configure logging for the cluster, as is expected. This is because we don't propagate `logging_config` to `.serve_start(...)` here: https://github.com/ray-project/ray/blob/master/python/ray/serve/api.py#L475-L477 A simple workaround for now is, ```Python logging_config = {"log_level": "..."} serve.start(logging_config=logging_config) serve.run(logging_config=logging_config) ``` ### Use case This issue arose when trying to configure Serve logging holistically for tests.
1,731,975,189,000
null
Bug Report
[ "python/ray/serve/api.py:_run" ]
[]
1
ray-project/ray
ray-project__ray-48786
5cd8967f1c0c16d3ae5fedb8449d0d25dd4f9f3e
diff --git a/python/ray/autoscaler/_private/commands.py b/python/ray/autoscaler/_private/commands.py index 3c03738854f7..9a9b9d91cc2f 100644 --- a/python/ray/autoscaler/_private/commands.py +++ b/python/ray/autoscaler/_private/commands.py @@ -1153,16 +1153,15 @@ def exec_cluster( }, docker_config=config.get("docker"), ) - shutdown_after_run = False if cmd and stop: cmd = "; ".join( [ cmd, "ray stop", "ray teardown ~/ray_bootstrap_config.yaml --yes --workers-only", + "sudo shutdown -h now", ] ) - shutdown_after_run = True result = _exec( updater, @@ -1172,7 +1171,7 @@ def exec_cluster( port_forward=port_forward, with_output=with_output, run_env=run_env, - shutdown_after_run=shutdown_after_run, + shutdown_after_run=False, extra_screen_args=extra_screen_args, ) if tmux or screen:
[Ray Clusters] `ray exec ... --stop --tmux ...` doesn't work with both `--stop` and `--tmux` specified ### What happened + What you expected to happen When running `ray exec ...` with both `--stop` and `--tmux` flags, the `sudo shutdown -h now` command gets incorrectly left outside the tmux command and thus the machine is immediately shut down without the actual command finishing inside tmux. For example, consider the following command: ``` ray exec \ --verbose \ --start \ --stop \ --tmux \ --no-config-cache \ ./cluster-config.yml \ 'echo "start" && sleep 10 && echo "done"' ``` This results in (as printed out by the command runner): > Running `tmux new -d bash -c 'echo "start" && sleep 10 && echo "done"; ray stop; ray teardown ~/ray_bootstrap_config.yaml --yes --workers-only; exec bash'; sudo shutdown -h now` The first part `tmux new -d bash -c '...'` returns immediately and thus the `sudo shutdown -h now` gets executed immediately before the command inside tmux finishes. I would expect the shutdown command to run only after the actual command. ### Versions / Dependencies ``` $ ray --version ray, version 2.37.0 ``` ### Reproduction script `cluster-config.yml`: ```yml auth: ssh_user: ubuntu cluster_name: minimal provider: type: gcp region: us-east1 availability_zone: us-east1-b project_id: [project-id] # Globally unique project id ``` Command: ``` ray exec \ --verbose \ --start \ --stop \ --tmux \ --no-config-cache \ ./cluster-config.yml \ 'echo "start" && sleep 10 && echo "done"' ``` ### Issue Severity Medium: It is a significant difficulty but I can work around it.
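A small sketch of the fix's command construction (mirroring the patch above): the stop/teardown/shutdown steps are folded into the single command string that runs inside tmux, instead of being appended after the `tmux new -d ...` invocation returns.

```python
cmd = 'echo "start" && sleep 10 && echo "done"'
stop = True

if cmd and stop:
    # Everything, including the shutdown, becomes part of the command that the
    # remote shell (and therefore tmux) executes sequentially.
    cmd = "; ".join([
        cmd,
        "ray stop",
        "ray teardown ~/ray_bootstrap_config.yaml --yes --workers-only",
        "sudo shutdown -h now",
    ])

print(cmd)
```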
@hartikainen do you want to create a PR to fix it? We are happy to review the PR.
1,731,968,780,000
null
Bug Report
[ "python/ray/autoscaler/_private/commands.py:exec_cluster" ]
[]
1
ray-project/ray
ray-project__ray-48756
e70b37a435122609f88e02ce3377b8dd7f780e6b
diff --git a/python/ray/dashboard/modules/metrics/install_and_start_prometheus.py b/python/ray/dashboard/modules/metrics/install_and_start_prometheus.py index a65050212950..cf7cb31c3607 100644 --- a/python/ray/dashboard/modules/metrics/install_and_start_prometheus.py +++ b/python/ray/dashboard/modules/metrics/install_and_start_prometheus.py @@ -26,6 +26,9 @@ def get_system_info(): if architecture == "x86_64": # In the Prometheus filename, it's called amd64 architecture = "amd64" + elif architecture == "aarch64": + # In the Prometheus filename, it's called arm64 + architecture = "arm64" return os_type, architecture
[ray metrics launch-prometheus] Incorrect download URL generation for aarch64 architecture ### What happened + What you expected to happen <img width="802" alt="image" src="https://github.com/user-attachments/assets/e370ab29-db28-432b-b2c5-4c50e8e2dcf6"> - When executing the "ray metrics launch-prometheus" command on an aarch64 architecture system, the download URL is incorrectly generated, leading to a "not found" error. - This occurs because the command attempts to download the Prometheus build file from the GitHub releases page (https://github.com/prometheus/prometheus/releases) using "aarch64" in the URL, while Prometheus classifies this architecture as "arm64". ### Versions / Dependencies - Ray: rayproject/ray:nightly-aarch64 ### Reproduction script 1. Run "ray metrics launch-prometheus" on an aarch64 system 2. Observe that the command attempts to download a file with "aarch64" in the URL ### Issue Severity Low: It annoys or frustrates me.
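A minimal sketch of the architecture normalization the fix adds: `platform.machine()` reports `x86_64`/`aarch64`, while Prometheus release filenames use `amd64`/`arm64`.

```python
import platform

def prometheus_arch() -> str:
    arch = platform.machine()
    # Map Python's machine names onto the names used in Prometheus release assets.
    return {"x86_64": "amd64", "aarch64": "arm64"}.get(arch, arch)

print(prometheus_arch())  # e.g. "arm64" on an aarch64 host
```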
1,731,661,282,000
null
Bug Report
[ "python/ray/dashboard/modules/metrics/install_and_start_prometheus.py:get_system_info" ]
[]
1
optuna/optuna
optuna__optuna-5828
81d1d36cce68e7de0384951689cdbcd4ae8b6866
diff --git a/optuna/cli.py b/optuna/cli.py index 16fa3a6df1..7246a86e21 100644 --- a/optuna/cli.py +++ b/optuna/cli.py @@ -215,7 +215,10 @@ def _dump_table(records: list[dict[str, Any]], header: list[str]) -> str: for t in value_types: if t == ValueType.STRING: value_type = ValueType.STRING - max_width = max(len(header[column]), max(row[column].width() for row in rows)) + if len(rows) == 0: + max_width = len(header[column]) + else: + max_width = max(len(header[column]), max(row[column].width() for row in rows)) separator += "-" * (max_width + 2) + "+" if value_type == ValueType.NUMERIC: header_string += f" {header[column]:>{max_width}} |" @@ -228,7 +231,8 @@ def _dump_table(records: list[dict[str, Any]], header: list[str]) -> str: ret += separator + "\n" ret += header_string + "\n" ret += separator + "\n" - ret += "\n".join(rows_string) + "\n" + for row_string in rows_string: + ret += row_string + "\n" ret += separator + "\n" return ret
CLI for empty DB raises `ValueError` ### Expected behavior CLI for empty DB should output empty result, but the current implementation raises `ValueError`. ### Environment - Optuna version:4.2.0.dev - Python version:3.13.0 - OS:macOS-15.1-x86_64-i386-64bit-Mach-O - (Optional) Other libraries and their versions: ### Error messages, stack traces, or logs ```shell See below. ``` ### Steps to reproduce For empty DB (`tmp.db` does not exist before the command), the `optuna studies` command raises `ValueError`. ```bash $ optuna --storage sqlite:///tmp.db studies Traceback (most recent call last): File "/Users/naotomizuno/.pyenv/versions/optuna-3.13.0/bin/optuna", line 8, in <module> sys.exit(main()) ~~~~^^ File "/Users/naotomizuno/optuna/optuna/cli.py", line 991, in main return args.handler(args) ~~~~~~~~~~~~^^^^^^ File "/Users/naotomizuno/optuna/optuna/cli.py", line 463, in take_action _format_output( ~~~~~~~~~~~~~~^ records, self._study_list_header, parsed_args.format, parsed_args.flatten ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/naotomizuno/optuna/optuna/cli.py", line 258, in _format_output return _dump_table(values, header).strip() ~~~~~~~~~~~^^^^^^^^^^^^^^^^ File "/Users/naotomizuno/optuna/optuna/cli.py", line 222, in _dump_table max_width = max(len(header[column]), max(row[column].width() for row in rows)) ~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ValueError: max() iterable argument is empty ``` ### Additional context (optional) _No response_
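A standalone sketch of the guard in the fix: `max()` over an empty sequence raises `ValueError`, so the column width falls back to the header width when the study list is empty.

```python
header = ["name", "direction", "n_trials"]
rows = []  # empty storage, i.e. no studies yet

widths = []
for column in range(len(header)):
    if not rows:
        width = len(header[column])
    else:
        width = max(len(header[column]), max(len(row[column]) for row in rows))
    widths.append(width)

print(widths)  # header widths instead of a ValueError
```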
1,733,375,129,000
null
Bug Report
[ "optuna/cli.py:_dump_table" ]
[]
1
BerriAI/litellm
BerriAI__litellm-6915
fd2d4254bcd01e924ca4dded36ee4714c33734af
diff --git a/litellm/llms/fireworks_ai/chat/fireworks_ai_transformation.py b/litellm/llms/fireworks_ai/chat/fireworks_ai_transformation.py index 4d5b2d6eb3ba..10d8a5913328 100644 --- a/litellm/llms/fireworks_ai/chat/fireworks_ai_transformation.py +++ b/litellm/llms/fireworks_ai/chat/fireworks_ai_transformation.py @@ -25,6 +25,7 @@ class FireworksAIConfig: stop: Optional[Union[str, list]] = None response_format: Optional[dict] = None user: Optional[str] = None + logprobs: Optional[int] = None # Non OpenAI parameters - Fireworks AI only params prompt_truncate_length: Optional[int] = None @@ -44,6 +45,7 @@ def __init__( stop: Optional[Union[str, list]] = None, response_format: Optional[dict] = None, user: Optional[str] = None, + logprobs: Optional[int] = None, prompt_truncate_length: Optional[int] = None, context_length_exceeded_behavior: Optional[Literal["error", "truncate"]] = None, ) -> None: @@ -86,6 +88,7 @@ def get_supported_openai_params(self): "stop", "response_format", "user", + "logprobs", "prompt_truncate_length", "context_length_exceeded_behavior", ]
[Bug]: supported params are out of date for Fireworks AI ### What happened? When calling Fireworks models, litellm is complaining that logprobs is not supported, but it's actually supported by Fireworks AI. ref: https://docs.fireworks.ai/api-reference/post-completions ### Relevant log output _No response_ ### Twitter / LinkedIn details _No response_
1,732,617,202,000
null
Bug Report
[ "litellm/llms/fireworks_ai/chat/fireworks_ai_transformation.py:FireworksAIConfig.__init__", "litellm/llms/fireworks_ai/chat/fireworks_ai_transformation.py:FireworksAIConfig.get_supported_openai_params" ]
[]
2
matplotlib/matplotlib
matplotlib__matplotlib-29265
0406a56b051a371ccf81d2946126580651a645f2
diff --git a/lib/matplotlib/collections.py b/lib/matplotlib/collections.py index a78f1838357e..f18d5a4c3a8c 100644 --- a/lib/matplotlib/collections.py +++ b/lib/matplotlib/collections.py @@ -1612,14 +1612,13 @@ def __init__(self, segments, # Can be None. """ Parameters ---------- - segments : list of array-like - A sequence (*line0*, *line1*, *line2*) of lines, where each line is a list - of points:: + segments : list of (N, 2) array-like + A sequence ``[line0, line1, ...]`` where each line is a (N, 2)-shape + array-like containing points:: - lineN = [(x0, y0), (x1, y1), ... (xm, ym)] + line0 = [(x0, y0), (x1, y1), ...] - or the equivalent Mx2 numpy array with two columns. Each line - can have a different number of segments. + Each line can contain a different number of points. linewidths : float or list of float, default: :rc:`lines.linewidth` The width of each line in points. colors : :mpltype:`color` or list of color, default: :rc:`lines.color`
Improve LineCollection docstring further (M, 2) I would perhaps completely drop the "list of points" and just write ``` A sequence ``[line0, line1, ...]`` where each line is a (N, 2)-shape array-like of points:: line0 = [(x0, y0), (x1, y1), ...] Each line can... ``` _Originally posted by @anntzer in https://github.com/matplotlib/matplotlib/pull/26676#discussion_r1313026557_
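A small usage example of the input shape the reworded docstring describes: each line is an (N, 2) array-like of points, and lines may have different numbers of points.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.collections import LineCollection

line0 = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)]  # list of (x, y) points
line1 = np.array([[0.0, 1.0], [2.0, 1.0]])    # equivalent (N, 2) array

fig, ax = plt.subplots()
ax.add_collection(LineCollection([line0, line1], colors=["C0", "C1"]))
ax.set_xlim(0, 2)
ax.set_ylim(0, 1.5)
plt.show()
```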
1,733,753,596,000
null
Feature Request
[ "lib/matplotlib/collections.py:LineCollection.__init__" ]
[]
1
matplotlib/matplotlib
matplotlib__matplotlib-29254
671177c08613136fd5004092b8b56449d419c12a
diff --git a/lib/matplotlib/figure.py b/lib/matplotlib/figure.py index e5cf88131178..3d6f9a7f4c16 100644 --- a/lib/matplotlib/figure.py +++ b/lib/matplotlib/figure.py @@ -1382,8 +1382,8 @@ def align_xlabels(self, axs=None): Notes ----- - This assumes that ``axs`` are from the same `.GridSpec`, so that - their `.SubplotSpec` positions correspond to figure positions. + This assumes that all Axes in ``axs`` are from the same `.GridSpec`, + so that their `.SubplotSpec` positions correspond to figure positions. Examples -------- @@ -1444,8 +1444,8 @@ def align_ylabels(self, axs=None): Notes ----- - This assumes that ``axs`` are from the same `.GridSpec`, so that - their `.SubplotSpec` positions correspond to figure positions. + This assumes that all Axes in ``axs`` are from the same `.GridSpec`, + so that their `.SubplotSpec` positions correspond to figure positions. Examples -------- @@ -1500,8 +1500,8 @@ def align_titles(self, axs=None): Notes ----- - This assumes that ``axs`` are from the same `.GridSpec`, so that - their `.SubplotSpec` positions correspond to figure positions. + This assumes that all Axes in ``axs`` are from the same `.GridSpec`, + so that their `.SubplotSpec` positions correspond to figure positions. Examples -------- @@ -1544,6 +1544,11 @@ def align_labels(self, axs=None): matplotlib.figure.Figure.align_xlabels matplotlib.figure.Figure.align_ylabels matplotlib.figure.Figure.align_titles + + Notes + ----- + This assumes that all Axes in ``axs`` are from the same `.GridSpec`, + so that their `.SubplotSpec` positions correspond to figure positions. """ self.align_xlabels(axs=axs) self.align_ylabels(axs=axs)
[Bug]: Figure.align_labels() confused by GridSpecFromSubplotSpec ### Bug summary In a composite figure with nested gridspecs, `Figure.align_labels()` (and `align_xlabels()`, `align_ylabels()`) can end up aligning labels that should not intuitively be. Likewise with `align_titles()`. ### Code for reproduction ```Python fig = plt.figure(figsize=(6, 4)) gs0 = gridspec.GridSpec(nrows=1, ncols=2, figure=fig) gs00 = gs0[0].subgridspec(nrows=2, ncols=1, height_ratios=[8, 8]) gs01 = gs0[1].subgridspec(nrows=2, ncols=1, height_ratios=[9, 6]) left_axs = gs00.subplots() right_axs = gs01.subplots() left_axs[0].set_ylim(0, 0.02) # to force nontrivial alignment left_axs[0].set_ylabel('foo') left_axs[1].set_ylabel('bar') right_axs[0].set_ylabel('baz') right_axs[1].set_ylabel('qux') left_axs[1].set_title('title') right_axs[1].set_title('title') fig.align_labels() fig.align_titles() ``` ### Actual outcome All labels are aligned. Titles are aligned as well. ![image](https://github.com/user-attachments/assets/b78defcf-e742-4643-b8b9-1f66ce375997) ### Expected outcome Labels in separate columns are aligned, but labels in different columns should not be. Titles are not aligned: ![image](https://github.com/user-attachments/assets/795a7730-b2b3-442b-896f-cdde31b7ff97) ### Additional information Right now, the ylabel (xlabel) alignment code seems to attempt to align labels on Axes with the same column index (resp. row index) without checking if those indexes are for the same gridspec. To fix this, we should probably add a check that two Axes share the same gridspec (in addition to being in the same row/col) before we align their labels. (This would not allow label alignment across gridspecs, but if a user wants to align labels between two Axes, it seems reasonable to expect them to put the Axes in the same gridspec.) The same thing happens with align_titles(). For now, a workaround for labels is to call `Figure.align_labels()` separately for each sub-gridspec with the `axs` kwarg (as done for the expected outcome figure above). ### Operating system macOS 14.1.1 ### Matplotlib Version 3.9.2 ### Matplotlib Backend module://matplotlib_inline.backend_inline ### Python version 3.12.2 ### Jupyter version _No response_ ### Installation conda
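A sketch of the per-gridspec workaround mentioned above: call `Figure.align_labels()` once per sub-gridspec via the `axs` keyword, so labels are only aligned within their own column (setup mirrors the reproduction code; ticks and labels omitted for brevity).

```python
import matplotlib.pyplot as plt
from matplotlib import gridspec

fig = plt.figure(figsize=(6, 4))
gs0 = gridspec.GridSpec(nrows=1, ncols=2, figure=fig)
left_axs = gs0[0].subgridspec(nrows=2, ncols=1).subplots()
right_axs = gs0[1].subgridspec(nrows=2, ncols=1).subplots()

# Align each sub-gridspec on its own instead of the whole figure at once.
fig.align_labels(axs=left_axs)
fig.align_labels(axs=right_axs)
```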
This is definitely an issue, but not sure we would prioritize or accept a complicated fix for this. Note the docs say > Align the xlabels of subplots in the same subplot row if label alignment is being done automatically (i.e. the label position is not manually set). This issue with subgridspecs not having a clear hierarchy is why we introduced [subfigures](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/subfigures.html) in v3.4. Your code would look like: ``` import matplotlib.pyplot as plt import numpy as np fig = plt.figure(figsize=(6, 4), layout='constrained') sfigs = fig.subfigures(1, 2) left_axs = sfigs[0].subplots(2, 1, height_ratios=[8, 8]) right_axs = sfigs[1].subplots(2, 1, height_ratios=[9, 6]) left_axs[0].set_ylim(0, 0.02) # to force nontrivial alignment left_axs[0].set_ylabel('foo') left_axs[1].set_ylabel('bar') right_axs[0].set_ylabel('baz') right_axs[1].set_ylabel('qux') left_axs[1].set_title('title') right_axs[1].set_title('title') for sfig in sfigs: sfig.align_labels() sfig.align_titles() plt.show() ``` ![subplots](https://github.com/user-attachments/assets/d560a556-1699-4082-90ee-2c2f8814e749) I suppose one change we could entertain is `align_labels` and friends accepting a list of subplots to align. This works with `subplot_mosaic`, which would be my recommended approach ```python fig, axd = plt.subplot_mosaic(""" AC AC BC BD BD """, layout="constrained") axd["A"].set_ylim(0, 0.02) # to force nontrivial alignment axd["A"].set_ylabel('foo') axd["B"].set_ylabel('bar') axd["C"].set_ylabel('baz') axd["D"].set_ylabel('qux') axd["B"].set_title('title') axd["D"].set_title('title') fig.align_labels() fig.align_titles() ``` ![grafik](https://github.com/user-attachments/assets/16fa2680-9b88-436e-9bed-aa64ead1b7f6) I suggest we simply declare that `align_labels` and `align_titles` do not work with subgridspece. For sure you could use subplot_mosaic for a similar layout as well, though note that it is very hard to use it to get the height ratios exactly as requested. Depends on what your constraints actually are. Being slightly more specific in the docstring would be fine. Actually, https://matplotlib.org/devdocs/api/_as_gen/matplotlib.figure.Figure.align_titles.html https://matplotlib.org/devdocs/api/_as_gen/matplotlib.figure.Figure.align_xlabels.html https://matplotlib.org/devdocs/api/_as_gen/matplotlib.figure.Figure.align_ylabels.html all have a note that they assume all Axes are from the same GridSpec. That note is missing in https://matplotlib.org/devdocs/api/_as_gen/matplotlib.figure.Figure.align_labels.html and should be copied there.
1,733,617,371,000
null
Bug Report
[ "lib/matplotlib/figure.py:FigureBase.align_xlabels", "lib/matplotlib/figure.py:FigureBase.align_ylabels", "lib/matplotlib/figure.py:FigureBase.align_titles", "lib/matplotlib/figure.py:FigureBase.align_labels" ]
[]
4
matplotlib/matplotlib
matplotlib__matplotlib-29236
84fbae8eea3bb791ae9175dbe77bf5dee3368275
diff --git a/lib/matplotlib/animation.py b/lib/matplotlib/animation.py index 47f2f0f9515b..2be61284073a 100644 --- a/lib/matplotlib/animation.py +++ b/lib/matplotlib/animation.py @@ -492,8 +492,15 @@ def grab_frame(self, **savefig_kwargs): buf = BytesIO() self.fig.savefig( buf, **{**savefig_kwargs, "format": "rgba", "dpi": self.dpi}) - self._frames.append(Image.frombuffer( - "RGBA", self.frame_size, buf.getbuffer(), "raw", "RGBA", 0, 1)) + im = Image.frombuffer( + "RGBA", self.frame_size, buf.getbuffer(), "raw", "RGBA", 0, 1) + if im.getextrema()[3][0] < 255: + # This frame has transparency, so we'll just add it as is. + self._frame.append(im) + else: + # Without transparency, we switch to RGB mode, which converts to P mode a + # little better if needed (specifically, this helps with GIF output.) + self._frames.append(im.convert("RGB")) def finish(self): self._frames[0].save(
[Bug]: inconsistent ‘animation.FuncAnimation’ between display and save ### Bug summary when i want to save images to gif, it's inconsistent between display and save; It seems that the color information has been lost: ![temp](https://github.com/user-attachments/assets/f1a6c123-2533-4db0-8c14-4264cfe3345c) ### Code for reproduction ```Python def animation_test(): import matplotlib.pyplot as plt import matplotlib.animation as animation file = r'example.dat' num_frames = 72 nx = 8 ny = 9 data = np.fromfile(file, np.float32).reshape(num_frames, ny, nx) fig, ax = plt.subplots() img = data[0,] # plt.imshow(img) vmax = 100 vmin = 0 h = ax.imshow(img, cmap=plt.get_cmap('CMRmap_r'), origin='lower', interpolation='none', vmin=vmin, vmax=vmax, animated=True) ax.set_xticks(range(nx)) ax.set_xticklabels(range(1, nx + 1)) ax.set_yticks(range(ny)) ax.set_yticklabels(range(1, ny + 1)) fig.tight_layout() def update(frame): img = data[frame, ] h.set_array(img) return h, # create animation interval = 100 ani = animation.FuncAnimation(fig, update, frames=range(num_frames), interval=interval, blit=True) # ani = animation.FuncAnimation(fig, update, frames=frame_iter, interval=interval, blit=False, cache_frame_data=False) ani.save('example.gif', writer='pillow', fps=2, dpi=300) pass if __name__ == '__main__': animation_test() ``` ### Actual outcome above picture -> right ### Expected outcome above picture -> left ### Additional information _No response_ ### Operating system win10 ### Matplotlib Version '3.4.2' ### Matplotlib Backend _No response_ ### Python version 3.7.10 ### Jupyter version _No response_ ### Installation pip
Do you mind also including the data points that you plotted? I updated the code and uploaded the data file: [example.zip](https://github.com/user-attachments/files/17945028/example.zip) Thank you. I was able to reproduce the behavior now. It does seem like a bug. It may be because the PillowWriter is renormalizing the color values frame-by-frame instead of using the original normalization that is still there when you directly .show() the plot. In that case, keeping around a Normalization object that the PillowWriter can reference later would solve it. But I'll let the veterans figure decide if that's the issue. Um, can you provide a preliminary solution? :) So far I'm only good enough to triage bugs. :( Without having debugged this exactly, my guess is that this is a fundamental limitation of gif. From https://en.wikipedia.org/wiki/GIF > The format can contain up to [8 bits per pixel](https://en.wikipedia.org/wiki/8-bit_color), allowing a single image to reference its own [palette](https://en.wikipedia.org/wiki/Palette_(computing)) of up to 256 different colors chosen from the [24-bit](https://en.wikipedia.org/wiki/24-bit_color) [RGB color space](https://en.wikipedia.org/wiki/RGB_color_model). It can also represent multiple images in a file, which can be used for [animations](https://en.wikipedia.org/wiki/Animation), and allows a separate palette of up to 256 colors for each frame. These palette limitations make GIF less suitable for reproducing color photographs and other [images with color gradients](https://en.wikipedia.org/wiki/Image_gradient) but well-suited for simpler images such as graphics or logos with solid areas of color. This seems to be specific to the Pillow writer; it looks similar to the expected result when using ffmpeg. This is a similar issue to https://github.com/matplotlib/matplotlib/issues/25806; Pillow converts the RGBA image to P(alette) and loses some colours. This is due to the inherent limitations of the GIF format as @timhoffm has mentioned. See for example the upstream issue https://github.com/python-pillow/Pillow/issues/6832 I think your best bet is to either switch to ffmpeg, which does this better, or switch to a more flexible format like `webp`. Hmm, actually it looks like we can help Pillow a little bit here. If the image doesn't contain any transparency, then we can convert it to `RGB` mode, and Pillow's conversion from that mode to `P` mode is a bit better.
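A minimal Pillow-only illustration of the idea at the end of the discussion (and of the patch above): frames with no transparency are converted to RGB first, which quantizes to the GIF palette ("P" mode) more faithfully.

```python
from PIL import Image

im = Image.new("RGBA", (8, 8), (255, 0, 0, 255))  # fully opaque test frame

min_alpha = im.getextrema()[3][0]  # minimum of the alpha band
frame = im if min_alpha < 255 else im.convert("RGB")
print(frame.mode)  # "RGB": no transparency, so palette conversion is cleaner
```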
1,733,387,401,000
null
Bug Report
[ "lib/matplotlib/animation.py:PillowWriter.grab_frame" ]
[]
1
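For the matplotlib record above, the hints describe a workaround rather than show it: hand Pillow RGB frames instead of RGBA so its palette quantization loses fewer colours. A minimal sketch of that idea (not part of the dataset; it assumes the Agg backend and uses random placeholder data instead of `example.dat`):

```python
import matplotlib
matplotlib.use("Agg")  # assumption: Agg canvas, so buffer_rgba() is available
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image

fig, ax = plt.subplots()
frames = []
for _ in range(10):
    ax.clear()
    ax.imshow(np.random.rand(9, 8), cmap="CMRmap_r", vmin=0, vmax=1, origin="lower")
    fig.canvas.draw()
    rgba = np.asarray(fig.canvas.buffer_rgba())
    # Drop the alpha channel: Pillow's RGB -> P palette conversion handles this better
    # than RGBA -> P, which is where the colour loss in the report comes from.
    frames.append(Image.fromarray(np.ascontiguousarray(rgba[..., :3])))

frames[0].save("example.gif", save_all=True, append_images=frames[1:], duration=500, loop=0)
```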
tobymao/sqlglot
tobymao__sqlglot-4526
946cd4234a2ca403785b7c6a026a39ef604e8754
diff --git a/sqlglot/planner.py b/sqlglot/planner.py index 2e42b32c4..687bffb9f 100644 --- a/sqlglot/planner.py +++ b/sqlglot/planner.py @@ -201,11 +201,13 @@ def set_ops_and_aggs(step): aggregate.add_dependency(step) step = aggregate + else: + aggregate = None order = expression.args.get("order") if order: - if isinstance(step, Aggregate): + if aggregate and isinstance(step, Aggregate): for i, ordered in enumerate(order.expressions): if extract_agg_operands(exp.alias_(ordered.this, f"_o_{i}", quoted=True)): ordered.this.replace(exp.column(f"_o_{i}", step.name, quoted=True))
getting UnboundLocalError: cannot access local variable 'aggregate' where it is not associated with a value when running sqlglot.planner.Plan **Before you file an issue** - Make sure you specify the "read" dialect eg. `parse_one(sql, read="spark")` - Make sure you specify the "write" dialect eg. `ast.sql(dialect="duckdb")` - Check if the issue still exists on main **Fully reproducible code snippet** ``` import sqlglot import sqlglot.planner r = 'select suma from ( select sum(a) as suma from table1) order by suma' parsed = sqlglot.parse_one(r, dialect='snowflake') p = sqlglot.planner.Plan(parsed) ``` Throws: ``` File venv/lib/python3.11/site-packages/sqlglot/planner.py:14, in Plan.__init__(self, expression) 12 def __init__(self, expression: exp.Expression) -> None: 13 self.expression = expression.copy() ---> 14 self.root = Step.from_expression(self.expression) 15 self._dag: t.Dict[Step, t.Set[Step]] = {} File venv/lib/python3.11/site-packages/sqlglot/planner.py:213, in Step.from_expression(cls, expression, ctes) 210 if extract_agg_operands(exp.alias_(ordered.this, f"_o_{i}", quoted=True)): 211 ordered.this.replace(exp.column(f"_o_{i}", step.name, quoted=True)) --> 213 set_ops_and_aggs(aggregate) 215 sort = Sort() 216 sort.name = step.name UnboundLocalError: cannot access local variable 'aggregate' where it is not associated with a value ``` On sqlglot 25.33.0 dialect seems to be irrelevant, same error with Athena **Official Documentation** Please include links to official SQL documentation related to your issue.
You need to run the optimizer first: ```python >>> import sqlglot >>> import sqlglot.planner >>> >>> r = 'select suma from ( select sum(a) as suma from table1) order by suma' >>> parsed = sqlglot.parse_one(r, dialect='snowflake') >>> p = sqlglot.planner.Plan(parsed) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/georgesittas/Code/tobiko/sqlglot/sqlglot/planner.py", line 14, in __init__ self.root = Step.from_expression(self.expression) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/georgesittas/Code/tobiko/sqlglot/sqlglot/planner.py", line 213, in from_expression set_ops_and_aggs(aggregate) ^^^^^^^^^ UnboundLocalError: cannot access local variable 'aggregate' where it is not associated with a value >>> from sqlglot.optimizer import optimize >>> optimized = optimize(parsed) >>> optimized.sql() 'WITH "_q_0" AS (SELECT SUM("table1"."a") AS "suma" FROM "table1" AS "table1") SELECT "_q_0"."suma" AS "suma" FROM "_q_0" AS "_q_0" ORDER BY "suma" NULLS LAST' >>> >>> p = sqlglot.planner.Plan(optimized) >>> p Plan ---- - Sort: _q_0 (4376798720) Context: Key: - "suma" NULLS LAST Projections: - "_q_0"."suma" AS "suma" Dependencies: - Scan: _q_0 (4343324816) Context: Source: "_q_0" AS "_q_0" Projections: Dependencies: - Aggregate: _q_0 (4376798672) Context: Aggregations: - SUM("table1"."a") AS "suma" Projections: - "table1"."suma" Dependencies: - Scan: table1 (4376798816) Context: Source: "table1" AS "table1" Projections: ``` Looks like there's a code path where this _can_ happen, I think I may have made an incorrect assumption on needing the optimizer. Will double check and re-close if needed.
1,734,394,253,000
null
Bug Report
[ "sqlglot/planner.py:Step.from_expression" ]
[]
1
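The hints for the sqlglot record above also point at a workaround that is independent of the patch: run the optimizer before constructing a `Plan`. A small usage sketch (not from the dataset):

```python
import sqlglot
from sqlglot.optimizer import optimize
from sqlglot.planner import Plan

sql = "select suma from (select sum(a) as suma from table1) order by suma"
expression = sqlglot.parse_one(sql, dialect="snowflake")

# Optimizing first rewrites the query into qualified CTEs, which is the shape the
# planner originally assumed; the patch above makes the unoptimized path safe too.
plan = Plan(optimize(expression))
print(plan)
```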
tobymao/sqlglot
tobymao__sqlglot-4369
a665030323b200f3bed241bb928993b9807c4100
diff --git a/sqlglot/expressions.py b/sqlglot/expressions.py index f04cece117..b0c2a7f560 100644 --- a/sqlglot/expressions.py +++ b/sqlglot/expressions.py @@ -767,6 +767,7 @@ def and_( *expressions: t.Optional[ExpOrStr], dialect: DialectType = None, copy: bool = True, + wrap: bool = True, **opts, ) -> Condition: """ @@ -781,18 +782,22 @@ def and_( If an `Expression` instance is passed, it will be used as-is. dialect: the dialect used to parse the input expression. copy: whether to copy the involved expressions (only applies to Expressions). + wrap: whether to wrap the operands in `Paren`s. This is true by default to avoid + precedence issues, but can be turned off when the produced AST is too deep and + causes recursion-related issues. opts: other options to use to parse the input expressions. Returns: The new And condition. """ - return and_(self, *expressions, dialect=dialect, copy=copy, **opts) + return and_(self, *expressions, dialect=dialect, copy=copy, wrap=wrap, **opts) def or_( self, *expressions: t.Optional[ExpOrStr], dialect: DialectType = None, copy: bool = True, + wrap: bool = True, **opts, ) -> Condition: """ @@ -807,12 +812,15 @@ def or_( If an `Expression` instance is passed, it will be used as-is. dialect: the dialect used to parse the input expression. copy: whether to copy the involved expressions (only applies to Expressions). + wrap: whether to wrap the operands in `Paren`s. This is true by default to avoid + precedence issues, but can be turned off when the produced AST is too deep and + causes recursion-related issues. opts: other options to use to parse the input expressions. Returns: The new Or condition. """ - return or_(self, *expressions, dialect=dialect, copy=copy, **opts) + return or_(self, *expressions, dialect=dialect, copy=copy, wrap=wrap, **opts) def not_(self, copy: bool = True): """ @@ -6921,6 +6929,7 @@ def _combine( operator: t.Type[Connector], dialect: DialectType = None, copy: bool = True, + wrap: bool = True, **opts, ) -> Expression: conditions = [ @@ -6930,10 +6939,10 @@ def _combine( ] this, *rest = conditions - if rest: + if rest and wrap: this = _wrap(this, Connector) for expression in rest: - this = operator(this=this, expression=_wrap(expression, Connector)) + this = operator(this=this, expression=_wrap(expression, Connector) if wrap else expression) return this @@ -7316,7 +7325,11 @@ def condition( def and_( - *expressions: t.Optional[ExpOrStr], dialect: DialectType = None, copy: bool = True, **opts + *expressions: t.Optional[ExpOrStr], + dialect: DialectType = None, + copy: bool = True, + wrap: bool = True, + **opts, ) -> Condition: """ Combine multiple conditions with an AND logical operator. @@ -7330,16 +7343,23 @@ def and_( If an Expression instance is passed, this is used as-is. dialect: the dialect used to parse the input expression. copy: whether to copy `expressions` (only applies to Expressions). + wrap: whether to wrap the operands in `Paren`s. This is true by default to avoid + precedence issues, but can be turned off when the produced AST is too deep and + causes recursion-related issues. **opts: other options to use to parse the input expressions. 
Returns: The new condition """ - return t.cast(Condition, _combine(expressions, And, dialect, copy=copy, **opts)) + return t.cast(Condition, _combine(expressions, And, dialect, copy=copy, wrap=wrap, **opts)) def or_( - *expressions: t.Optional[ExpOrStr], dialect: DialectType = None, copy: bool = True, **opts + *expressions: t.Optional[ExpOrStr], + dialect: DialectType = None, + copy: bool = True, + wrap: bool = True, + **opts, ) -> Condition: """ Combine multiple conditions with an OR logical operator. @@ -7353,16 +7373,23 @@ def or_( If an Expression instance is passed, this is used as-is. dialect: the dialect used to parse the input expression. copy: whether to copy `expressions` (only applies to Expressions). + wrap: whether to wrap the operands in `Paren`s. This is true by default to avoid + precedence issues, but can be turned off when the produced AST is too deep and + causes recursion-related issues. **opts: other options to use to parse the input expressions. Returns: The new condition """ - return t.cast(Condition, _combine(expressions, Or, dialect, copy=copy, **opts)) + return t.cast(Condition, _combine(expressions, Or, dialect, copy=copy, wrap=wrap, **opts)) def xor( - *expressions: t.Optional[ExpOrStr], dialect: DialectType = None, copy: bool = True, **opts + *expressions: t.Optional[ExpOrStr], + dialect: DialectType = None, + copy: bool = True, + wrap: bool = True, + **opts, ) -> Condition: """ Combine multiple conditions with an XOR logical operator. @@ -7376,12 +7403,15 @@ def xor( If an Expression instance is passed, this is used as-is. dialect: the dialect used to parse the input expression. copy: whether to copy `expressions` (only applies to Expressions). + wrap: whether to wrap the operands in `Paren`s. This is true by default to avoid + precedence issues, but can be turned off when the produced AST is too deep and + causes recursion-related issues. **opts: other options to use to parse the input expressions. Returns: The new condition """ - return t.cast(Condition, _combine(expressions, Xor, dialect, copy=copy, **opts)) + return t.cast(Condition, _combine(expressions, Xor, dialect, copy=copy, wrap=wrap, **opts)) def not_(expression: ExpOrStr, dialect: DialectType = None, copy: bool = True, **opts) -> Not:
Excessive Recursion in Query Optimization with Multiple OR Clauses ## Context We are encountering an issue where a query with a high number of OR operators is causing excessive recursion during the optimization phase. The resulting recursion depth leads to stack overflow errors. As a temporary workaround, we increased the stack size limit. Despite the number of entries not being particularly high, we suspect that something in the optimization process is causing the recursion depth to increase unexpectedly. ## Reproducible example code snippet ```python import sqlglot import sqlglot.expressions as expressions from sqlglot.expressions import column is_equal_list = ['a'] * 500 is_equal = expressions.false() for value in is_equal_list: is_equal = is_equal.or_(column("a_column").eq(value)) ``` If you try to access `is_equal`, you'll receive an error: ```python is_equal # throws #sqlglot/expressions.py", line 256, in is_leaf # return not any(isinstance(v, (Expression, list)) for v in self.args.values()) # ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ #RecursionError: maximum recursion depth exceeded ``` The default recursion depth is 1000.
This is because to_s is showing the full nested tree. If you do is_equal.sql() it should be ok
1,731,329,118,000
null
Bug Report
[ "sqlglot/expressions.py:Expression.and_", "sqlglot/expressions.py:Expression.or_", "sqlglot/expressions.py:_combine", "sqlglot/expressions.py:xor" ]
[]
4
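With the patch above applied, long condition chains can be built without the extra `Paren` wrappers that deepen the AST. A hedged usage sketch (not from the dataset; shown with a shorter chain than the 500-term one in the report):

```python
import sqlglot.expressions as exp

condition = exp.false()
for value in ["a"] * 50:
    # wrap=False (added by the patch) skips the Paren nodes, keeping the tree shallower
    condition = condition.or_(exp.column("a_column").eq(value), wrap=False)

print(condition.sql()[:80])
```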
flet-dev/flet
flet-dev__flet-4554
be58db6a4120596c45172933432678105785d94a
diff --git a/sdk/python/packages/flet-cli/src/flet_cli/utils/project_dependencies.py b/sdk/python/packages/flet-cli/src/flet_cli/utils/project_dependencies.py index 218705576..f39561bfc 100644 --- a/sdk/python/packages/flet-cli/src/flet_cli/utils/project_dependencies.py +++ b/sdk/python/packages/flet-cli/src/flet_cli/utils/project_dependencies.py @@ -14,29 +14,69 @@ def get_poetry_dependencies( if poetry_dependencies is None: return None - def format_dependency_version(dependency: str, version_value: Any): + def format_dependency_version(dependency_name: str, dependency_value: Any): + sep = "@" + value = "" suffix = "" - if isinstance(version_value, dict): - version = version_value["version"] - markers = version_value.get("markers") + + if isinstance(dependency_value, dict): + version = dependency_value.get("version") + if version: + sep = "==" + value = version + else: + git_url = dependency_value.get("git") + if git_url: + value = ( + f"git+{git_url}" if not git_url.startswith("git@") else git_url + ) + rev = ( + dependency_value.get("branch") + or dependency_value.get("rev") + or dependency_value.get("tag") + ) + if rev: + value = f"{value}@{rev}" + subdirectory = dependency_value.get("subdirectory") + if subdirectory: + value = f"{value}#subdirectory={subdirectory}" + else: + path = dependency_value.get("path") + if path: + value = path + dependency_name = "" + sep = "" + else: + url = dependency_value.get("url") + if url: + value = url + dependency_name = "" + sep = "" + else: + raise Exception( + f"Unsupported dependency specification: {dependency_name} = {dependency_value}" + ) + + # markers - common for all + markers = dependency_value.get("markers") if markers is not None: suffix = f";{markers}" else: - version = version_value + value = dependency_value + sep = "==" - sep = "==" - if version.startswith("^"): + if value.startswith("^"): sep = ">=" - version = version[1:] - elif version.startswith("~"): + value = value[1:] + elif value.startswith("~"): sep = "~=" - version = version[1:] - return f"{dependency}~={version[1:]}" - elif "<" in version or ">" in version: + value = value[1:] + return f"{dependency_name}~={value[1:]}" + elif "<" in value or ">" in value: sep = "" - version = version.replace(" ", "") + value = value.replace(" ", "") - return f"{dependency}{sep}{version}{suffix}" + return f"{dependency_name}{sep}{value}{suffix}" dependencies: set[str] = { format_dependency_version(dependency, version)
`flet build` fails to parse file and git dependencies from `tool.poetry.dependencies` in `pyproject.toml` ### Discussed in https://github.com/flet-dev/flet/discussions/4546 <div type='discussions-op-text'> <sup>Originally posted by **amcraig** December 11, 2024</sup> ### Question Hi all, I've tried including my python package(not on PyPi) through both relative paths to the whl/tar.gz and via git in both `requirements.txt` and in `pyproject.toml` (including poetry) but any attempts I do fail due to a `distutils Module not found` error or `KeyError: 'version'`. Does anyone have a guaranteed way to provide a local/private python package to Flet in the build process? Thanks! ### Code sample ```python ##### Pyproject Poetry [tool.poetry] name = "file_tracker" version = "0.5.0" description = "redacted" authors = ["amcraig"] [tool.poetry.dependencies] python = "^3.10" private_package = { git = "https://github.com/private/package.git" } flet = "^0.25.1" ##### requirements.txt python==3.10 flet datateam @ git+https://github.com/private/package ``` ### Error message _No response_ ### ------------------------------------------------------ - [X] I have searched for answers to my question both in the issues and in previous discussions.</div>
1,734,034,325,000
null
Bug Report
[ "sdk/python/packages/flet-cli/src/flet_cli/utils/project_dependencies.py:get_poetry_dependencies" ]
[]
1
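To make the mapping in the flet patch above concrete, here is a rough standalone sketch (not the actual flet helper; the function name is illustrative) of how a Poetry-style git dependency table becomes a pip-style requirement string:

```python
def git_requirement(name: str, spec: dict) -> str:
    # Mirrors the branch of the patched helper that handles `{ git = ... }` tables.
    url = spec["git"]
    value = url if url.startswith("git@") else f"git+{url}"
    rev = spec.get("branch") or spec.get("rev") or spec.get("tag")
    if rev:
        value = f"{value}@{rev}"
    if spec.get("subdirectory"):
        value = f"{value}#subdirectory={spec['subdirectory']}"
    return f"{name}@{value}"

print(git_requirement("private_package", {"git": "https://github.com/private/package.git"}))
# -> private_package@git+https://github.com/private/package.git
```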
flet-dev/flet
flet-dev__flet-4452
f62b5066ab79f3b99241e9c234baeac71fd60f95
diff --git a/sdk/python/packages/flet-cli/src/flet_cli/commands/build.py b/sdk/python/packages/flet-cli/src/flet_cli/commands/build.py index 0dcd8539a..212157549 100644 --- a/sdk/python/packages/flet-cli/src/flet_cli/commands/build.py +++ b/sdk/python/packages/flet-cli/src/flet_cli/commands/build.py @@ -1271,6 +1271,7 @@ def package_python_app(self): assert self.options assert self.get_pyproject assert self.python_app_path + assert self.package_app_path assert self.build_dir assert self.flutter_dir @@ -1282,7 +1283,7 @@ def package_python_app(self): "run", "serious_python:main", "package", - str(self.python_app_path), + str(self.package_app_path), "--platform", self.package_platform, ]
`flet build` creates bundle but running it gives `ImportError: No module named main` error ### Duplicate Check - [X] I have searched the [opened issues](https://github.com/flet-dev/flet/issues) and there are no duplicates ### Describe the bug Traceback (most recent call last): File "<string>", line 47, in <module> File "<frozen runpy>", line 222, in run_module File "<frozen runpy>", line 142, in _get_module_details ImportError: No module named main ### Code sample <details open><summary>Code</summary> ```python print(error) ``` </details> ### To reproduce ... ### Expected behavior _No response_ ### Screenshots / Videos <details open> <summary>Captures</summary> ![Screenshot_2024-11-28-23-36-20-36_92b64b2a7aa6eb3771ed6e18d0029815](https://github.com/user-attachments/assets/e22fcd4a-eaa1-448f-8c61-51164c6cbae7) </details> ### Operating System Windows ### Operating system details 11 ### Flet version 0.25 ### Regression No, it isn't ### Suggestions _No response_ ### Logs <details open><summary>Logs</summary> ```console [Paste your logs here] ``` </details> ### Additional details _No response_
What do you have in pyproject.toml and what is the file structure of your project? ### `pyproject.toml` ```toml [project] name = "weather-app" version = "0.1.0" description = "" readme = "README.md" requires-python = ">=3.8" dependencies = [ "flet" ] [tool.flet] # org name in reverse domain name notation, e.g. "com.mycompany". # Combined with project.name to build bundle ID for iOS and Android apps org = "com.mycompany" # project display name that is used as an app title on Android and iOS home screens, # shown in window titles and about app dialogs on desktop. product = "Weather Forcast" # company name to display in about app dialogs company = "Flet" # copyright text to display in about app dialogs copyright = "Copyright (C) 2024 by Flet" [tool.flet.app] path = "src" ``` ### `Structure` ``` W:\dev-mobile\dev mobile (flet)\api-app\weather-app>flet build apk [09:17:25] Created Flutter bootstrap project from gh:flet-dev/flet-build-template with ref 0.25.0 ✅ Customized app icons and splash images ✅ [09:18:41] Generated app icons ✅ [09:18:51] Generated splash screens ✅ [09:21:48] Packaged Python app ✅ [09:30:39] Built .apk for Android ✅ Copied build to build\apk directory ✅ Successfully built your .apk for Android! 🥳 Find it in build\apk directory. 📁 ``` ![Screenshot 2024-11-29 091419](https://github.com/user-attachments/assets/784e36c1-4450-4269-b8f7-a42cf28c11b4) ![Screenshot 2024-11-29 091455](https://github.com/user-attachments/assets/3bcecb4c-51b6-4b33-a2f6-7259916c2477) ![Screenshot 2024-11-29 091512](https://github.com/user-attachments/assets/332c1a6d-4b46-484f-974f-ea914158c952) I'm having the same problem. I created a basic project and built it right away, and it runs fine with flet run, but I get an error after building. win11, 0.25.0 Commands used ``` mkdir flet-test cd flet-test flet create . flet run flet build windows -vv cd flet build/windows flet-test.exe ``` Error Description ``` Traceback (most recent call last): File "<string>", line 47, in <module> File "<frozen runpy>", line 222, in run_module File "<frozen runpy>", line 142, in _get_module_details ImportError: No module named main ``` I filmed a video of the process. https://github.com/user-attachments/assets/c6033991-4dbb-4967-9206-2f8833cd2640 The terminal log is here. [ps-log.txt](https://github.com/user-attachments/files/17961394/ps-log.txt) Also, apps written with existing FLETs are experiencing the same error after updating from 0.24.1 to 0.25.0. Thanks for the additional details. I'm on it.
1,732,904,343,000
null
Bug Report
[ "sdk/python/packages/flet-cli/src/flet_cli/commands/build.py:Command.package_python_app" ]
[]
1
flet-dev/flet
flet-dev__flet-4425
97f42c4602c7ee63b571c29af555d4eca6203659
diff --git a/sdk/python/packages/flet/src/flet/core/date_picker.py b/sdk/python/packages/flet/src/flet/core/date_picker.py index 4abb8122d..09d735f86 100644 --- a/sdk/python/packages/flet/src/flet/core/date_picker.py +++ b/sdk/python/packages/flet/src/flet/core/date_picker.py @@ -173,24 +173,6 @@ def __init__( def _get_control_name(self): return "datepicker" - def before_update(self): - super().before_update() - assert ( - self.__first_date is None - or self.__last_date is None - or self.__first_date <= self.__last_date - ), "last_date must be on or after first_date" - assert ( - self.__first_date is None - or self.__current_date is None - or self.__first_date <= self.__current_date - ), "current_date must be on or after first_date" - assert ( - self.__last_date is None - or self.__current_date is None - or self.__last_date >= self.__current_date - ), "last_date must be on or after current_date" - @deprecated( reason="Use Page.open() method instead.", version="0.23.0",
Opening DatePicker returns AssertionError ### Duplicate Check - [X] I have searched the [opened issues](https://github.com/flet-dev/flet/issues) and there are no duplicates ### Describe the bug When running example from docs, clicking on Pick Date button, which opens DatePicker, returns an error: Future exception was never retrieved future: <Future finished exception=AssertionError('last_date must be on or after current_date')> Traceback (most recent call last): File "/Users/inesa/.pyenv/versions/3.12.6/lib/python3.12/concurrent/futures/thread.py", line 58, in run result = self.fn(*self.args, **self.kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/inesa/projects/flet-dev/flet/sdk/python/packages/flet/src/flet/core/page.py", line 948, in wrapper handler(*args) File "/Users/inesa/projects/flet-dev/flet/sdk/python/playground/date-picker-test.py", line 19, in <lambda> on_click=lambda e: page.open( ^^^^^^^^^^ File "/Users/inesa/projects/flet-dev/flet/sdk/python/packages/flet/src/flet/core/page.py", line 1429, in open self.__offstage.update() File "/Users/inesa/projects/flet-dev/flet/sdk/python/packages/flet/src/flet/core/control.py", line 324, in update self.__page.update(self) File "/Users/inesa/projects/flet-dev/flet/sdk/python/packages/flet/src/flet/core/page.py", line 723, in update r = self.__update(*controls) ^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/inesa/projects/flet-dev/flet/sdk/python/packages/flet/src/flet/core/page.py", line 835, in __update commands, added_controls, removed_controls = self.__prepare_update(*controls) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/inesa/projects/flet-dev/flet/sdk/python/packages/flet/src/flet/core/page.py", line 851, in __prepare_update control.build_update_commands( File "/Users/inesa/projects/flet-dev/flet/sdk/python/packages/flet/src/flet/core/control.py", line 498, in build_update_commands innerCmds = ctrl._build_add_commands( ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/inesa/projects/flet-dev/flet/sdk/python/packages/flet/src/flet/core/control.py", line 552, in _build_add_commands command = self._build_command(False) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/inesa/projects/flet-dev/flet/sdk/python/packages/flet/src/flet/core/control.py", line 579, in _build_command self.before_update() File "/Users/inesa/projects/flet-dev/flet/sdk/python/packages/flet/src/flet/core/date_picker.py", line 189, in before_update self.__last_date is None ### Code sample <details open><summary>Code</summary> ```python import datetime import flet as ft def main(page: ft.Page): page.horizontal_alignment = ft.CrossAxisAlignment.CENTER def handle_change(e): page.add(ft.Text(f"Date changed: {e.control.value.strftime('%Y-%m-%d')}")) def handle_dismissal(e): page.add(ft.Text(f"DatePicker dismissed")) page.add( ft.ElevatedButton( "Pick date", icon=ft.icons.CALENDAR_MONTH, on_click=lambda e: page.open( ft.DatePicker( first_date=datetime.datetime(year=2023, month=10, day=1), last_date=datetime.datetime(year=2024, month=10, day=1), on_change=handle_change, on_dismiss=handle_dismissal, ) ), ) ) ft.app(main) ``` </details> ### To reproduce 1. Run Code sample 2. 
Click on "Pick date" button -> see the error in output ### Expected behavior expected to see the DatePicker ### Screenshots / Videos <details open> <summary>Captures</summary> [Upload media here] </details> ### Operating System macOS ### Operating system details macOs ### Flet version flet==0.25.0.dev3750 ### Regression Yes, it used to work in a previous Flet version (please specify the version in additional details) ### Suggestions _No response_ ### Logs <details open><summary>Logs</summary> ```console [Paste your logs here] ``` </details> ### Additional details _No response_
1,732,622,526,000
null
Bug Report
[ "sdk/python/packages/flet/src/flet/core/date_picker.py:DatePicker.before_update" ]
[]
1
flet-dev/flet
flet-dev__flet-4388
5fb877b3a3f886f3475cd8ebca1cee52472d0ef7
diff --git a/sdk/python/packages/flet-cli/src/flet_cli/commands/build.py b/sdk/python/packages/flet-cli/src/flet_cli/commands/build.py index a61657801..a097bb454 100644 --- a/sdk/python/packages/flet-cli/src/flet_cli/commands/build.py +++ b/sdk/python/packages/flet-cli/src/flet_cli/commands/build.py @@ -14,6 +14,12 @@ from flet.utils import copy_tree, is_windows, slugify from flet.utils.platform_utils import get_bool_env_var from flet.version import update_version +from packaging import version +from rich.console import Console +from rich.style import Style +from rich.table import Column, Table +from rich.theme import Theme + from flet_cli.commands.base import BaseCommand from flet_cli.utils.merge import merge_dict from flet_cli.utils.project_dependencies import ( @@ -21,11 +27,6 @@ get_project_dependencies, ) from flet_cli.utils.pyproject_toml import load_pyproject_toml -from packaging import version -from rich.console import Console -from rich.style import Style -from rich.table import Column, Table -from rich.theme import Theme if is_windows(): from ctypes import windll @@ -55,6 +56,8 @@ def __init__(self, parser: argparse.ArgumentParser) -> None: self.build_dir = None self.flutter_dir: Optional[Path] = None self.flutter_exe = None + self.skip_flutter_doctor = get_bool_env_var("FLET_CLI_SKIP_FLUTTER_DOCTOR") + self.no_rich_output = get_bool_env_var("FLET_CLI_NO_RICH_OUTPUT") self.current_platform = platform.system() self.platforms = { "windows": { @@ -511,58 +514,65 @@ def add_arguments(self, parser: argparse.ArgumentParser) -> None: default=False, help="disables rich output and uses plain text instead", ) + parser.add_argument( + "--skip-flutter-doctor", + action="store_true", + default=False, + help="whether to skip running Flutter doctor in failed builds", + ) def handle(self, options: argparse.Namespace) -> None: - self.verbose = options.verbose - - # get `flutter` and `dart` executables from PATH - self.flutter_exe = self.find_flutter_batch("flutter") - self.dart_exe = self.find_flutter_batch("dart") - - if self.verbose > 1: - console.log("Flutter executable:", self.flutter_exe) - console.log("Dart executable:", self.dart_exe) - - self.no_rich_output = options.no_rich_output or get_bool_env_var( - "FLET_CLI_NO_RICH_OUTPUT" - ) - self.emojis = { - "checkmark": "[green]OK[/]" if self.no_rich_output else "✅", - "loading": "" if self.no_rich_output else "⏳", - "success": "" if self.no_rich_output else "🥳", - "directory": "" if self.no_rich_output else "📁", - } target_platform = options.target_platform.lower() - # platform check - if ( - self.current_platform - not in self.platforms[target_platform]["can_be_run_on"] - or options.show_platform_matrix - ): - can_build_message = ( - "can't" - if self.current_platform - not in self.platforms[target_platform]["can_be_run_on"] - else "can" - ) - # replace "Darwin" with "macOS" for user-friendliness - self.current_platform = ( - "macOS" if self.current_platform == "Darwin" else self.current_platform - ) - # highlight the current platform in the build matrix table - self.platform_matrix_table.rows[ - list(self.platforms.keys()).index(target_platform) - ].style = "bold red1" - console.log(self.platform_matrix_table) - - message = f"You {can_build_message} build [cyan]{target_platform}[/] on [magenta]{self.current_platform}[/]." - self.cleanup(1, message) - with console.status( f"[bold blue]Initializing {target_platform} build... 
", spinner="bouncingBall", ) as self.status: - from cookiecutter.main import cookiecutter + # get `flutter` and `dart` executables from PATH + self.flutter_exe = self.find_flutter_batch("flutter") + self.dart_exe = self.find_flutter_batch("dart") + + self.verbose = options.verbose + if self.verbose > 1: + console.log("Flutter executable:", self.flutter_exe) + console.log("Dart executable:", self.dart_exe) + + self.no_rich_output = self.no_rich_output or options.no_rich_output + self.skip_flutter_doctor = ( + self.skip_flutter_doctor or options.skip_flutter_doctor + ) + self.emojis = { + "checkmark": "[green]OK[/]" if self.no_rich_output else "✅", + "loading": "" if self.no_rich_output else "⏳", + "success": "" if self.no_rich_output else "🥳", + "directory": "" if self.no_rich_output else "📁", + } + + # platform check + if ( + self.current_platform + not in self.platforms[target_platform]["can_be_run_on"] + or options.show_platform_matrix + ): + can_build_message = ( + "can't" + if self.current_platform + not in self.platforms[target_platform]["can_be_run_on"] + else "can" + ) + # replace "Darwin" with "macOS" for user-friendliness + self.current_platform = ( + "macOS" + if self.current_platform == "Darwin" + else self.current_platform + ) + # highlight the current platform in the build matrix table + self.platform_matrix_table.rows[ + list(self.platforms.keys()).index(target_platform) + ].style = "bold red1" + console.log(self.platform_matrix_table) + + message = f"You {can_build_message} build [cyan]{target_platform}[/] on [magenta]{self.current_platform}[/]." + self.cleanup(1, message) package_platform = self.platforms[target_platform]["package_platform"] @@ -874,6 +884,8 @@ def handle(self, options: argparse.Namespace) -> None: f"[bold blue]Creating Flutter bootstrap project from {template_url} with ref {template_ref} {self.emojis['loading']}... ", ) try: + from cookiecutter.main import cookiecutter + cookiecutter( template=template_url, checkout=template_ref, @@ -1516,8 +1528,11 @@ def cleanup( console.log(flutter_msg, style=error_style) # windows has been reported to raise encoding errors when running `flutter doctor` - # so skip running `flutter doctor` if no_rich_output is True and platform is Windows - if not (self.no_rich_output and self.current_platform == "Windows"): + # so skip running it if no_rich_output is True + if not ( + (self.no_rich_output and self.current_platform == "Windows") + or self.skip_flutter_doctor + ): self.run_flutter_doctor() sys.exit(exit_code)
Add an option to skip running `flutter doctor` when `flet build` fails ### Discussed in https://github.com/flet-dev/flet/discussions/4359 <div type='discussions-op-text'> <sup>Originally posted by **DFNJKD-98** November 13, 2024</sup> ### Question Because my country cannot freely access GitHub, Flutter Doctor always reports errors. This hinders me from using Flet to build Windows applications. Can I skip the Flutter Doctor phase when using the 'flet build windows' command? I'm not sure if this is necessary. ### Code sample _No response_ ### Error message ```bash As shown below, the Flutter Doctor command may encounter some issues, but building a Windows application directly with Flutter is not a problem. I can use Flutter normally in VSCode. Doctor summary (to see all details, run flutter doctor -v): [√] Flutter (Channel stable, 3.22.1, on Microsoft Windows [version 1.0.19045.4046], local zh CN) [√] Windows Version (Installed version of Windows is version 10 or higher) [√] Chrome - develop for the web [√] Visual Studio - develop Windows apps (Visual Studio Community 2022 17.7.3) [!] Flutter IDE Support (No supported IDEs installed) [√] Connected device (3 available) [!] Network resources X An HTTP error occurred while checking " https://github.com/ ": Connection closed before full header was received ! Doctor found issues in 2 categories. ``` </div>
1,731,860,800,000
null
Feature Request
[ "sdk/python/packages/flet-cli/src/flet_cli/commands/build.py:Command.__init__", "sdk/python/packages/flet-cli/src/flet_cli/commands/build.py:Command.handle", "sdk/python/packages/flet-cli/src/flet_cli/commands/build.py:Command.add_arguments", "sdk/python/packages/flet-cli/src/flet_cli/commands/build.py:Command.cleanup" ]
[]
4
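The patch above exposes the new behaviour through both a CLI flag and an environment variable. A small invocation sketch (not from the dataset; it assumes the flet CLI is on PATH and that the env var is read as a boolean):

```python
import os
import subprocess

env = dict(os.environ, FLET_CLI_SKIP_FLUTTER_DOCTOR="1")  # option 1: environment variable
subprocess.run(["flet", "build", "windows", "--skip-flutter-doctor"], env=env)  # option 2: CLI flag
```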
flet-dev/flet
flet-dev__flet-4384
5fb877b3a3f886f3475cd8ebca1cee52472d0ef7
diff --git a/sdk/python/packages/flet/src/flet/core/icon.py b/sdk/python/packages/flet/src/flet/core/icon.py index 8944f28bf..5af67aa7b 100644 --- a/sdk/python/packages/flet/src/flet/core/icon.py +++ b/sdk/python/packages/flet/src/flet/core/icon.py @@ -130,6 +130,7 @@ def _get_control_name(self): return "icon" def before_update(self): + super().before_update() self._set_attr_json("shadows", self.__shadows) # name
Icon rotation doesn't work with flet-0.25.0.dev3711 ### Duplicate Check - [X] I have searched the [opened issues](https://github.com/flet-dev/flet/issues) and there are no duplicates ### Describe the bug Icon rotation doesn't work anymore with flet 0.25 IconButton and other controls are OK ### Code sample <details open><summary>Code</summary> flet 0.24.1 : ```python import flet as ft import math def main(page: ft.Page): page.add( ft.Icon( name=ft.icons.FAVORITE, ), ft.Icon( name=ft.icons.FAVORITE, size=60, rotate=ft.Rotate(0.5*math.pi) ), ) ft.app(target=main) ``` flet-0.25.0.dev3711 : ```python import flet as ft import math def main(page: ft.Page): page.add( ft.Icon( name=ft.Icons.FAVORITE, ), ft.Icon( name=ft.Icons.FAVORITE, size=60, rotate=ft.Rotate(0.5*math.pi) ), ) ft.app(target=main) ``` </details> ### To reproduce Nothing particular : With the same code, icon is rotated with flet 0.24.1 and lower, not rotated with 0.25 ### Expected behavior Icon rotate.. ### Screenshots / Videos <details open> <summary>Captures</summary> 0.24.1 : ![image](https://github.com/user-attachments/assets/620d5cd8-c867-4cd7-8ecf-73837665cb68) flet-0.25.0.dev3711 ![image](https://github.com/user-attachments/assets/25a13b91-3de3-4831-86cf-06b201c28312) </details> ### Operating System Windows ### Operating system details windows 11 24H2 ### Flet version flet-0.25.0.dev3711 ### Regression Yes, it used to work in a previous Flet version (please specify the version in additional details) ### Suggestions _No response_ ### Logs <details open><summary>Logs</summary> ```console [Paste your logs here] ``` </details> ### Additional details _No response_
There are a lot of issues we are facing after 0.24.1
1,731,816,481,000
null
Bug Report
[ "sdk/python/packages/flet/src/flet/core/icon.py:Icon.before_update" ]
[]
1
flet-dev/flet
flet-dev__flet-4373
5fb877b3a3f886f3475cd8ebca1cee52472d0ef7
diff --git a/sdk/python/packages/flet/src/flet/core/markdown.py b/sdk/python/packages/flet/src/flet/core/markdown.py index 118e4e1c4..77a761c88 100644 --- a/sdk/python/packages/flet/src/flet/core/markdown.py +++ b/sdk/python/packages/flet/src/flet/core/markdown.py @@ -400,7 +400,12 @@ def before_update(self): self._set_attr_json("codeStyle", self.__code_style) self._set_attr_json("codeStyleSheet", self.__code_style_sheet) self._set_attr_json("mdStyleSheet", self.__md_style_sheet) - self._set_attr_json("codeTheme", self.__code_theme) + self._set_attr_json( + "codeTheme", + self.__code_theme.value + if isinstance(self.__code_theme, MarkdownCodeTheme) + else self.__code_theme, + ) def _get_children(self): if self.__img_error_content is not None: @@ -483,11 +488,13 @@ def md_style_sheet(self, value: Optional[MarkdownStyleSheet]): # code_theme @property - def code_theme(self) -> Optional[MarkdownCodeTheme]: + def code_theme(self) -> Optional[Union[MarkdownCodeTheme, MarkdownCustomCodeTheme]]: return self.__code_theme @code_theme.setter - def code_theme(self, value: Optional[MarkdownCodeTheme]): + def code_theme( + self, value: Optional[Union[MarkdownCodeTheme, MarkdownCustomCodeTheme]] + ): self.__code_theme = value # code_style
Regression in `Markdown.code_theme` when using `MarkdownCodeTheme` enum A custom theme works great although the only issue I faced was setting `code_theme` with `ft.MarkdownCodeTheme.ATOM_ONE_DARK` or any other value but **only** using `ft.MarkdownTheme` class the error it throws is: # Code ```python import flet as ft data = """ ```python class MyClass(object): def __init__(self): pass def greet(self): print("Hello World") ``` """ def main(page: ft.Page): page.add( ft.Markdown( value=data, code_theme=ft.MarkdownCodeTheme.ATOM_ONE_DARK, extension_set=ft.MarkdownExtensionSet.GITHUB_WEB, ) ) ft.app(main) ``` <details> <summary>Error</summary> ```bash (.venv) PS D:\CodingFolder\myCodingFilesPy\Flet\leraning> flet run .\theme\ --web http://127.0.0.1:54949 D:\CodingFolder\myCodingFilesPy\Flet\leraning\.venv\Lib\site-packages\websockets\legacy\server.py:1185: DeprecationWarning: remove second argument of ws_handler warnings.warn("remove second argument of ws_handler", DeprecationWarning) Unhandled error processing page session 49apGh5JlJ9Wkk0V: Traceback (most recent call last): File "D:\CodingFolder\myCodingFilesPy\Flet\leraning\.venv\Lib\site-packages\flet_web\fastapi\flet_app.py", line 141, in __on_session_created await asyncio.get_running_loop().run_in_executor( File "C:\ProgramData\anaconda3\Lib\concurrent\futures\thread.py", line 58, in run result = self.fn(*self.args, **self.kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\CodingFolder\myCodingFilesPy\Flet\leraning\theme\main.py", line 12, in main page.add( File "D:\CodingFolder\myCodingFilesPy\Flet\leraning\.venv\Lib\site-packages\flet\core\page.py", line 737, in add r = self.__update(self) ^^^^^^^^^^^^^^^^^^^ File "D:\CodingFolder\myCodingFilesPy\Flet\leraning\.venv\Lib\site-packages\flet\core\page.py", line 835, in __update commands, added_controls, removed_controls = self.__prepare_update(*controls) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\CodingFolder\myCodingFilesPy\Flet\leraning\.venv\Lib\site-packages\flet\core\page.py", line 851, in __prepare_update control.build_update_commands( File "D:\CodingFolder\myCodingFilesPy\Flet\leraning\.venv\Lib\site-packages\flet\core\control.py", line 497, in build_update_commands innerCmds = ctrl._build_add_commands( ^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\CodingFolder\myCodingFilesPy\Flet\leraning\.venv\Lib\site-packages\flet\core\control.py", line 562, in _build_add_commands childCmd = control._build_add_commands( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\CodingFolder\myCodingFilesPy\Flet\leraning\.venv\Lib\site-packages\flet\core\control.py", line 551, in _build_add_commands command = self._build_command(False) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\CodingFolder\myCodingFilesPy\Flet\leraning\.venv\Lib\site-packages\flet\core\control.py", line 578, in _build_command self.before_update() File "D:\CodingFolder\myCodingFilesPy\Flet\leraning\.venv\Lib\site-packages\flet\core\markdown.py", line 403, in before_update self._set_attr_json("codeTheme", self.__code_theme) File "D:\CodingFolder\myCodingFilesPy\Flet\leraning\.venv\Lib\site-packages\flet\core\control.py", line 189, in _set_attr_json nv = self._convert_attr_json( ^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\CodingFolder\myCodingFilesPy\Flet\leraning\.venv\Lib\site-packages\flet\core\control.py", line 197, in _convert_attr_json json.dumps(value, cls=EmbedJsonEncoder, separators=(",", ":")) File "C:\ProgramData\anaconda3\Lib\json\__init__.py", line 238, in dumps **kw).encode(obj) ^^^^^^^^^^^ File 
"D:\CodingFolder\myCodingFilesPy\Flet\leraning\.venv\Lib\site-packages\flet\core\embed_json_encoder.py", line 59, in encode return super().encode(self._convert_enums(o)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\ProgramData\anaconda3\Lib\json\encoder.py", line 200, in encode chunks = self.iterencode(o, _one_shot=True) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\ProgramData\anaconda3\Lib\json\encoder.py", line 258, in iterencode return _iterencode(o, 0) ^^^^^^^^^^^^^^^^^ File "D:\CodingFolder\myCodingFilesPy\Flet\leraning\.venv\Lib\site-packages\flet\core\embed_json_encoder.py", line 49, in default obj_as_dict = self._convert_enums(obj.__dict__) ^^^^^^^^^^^^ AttributeError: 'mappingproxy' object has no attribute '__dict__' ``` </details> ![image](https://github.com/user-attachments/assets/79ad892b-aa1b-4e68-9570-108835a23f54) ### I think there's something wrong with `ft.MarkdownTheme` class because when I used `"atom-one-dark"` it worked. When I run the same code but with `flet` version, `0.24.1` it worked as expected. Kindly raise issue regarding this. _Originally posted by @tanmay-bhatgare in https://github.com/flet-dev/flet/issues/4342#issuecomment-2476781933_
1,731,605,637,000
null
Bug Report
[ "sdk/python/packages/flet/src/flet/core/markdown.py:Markdown.before_update", "sdk/python/packages/flet/src/flet/core/markdown.py:Markdown.code_theme" ]
[]
2
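After the patch above, `code_theme` accepts the `MarkdownCodeTheme` enum again; the workaround reported in the issue was to pass the plain string. A brief sketch of both forms (not from the dataset):

```python
import flet as ft

code = 'Some text with a fenced ```python block``` inside'

def main(page: ft.Page):
    page.add(
        ft.Markdown(value=code, code_theme=ft.MarkdownCodeTheme.ATOM_ONE_DARK),  # enum, fixed by the patch
        ft.Markdown(value=code, code_theme="atom-one-dark"),                      # plain string, the reported workaround
    )

ft.app(main)
```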
flet-dev/flet
flet-dev__flet-4340
0f7b14b787eb4249b93e2abd5da23cc953b1e091
diff --git a/sdk/python/packages/flet/src/flet/core/colors.py b/sdk/python/packages/flet/src/flet/core/colors.py index 0fb695878..66b4293e8 100644 --- a/sdk/python/packages/flet/src/flet/core/colors.py +++ b/sdk/python/packages/flet/src/flet/core/colors.py @@ -37,9 +37,12 @@ import random from enum import Enum, EnumMeta -from typing import Dict, List, Optional, Union +from typing import TYPE_CHECKING, Dict, List, Optional, Union from warnings import warn +if TYPE_CHECKING: + from flet.core.types import ColorValue + from flet.utils import deprecated @@ -56,9 +59,16 @@ def __getattribute__(self, item): class colors(str, Enum, metaclass=ColorsDeprecated): - def with_opacity(self, opacity: Union[int, float]) -> str: + @staticmethod + @deprecated( + reason="Use Colors.with_opacity() method instead.", + version="0.25.0", + delete_version="0.28.0", + ) + def with_opacity(opacity: Union[int, float], color: "ColorValue") -> str: assert 0 <= opacity <= 1, "opacity must be between 0 and 1" - return f"{self.value},{opacity}" + color_str = color.value if isinstance(color, Enum) else color + return f"{color_str},{opacity}" @staticmethod def random(): @@ -416,21 +426,11 @@ def random_color(): class Colors(str, Enum): - def with_opacity(self, opacity: Union[int, float]) -> str: - """ - Returns the color with the specified opacity. - - Args: - opacity: The opacity value, which must be between 0 and 1. - - Returns: - A string representing the color value with the specified opacity appended. - - Raises: - AssertionError: If the opacity is not between 0 and 1 (inclusive). - """ + @staticmethod + def with_opacity(opacity: Union[int, float], color: "ColorValue") -> str: assert 0 <= opacity <= 1, "opacity must be between 0 and 1" - return f"{self.value},{opacity}" + color_str = color.value if isinstance(color, Enum) else color + return f"{color_str},{opacity}" @staticmethod def random(
Using `ft.colors.with_opacity` returns exception, should be warning ### Duplicate Check - [X] I have searched the [opened issues](https://github.com/flet-dev/flet/issues) and there are no duplicates ### Describe the bug This code used to work before: ``` tooltip_bgcolor=ft.Colors.with_opacity(0.5, ft.Colors.GREY_300) ``` Now it returns exception: ERROR:asyncio:Future exception was never retrieved future: <Future finished exception=TypeError("'<=' not supported between instances of 'int' and 'Colors'")> Traceback (most recent call last): File "/Users/inesa/.pyenv/versions/3.12.6/lib/python3.12/concurrent/futures/thread.py", line 58, in run result = self.fn(*self.args, **self.kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/inesa/Library/Caches/pypoetry/virtualenvs/controls-gallery-WumZyC-d-py3.12/lib/python3.12/site-packages/flet/core/page.py", line 944, in wrapper handler(*args) File "/Users/inesa/projects/flet-dev/examples/python/apps/controls-gallery/main.py", line 36, in route_change gallery_view.display_control_examples(route_list[0], route_list[1]) File "/Users/inesa/projects/flet-dev/examples/python/apps/controls-gallery/components/gallery_view.py", line 30, in display_control_examples self.examples_view.display( File "/Users/inesa/projects/flet-dev/examples/python/apps/controls-gallery/components/examples_view.py", line 55, in display content=example.example(), ^^^^^^^^^^^^^^^^^ File "/Users/inesa/projects/flet-dev/examples/python/apps/controls-gallery/examples/charts/barchart/01_barchart_1.py", line 87, in example tooltip_bgcolor=ft.colors.with_opacity(0.5, ft.Colors.GREY_300), ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/inesa/Library/Caches/pypoetry/virtualenvs/controls-gallery-WumZyC-d-py3.12/lib/python3.12/site-packages/flet/core/colors.py", line 60, in with_opacity assert 0 <= opacity <= 1, "opacity must be between 0 and 1" ### Code sample <details open><summary>Code</summary> ```python import flet as ft def main(page: ft.Page): chart = ft.BarChart( bar_groups=[ ft.BarChartGroup( x=0, bar_rods=[ ft.BarChartRod( from_y=0, to_y=40, width=40, color=ft.colors.AMBER, tooltip="Apple", border_radius=0, ), ], ), ft.BarChartGroup( x=1, bar_rods=[ ft.BarChartRod( from_y=0, to_y=100, width=40, color=ft.colors.BLUE, tooltip="Blueberry", border_radius=0, ), ], ), ft.BarChartGroup( x=2, bar_rods=[ ft.BarChartRod( from_y=0, to_y=30, width=40, color=ft.colors.RED, tooltip="Cherry", border_radius=0, ), ], ), ft.BarChartGroup( x=3, bar_rods=[ ft.BarChartRod( from_y=0, to_y=60, width=40, color=ft.colors.ORANGE, tooltip="Orange", border_radius=0, ), ], ), ], border=ft.border.all(1, ft.colors.GREY_400), left_axis=ft.ChartAxis( labels_size=40, title=ft.Text("Fruit supply"), title_size=40 ), bottom_axis=ft.ChartAxis( labels=[ ft.ChartAxisLabel( value=0, label=ft.Container(ft.Text("Apple"), padding=10) ), ft.ChartAxisLabel( value=1, label=ft.Container(ft.Text("Blueberry"), padding=10) ), ft.ChartAxisLabel( value=2, label=ft.Container(ft.Text("Cherry"), padding=10) ), ft.ChartAxisLabel( value=3, label=ft.Container(ft.Text("Orange"), padding=10) ), ], labels_size=40, ), horizontal_grid_lines=ft.ChartGridLines( color=ft.colors.GREY_300, width=1, dash_pattern=[3, 3] ), tooltip_bgcolor=ft.colors.with_opacity(0.5, ft.colors.GREY_300), max_y=110, interactive=True, expand=True, ) page.add(chart) ft.app(main) ``` </details> ### To reproduce Run the repro code -> BarChart is not displayed, error in in log: Unhandled error processing page session : Traceback (most recent call last): File 
"/Users/inesa/projects/flet-dev/flet/sdk/python/packages/flet/src/flet/app.py", line 247, in on_session_created await asyncio.get_running_loop().run_in_executor( File "/Users/inesa/.pyenv/versions/3.12.6/lib/python3.12/concurrent/futures/thread.py", line 58, in run result = self.fn(*self.args, **self.kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/inesa/projects/flet-dev/flet/sdk/python/playground/bar_chart_test.py", line 84, in main tooltip_bgcolor=ft.colors.with_opacity(0.5, ft.colors.GREY_300), ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/inesa/projects/flet-dev/flet/sdk/python/packages/flet/src/flet/core/colors.py", line 60, in with_opacity assert 0 <= opacity <= 1, "opacity must be between 0 and 1" ^^^^^^^^^^^^^^^^^ TypeError: '<=' not supported between instances of 'int' and 'colors` ### Expected behavior Expected to see a warning ### Screenshots / Videos <details open> <summary>Captures</summary> [Upload media here] </details> ### Operating System macOS ### Operating system details 15 ### Flet version 0.25.0.dev3679 ### Regression Yes, it used to work in a previous Flet version (please specify the version in additional details) ### Suggestions _No response_ ### Logs <details open><summary>Logs</summary> ```console [Paste your logs here] ``` </details> ### Additional details _No response_
1,731,103,482,000
null
Bug Report
[ "sdk/python/packages/flet/src/flet/core/colors.py:colors.with_opacity", "sdk/python/packages/flet/src/flet/core/colors.py:Colors.with_opacity" ]
[]
2
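The flet patch above turns `with_opacity` into a static method taking `(opacity, color)`, matching how the examples and docs call it. A tiny usage sketch (not from the dataset; the exact returned string is an assumption based on the patch):

```python
import flet as ft

tooltip_bgcolor = ft.Colors.with_opacity(0.5, ft.Colors.GREY_300)
print(tooltip_bgcolor)  # expected to be something like "grey300,0.5"
```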
flet-dev/flet
flet-dev__flet-4314
3b7241e3f5024ee47f3bcba3092a9e71e56bfe42
diff --git a/sdk/python/packages/flet/src/flet/core/segmented_button.py b/sdk/python/packages/flet/src/flet/core/segmented_button.py index 01f303af6..7c1934834 100644 --- a/sdk/python/packages/flet/src/flet/core/segmented_button.py +++ b/sdk/python/packages/flet/src/flet/core/segmented_button.py @@ -203,12 +203,11 @@ def before_update(self): assert ( len(self.selected) < 2 or self.allow_multiple_selection ), "allow_multiple_selection must be True for selected to have more than one item" - if self.__style is None: - self.__style = ButtonStyle() - self.__style.side = self._wrap_attr_dict(self.__style.side) - self.__style.shape = self._wrap_attr_dict(self.__style.shape) - self.__style.padding = self._wrap_attr_dict(self.__style.padding) - self._set_attr_json("style", self.__style) + style = self.__style or ButtonStyle() + style.side = self._wrap_attr_dict(style.side) + style.shape = self._wrap_attr_dict(style.shape) + style.padding = self._wrap_attr_dict(style.padding) + self._set_attr_json("style", style) def _get_children(self): for segment in self.segments:
user customized style for SegmentedButton not wrapped ### Duplicate Check - [X] I have searched the [opened issues](https://github.com/flet-dev/flet/issues) and there are no duplicates ### Describe the bug user cutomized style is not working in SegmentButton ![image](https://github.com/user-attachments/assets/7c5f6f9c-9c29-43b5-a4d4-87f52d0d420d) ### Code sample <details open><summary>Code</summary> ```python import flet as ft def main(page: ft.Page): page.window.width = 800 page.window.height = 500 page.window.alignment = ft.alignment.center def handle_change(e): print("on_change data : " + str(e.data)) def on_error(e) -> None: print("error: ", e.data) page.on_error = on_error style = ft.ButtonStyle(shape=ft.RoundedRectangleBorder(radius=5)) page.add( ft.SegmentedButton( on_change=handle_change, selected_icon=ft.Icon(ft.icons.ONETWOTHREE), selected={"1", "4"}, allow_multiple_selection=True, style=style, segments=[ ft.Segment( value="1", label=ft.Text("1"), icon=ft.Icon(ft.icons.LOOKS_ONE), ), ft.Segment( value="2", label=ft.Text("2"), icon=ft.Icon(ft.icons.LOOKS_TWO), ), ft.Segment( value="3", label=ft.Text("3"), icon=ft.Icon(ft.icons.LOOKS_3), ), ft.Segment( value="4", label=ft.Text("4"), icon=ft.Icon(ft.icons.LOOKS_4), ), ], ) ) ft.app(main) ``` </details> ### To reproduce pass customized style to SegmentedButton (style has customized attribute value, e.g. `style = ft.ButtonStyle(shape=ft.RoundedRectangleBorder(radius=5))`) ### Expected behavior SegmentedButton has RoundedRectangleBorder ### Screenshots / Videos <details open> <summary>Captures</summary> [Upload media here] ![Code_2eBKOg7ESB](https://github.com/user-attachments/assets/e8389abd-23e6-4328-89b4-0ebbb9ac0200) </details> ### Operating System Windows ### Operating system details Windows 11 24H2 ### Flet version 0.24.1 ### Regression No, it isn't ### Suggestions _No response_ ### Logs <details open><summary>Logs</summary> ```console [Paste your logs here] ``` </details> ### Additional details _No response_
1,730,865,480,000
null
Bug Report
[ "sdk/python/packages/flet/src/flet/core/segmented_button.py:SegmentedButton.before_update" ]
[]
1
explodinggradients/ragas
explodinggradients__ragas-1627
c3a183167b375bf5e22c8c8959d212e6b58103be
diff --git a/src/ragas/metrics/_noise_sensitivity.py b/src/ragas/metrics/_noise_sensitivity.py index a6a903b68..4074d0edc 100644 --- a/src/ragas/metrics/_noise_sensitivity.py +++ b/src/ragas/metrics/_noise_sensitivity.py @@ -101,7 +101,6 @@ async def _decompose_answer_into_statements( sentences_with_index = { i: sentence for i, sentence in enumerate(sentences) - if sentence.strip().endswith(".") } statements_simplified = await self.statement_prompt.generate(
In NoiseSensitivity, the `.` should be deleted as the split condition. [ ] I checked the [documentation](https://docs.ragas.io/) and related resources and couldn't find an answer to my question. **Your Question** After the sentences are split, I think `.` should no longer be used as a filtering condition, since many other languages do not end sentences with `.`. Just follow the result of `sentence_segmenter`. **Code Examples** https://github.com/explodinggradients/ragas/blob/main/src/ragas/metrics/_noise_sensitivity.py#L104C13-L104C46 **Additional context** I will provide a PR.
1,730,804,031,000
null
Feature Request
[ "src/ragas/metrics/_noise_sensitivity.py:NoiseSensitivity._decompose_answer_into_statements" ]
[]
1
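The one-line ragas patch above removes a period-based filter that silently dropped sentences in languages that do not end sentences with `.`. A small before/after sketch of that filtering logic (not from the dataset; the sentences are made up):

```python
sentences = ["这是第一句话", "Second sentence without a trailing period", "Third one."]

# Old behaviour: only sentences ending with "." survive the filter.
before = {i: s for i, s in enumerate(sentences) if s.strip().endswith(".")}
# New behaviour: keep whatever the sentence segmenter returned.
after = {i: s for i, s in enumerate(sentences)}

print(before)  # {2: 'Third one.'}
print(after)   # all three sentences are kept, regardless of punctuation
```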
scikit-learn/scikit-learn
scikit-learn__scikit-learn-30241
551d56c254197c4b6ad63974d749824ed2c7bc58
diff --git a/sklearn/utils/estimator_checks.py b/sklearn/utils/estimator_checks.py index f5d542d9a59fc..da1300817b148 100644 --- a/sklearn/utils/estimator_checks.py +++ b/sklearn/utils/estimator_checks.py @@ -2076,11 +2076,11 @@ def check_regressor_multioutput(name, estimator): assert y_pred.dtype == np.dtype("float64"), ( "Multioutput predictions by a regressor are expected to be" - " floating-point precision. Got {} instead".format(y_pred.dtype) + f" floating-point precision. Got {y_pred.dtype} instead" ) assert y_pred.shape == y.shape, ( "The shape of the prediction for multioutput data is incorrect." - " Expected {}, got {}." + f" Expected {y_pred.shape}, got {y.shape}." )
Missing format string arguments This assertion error string is not properly formatted as the 2 format arguments `y_pred.shape` and `y.shape` are missing: https://github.com/scikit-learn/scikit-learn/blob/551d56c254197c4b6ad63974d749824ed2c7bc58/sklearn/utils/estimator_checks.py#L2139 ```python assert y_pred.shape == y.shape, ( "The shape of the prediction for multioutput data is incorrect." " Expected {}, got {}." ) ``` should become ```python assert y_pred.shape == y.shape, ( "The shape of the prediction for multioutput data is incorrect." " Expected {}, got {}.".format(y_pred.shape, y.shape) ) ```
Please feel free to directly submit a PR with the fix in the future in such cases :)
1,731,059,196,000
null
Bug Report
[ "sklearn/utils/estimator_checks.py:check_regressor_multioutput" ]
[]
1
Lightning-AI/pytorch-lightning
Lightning-AI__pytorch-lightning-20484
601c0608059ed33ac617a57bb122e17b88c35c9a
diff --git a/src/lightning/pytorch/loops/prediction_loop.py b/src/lightning/pytorch/loops/prediction_loop.py index 7044ccea87a7f..dcfd873a28b4b 100644 --- a/src/lightning/pytorch/loops/prediction_loop.py +++ b/src/lightning/pytorch/loops/prediction_loop.py @@ -233,8 +233,9 @@ def _predict_step( self.batch_progress.increment_ready() - if not using_dataloader_iter: - any_on_epoch = self._store_data_for_prediction_writer(batch_idx, dataloader_idx) + any_on_epoch = ( + self._store_data_for_prediction_writer(batch_idx, dataloader_idx) if not using_dataloader_iter else False + ) # the `_step` methods don't take a batch_idx when `dataloader_iter` is used, but all other hooks still do, # so we need different kwargs
UnboundLocalError: local variable 'any_on_epoch' referenced before assignment in prediction loop ### Bug description `UnboundLocalError` raises when using the predict method with `return_predictions=False`. This is due to `any_on_epoch` [not being defined](https://github.com/Lightning-AI/pytorch-lightning/blob/be608fa355b835b9b0727df2f5476f0a1d90bc59/src/lightning/pytorch/loops/prediction_loop.py#L236) if `data_fetcher` is [an instance](https://github.com/Lightning-AI/pytorch-lightning/blob/be608fa355b835b9b0727df2f5476f0a1d90bc59/src/lightning/pytorch/loops/prediction_loop.py#L229) of `_DataLoaderIterDataFetcher`. ### What version are you seeing the problem on? v2.4 ### How to reproduce the bug _No response_ ### Error messages and logs ``` UnboundLocalError: local variable 'any_on_epoch' referenced before assignment ``` ### Environment <details> <summary>Current environment</summary> ``` #- PyTorch Lightning Version (e.g., 2.4.0): #- PyTorch Version (e.g., 2.4): #- Python version (e.g., 3.12): #- OS (e.g., Linux): #- CUDA/cuDNN version: #- GPU models and configuration: #- How you installed Lightning(`conda`, `pip`, source): ``` </details> ### More info _No response_
Nice catch. > This is due to any_on_epoch [not being defined](https://github.com/Lightning-AI/pytorch-lightning/blob/be608fa355b835b9b0727df2f5476f0a1d90bc59/src/lightning/pytorch/loops/prediction_loop.py#L236) if data_fetcher is [not an instance](https://github.com/Lightning-AI/pytorch-lightning/blob/be608fa355b835b9b0727df2f5476f0a1d90bc59/src/lightning/pytorch/loops/prediction_loop.py#L229) of _DataLoaderIterDataFetcher. It's actually the other way around: it errors out when it *is* an instance of `_DataLoaderIterDataFetcher`. The solution is to replace lines 236 and 237: https://github.com/Lightning-AI/pytorch-lightning/blob/be608fa355b835b9b0727df2f5476f0a1d90bc59/src/lightning/pytorch/loops/prediction_loop.py#L236-L237 with ```python any_on_epoch = self._store_data_for_prediction_writer(batch_idx, dataloader_idx) if not using_dataloader_iter else False ``` Would you like to volunteer for a quick PR?
1,733,763,932,000
null
Bug Report
[ "src/lightning/pytorch/loops/prediction_loop.py:_PredictionLoop._predict_step" ]
[]
1
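To make the control-flow bug in the record above concrete, here is a minimal sketch; the names mirror the prediction loop, but the snippet is illustrative and not the actual Lightning implementation:

```python
def _store_data_for_prediction_writer() -> bool:
    # Stand-in for the real method; reports whether anything was stored.
    return True


def buggy(using_dataloader_iter: bool) -> bool:
    if not using_dataloader_iter:
        any_on_epoch = _store_data_for_prediction_writer()
    return any_on_epoch  # raises UnboundLocalError when the branch was skipped


def fixed(using_dataloader_iter: bool) -> bool:
    # Binding the name on both branches, as the patch does, avoids the error.
    any_on_epoch = (
        _store_data_for_prediction_writer() if not using_dataloader_iter else False
    )
    return any_on_epoch


print(fixed(True))  # False
try:
    buggy(True)
except UnboundLocalError as err:
    print(err)  # exact wording varies with the Python version
```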
Lightning-AI/pytorch-lightning
Lightning-AI__pytorch-lightning-20420
20d19d2f5728f7049272f2db77a9748ff4cf5ccd
diff --git a/examples/fabric/build_your_own_trainer/run.py b/examples/fabric/build_your_own_trainer/run.py index 01044f5d94fa8..c0c2ff28ddc41 100644 --- a/examples/fabric/build_your_own_trainer/run.py +++ b/examples/fabric/build_your_own_trainer/run.py @@ -41,7 +41,8 @@ def training_step(self, batch, batch_idx: int): def configure_optimizers(self): optim = torch.optim.Adam(self.parameters(), lr=1e-4) - return optim, { + return { + "optimizer": optim, "scheduler": torch.optim.lr_scheduler.ReduceLROnPlateau(optim, mode="max", verbose=True), "monitor": "val_accuracy", "interval": "epoch",
OptimizerLRScheduler typing does not fit examples ### Bug description The return type of `LightningModule.configure_optimizers()` is `OptimizerLRScheduler`, see the [source code](https://github.com/Lightning-AI/pytorch-lightning/blob/e6c26d2d22fc4678b2cf47d57697ebae68b09529/src/lightning/pytorch/core/module.py#L954). However, the examples give return types not fitting this return type. See e.g. the example [here](https://github.com/Lightning-AI/pytorch-lightning/blob/e6c26d2d22fc4678b2cf47d57697ebae68b09529/examples/fabric/build_your_own_trainer/run.py#L42-L49). Furthermore, the `OptimizerLRScheduler` is only used as a return type, but I don't see where it is actually used, i.e. the other part of the typed interface. A [search for it](https://github.com/search?q=repo%3ALightning-AI%2Fpytorch-lightning%20OptimizerLRScheduler&type=code) does not reveal it. ### What version are you seeing the problem on? 2.3.1 ### How to reproduce the bug Just run mypy on an example, e.g. https://github.com/Lightning-AI/pytorch-lightning/blob/e6c26d2d22fc4678b2cf47d57697ebae68b09529/examples/fabric/build_your_own_trainer/run.py#L42-L49. ### Error messages and logs Running mypy on this causes a bug: ``` Error: Incompatible return value type (got "tuple[Adam, dict[str, object]]", expected "Optimizer | Sequence[Optimizer] | tuple[Sequence[Optimizer], Sequence[LRScheduler | ReduceLROnPlateau | LRSchedulerConfig]] | OptimizerLRSchedulerConfig | Sequence[OptimizerLRSchedulerConfig] | None") ``` ### Environment <details> <summary>Current environment</summary> * CUDA: - GPU: None - available: False - version: None * Lightning: - lightning-utilities: 0.11.3.post0 - pytorch-lightning: 2.3.1 - torch: 2.3.1 - torchmetrics: 0.8.0 - torchvision: 0.18.1 * Packages: - absl-py: 2.1.0 - aenum: 3.1.15 - aiohttp: 3.9.5 - aiosignal: 1.3.1 - alabaster: 0.7.16 - albumentations: 1.3.1 - antlr4-python3-runtime: 4.9.3 - arabic-reshaper: 3.0.0 - asn1crypto: 1.5.1 - async-timeout: 4.0.3 - attrs: 23.2.0 - babel: 2.15.0 - boto3: 1.34.137 - botocore: 1.34.137 - build: 1.2.1 - certifi: 2024.6.2 - cffi: 1.16.0 - charset-normalizer: 3.3.2 - click: 8.1.7 - coloredlogs: 15.0.1 - contourpy: 1.2.1 - coverage: 5.3.1 - cryptography: 42.0.8 - cssselect2: 0.7.0 - cycler: 0.12.1 - data-gradients: 0.3.2 - deprecated: 1.2.14 - docutils: 0.17.1 - einops: 0.3.2 - exceptiongroup: 1.2.1 - filelock: 3.15.4 - flatbuffers: 24.3.25 - fonttools: 4.53.0 - frozenlist: 1.4.1 - fsspec: 2024.6.1 - future: 1.0.0 - grpcio: 1.64.1 - html5lib: 1.1 - huggingface-hub: 0.23.4 - humanfriendly: 10.0 - hydra-core: 1.3.2 - idna: 3.7 - imagededup: 0.3.1 - imageio: 2.34.2 - imagesize: 1.4.1 - iniconfig: 2.0.0 - jinja2: 3.1.4 - jmespath: 1.0.1 - joblib: 1.4.2 - json-tricks: 3.16.1 - jsonschema: 4.22.0 - jsonschema-specifications: 2023.12.1 - kiwisolver: 1.4.5 - lazy-loader: 0.4 - lightly: 1.5.8 - lightly-train: 0.1.0 - lightly-utils: 0.0.2 - lightning-utilities: 0.11.3.post0 - lxml: 5.2.2 - markdown: 3.6 - markdown-it-py: 3.0.0 - markupsafe: 2.1.5 - matplotlib: 3.9.0 - mdurl: 0.1.2 - mpmath: 1.3.0 - multidict: 6.0.5 - mypy: 1.10.1 - mypy-extensions: 1.0.0 - networkx: 3.3 - numpy: 1.23.0 - omegaconf: 2.3.0 - onnx: 1.15.0 - onnxruntime: 1.15.0 - onnxsim: 0.4.36 - opencv-python: 4.10.0.84 - opencv-python-headless: 4.10.0.84 - oscrypto: 1.3.0 - packaging: 24.1 - pandas: 2.2.2 - pillow: 10.4.0 - pip: 24.1.1 - pip-tools: 7.4.1 - platformdirs: 4.2.2 - pluggy: 1.5.0 - protobuf: 3.20.3 - psutil: 6.0.0 - pycparser: 2.22 - pydantic: 1.10.17 - pydeprecate: 0.3.2 - pygments: 2.18.0 - pyhanko: 
0.25.0 - pyhanko-certvalidator: 0.26.3 - pyparsing: 3.1.2 - pypdf: 4.2.0 - pypng: 0.20220715.0 - pyproject-hooks: 1.1.0 - pytest: 8.2.2 - pytest-mock: 3.14.0 - python-bidi: 0.4.2 - python-dateutil: 2.9.0.post0 - pytorch-lightning: 2.3.1 - pytz: 2024.1 - pywavelets: 1.6.0 - pyyaml: 6.0.1 - qrcode: 7.4.2 - qudida: 0.0.4 - rapidfuzz: 3.9.3 - referencing: 0.35.1 - reportlab: 3.6.13 - requests: 2.32.3 - rich: 13.7.1 - rpds-py: 0.18.1 - ruff: 0.5.0 - s3transfer: 0.10.2 - safetensors: 0.4.3 - scikit-image: 0.24.0 - scikit-learn: 1.5.0 - scipy: 1.13.1 - seaborn: 0.13.2 - selftrain: 0.1.0 - setuptools: 70.2.0 - six: 1.16.0 - snowballstemmer: 2.2.0 - sphinx: 4.0.3 - sphinx-rtd-theme: 1.3.0 - sphinxcontrib-applehelp: 1.0.8 - sphinxcontrib-devhelp: 1.0.6 - sphinxcontrib-htmlhelp: 2.0.5 - sphinxcontrib-jquery: 4.1 - sphinxcontrib-jsmath: 1.0.1 - sphinxcontrib-qthelp: 1.0.7 - sphinxcontrib-serializinghtml: 1.1.10 - stringcase: 1.2.0 - super-gradients: 3.7.1 - svglib: 1.5.1 - sympy: 1.12.1 - tensorboard: 2.17.0 - tensorboard-data-server: 0.7.2 - termcolor: 1.1.0 - threadpoolctl: 3.5.0 - tifffile: 2024.6.18 - timm: 1.0.7 - tinycss2: 1.3.0 - tomli: 2.0.1 - torch: 2.3.1 - torchmetrics: 0.8.0 - torchvision: 0.18.1 - tqdm: 4.66.4 - treelib: 1.6.1 - typing-extensions: 4.12.2 - tzdata: 2024.1 - tzlocal: 5.2 - uritools: 4.0.3 - urllib3: 2.2.2 - webencodings: 0.5.1 - werkzeug: 3.0.3 - wheel: 0.43.0 - wrapt: 1.16.0 - xhtml2pdf: 0.2.11 - yarl: 1.9.4 * System: - OS: Darwin - architecture: - 64bit - - processor: arm - python: 3.10.8 - release: 23.5.0 - version: Darwin Kernel Version 23.5.0: Wed May 1 20:12:58 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6000 ### More info _No response_
Hey @MalteEbner The optimizer should be inside the dict in that example. Thank you for noticing it. Would you like to send a PR with this quick fix? Would be much appreciated 😃 > Hey @MalteEbner The optimizer should be inside the dict in that example. Thank you for noticing it. Would you like to send a PR with this quick fix? Would be much appreciated 😃 Unfortunately, the problem is not that the example is wrong. This is just a symptom of an underlying problem. The problem is that the signature `LightningModule.configure_optimizers() -> OptimizerLRScheduler` and its usage don't fit together. To see why, have a look at where `configure_optimizers()` is used; see the source code [here](https://github.com/Lightning-AI/pytorch-lightning/blob/f91349c961103af48091654775248789b6e03bd1/src/lightning/pytorch/core/optimizer.py#L179-L200): The output of it is stored under the variable name `optim_conf` and then passed to the function ```python def _configure_optimizers( optim_conf: Union[Dict[str, Any], List, Optimizer, Tuple], ) ``` However, the definition type `OptimizerLRScheduler` and the usage type `Union[Dict[str, Any], List, Optimizer, Tuple]` don't align. To fix this, more is needed: 1. Change it to `optim_conf: OptimizerLRScheduler`, so that the usage of `configure_optimizers()` has the same type as its definition. 2. Redefine the `OptimizerLRScheduler` such that it fits the supported types in `_configure_optimizers`. I'm not seeing any mypy errors regarding this. Which version did you use and what was the command you ran? The version we test with can be seen here: https://github.com/Lightning-AI/pytorch-lightning/blob/master/requirements/typing.txt We typically bump it together when a new torch version comes out (which is soon again). Maybe your issue will show up, but I'm not seeing it locally. Yes sure, the internal `_configure_optimizers` uses a bit more generic typing. Feel free to update it to be more specific 👍. The return type of `LightningModule.configure_optimizers()` should not be changed; this still looks all good to me.
1,731,575,568,000
null
Bug Report
[ "examples/fabric/build_your_own_trainer/run.py:MNISTModule.configure_optimizers" ]
[]
1
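For contrast with the fabric example patched in the record above (which uses its own flat dict convention), here is a sketch of the dict-style return value that the `OptimizerLRScheduler` annotation is meant to describe for the regular `lightning.pytorch` Trainer. The module and monitored metric are placeholders, and whether mypy accepts the literal as a `TypedDict` can depend on the mypy version:

```python
import torch
from lightning.pytorch import LightningModule
from lightning.pytorch.utilities.types import OptimizerLRScheduler


class ToyModule(LightningModule):
    def __init__(self) -> None:
        super().__init__()
        self.layer = torch.nn.Linear(4, 1)

    def configure_optimizers(self) -> OptimizerLRScheduler:
        optimizer = torch.optim.Adam(self.parameters(), lr=1e-4)
        # A single dict with the optimizer inside it, rather than the
        # (optimizer, dict) tuple that mypy rejects in the issue above.
        return {
            "optimizer": optimizer,
            "lr_scheduler": {
                "scheduler": torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode="max"),
                "monitor": "val_accuracy",
                "interval": "epoch",
                "frequency": 1,
            },
        }
```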
Lightning-AI/pytorch-lightning
Lightning-AI__pytorch-lightning-20401
c110f4f3f60c643740f5e3573546abfcb5355315
diff --git a/src/lightning/pytorch/cli.py b/src/lightning/pytorch/cli.py index 26af335f7be93..e0de8a24b38f5 100644 --- a/src/lightning/pytorch/cli.py +++ b/src/lightning/pytorch/cli.py @@ -389,6 +389,7 @@ def __init__( self._add_instantiators() self.before_instantiate_classes() self.instantiate_classes() + self.after_instantiate_classes() if self.subcommand is not None: self._run_subcommand(self.subcommand) @@ -560,6 +561,9 @@ def instantiate_classes(self) -> None: self._add_configure_optimizers_method_to_model(self.subcommand) self.trainer = self.instantiate_trainer() + def after_instantiate_classes(self) -> None: + """Implement to run some code after instantiating the classes.""" + def instantiate_trainer(self, **kwargs: Any) -> Trainer: """Instantiates the trainer.
Proposal(CLI): after_instantiate_classes hook ### Description & Motivation Adds an `after_instantiate_classes` hook to the Lightning CLI, called after `self.instantiate_classes()` during the initialization of `LightningCLI`. ### Pitch While having the Lightning CLI is great, it is not perfect for every use case out of the box. Hence, you included hooks like `before_instantiate_classes` and describe in the docs how to extend the CLI. The problem is that you cannot extend this feature without hacks or substantial copy-pasta. I think, to further improve the CLI without adding any complexity, it makes sense to add an `after_instantiate_classes` hook, too. ### Alternatives 1. Hacks - Extend the Lightning CLI and run the `after_instantiate_classes` function before the `self._run_subcommand` function. - Problems: it's not intuitive that the function is called there, and it won't be called if `self.subcommand is None` 2. Copy-Pasta - Extend the Lightning CLI and replace the original `__init__` with the proposed one. - Problems: could break with any update, lots of code duplication ### Additional context _No response_ cc @borda @tchaton @justusschock @awaelchli @mauvilsa
1,730,892,823,000
null
Feature Request
[ "src/lightning/pytorch/cli.py:LightningCLI.__init__" ]
[ "src/lightning/pytorch/cli.py:LightningCLI.after_instantiate_classes" ]
1
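Once a Lightning version containing the hook from the record above is installed, downstream projects can override it; a hypothetical sketch follows (the attribute being copied over is invented for illustration):

```python
from lightning.pytorch.cli import LightningCLI


class MyCLI(LightningCLI):
    def after_instantiate_classes(self) -> None:
        # Runs right after instantiate_classes(): model, datamodule (if any) and
        # trainer already exist, but the fit/validate/... subcommand has not run yet.
        if self.datamodule is not None:
            # Hypothetical wiring step; replace with whatever post-processing you need.
            self.model.num_classes = getattr(self.datamodule, "num_classes", None)


if __name__ == "__main__":
    MyCLI()  # e.g. `python main.py fit --config config.yaml`
```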
kornia/kornia
kornia__kornia-3084
b230615f08bb0fff1b3044fc8ccb38f21bd9e817
diff --git a/kornia/augmentation/container/augment.py b/kornia/augmentation/container/augment.py index a9cad91924..ebd892bf17 100644 --- a/kornia/augmentation/container/augment.py +++ b/kornia/augmentation/container/augment.py @@ -507,11 +507,9 @@ def __call__( if output_type == "tensor": self._output_image = _output_image if isinstance(_output_image, dict): - self._output_image[original_keys[idx]] = self._detach_tensor_to_cpu( - _output_image[original_keys[idx]] - ) + self._output_image[original_keys[idx]] = _output_image[original_keys[idx]] else: - self._output_image[idx] = self._detach_tensor_to_cpu(_output_image[idx]) + self._output_image[idx] = _output_image[idx] elif isinstance(_output_image, dict): self._output_image[original_keys[idx]] = _output_image[original_keys[idx]] else:
AugmentationSequential explicitly moves the output to the CPU if data_keys is given ### Describe the bug With the 0.7.4 release, augmentations on the GPU are not possible anymore because the output of the input tensor is always explicitly moved to the CPU. The problem is that `_detach_tensor_to_cpu` is called explicitly on every tensor in [augment.py](https://github.com/kornia/kornia/blob/main/kornia/augmentation/container/augment.py#L507). ### Reproduction steps The augmented version of the input gets moved to the CPU: ```python import torch import kornia input = torch.randn(1, 3, 10, 10, device="cuda") mask = torch.randn(1, 10, 10, device="cuda") aug_list = kornia.augmentation.AugmentationSequential(kornia.augmentation.RandomHorizontalFlip(p=1)) for a in aug_list(input, mask, data_keys=["input", "mask"]): print(a.dtype, a.device) # torch.float32 cpu # torch.float32 cuda:0 ``` Without `data_keys` and only one input, the output device is as expected: ```python for a in aug_list(input): print(a.dtype, a.device) # torch.float32 cuda:0 ``` ### Expected behavior I would expect the augmented version of the input tensor to reside on the GPU. ### Environment - PyTorch Version (e.g., 1.0): 2.5.1 - OS (e.g., Linux): Ubuntu - How you installed PyTorch (`conda`, `pip`, source): pip - Build command you used (if compiling from source): - Python version: 3.12 - CUDA/cuDNN version: 12.4 - GPU models and configuration: NVIDIA 4090 RTX - Any other relevant information: ### Additional context Reverting back to 0.7.3 fixes the problem.
I can see that this was touched in https://github.com/kornia/kornia/pull/2979 @ashnair1 @shijianjian @johnnv1 do you recall why we need `_detach_tensor_to_cpu` here instead of something like `_detach_tensor_to_device`? It was mainly for passing the tests, because the randomness handling for CUDA and CPU is different. I think the bug is because the dict tensors are not handled properly to be moved back to CUDA. @ashnair1, can you fix this? We may need to do a patch release after this. BTW, I think we may need a pre-merge action to run CUDA tests. At least run it once for each merge. It has happened many times, I think, that some CUDA errors only show up after all CPU tests passed. > BTW, I think we may need a pre-merge action to run CUDA tests. At least run it once for each merge. It has happened many times, I think, that some CUDA errors only show up after all CPU tests passed. For that, we need someone to sponsor access to a machine that has a CUDA device. Will the meta device be helpful? So we mock CUDA instead? https://pytorch.org/docs/stable/meta.html https://github.com/pytorch/pytorch/issues/61654#issuecomment-879989145 We can try, yes. One of Kornia's biggest strengths is fast GPU augmentations. This bug makes 0.7.4 unusable for many, including me - are there any updates on a fix? Hey @ashnair1, can you help fix this issue?
1,733,310,216,000
null
Bug Report
[ "kornia/augmentation/container/augment.py:AugmentationSequential.__call__" ]
[]
1
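The maintainer discussion above asks why a CPU-pinned detach is used instead of a device-preserving one; below is a hypothetical sketch of the latter. The helper name mirrors the suggestion in the thread and is not part of kornia's API:

```python
import torch


def _detach_tensor_to_device(tensor: torch.Tensor, device: torch.device) -> torch.Tensor:
    # Detach from the autograd graph, but keep (or restore) the requested device
    # instead of unconditionally moving the result to the CPU.
    return tensor.detach().to(device)


x = torch.randn(1, 3, 10, 10, requires_grad=True)
out = _detach_tensor_to_device(x, x.device)
print(out.device, out.requires_grad)  # same device as the input, and detached
```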