Dataset Viewer
Auto-converted to Parquet

| Column | Type | Values / Range |
| --- | --- | --- |
| Unnamed: 0 | int64 | 0 to 825k |
| id | float64 | 2.49B to 32.1B |
| type | stringclasses | 1 value |
| created_at | stringlengths | 19 to 19 |
| repo | stringlengths | 7 to 61 |
| repo_url | stringlengths | 36 to 90 |
| action | stringclasses | 3 values |
| title | stringlengths | 4 to 228 |
| labels | stringlengths | 4 to 352 |
| body | stringlengths | 48 to 210k |
| index | stringclasses | 4 values |
| text_combine | stringlengths | 96 to 211k |
| label | stringclasses | 2 values |
| text | stringlengths | 96 to 146k |
| binary_label | int64 | 0 to 1 |
---
Unnamed: 0: 584
id: 7,986,119,894
type: IssuesEvent
created_at: 2018-07-19 00:07:47
repo: rust-lang-nursery/stdsimd
repo_url: https://api.github.com/repos/rust-lang-nursery/stdsimd
action: closed
title: floating-point sum / product are buggy w.r.t. NaNs
labels: A-portable
body: Due to https://bugs.llvm.org/show_bug.cgi?id=36732, `wrapping_sum` / `wrapping_product` are implemented with fast-math flags unconditionally enabled, which results in inconsistencies like them returning a `NaN` for which the `nan.is_nan()` method returns `false`... We'll probably need to work-around these issues here in `stdsimd`.
index: True
label: port
binary_label: 1
---
Unnamed: 0: 40,848
id: 2,868,945,909
type: IssuesEvent
created_at: 2015-06-05 22:07:21
repo: dart-lang/pub
repo_url: https://api.github.com/repos/dart-lang/pub
action: closed
title: Use new Link() instead of shelling out to ln/mklink
labels: bug Fixed Priority-Medium
body: <a href="https://github.com/munificent"><img src="https://avatars.githubusercontent.com/u/46275?v=3" align="left" width="96" height="96" hspace="10"></img></a> **Issue by [munificent](https://github.com/munificent)** _Originally opened as dart-lang/sdk#9467_ ---- Now that dart:io has an API for creating symlinks, we should use that.
index: 1.0
label: non_port
binary_label: 0
---
Unnamed: 0: 1,741
id: 25,410,141,818
type: IssuesEvent
created_at: 2022-11-22 18:14:34
repo: golang/vulndb
repo_url: https://api.github.com/repos/golang/vulndb
action: closed
title: x/vulndb: potential Go vuln in github.com/hashicorp/consul: GHSA-gw2g-hhc9-wgjh
labels: excluded: NOT_IMPORTABLE
body: In GitHub Security Advisory [GHSA-gw2g-hhc9-wgjh](https://github.com/advisories/GHSA-gw2g-hhc9-wgjh), there is a vulnerability in the following Go packages or modules: | Unit | Fixed | Vulnerable Ranges | | - | - | - | | [github.com/hashicorp/consul](https://pkg.go.dev/github.com/hashicorp/consul) | 1.14.0 | >= 1.13.0, < 1.14.0 | See [doc/triage.md](https://github.com/golang/vulndb/blob/master/doc/triage.md) for instructions on how to triage this report. ``` modules: - module: TODO versions: - introduced: 1.13.0 fixed: 1.14.0 packages: - package: github.com/hashicorp/consul description: HashiCorp Consul and Consul Enterprise 1.13.0 up to 1.13.3 do not filter cluster filtering's imported nodes and services for HTTP or RPC endpoints used by the UI. Fixed in 1.14.0. cves: - CVE-2022-3920 ghsas: - GHSA-gw2g-hhc9-wgjh ```
index: True
label: port
binary_label: 1
---
Unnamed: 0: 410
id: 6,552,008,599
type: IssuesEvent
created_at: 2017-09-05 16:37:43
repo: Shinmera/portacle
repo_url: https://api.github.com/repos/Shinmera/portacle
action: closed
title: Fails to start on Fedora 25
labels: portability
body:
Portacle 0.12 won't start on Fedora 25. `portacle.desktop` shown as `Portacle` didn't do anything, so I ran the `Exec` command from within the same directory. ``` bash -c 'cd $(dirname %k) && ./portacle.run' p11-kit: couldn't load module: /usr/lib/x86_64-linux-gnu/pkcs11/p11-kit-trust.so: /usr/lib/x86_64-linux-gnu/pkcs11/p11-kit-trust.so: cannot open shared object file: No such file or directory p11-kit: couldn't load module: /usr/lib/x86_64-linux-gnu/pkcs11/gnome-keyring-pkcs11.so: /usr/lib/x86_64-linux-gnu/pkcs11/gnome-keyring-pkcs11.so: cannot open shared object file: No such file or directory Warning: arch-dependent data dir '/home/linus/portacle/lin/emacs/libexec/emacs/25.1/x86_64-unknown-linux-gnu/': No such file or directory Warning: Lisp directory '/home/linus/portacle/lin/emacs/share/emacs/25.1/lisp': No such file or directory GLib: Cannot convert message: Conversion from character set 'UTF-8' to 'ISO-8859-1' is not supported (emacs:28472): Gtk-WARNING **: Conversion from character set 'ISO-8859-1' to 'UTF-8' is not supported (emacs:28472): Gtk-WARNING **: Unable to locate theme engine in module_path: "adwaita", (emacs:28472): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap", (emacs:28472): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap", (emacs:28472): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap", (emacs:28472): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap", (emacs:28472): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap", (emacs:28472): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap", (emacs:28472): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap", (emacs:28472): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap", (emacs:28472): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap", (emacs:28472): Gtk-WARNING **: Unable to locate theme engine in module_path: 
"pixmap", (emacs:28472): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap", (emacs:28472): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap", (emacs:28472): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap", (emacs:28472): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap", (emacs:28472): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap", (emacs:28472): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap", (emacs:28472): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap", (emacs:28472): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap", (emacs:28472): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap", (emacs:28472): Gtk-WARNING **: Unable to locate theme engine in module_path: "adwaita", (emacs:28472): Gtk-WARNING **: Unable to locate theme engine in module_path: "adwaita", (emacs:28472): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap", (emacs:28472): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap", (emacs:28472): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap", (emacs:28472): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap", (emacs:28472): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap", (emacs:28472): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap", (emacs:28472): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap", (emacs:28472): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap", (emacs:28472): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap", (emacs:28472): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap", (emacs:28472): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap", (emacs:28472): Gtk-WARNING **: Unable to locate theme engine in 
module_path: "pixmap", (emacs:28472): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap", (emacs:28472): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap", (emacs:28472): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap", (emacs:28472): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap", (emacs:28472): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap", (emacs:28472): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap", (emacs:28472): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap", (emacs:28472): Gtk-WARNING **: Unable to locate theme engine in module_path: "adwaita", GLib-GIO-Message: Using the 'memory' GSettings backend. Your settings will not be saved or shared with other applications. (emacs:28472): GLib-CRITICAL **: g_error_new_literal: assertion 'message != NULL' failed Fatal error 11: Segmentation fault Backtrace: /home/martin/portacle//lin/emacs/bin/emacs[0x4f7742] /home/martin/portacle//lin/emacs/bin/emacs[0x4df2e9] /home/martin/portacle//lin/emacs/bin/emacs[0x4f65be] /home/martin/portacle//lin/emacs/bin/emacs[0x4f67c3] /home/martin/portacle//lin/emacs/bin/emacs[0x4f67fa] /home/martin/portacle//lin/lib/libpthread.so.0(+0x10330)[0x7f4ea484b330] /home/martin/portacle//lin/lib/libgdk_pixbuf-2.0.so.0(+0xa253)[0x7f4ea7096253] /home/martin/portacle//lin/lib/libgdk_pixbuf-2.0.so.0(gdk_pixbuf_get_formats+0xd)[0x7f4ea709866d] /home/martin/portacle//lin/lib/libgtk-x11-2.0.so.0(+0xff2dc)[0x7f4ea79d32dc] /home/martin/portacle//lin/lib/libgobject-2.0.so.0(g_type_create_instance+0x1eb)[0x7f4ea6c2ee3b] /home/martin/portacle//lin/lib/libgobject-2.0.so.0(+0x15355)[0x7f4ea6c13355] /home/martin/portacle//lin/lib/libgobject-2.0.so.0(g_object_newv+0x22d)[0x7f4ea6c1510d] /home/martin/portacle//lin/lib/libgobject-2.0.so.0(g_object_new+0xec)[0x7f4ea6c158bc] 
/home/martin/portacle//lin/lib/libgtk-x11-2.0.so.0(gtk_icon_theme_get_for_screen+0x77)[0x7f4ea79d35c7] /home/martin/portacle//lin/emacs/bin/emacs[0x4d91a5] /home/martin/portacle//lin/emacs/bin/emacs[0x4da968] /home/martin/portacle//lin/emacs/bin/emacs[0x4ca9a0] /home/martin/portacle//lin/emacs/bin/emacs[0x4cb464] /home/martin/portacle//lin/emacs/bin/emacs[0x550832] /home/martin/portacle//lin/emacs/bin/emacs[0x583d15] /home/martin/portacle//lin/emacs/bin/emacs[0x550282] /home/martin/portacle//lin/emacs/bin/emacs[0x550643] /home/martin/portacle//lin/emacs/bin/emacs[0x583d15] /home/martin/portacle//lin/emacs/bin/emacs[0x550643] /home/martin/portacle//lin/emacs/bin/emacs[0x551b70] /home/martin/portacle//lin/emacs/bin/emacs[0x55073a] /home/martin/portacle//lin/emacs/bin/emacs[0x583d15] /home/martin/portacle//lin/emacs/bin/emacs[0x550643] /home/martin/portacle//lin/emacs/bin/emacs[0x583d15] /home/martin/portacle//lin/emacs/bin/emacs[0x550643] /home/martin/portacle//lin/emacs/bin/emacs[0x583d15] /home/martin/portacle//lin/emacs/bin/emacs[0x550643] /home/martin/portacle//lin/emacs/bin/emacs[0x583d15] /home/martin/portacle//lin/emacs/bin/emacs[0x550643] /home/martin/portacle//lin/emacs/bin/emacs[0x583d15] /home/martin/portacle//lin/emacs/bin/emacs[0x54f7f3] /home/martin/portacle//lin/emacs/bin/emacs[0x54fb3e] /home/martin/portacle//lin/emacs/bin/emacs[0x553001] /home/martin/portacle//lin/emacs/bin/emacs[0x54f0dd] /home/martin/portacle//lin/emacs/bin/emacs[0x4e1ddc] /home/martin/portacle//lin/emacs/bin/emacs[0x54f08b] ... ./portacle.run: line 7: 28472 Segmentation fault (core dumped) "$ROOT/lin/launcher/portacle" "$@" $ ```
index: True
label: port
binary_label: 1
---
Unnamed: 0: 1,606
id: 23,245,041,986
type: IssuesEvent
created_at: 2022-08-03 19:15:22
repo: MicrosoftDocs/azure-docs
repo_url: https://api.github.com/repos/MicrosoftDocs/azure-docs
action: closed
title: Excluded from the TOC
labels: azure-supportability/svc triaged assigned-to-author doc-enhancement Pri2
body: This page seems to be excluded from the TOC by the commit https://github.com/MicrosoftDocs/azure-docs/commit/f273f46a4e4fdd644c06857c12bc911bf75fd068#diff-d29a77de58e37b3c6d943a73d35641b4 --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: fe04a05a-f753-ee1f-5171-fd2a81a030f9 * Version Independent ID: 1507a7ac-11cc-81a7-e932-9e8ef9644e2b * Content: [Azure Resource Manager vCPU quota increase requests](https://docs.microsoft.com/en-us/azure/azure-portal/supportability/resource-manager-core-quotas-request) * Content Source: [articles/azure-portal/supportability/resource-manager-core-quotas-request.md](https://github.com/Microsoft/azure-docs/blob/master/articles/azure-portal/supportability/resource-manager-core-quotas-request.md) * Service: **azure-supportability** * GitHub Login: @sowmyavenkat86 * Microsoft Alias: **svenkat**
index: True
label: port
binary_label: 1
---
Unnamed: 0: 135,876
id: 30,442,800,804
type: IssuesEvent
created_at: 2023-07-15 09:20:51
repo: linwu-hi/coding-time
repo_url: https://api.github.com/repos/linwu-hi/coding-time
action: opened
title: poker
labels: javascript typescript dart leetcode 数据结构和算法 data-structures algorithms
body:
# TS in practice: sorting poker hands

[Run it online](https://code.juejin.cn/pen/7254739493366333499)

We'll implement the poker-hand ranking problem in TS. First we define the data types we need, then focus on the pattern-finding algorithm, which has several interesting points.

## Types and conversions

Let's define some types we'll need. `Rank` and `Suit` are obvious [union types](https://www.typescriptlang.org/docs/handbook/2/everyday-types.html#union-types).

```ts
type Rank =
  | 'A'
  | '2'
  | '3'
  | '4'
  | '5'
  | '6'
  | '7'
  | '8'
  | '9'
  | '10'
  | 'J'
  | 'Q'
  | 'K';

type Suit = '♥' | '♦' | '♠' | '♣';
```

We'll work with `Card` objects, converting rank and suit into numbers. Cards are represented by values from 1 (Ace) through 13 (King), and suits from 1 (hearts) through 4 (clubs). The `rankToNumber()` and `suitToNumber()` functions handle the conversion from `Rank` and `Suit` values to numbers.

```ts
type Card = { rank: number; suit: number };

const rankToNumber = (rank: Rank): number =>
  rank === 'A'
    ? 1
    : rank === 'J'
    ? 11
    : rank === 'Q'
    ? 12
    : rank === 'K'
    ? 13
    : Number(rank);

const suitToNumber = (suit: Suit): number =>
  suit === '♥'
    ? 1
    : suit === '♦'
    ? 2
    : suit === '♠'
    ? 3
    : /* suit === "♣" */ 4;
```

![-](./images/3.png)

These types are for internal work; we must also define the result type of the hand-detection algorithm. We need an [enum](https://www.typescriptlang.org/docs/handbook/enums.html) for the possible hand values, ordered from lowest ("high card") to highest ("royal flush").

```ts
enum Hand {
  HighCard,      // high card
  OnePair,       // one pair
  TwoPairs,      // two pairs
  ThreeOfAKind,  // three of a kind
  Straight,      // straight
  Flush,         // flush
  FullHouse,     // full house
  FourOfAKind,   // four of a kind
  StraightFlush, // straight flush
  RoyalFlush     // royal flush
}
```

## What hand do we have?

Let's first define the `handRank()` function we'll be building. It receives a tuple of five card strings and returns a `Hand` result.

```ts
export function handRank(
  cardStrings: [string, string, string, string, string]
): Hand {
  . . .
}
```

Since working with strings is more cumbersome than we need, we convert the card strings into `Card` objects with numeric `rank` and `suit` values, which makes the rest easier to write.

```ts
const cards: Card[] = cardStrings.map((str: string) => ({
  rank: rankToNumber(
    str.substring(0, str.length - 1) as Rank
  ),
  suit: suitToNumber(str.at(-1) as Suit)
}));
. . .
// continued...
```

![-](./images/4.png)

The key to determining the value of a hand is knowing how many cards there are of each rank, and how many of each count we have. For example, if we hold three Jacks and two Kings, the count for J is 3 and the count for K is 2. Then, knowing we have one count of three and one count of two, we can conclude we hold a full house. Another example: with two Queens, two Aces, and a 5, we get two counts of two and one count of one; we have two pairs.

Producing the counts is simple. We want the Aces count at `countByRank[1]`, so we won't use the initial position of the `countByRank` array. Likewise, suit counts will sit in `countBySuit[1]` through `countBySuit[4]`, so we won't use the initial position of that array either.

```ts
// ...continued
. . .
const countBySuit = new Array(5).fill(0);
const countByRank = new Array(15).fill(0);
const countBySet = new Array(5).fill(0);

cards.forEach((card: Card) => {
  countByRank[card.rank]++;
  countBySuit[card.suit]++;
});
countByRank.forEach(
  (count: number) => count && countBySet[count]++
);
. . .
// continued...
```

Let's not forget that an Ace may sit at the beginning of a straight (A-2-3-4-5) or at its end (10-J-Q-K-A). We can handle this by duplicating the Aces count after the Kings.

```ts
// ...continued
. . .
countByRank[14] = countByRank[1];
. . .
// continued...
```

Now we can start identifying hands. Several of them can be recognized just by looking at the counts by rank:

```ts
// ...continued
. . .
if (countBySet[4] === 1 && countBySet[1] === 1)
  return Hand.FourOfAKind;
else if (countBySet[3] && countBySet[2] === 1)
  return Hand.FullHouse;
else if (countBySet[3] && countBySet[1] === 2)
  return Hand.ThreeOfAKind;
else if (countBySet[2] === 2 && countBySet[1] === 1)
  return Hand.TwoPairs;
else if (countBySet[2] === 1 && countBySet[1] === 3)
  return Hand.OnePair;
. . .
// continued...
```

For example, if four cards share a rank, we know the player has four of a kind. You might ask: if `countBySet[4] === 1`, why also test that `countBySet[1] === 1`? If four cards have the same rank, there can only be one other card, right? The answer is ["defensive programming"](https://en.wikipedia.org/wiki/Defensive_programming): mistakes happen while developing code, and being more specific in tests helps track errors down.

The cases above cover every possibility in which some rank appears more than once. We still have to handle the other cases, including straights, flushes, and "high card".

```ts
// ...continued
. . .
else if (countBySet[1] === 5) {
  if (countByRank.join('').includes('11111'))
    return !countBySuit.includes(5)
      ? Hand.Straight
      : countByRank.slice(10).join('') === '11111'
      ? Hand.RoyalFlush
      : Hand.StraightFlush;
  else {
    return countBySuit.includes(5)
      ? Hand.Flush
      : Hand.HighCard;
  }
} else {
  throw new Error(
    'Unknown hand! This cannot happen! Bad logic!'
); } ``` 这里我们再次进行防御性编程;即使我们知道我们有五个不同的等级,我们也确保逻辑工作良好,甚至在出现问题时抛出一个`throw`。 我们如何测试顺子?我们应该有五个连续的等级。如果我们查看`countByRank`数组,它应该有五个连续的1,所以通过执行`countByRank.join()`并检查生成的字符串是否包含`11111`,我们可以确定是顺子。 ![-](./images/5.png) 我们必须区分几种情况: * 如果没有五张相同花色的牌,那么它是一个普通的顺子 * 如果所有牌都是相同花色,如果顺子以一张A结束,则为皇家同花顺 * 如果所有牌都是相同花色,但我们不以A结束,那么我们有一个同花顺 如果我们没有顺子,只有两种可能性: * 如果所有牌都是相同花色,我们有一个同花 * 如果不是所有牌都是相同花色,我们有一个“高牌” 完整的函数如下所示: ```ts export function handRank( cardStrings: [string, string, string, string, string] ): Hand { const cards: Card[] = cardStrings.map((str: string) => ({ rank: rankToNumber( str.substring(0, str.length - 1) as Rank ), suit: suitToNumber(str.at(-1) as Suit) })); // We won't use the [0] place in the following arrays const countBySuit = new Array(5).fill(0); const countByRank = new Array(15).fill(0); const countBySet = new Array(5).fill(0); cards.forEach((card: Card) => { countByRank[card.rank]++; countBySuit[card.suit]++; }); countByRank.forEach( (count: number) => count && countBySet[count]++ ); // count the A also as a 14, for straights countByRank[14] = countByRank[1]; if (countBySet[4] === 1 && countBySet[1] === 1) return Hand.FourOfAKind; else if (countBySet[3] && countBySet[2] === 1) return Hand.FullHouse; else if (countBySet[3] && countBySet[1] === 2) return Hand.ThreeOfAKind; else if (countBySet[2] === 2 && countBySet[1] === 1) return Hand.TwoPairs; else if (countBySet[2] === 1 && countBySet[1] === 3) return Hand.OnePair; else if (countBySet[1] === 5) { if (countByRank.join('').includes('11111')) return !countBySuit.includes(5) ? Hand.Straight : countByRank.slice(10).join('') === '11111' ? Hand.RoyalFlush : Hand.StraightFlush; else { /* !countByRank.join("").includes("11111") */ return countBySuit.includes(5) ? Hand.Flush : Hand.HighCard; } } else { throw new Error( 'Unknown hand! This cannot happen! Bad logic!' 
); } } ``` ## 测试代码 ```ts console.log(handRank(['3♥', '5♦', '8♣', 'A♥', '6♠'])); // 0 console.log(handRank(['3♥', '5♦', '8♣', 'A♥', '5♠'])); // 1 console.log(handRank(['3♥', '5♦', '3♣', 'A♥', '5♠'])); // 2 console.log(handRank(['3♥', '5♦', '8♣', '5♥', '5♠'])); // 3 console.log(handRank(['3♥', '2♦', 'A♣', '5♥', '4♠'])); // 4 console.log(handRank(['J♥', '10♦', 'A♣', 'Q♥', 'K♠'])); // 4 console.log(handRank(['3♥', '4♦', '7♣', '5♥', '6♠'])); // 4 console.log(handRank(['3♥', '4♥', '9♥', '5♥', '6♥'])); // 5 console.log(handRank(['3♥', '5♦', '3♣', '5♥', '3♠'])); // 6 console.log(handRank(['3♥', '3♦', '3♣', '5♥', '3♠'])); // 7 console.log(handRank(['3♥', '4♥', '7♥', '5♥', '6♥'])); // 8 console.log(handRank(['K♥', 'Q♥', 'A♥', '10♥', 'J♥'])); // 9 ``` [在线运行](https://code.juejin.cn/pen/7254739493366333499)
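作为补充(以下示例并非原文代码,`countsFor` 是为演示而虚构的辅助函数):沿用上文“先按点数计数、再统计出现次数”的思路,可以单独验证葫芦与两对的判定条件。

```typescript
// 假设性演示代码:按上文的计数思路,先统计每个点数出现的次数(countByRank),
// 再统计“恰好出现 n 次的点数有几个”(countBySet)。
function countsFor(ranks: number[]): number[] {
  const countByRank: number[] = new Array(15).fill(0);
  const countBySet: number[] = new Array(5).fill(0);
  ranks.forEach((r: number) => countByRank[r]++);
  countByRank.forEach((c: number) => c && countBySet[c]++);
  return countBySet;
}

// 三张 J(11)加两张 K(13):countBySet[3] === 1 且 countBySet[2] === 1,对应葫芦
console.log(countsFor([11, 11, 11, 13, 13])); // [0, 0, 1, 1, 0]
// 两个 Q(12)、两个 A(1)、一个 5:countBySet[2] === 2 且 countBySet[1] === 1,对应两对
console.log(countsFor([12, 12, 1, 1, 5])); // [0, 1, 2, 0, 0]
```

这样可以在不依赖完整 `handRank()` 的情况下,先单独检验计数逻辑是否正确。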
1.0
poker - # TS实战之扑克牌排序 [在线运行](https://code.juejin.cn/pen/7254739493366333499) 我们用`ts实现扑克牌排序问题`,首先,我们将定义所需的数据类型,然后专注于模式查找算法,该算法有几个有趣的要点。 ## 类型和转换 定义一些我们需要的类型。`Rank`和`Suit`是明显的[联合类型](https://www.typescriptlang.org/docs/handbook/2/everyday-types.html#union-types)。 ```ts type Rank = | 'A' | '2' | '3' | '4' | '5' | '6' | '7' | '8' | '9' | '10' | 'J' | 'Q' | 'K' type Suit = '♥' | '♦' | '♠' | '♣'; ``` 我们将使用`Card`对象进行处理,将rank和suit转换为数字。卡片将用从1(Ace)到13(King)的值表示,花色从1(红心)到4(梅花)。`rankToNumber()`和`suitToNumber()`函数处理从`Rank`和`Suit`值到数字的转换。 ```ts type Card = { rank: number; suit: number }; const rankToNumber = (rank: Rank): number => rank === 'A' ? 1 : rank === 'J' ? 11 : rank === 'Q' ? 12 : rank === 'K' ? 13 : Number(rank); const suitToNumber = (suit: Suit): number => suit === '♥' ? 1 : suit === '♦' ? 2 : suit === '♠' ? 3 : /* suit === "♣" */ 4; ``` ![-](./images/3.png) 这些类型用于内部工作;我们还必须定义手牌检测算法的结果类型。我们需要一个[枚举](https://www.typescriptlang.org/docs/handbook/enums.html)类型来表示手牌的可能值。这些值按照从最低("高牌")到最高("皇家同花顺")的顺序排列。 ```ts enum Hand { HighCard, // 高牌 OnePair, // 一对 TwoPairs, // 两对 ThreeOfAKind, // 三条 Straight, // 顺子 Flush, // 同花 FullHouse, // 葫芦 FourOfAKind, // 四条 StraightFlush, // 同花顺 RoyalFlush //皇家同花顺 } ``` ## 我们有什么手牌? 让我们首先定义我们将要构建的`handRank()`函数。我们的函数将接收一个包含`五张牌的元组`,并返回一个`Hand`结果。 ```ts export function handRank( cardStrings: [string, string, string, string, string] ): Hand { . . . } ``` 由于处理字符串比我们需要的要困难,我们将把牌字符串转换为具有数字`rank`和`suit`值的`Card`对象,以便更容易编写。 ```ts const cards: Card[] = cardStrings.map((str: string) => ({ rank: rankToNumber( str.substring(0, str.length - 1) as Rank ), suit: suitToNumber(str.at(-1) as Suit) })); . . . // 继续... ``` ![-](./images/4.png) 确定玩家手牌的价值的关键在于知道每个等级的牌有多少张,以及我们有多少计数。例如,如果我们有三张J和两张K,J的计数为3,K的计数为2。然后,知道我们有一个计数为三和一个计数为两的计数,我们可以确定我们有一个葫芦。另一个例子:如果我们有两个Q,两个A和一个5,我们会得到两个计数为两和一个计数为一;我们有两对。 生成计数很简单。我们希望A的计数在`countByRank[1]`处,因此我们不会使用`countByRank`数组的初始位置。类似地,花色的计数将位于`countBySuit[1]`到`countBySuit[4]`之间,因此我们也不会使用该数组的初始位置。 ```ts // ...继续 . . . 
const countBySuit = new Array(5).fill(0); const countByRank = new Array(15).fill(0); const countBySet = new Array(5).fill(0); cards.forEach((card: Card) => { countByRank[card.rank]++; countBySuit[card.suit]++; }); countByRank.forEach( (count: number) => count && countBySet[count]++ ); . . . // 继续... ``` 我们不要忘记A可能位于顺子的开头(A-2-3-4-5)或结尾(10-J-Q-K-A)。我们可以通过在K之后复制Aces计数来处理这个问题。 ```ts // ...继续 . . . countByRank[14] = countByRank[1]; . . . // 继续... ``` 现在我们可以开始识别手牌了。我们只需要查看按等级计数即可识别几种手牌: ```ts // ...继续 . . . if (countBySet[4] === 1 && countBySet[1] === 1) return Hand.FourOfAKind; else if (countBySet[3] && countBySet[2] === 1) return Hand.FullHouse; else if (countBySet[3] && countBySet[1] === 2) return Hand.ThreeOfAKind; else if (countBySet[2] === 2 && countBySet[1] === 1) return Hand.TwoPairs; else if (countBySet[2] === 1 && countBySet[1] === 3) return Hand.OnePair; . . . // 继续... ``` 例如,如果有四张相同等级的牌,我们知道玩家将获得“四条”。可能会问:如果`countBySet[4] === 1`,为什么还要测试`countBySet[1] === 1`?如果四张牌的等级相同,应该只有一张其他牌,对吗?答案是[“防御性编程”](https://en.wikipedia.org/wiki/Defensive_programming)——在开发代码时,有时会出现错误,通过在测试中更加具体,有助于排查错误。 上面的情况包括了所有某个等级出现多次的可能性。我们必须处理其他情况,包括顺子、同花和“高牌”。 ```ts // ...继续 . . . else if (countBySet[1] === 5) { if (countByRank.join('').includes('11111')) return !countBySuit.includes(5) ? Hand.Straight : countByRank.slice(10).join('') === '11111' ? Hand.RoyalFlush : Hand.StraightFlush; else { return countBySuit.includes(5) ? Hand.Flush : Hand.HighCard; } } else { throw new Error( 'Unknown hand! This cannot happen! Bad logic!' 
); } ``` 这里我们再次进行防御性编程;即使我们知道我们有五个不同的等级,我们也确保逻辑工作良好,甚至在出现问题时抛出一个`throw`。 我们如何测试顺子?我们应该有五个连续的等级。如果我们查看`countByRank`数组,它应该有五个连续的1,所以通过执行`countByRank.join()`并检查生成的字符串是否包含`11111`,我们可以确定是顺子。 ![-](./images/5.png) 我们必须区分几种情况: * 如果没有五张相同花色的牌,那么它是一个普通的顺子 * 如果所有牌都是相同花色,如果顺子以一张A结束,则为皇家同花顺 * 如果所有牌都是相同花色,但我们不以A结束,那么我们有一个同花顺 如果我们没有顺子,只有两种可能性: * 如果所有牌都是相同花色,我们有一个同花 * 如果不是所有牌都是相同花色,我们有一个“高牌” 完整的函数如下所示: ```ts export function handRank( cardStrings: [string, string, string, string, string] ): Hand { const cards: Card[] = cardStrings.map((str: string) => ({ rank: rankToNumber( str.substring(0, str.length - 1) as Rank ), suit: suitToNumber(str.at(-1) as Suit) })); // We won't use the [0] place in the following arrays const countBySuit = new Array(5).fill(0); const countByRank = new Array(15).fill(0); const countBySet = new Array(5).fill(0); cards.forEach((card: Card) => { countByRank[card.rank]++; countBySuit[card.suit]++; }); countByRank.forEach( (count: number) => count && countBySet[count]++ ); // count the A also as a 14, for straights countByRank[14] = countByRank[1]; if (countBySet[4] === 1 && countBySet[1] === 1) return Hand.FourOfAKind; else if (countBySet[3] && countBySet[2] === 1) return Hand.FullHouse; else if (countBySet[3] && countBySet[1] === 2) return Hand.ThreeOfAKind; else if (countBySet[2] === 2 && countBySet[1] === 1) return Hand.TwoPairs; else if (countBySet[2] === 1 && countBySet[1] === 3) return Hand.OnePair; else if (countBySet[1] === 5) { if (countByRank.join('').includes('11111')) return !countBySuit.includes(5) ? Hand.Straight : countByRank.slice(10).join('') === '11111' ? Hand.RoyalFlush : Hand.StraightFlush; else { /* !countByRank.join("").includes("11111") */ return countBySuit.includes(5) ? Hand.Flush : Hand.HighCard; } } else { throw new Error( 'Unknown hand! This cannot happen! Bad logic!' 
); } } ``` ## 测试代码 ```ts console.log(handRank(['3♥', '5♦', '8♣', 'A♥', '6♠'])); // 0 console.log(handRank(['3♥', '5♦', '8♣', 'A♥', '5♠'])); // 1 console.log(handRank(['3♥', '5♦', '3♣', 'A♥', '5♠'])); // 2 console.log(handRank(['3♥', '5♦', '8♣', '5♥', '5♠'])); // 3 console.log(handRank(['3♥', '2♦', 'A♣', '5♥', '4♠'])); // 4 console.log(handRank(['J♥', '10♦', 'A♣', 'Q♥', 'K♠'])); // 4 console.log(handRank(['3♥', '4♦', '7♣', '5♥', '6♠'])); // 4 console.log(handRank(['3♥', '4♥', '9♥', '5♥', '6♥'])); // 5 console.log(handRank(['3♥', '5♦', '3♣', '5♥', '3♠'])); // 6 console.log(handRank(['3♥', '3♦', '3♣', '5♥', '3♠'])); // 7 console.log(handRank(['3♥', '4♥', '7♥', '5♥', '6♥'])); // 8 console.log(handRank(['K♥', 'Q♥', 'A♥', '10♥', 'J♥'])); // 9 ``` [在线运行](https://code.juejin.cn/pen/7254739493366333499)
non_port
poker ts实战之扑克牌排序 我们用 ts实现扑克牌排序问题 ,首先,我们将定义所需的数据类型,然后专注于模式查找算法,该算法有几个有趣的要点。 类型和转换 定义一些我们需要的类型。 rank 和 suit 是明显的 ts type rank a j q k type suit ♥ ♦ ♠ ♣ 我们将使用 card 对象进行处理,将rank和suit转换为数字。 (ace) (king)的值表示, (红心) (梅花)。 ranktonumber 和 suittonumber 函数处理从 rank 和 suit 值到数字的转换。 ts type card rank number suit number const ranktonumber rank rank number rank a rank j rank q rank k number rank const suittonumber suit suit number suit ♥ suit ♦ suit ♠ suit ♣ images png 这些类型用于内部工作;我们还必须定义手牌检测算法的结果类型。我们需要一个 ts enum hand highcard 高牌 onepair 一对 twopairs 两对 threeofakind 三条 straight 顺子 flush 同花 fullhouse 葫芦 fourofakind 四条 straightflush 同花顺 royalflush 皇家同花顺 我们有什么手牌? 让我们首先定义我们将要构建的 handrank 函数。我们的函数将接收一个包含 五张牌的元组 ,并返回一个 hand 结果。 ts export function handrank cardstrings hand 由于处理字符串比我们需要的要困难,我们将把牌字符串转换为具有数字 rank 和 suit 值的 card 对象,以便更容易编写。 ts const cards card cardstrings map str string rank ranktonumber str substring str length as rank suit suittonumber str at as suit 继续 images png 确定玩家手牌的价值的关键在于知道每个等级的牌有多少张,以及我们有多少计数。例如,如果我们有三张j和两张k, , 。然后,知道我们有一个计数为三和一个计数为两的计数,我们可以确定我们有一个葫芦。另一个例子:如果我们有两个q, ,我们会得到两个计数为两和一个计数为一;我们有两对。 生成计数很简单。我们希望a的计数在 countbyrank 处,因此我们不会使用 countbyrank 数组的初始位置。类似地,花色的计数将位于 countbysuit 到 countbysuit 之间,因此我们也不会使用该数组的初始位置。 ts 继续 const countbysuit new array fill const countbyrank new array fill const countbyset new array fill cards foreach card card countbyrank countbysuit countbyrank foreach count number count countbyset 继续 我们不要忘记a可能位于顺子的开头(a )或结尾( j q k a)。我们可以通过在k之后复制aces计数来处理这个问题。 ts 继续 countbyrank countbyrank 继续 现在我们可以开始识别手牌了。我们只需要查看按等级计数即可识别几种手牌: ts 继续 if count byset countbyset return hand fourofakind else if countbyset countbyset return hand fullhouse else if countbyset countbyset return hand threeofakind else if countbyset countbyset return hand twopairs else if countbyset countbyset return hand onepair 继续 例如,如果有四张相同等级的牌,我们知道玩家将获得“四条”。可能会问:如果 countbyset ,为什么还要测试 countbyset ?如果四张牌的等级相同,应该只有一张其他牌,对吗?答案是 上面的情况包括了所有某个等级出现多次的可能性。我们必须处理其他情况,包括顺子、同花和“高牌”。 ts 继续 else if 
countbyset if countbyrank join includes return countbysuit includes hand straight countbyrank slice join hand royalflush hand straightflush else return countbysuit includes hand flush hand highcard else throw new error unknown hand this cannot happen bad logic 这里我们再次进行防御性编程;即使我们知道我们有五个不同的等级,我们也确保逻辑工作良好,甚至在出现问题时抛出一个 throw 。 我们如何测试顺子?我们应该有五个连续的等级。如果我们查看 countbyrank 数组, ,所以通过执行 countbyrank join 并检查生成的字符串是否包含 ,我们可以确定是顺子。 images png 我们必须区分几种情况: 如果没有五张相同花色的牌,那么它是一个普通的顺子 如果所有牌都是相同花色,如果顺子以一张a结束,则为皇家同花顺 如果所有牌都是相同花色,但我们不以a结束,那么我们有一个同花顺 如果我们没有顺子,只有两种可能性: 如果所有牌都是相同花色,我们有一个同花 如果不是所有牌都是相同花色,我们有一个“高牌” 完整的函数如下所示: ts export function handrank cardstrings hand const cards card cardstrings map str string rank ranktonumber str substring str length as rank suit suittonumber str at as suit we won t use the place in the following arrays const countbysuit new array fill const countbyrank new array fill const countbyset new array fill cards foreach card card countbyrank countbysuit countbyrank foreach count number count countbyset count the a also as a for straights countbyrank countbyrank if countbyset countbyset return hand fourofakind else if countbyset countbyset return hand fullhouse else if countbyset countbyset return hand threeofakind else if countbyset countbyset return hand twopairs else if countbyset countbyset return hand onepair else if countbyset if countbyrank join includes return countbysuit includes hand straight countbyrank slice join hand royalflush hand straightflush else countbyrank join includes return countbysuit includes hand flush hand highcard else throw new error unknown hand this cannot happen bad logic 测试代码 ts console log handrank console log handrank console log handrank console log handrank console log handrank console log handrank console log handrank console log handrank console log handrank console log handrank console log handrank console log handrank
0
928
12,219,309,256
IssuesEvent
2020-05-01 21:23:33
ocaml/opam
https://api.github.com/repos/ocaml/opam
closed
generate success/failure graphs
AREA: PLATFORM AREA: PORTABILITY KIND: FEATURE WISH
@yallop had the good idea of generating % success/failure graphs for builds across the repository. This will likely be an `opam-admin` option to generate the graph, so tracking it here (not a 1.2 blocker)
True
generate success/failure graphs - @yallop had the good idea of generating % success/failure graphs for builds across the repository. This will likely be an `opam-admin` option to generate the graph, so tracking it here (not a 1.2 blocker)
port
generate success failure graphs yallop had the good idea of generating success failure graphs for builds across the repository this will likely be an opam admin option to generate the graph so tracking it here not a blocker
1
448,616
12,954,406,331
IssuesEvent
2020-07-20 03:33:40
GoogleChrome/lighthouse
https://api.github.com/repos/GoogleChrome/lighthouse
closed
Tensorflow model assets import degrading performance scores randomly
needs-priority pending-close question
Hello lighthouse team, I'm trying to load a tfjs model locally for my web app ( in the end ) for a later point of time. All of my page metrics are fine except the fact that either lighthouse is giving me poor performance score or either giving me score of 90+ with note : " page loaded too slowly to finish within the time limit ". Is there a way I can avoid my model import to affect my performance scores since I want to start loading my model as soon as possible after initial load. Link : https://toxicity-detector.web.app/ ( Checked with lighthouse chrome extension ) With regards, Aditya
1.0
Tensorflow model assets import degrading performance scores randomly - Hello lighthouse team, I'm trying to load a tfjs model locally for my web app ( in the end ) for a later point of time. All of my page metrics are fine except the fact that either lighthouse is giving me poor performance score or either giving me score of 90+ with note : " page loaded too slowly to finish within the time limit ". Is there a way I can avoid my model import to affect my performance scores since I want to start loading my model as soon as possible after initial load. Link : https://toxicity-detector.web.app/ ( Checked with lighthouse chrome extension ) With regards, Aditya
non_port
tensorflow model assets import degrading performance scores randomly hello lighthouse team i m trying to load a tfjs model locally for my web app in the end for a later point of time all of my page metrics are fine except the fact that either lighthouse is giving me poor performance score or either giving me score of with note page loaded too slowly to finish within the time limit is there a way i can avoid my model import to affect my performance scores since i want to start loading my model as soon as possible after initial load link checked with lighthouse chrome extension with regards aditya
0
1,978
30,925,044,823
IssuesEvent
2023-08-06 11:35:24
microsoft/winget-cli
https://api.github.com/repos/microsoft/winget-cli
closed
Symlinks are not created for portable installations
Portable
### Brief description of your issue When installing a portable app, the executable is correctly extracted and stored into the correct location, but the symlink referenced in the respective sqlite DB is not created. IMPORTANT: the chosen example is restic, but it is not limited to this tool. A different example would be VirusTotal.YARA ### Steps to reproduce Using the example of restic: 1. Run `winget install restic` ### Expected behavior - restic is installed to the specified portablePackageUserRoot - a symlink `C:\Users\<USER>\AppData\Local\Microsoft\WinGet\Links\restic.exe` is created pointing to the installed file is created - the directory `C:\Users\<USER>\AppData\Local\Microsoft\WinGet\Links\` is added to the users `PATH` It is possible to start the tool using `restic` ### Actual behavior - restic is installed to the specified portablePackageUserRoot - the directory `<portablePackageUserRoot>\restic.restic_Microsoft.Winget.Source_8wekyb3d8bbwe` is added to the users `PATH` This makes the tool not available under the alias `restic`, but only `restic_0.15.2_windows_amd64` which is the executable stored to disk ### Environment ```shell Windows Package Manager v1.5.1881 Copyright (c) Microsoft Corporation. All rights reserved. 
Windows: Windows.Desktop v10.0.19045.3271 System Architecture: X64 Package: Microsoft.DesktopAppInstaller v1.20.1881.0 Winget Directories ------------------------------------------------------------------------------------------------------------------------------- Logs %LOCALAPPDATA%\Packages\Microsoft.DesktopAppInstaller_8wekyb3d8bbwe\LocalState\DiagOutputDir User Settings %LOCALAPPDATA%\Packages\Microsoft.DesktopAppInstaller_8wekyb3d8bbwe\LocalState\settings.json Portable Links Directory (User) %LOCALAPPDATA%\Microsoft\WinGet\Links Portable Links Directory (Machine) C:\Program Files\WinGet\Links Portable Package Root (User) %USERPROFILE%\tools\ Portable Package Root C:\Program Files\WinGet\Packages Portable Package Root (x86) C:\Program Files (x86)\WinGet\Packages Links --------------------------------------------------------------------------- Privacy Statement https://aka.ms/winget-privacy License Agreement https://aka.ms/winget-license Third Party Notices https://aka.ms/winget-3rdPartyNotice Homepage https://aka.ms/winget Windows Store Terms https://www.microsoft.com/en-us/storedocs/terms-of-sale Admin Setting State -------------------------------------------------- LocalManifestFiles Disabled BypassCertificatePinningForMicrosoftStore Disabled InstallerHashOverride Disabled LocalArchiveMalwareScanOverride Disabled ```
True
Symlinks are not created for portable installations - ### Brief description of your issue When installing a portable app, the executable is correctly extracted and stored into the correct location, but the symlink referenced in the respective sqlite DB is not created. IMPORTANT: the chosen example is restic, but it is not limited to this tool. A different example would be VirusTotal.YARA ### Steps to reproduce Using the example of restic: 1. Run `winget install restic` ### Expected behavior - restic is installed to the specified portablePackageUserRoot - a symlink `C:\Users\<USER>\AppData\Local\Microsoft\WinGet\Links\restic.exe` is created pointing to the installed file is created - the directory `C:\Users\<USER>\AppData\Local\Microsoft\WinGet\Links\` is added to the users `PATH` It is possible to start the tool using `restic` ### Actual behavior - restic is installed to the specified portablePackageUserRoot - the directory `<portablePackageUserRoot>\restic.restic_Microsoft.Winget.Source_8wekyb3d8bbwe` is added to the users `PATH` This makes the tool not available under the alias `restic`, but only `restic_0.15.2_windows_amd64` which is the executable stored to disk ### Environment ```shell Windows Package Manager v1.5.1881 Copyright (c) Microsoft Corporation. All rights reserved. 
Windows: Windows.Desktop v10.0.19045.3271 System Architecture: X64 Package: Microsoft.DesktopAppInstaller v1.20.1881.0 Winget Directories ------------------------------------------------------------------------------------------------------------------------------- Logs %LOCALAPPDATA%\Packages\Microsoft.DesktopAppInstaller_8wekyb3d8bbwe\LocalState\DiagOutputDir User Settings %LOCALAPPDATA%\Packages\Microsoft.DesktopAppInstaller_8wekyb3d8bbwe\LocalState\settings.json Portable Links Directory (User) %LOCALAPPDATA%\Microsoft\WinGet\Links Portable Links Directory (Machine) C:\Program Files\WinGet\Links Portable Package Root (User) %USERPROFILE%\tools\ Portable Package Root C:\Program Files\WinGet\Packages Portable Package Root (x86) C:\Program Files (x86)\WinGet\Packages Links --------------------------------------------------------------------------- Privacy Statement https://aka.ms/winget-privacy License Agreement https://aka.ms/winget-license Third Party Notices https://aka.ms/winget-3rdPartyNotice Homepage https://aka.ms/winget Windows Store Terms https://www.microsoft.com/en-us/storedocs/terms-of-sale Admin Setting State -------------------------------------------------- LocalManifestFiles Disabled BypassCertificatePinningForMicrosoftStore Disabled InstallerHashOverride Disabled LocalArchiveMalwareScanOverride Disabled ```
port
symlinks are not created for portable installations brief description of your issue when installing a portable app the executable is correctly extracted and stored into the correct location but the symlink referenced in the respective sqlite db is not created important the chosen example is restic but it is not limited to this tool a different example would be virustotal yara steps to reproduce using the example of restic run winget install restic expected behavior restic is installed to the specified portablepackageuserroot a symlink c users appdata local microsoft winget links restic exe is created pointing to the installed file is created the directory c users appdata local microsoft winget links is added to the users path it is possible to start the tool using restic actual behavior restic is installed to the specified portablepackageuserroot the directory restic restic microsoft winget source is added to the users path this makes the tool not available under the alias restic but only restic windows which is the executable stored to disk environment shell windows package manager copyright c microsoft corporation all rights reserved windows windows desktop system architecture package microsoft desktopappinstaller winget directories logs localappdata packages microsoft desktopappinstaller localstate diagoutputdir user settings localappdata packages microsoft desktopappinstaller localstate settings json portable links directory user localappdata microsoft winget links portable links directory machine c program files winget links portable package root user userprofile tools portable package root c program files winget packages portable package root c program files winget packages links privacy statement license agreement third party notices homepage windows store terms admin setting state localmanifestfiles disabled bypasscertificatepinningformicrosoftstore disabled installerhashoverride disabled localarchivemalwarescanoverride disabled
1
297,842
9,182,303,999
IssuesEvent
2019-03-05 12:30:40
servicemesher/istio-official-translation
https://api.github.com/repos/servicemesher/istio-official-translation
closed
content/docs/examples/advanced-gateways/_index.md
lang/zh pending priority/P0 sync/update version/1.1
文件路径:content/docs/examples/advanced-gateways/_index.md [源码](https://github.com/istio/istio.github.io/tree/master/content/docs/examples/advanced-gateways/_index.md) [网址](https://istio.io//docs/examples/advanced-gateways/_index.htm) ```diff diff --git a/content/docs/examples/advanced-gateways/_index.md b/content/docs/examples/advanced-gateways/_index.md index 364a9901..68a4ed5e 100644 --- a/content/docs/examples/advanced-gateways/_index.md +++ b/content/docs/examples/advanced-gateways/_index.md @@ -1,5 +1,5 @@ --- -title: Edge Traffic Management +title: Advanced Edge Traffic Management description: A variety of advanced examples for managing traffic at the edge (i.e., ingress and egress traffic) of an Istio service mesh. weight: 61 keywords: [ingress, egress, gateway] ```
1.0
content/docs/examples/advanced-gateways/_index.md - 文件路径:content/docs/examples/advanced-gateways/_index.md [源码](https://github.com/istio/istio.github.io/tree/master/content/docs/examples/advanced-gateways/_index.md) [网址](https://istio.io//docs/examples/advanced-gateways/_index.htm) ```diff diff --git a/content/docs/examples/advanced-gateways/_index.md b/content/docs/examples/advanced-gateways/_index.md index 364a9901..68a4ed5e 100644 --- a/content/docs/examples/advanced-gateways/_index.md +++ b/content/docs/examples/advanced-gateways/_index.md @@ -1,5 +1,5 @@ --- -title: Edge Traffic Management +title: Advanced Edge Traffic Management description: A variety of advanced examples for managing traffic at the edge (i.e., ingress and egress traffic) of an Istio service mesh. weight: 61 keywords: [ingress, egress, gateway] ```
non_port
content docs examples advanced gateways index md 文件路径:content docs examples advanced gateways index md diff diff git a content docs examples advanced gateways index md b content docs examples advanced gateways index md index a content docs examples advanced gateways index md b content docs examples advanced gateways index md title edge traffic management title advanced edge traffic management description a variety of advanced examples for managing traffic at the edge i e ingress and egress traffic of an istio service mesh weight keywords
0
512,572
14,900,877,629
IssuesEvent
2021-01-21 15:52:01
department-of-veterans-affairs/va.gov-team
https://api.github.com/repos/department-of-veterans-affairs/va.gov-team
opened
Front end display when Walk ins value = "unknown"
Q121-priority frontend frontend-vamc vsa vsa-facilities
## Issue Description Within the Audiology accordion for [University Drive](https://www.va.gov/pittsburgh-health-care/locations/pittsburgh-va-medical-center-university-drive/), the CMS value for "Walkins accepted" = Unknown. - On Staging, "Walkins accepted" = No. - On Prod, "Walkins accepted" does not appear at all. ### Possible Values for "Walk ins" and display settings If the value for "Walk ins" = "Yes" in CMS, the front end should display "Walk-ins accepted? Yes" If the value for "Walk ins" = "No" in CMS, the front end should display "Walk-ins accepted? No" If the value for "Walk ins" = "Unknown" in CMS, do not display the line "Walk-ins accepted" --- ## Tasks - [ ] Ensure the front end renders appropriately based on logic displayed above. ## Acceptance Criteria - [ ] When the value for "Walk ins" = "Yes" in CMS, the front end displays "Walk-ins accepted? Yes" - [ ] When the value for "Walk ins" = "No" in CMS, the front end displays "Walk-ins accepted? No" - [ ] When the value for "Walk ins" = "Unknown" in CMS, the line "Walk-ins accepted" is not displayed at all. This can be validated using the Audiology accordion for [University Drive](https://www.va.gov/pittsburgh-health-care/locations/pittsburgh-va-medical-center-university-drive/) --- ## How to configure this issue - [ ] **Attached to a Milestone** (when will this be completed?) - [ ] **Attached to an Epic** (what body of work is this a part of?) - [ ] **Labeled with Team** (`product support`, `analytics-insights`, `operations`, `service-design`, `tools-be`, `tools-fe`) - [ ] **Labeled with Practice Area** (`backend`, `frontend`, `devops`, `design`, `research`, `product`, `ia`, `qa`, `analytics`, `contact center`, `research`, `accessibility`, `content`) - [ ] **Labeled with Type** (`bug`, `request`, `discovery`, `documentation`, etc.)
1.0
Front end display when Walk ins value = "unknown" - ## Issue Description Within the Audiology accordion for [University Drive](https://www.va.gov/pittsburgh-health-care/locations/pittsburgh-va-medical-center-university-drive/), the CMS value for "Walkins accepted" = Unknown. - On Staging, "Walkins accepted" = No. - On Prod, "Walkins accepted" does not appear at all. ### Possible Values for "Walk ins" and display settings If the value for "Walk ins" = "Yes" in CMS, the front end should display "Walk-ins accepted? Yes" If the value for "Walk ins" = "No" in CMS, the front end should display "Walk-ins accepted? No" If the value for "Walk ins" = "Unknown" in CMS, do not display the line "Walk-ins accepted" --- ## Tasks - [ ] Ensure the front end renders appropriately based on logic displayed above. ## Acceptance Criteria - [ ] When the value for "Walk ins" = "Yes" in CMS, the front end displays "Walk-ins accepted? Yes" - [ ] When the value for "Walk ins" = "No" in CMS, the front end displays "Walk-ins accepted? No" - [ ] When the value for "Walk ins" = "Unknown" in CMS, the line "Walk-ins accepted" is not displayed at all. This can be validated using the Audiology accordion for [University Drive](https://www.va.gov/pittsburgh-health-care/locations/pittsburgh-va-medical-center-university-drive/) --- ## How to configure this issue - [ ] **Attached to a Milestone** (when will this be completed?) - [ ] **Attached to an Epic** (what body of work is this a part of?) - [ ] **Labeled with Team** (`product support`, `analytics-insights`, `operations`, `service-design`, `tools-be`, `tools-fe`) - [ ] **Labeled with Practice Area** (`backend`, `frontend`, `devops`, `design`, `research`, `product`, `ia`, `qa`, `analytics`, `contact center`, `research`, `accessibility`, `content`) - [ ] **Labeled with Type** (`bug`, `request`, `discovery`, `documentation`, etc.)
non_port
front end display when walk ins value unknown issue description within the audiology accordion for drive the cms value for walkins accepted unknown on staging walkins accepted no on prod walkins accepted does not appear at all possible values for walk ins and display settings if the value for walk ins yes in cms the front end should display walk ins accepted yes if the value for walk ins no in cms the front end should display walk ins accepted no if the value for walk ins unknown in cms do not display the line walk ins accepted tasks ensure the front end renders appropriately based on logic displayed above acceptance criteria when the value for walk ins yes in cms the front end displays walk ins accepted yes when the value for walk ins no in cms the front end displays walk ins accepted no when the value for walk ins unknown in cms the line walk ins accepted is not displayed at all this can be validated using the audiology accordion for drive how to configure this issue attached to a milestone when will this be completed attached to an epic what body of work is this a part of labeled with team product support analytics insights operations service design tools be tools fe labeled with practice area backend frontend devops design research product ia qa analytics contact center research accessibility content labeled with type bug request discovery documentation etc
0
416,660
28,094,312,334
IssuesEvent
2023-03-30 14:51:38
VeryGoodOpenSource/dart_frog
https://api.github.com/repos/VeryGoodOpenSource/dart_frog
opened
docs: Custom Handler for non-file-based Routing
documentation
**Description** The docs do not currently state how to implement a custom router if file-based routing is not desired. This was requested in #467 and #530. I provided an example based on shelf_router in #530, but any other router would be fine too. **Requirements** - [ ] Provide a clear example on how to implement a non-file-based router - [ ] Provide references in Routing/Middleware/Custom Server Entrypoint sections
1.0
docs: Custom Handler for non-file-based Routing - **Description** The docs do not currently state how to implement a custom router if file-based routing is not desired. This was requested in #467 and #530. I provided an example based on shelf_router in #530, but any other router would be fine too. **Requirements** - [ ] Provide a clear example on how to implement a non-file-based router - [ ] Provide references in Routing/Middleware/Custom Server Entrypoint sections
non_port
docs custom handler for non file based routing description the docs do not currently state how to implement a custom router if file based routing is not desired this was requested in and i provided an example based on shelf router in but any other router would be fine too requirements provide a clear example on how to implement a non file based router provide references in routing middleware custom server entrypoint sections
0
1,410
2,544,426,081
IssuesEvent
2015-01-29 09:49:34
cogizz/metamodelsfilter_textcombine
https://api.github.com/repos/cogizz/metamodelsfilter_textcombine
closed
Fatal error: Class 'MetaModels\DcGeneral\Events\Table\FilterSetting\DrawSetting' not found
bug testing
Contao 3.3.5 metamodels/core dev-tng (5a98d965) Habe gerade über den Composer metamodelsfilter_textcombine in der Version dev-tng (b0c17243) installiert. Wenn ich die Attributeinstellungen für meine Filtereinstellungen editieren will, erhalte ich folgende Fehlermeldung: Fatal error: Class 'MetaModels\DcGeneral\Events\Table\FilterSetting\DrawSetting' not found in /.../.../.../.../.../composer/vendor/cogizz/metamodelsfilter_textcombine/src/system/modules/metamodelsfilter_textcombine/MetaModels/DcGeneral/Events/Table/FilterSetting/DrawTextCombineSetting.php on line 34 Was kann ich tun?
1.0
Fatal error: Class 'MetaModels\DcGeneral\Events\Table\FilterSetting\DrawSetting' not found - Contao 3.3.5 metamodels/core dev-tng (5a98d965) Habe gerade über den Composer metamodelsfilter_textcombine in der Version dev-tng (b0c17243) installiert. Wenn ich die Attributeinstellungen für meine Filtereinstellungen editieren will, erhalte ich folgende Fehlermeldung: Fatal error: Class 'MetaModels\DcGeneral\Events\Table\FilterSetting\DrawSetting' not found in /.../.../.../.../.../composer/vendor/cogizz/metamodelsfilter_textcombine/src/system/modules/metamodelsfilter_textcombine/MetaModels/DcGeneral/Events/Table/FilterSetting/DrawTextCombineSetting.php on line 34 Was kann ich tun?
non_port
fatal error class metamodels dcgeneral events table filtersetting drawsetting not found contao metamodels core dev tng habe gerade über den composer metamodelsfilter textcombine in der version dev tng installiert wenn ich die attributeinstellungen für meine filtereinstellungen editieren will erhalte ich folgende fehlermeldung fatal error class metamodels dcgeneral events table filtersetting drawsetting not found in composer vendor cogizz metamodelsfilter textcombine src system modules metamodelsfilter textcombine metamodels dcgeneral events table filtersetting drawtextcombinesetting php on line was kann ich tun
0
827
10,597,104,797
IssuesEvent
2019-10-09 23:17:11
Azure/azure-functions-host
https://api.github.com/repos/Azure/azure-functions-host
reopened
Missing FunctionName in some logs
P1 Supportability
There are a few areas where "FunctionName" is missing from our logs: - where Source contains "Host.Triggers.Timer" - where Source contains "Script.Host" and where Summary contains "updated status: Last=" - where Summary contains 'functions are in error' and Summary contains ".Run"
True
Missing FunctionName in some logs - There are a few areas where "FunctionName" is missing from our logs: - where Source contains "Host.Triggers.Timer" - where Source contains "Script.Host" and where Summary contains "updated status: Last=" - where Summary contains 'functions are in error' and Summary contains ".Run"
port
missing functionname in some logs there are a few areas where functionname is missing from our logs where source contains host triggers timer where source contains script host and where summary contains updated status last where summary contains functions are in error and summary contains run
1
166,857
14,079,884,977
IssuesEvent
2020-11-04 15:26:01
OpenBankingToolkit/openbanking-reference-implementation
https://api.github.com/repos/OpenBankingToolkit/openbanking-reference-implementation
closed
Event Notification API - Documentation
documentation fixed: smiths
## Story As a customer I want to test that `Event Notification API` implementation works properly. ## Acceptance criteria - Documentation created - Following the documentation as a developer I can test the `Event Notification API`. ## Tasks _OPTIONAL_ List of tasks required to implement this story - [x] Update de documentation on docs application service. ### Release Notes Affected App: DOCS Description: Add the Event Notification section to the documentation application. <end release notes>
1.0
Event Notification API - Documentation - ## Story As a customer I want to test that `Event Notification API` implementation works properly. ## Acceptance criteria - Documentation created - Following the documentation as a developer I can test the `Event Notification API`. ## Tasks _OPTIONAL_ List of tasks required to implement this story - [x] Update de documentation on docs application service. ### Release Notes Affected App: DOCS Description: Add the Event Notification section to the documentation application. <end release notes>
non_port
event notification api documentation story as a customer i want to test that event notification api implementation works properly acceptance criteria documentation created following the documentation as a developer i can test the event notification api tasks optional list of tasks required to implement this story update de documentation on docs application service release notes affected app docs description add the event notification section to the documentation application
0
359,480
25,239,833,048
IssuesEvent
2022-11-15 06:10:27
arcus-azure/arcus.templates
https://api.github.com/repos/arcus-azure/arcus.templates
closed
Update README with available project templates
documentation
**Is your feature request related to a problem? Please describe.** Currently, we only list the Web API and the Azure Service Bus worker project templates. **Describe the solution you'd like** List all the available project templates, or point them directly to our feature documentation, as it would become a rather long list if we would list (and maintain) them all in the README file.
1.0
Update README with available project templates - **Is your feature request related to a problem? Please describe.** Currently, we only list the Web API and the Azure Service Bus worker project templates. **Describe the solution you'd like** List all the available project templates, or point them directly to our feature documentation, as it would become a rather long list if we would list (and maintain) them all in the README file.
non_port
update readme with available project templates is your feature request related to a problem please describe currently we only list the web api and the azure service bus worker project templates describe the solution you d like list all the available project templates or point them directly to our feature documentation as it would become a rather long list if we would list and maintain them all in the readme file
0
216,598
16,663,631,080
IssuesEvent
2021-06-06 19:36:11
Kuifje02/vrpy
https://api.github.com/repos/Kuifje02/vrpy
opened
from_numpy_matrix() is ambiguous when edges have weight 0
documentation invalid
When using the [networkx.from_numpy_matrix()](https://networkx.org/documentation/stable/reference/generated/networkx.convert_matrix.from_numpy_matrix.html) method, there is a problem with edges with weight 0. Such edges are not created. Should put a warning in the docs.
1.0
from_numpy_matrix() is ambiguous when edges have weight 0 - When using the [networkx.from_numpy_matrix()](https://networkx.org/documentation/stable/reference/generated/networkx.convert_matrix.from_numpy_matrix.html) method, there is a problem with edges with weight 0. Such edges are not created. Should put a warning in the docs.
non_port
from numpy matrix is ambiguous when edges have weight when using the method there is a problem with edges with weight such edges are not created should put a warning in the docs
0
313,827
26,957,886,098
IssuesEvent
2023-02-08 16:04:15
unifyai/ivy
https://api.github.com/repos/unifyai/ivy
opened
Fix converters.test_from_backend_module
Sub Task Ivy Stateful API Failing Test
| | | |---|---| |tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/4002898723/jobs/6870536606" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a> |torch|<a href="https://github.com/unifyai/ivy/actions/runs/4002898723/jobs/6870536606" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a> |numpy|<a href="https://github.com/unifyai/ivy/actions/runs/4002898723/jobs/6870536606" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a> |jax|<a href="https://github.com/unifyai/ivy/actions/runs/4123979403/jobs/7122701774" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a> <details> <summary>Not found</summary> Not found </details>
1.0
Fix converters.test_from_backend_module - | | | |---|---| |tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/4002898723/jobs/6870536606" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a> |torch|<a href="https://github.com/unifyai/ivy/actions/runs/4002898723/jobs/6870536606" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a> |numpy|<a href="https://github.com/unifyai/ivy/actions/runs/4002898723/jobs/6870536606" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a> |jax|<a href="https://github.com/unifyai/ivy/actions/runs/4123979403/jobs/7122701774" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a> <details> <summary>Not found</summary> Not found </details>
non_port
fix converters test from backend module tensorflow img src torch img src numpy img src jax img src not found not found
0
126,453
26,858,310,216
IssuesEvent
2023-02-03 16:15:07
eclipse-theia/theia
https://api.github.com/repos/eclipse-theia/theia
closed
Backport Tabs API to 2023/1 Community Release
vscode
<!-- Please fill out the following content for a feature request. --> <!-- Please provide a clear description of the feature and any relevant information. --> ### Feature Description: There has been agreement in the community to backport the tabs API to the Jan. 2023 Community Release candidate. See https://github.com/eclipse-theia/theia/pull/12109
1.0
Backport Tabs API to 2023/1 Community Release - <!-- Please fill out the following content for a feature request. --> <!-- Please provide a clear description of the feature and any relevant information. --> ### Feature Description: There has been agreement in the community to backport the tabs API to the Jan. 2023 Community Release candidate. See https://github.com/eclipse-theia/theia/pull/12109
non_port
backport tabs api to community release feature description there has been agreement in the community to backport the tabs api to the jan community release candidate see
0
724
9,708,775,139
IssuesEvent
2019-05-28 08:35:35
microsoft/vscode
https://api.github.com/repos/microsoft/vscode
closed
Unable to write program user data when invoking VS Code Portable in a singularity image
bug install-update portable-mode
**Problem description:** I am trying to install vs code inside a singularity image. I, unfortunately, haven't been able to do this as I keep running into some problems. I first tried to install vs code inside the container using the[ .dep package](https://code.visualstudio.com/Download). However, as inside this container, the *vs code* program doesn't have write permissions to the user data and data folder on the main system it wont start. To solve this I tried using the portable version as this is explained in the [vscode portable documentation](https://code.visualstudio.com/docs/editor/portable). Unfortunately, also this gave me the `user-data and data directories should be writable` error. **System information:** - VSCode Version: 1.34.0 - OS Version: Ubuntu 16.04 (Singularity container) **Steps to Reproduce:** 1. Install singularity according to [this guide](https://www.sylabs.io/guides/3.2/user-guide/). 2. Build a ubuntu 16.04 singularity image by running the following command: `sudo singularity build --sandbox ubuntu1604 docker://ubuntu:16.04` 3. When finished run the shell as sudo by using `sudo singularity run --nv --writable ubuntu1604` 4. Download the lates tar.gz and unzip it: ``` curl -L "https://go.microsoft.com/fwlink/?LinkID=620884" > vscode-stable.tar.gz tar xzf vscode-stable.tar.gz ``` 5. Go into the *VSCode-linux-x64* folder. 6. Create a *user-data* and *data* folder as explained in the [vscode portable documentation](https://code.visualstudio.com/docs/editor/portable). 7. Try to run the VSCode program by executing `sh ./bin/code`. 8. You will now get the following error message: ![image](https://user-images.githubusercontent.com/17570430/58173708-5ab40380-7c9c-11e9-81dc-96abfaf69f64.png) This can be solved by running the shell as sudo but as this has some risks I was wondering if I can solve the error so that I can run vscode as a normal user from within a singularity container. **Extra information:** Does this issue occur when all extensions are disabled?: Yes **--verbose output:** ``Gtk-Message: Failed to load module "appmenu-gtk-module" Gtk-Message: Failed to load module "canberra-gtk-module" Gtk-Message: Failed to load module "canberra-gtk-module" [9043:0522/143121.884272:ERROR:bus.cc(394)] Failed to connect to the bus: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory Gtk-Message: GtkDialog mapped without a transient parent. This is discouraged. [main 2019-05-22T12:31:27.080Z] Error: listen EACCES /run/user/1000/vscode-5f7fbcc5-1.34.0-main.sock at Server.setupListenHandle [as _listen2] (net.js:1313:19) at listenInCluster (net.js:1378:12) at Server.listen (net.js:1477:5) at Promise (/jan/VSCode-linux-x64/resources/app/out/vs/code/electron-main/main.js:184:637) at new Promise (<anonymous>) at Object.t.serve (/jan/VSCode-linux-x64/resources/app/out/vs/code/electron-main/main.js:184:574) at n (/jan/VSCode-linux-x64/resources/app/out/vs/code/electron-main/main.js:490:263) at R (/jan/VSCode-linux-x64/resources/app/out/vs/code/electron-main/main.js:492:559) at l.invokeFunction (/jan/VSCode-linux-x64/resources/app/out/vs/code/electron-main/main.js:221:331) at then (/jan/VSCode-linux-x64/resources/app/out/vs/code/electron-main/main.js:494:347) [main 2019-05-22T12:31:27.084Z] Lifecycle#kill()``
True
Unable to write program user data when invoking VS Code Portable in a singularity image - **Problem description:** I am trying to install vs code inside a singularity image. I, unfortunately, haven't been able to do this as I keep running into some problems. I first tried to install vs code inside the container using the[ .dep package](https://code.visualstudio.com/Download). However, as inside this container, the *vs code* program doesn't have write permissions to the user data and data folder on the main system it wont start. To solve this I tried using the portable version as this is explained in the [vscode portable documentation](https://code.visualstudio.com/docs/editor/portable). Unfortunately, also this gave me the `user-data and data directories should be writable` error. **System information:** - VSCode Version: 1.34.0 - OS Version: Ubuntu 16.04 (Singularity container) **Steps to Reproduce:** 1. Install singularity according to [this guide](https://www.sylabs.io/guides/3.2/user-guide/). 2. Build a ubuntu 16.04 singularity image by running the following command: `sudo singularity build --sandbox ubuntu1604 docker://ubuntu:16.04` 3. When finished run the shell as sudo by using `sudo singularity run --nv --writable ubuntu1604` 4. Download the lates tar.gz and unzip it: ``` curl -L "https://go.microsoft.com/fwlink/?LinkID=620884" > vscode-stable.tar.gz tar xzf vscode-stable.tar.gz ``` 5. Go into the *VSCode-linux-x64* folder. 6. Create a *user-data* and *data* folder as explained in the [vscode portable documentation](https://code.visualstudio.com/docs/editor/portable). 7. Try to run the VSCode program by executing `sh ./bin/code`. 8. You will now get the following error message: ![image](https://user-images.githubusercontent.com/17570430/58173708-5ab40380-7c9c-11e9-81dc-96abfaf69f64.png) This can be solved by running the shell as sudo but as this has some risks I was wondering if I can solve the error so that I can run vscode as a normal user from within a singularity container. **Extra information:** Does this issue occur when all extensions are disabled?: Yes **--verbose output:** ``Gtk-Message: Failed to load module "appmenu-gtk-module" Gtk-Message: Failed to load module "canberra-gtk-module" Gtk-Message: Failed to load module "canberra-gtk-module" [9043:0522/143121.884272:ERROR:bus.cc(394)] Failed to connect to the bus: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory Gtk-Message: GtkDialog mapped without a transient parent. This is discouraged. [main 2019-05-22T12:31:27.080Z] Error: listen EACCES /run/user/1000/vscode-5f7fbcc5-1.34.0-main.sock at Server.setupListenHandle [as _listen2] (net.js:1313:19) at listenInCluster (net.js:1378:12) at Server.listen (net.js:1477:5) at Promise (/jan/VSCode-linux-x64/resources/app/out/vs/code/electron-main/main.js:184:637) at new Promise (<anonymous>) at Object.t.serve (/jan/VSCode-linux-x64/resources/app/out/vs/code/electron-main/main.js:184:574) at n (/jan/VSCode-linux-x64/resources/app/out/vs/code/electron-main/main.js:490:263) at R (/jan/VSCode-linux-x64/resources/app/out/vs/code/electron-main/main.js:492:559) at l.invokeFunction (/jan/VSCode-linux-x64/resources/app/out/vs/code/electron-main/main.js:221:331) at then (/jan/VSCode-linux-x64/resources/app/out/vs/code/electron-main/main.js:494:347) [main 2019-05-22T12:31:27.084Z] Lifecycle#kill()``
port
unable to write program user data when invoking vs code portable in a singularity image problem description i am trying to install vs code inside a singularity image i unfortunately haven t been able to do this as i keep running into some problems i first tried to install vs code inside the container using the however as inside this container the vs code program doesn t have write permissions to the user data and data folder on the main system it wont start to solve this i tried using the portable version as this is explained in the unfortunately also this gave me the user data and data directories should be writable error system information vscode version os version ubuntu singularity container steps to reproduce install singularity according to build a ubuntu singularity image by running the following command sudo singularity build sandbox docker ubuntu when finished run the shell as sudo by using sudo singularity run nv writable download the lates tar gz and unzip it curl l vscode stable tar gz tar xzf vscode stable tar gz go into the vscode linux folder create a user data and data folder as explained in the try to run the vscode program by executing sh bin code you will now get the following error message this can be solved by running the shell as sudo but as this has some risks i was wondering if i can solve the error so that i can run vscode as a normal user from within a singularity container extra information does this issue occur when all extensions are disabled yes verbose output gtk message failed to load module appmenu gtk module gtk message failed to load module canberra gtk module gtk message failed to load module canberra gtk module failed to connect to the bus failed to connect to socket var run dbus system bus socket no such file or directory gtk message gtkdialog mapped without a transient parent this is discouraged error listen eacces run user vscode main sock at server setuplistenhandle net js at listenincluster net js at server listen net js at promise jan vscode linux resources app out vs code electron main main js at new promise at object t serve jan vscode linux resources app out vs code electron main main js at n jan vscode linux resources app out vs code electron main main js at r jan vscode linux resources app out vs code electron main main js at l invokefunction jan vscode linux resources app out vs code electron main main js at then jan vscode linux resources app out vs code electron main main js lifecycle kill
1
253,689
21,699,294,615
IssuesEvent
2022-05-10 00:58:25
damccorm/test-migration-target
https://api.github.com/repos/damccorm/test-migration-target
opened
Enable org.apache.beam.sdk.transforms.GroupByKeyTest$WindowTests.testWindowFnPostMerging
P3 test runner-samza portability-samza
Imported from Jira [BEAM-12886](https://issues.apache.org/jira/browse/BEAM-12886). Original Jira may contain additional context. Reported by: kw2542. This issue has child subcomponents which were not migrated over. See the original Jira for more information.
1.0
Enable org.apache.beam.sdk.transforms.GroupByKeyTest$WindowTests.testWindowFnPostMerging - Imported from Jira [BEAM-12886](https://issues.apache.org/jira/browse/BEAM-12886). Original Jira may contain additional context. Reported by: kw2542. This issue has child subcomponents which were not migrated over. See the original Jira for more information.
non_port
enable org apache beam sdk transforms groupbykeytest windowtests testwindowfnpostmerging imported from jira original jira may contain additional context reported by this issue has child subcomponents which were not migrated over see the original jira for more information
0
636
8,551,440,249
IssuesEvent
2018-11-07 18:03:13
arangodb/arangodb
https://api.github.com/repos/arangodb/arangodb
closed
More descriptive error messages for ArangoSearch
1 Feature 3 ArangoSearch supportability
Using an unsupported analyzer results in the error `Query: AQL: unsupported SEARCH condition (while optimizing plan)`. While there is a more specific warning in the logs, the user usually does not see that and the error message could be more descriptive. Using a scorer with an illegal argument (e.g. `FOR doc IN view SORT BM25(doc.name)` or `SORT BM25({})`), the error message `Scorer function is designed to be used with ArangoSearch view only` is given. This could be more specific, especially as this will probably be a common mistake. Passing an illegal argument to `EXISTS`, e.g. `EXISTS("str")` results in the error message `Filter function is designed to be used with ArangoSearch view only (while optimizing ast)`.
True
More descriptive error messages for ArangoSearch - Using an unsupported analyzer results in the error `Query: AQL: unsupported SEARCH condition (while optimizing plan)`. While there is a more specific warning in the logs, the user usually does not see that and the error message could be more descriptive. Using a scorer with an illegal argument (e.g. `FOR doc IN view SORT BM25(doc.name)` or `SORT BM25({})`), the error message `Scorer function is designed to be used with ArangoSearch view only` is given. This could be more specific, especially as this will probably be a common mistake. Passing an illegal argument to `EXISTS`, e.g. `EXISTS("str")` results in the error message `Filter function is designed to be used with ArangoSearch view only (while optimizing ast)`.
port
more descriptive error messages for arangosearch using an unsupported analyzer results in the error query aql unsupported search condition while optimizing plan while there is a more specific warning in the logs the user usually does not see that and the error message could be more descriptive using a scorer with an illegal argument e g for doc in view sort doc name or sort the error message scorer function is designed to be used with arangosearch view only is given this could be more specific especially as this will probably be a common mistake passing an illegal argument to exists e g exists str results in the error message filter function is designed to be used with arangosearch view only while optimizing ast
1
1,918
30,209,568,167
IssuesEvent
2023-07-05 11:53:05
chapel-lang/chapel
https://api.github.com/repos/chapel-lang/chapel
closed
Chapel does not yet support LLVM 15
area: Compiler type: Portability
### Summary of Problem Since `clang` version 15 has become the main version distributed by the Arch Linux repositories, Chapel has been unable to build since it permits versions between 11 and 14. I manage the Chapel packages for the Arch User Repository, and wanted to check in with the team on the best way to handle this. I optimistically tried to change [this line](https://github.com/chapel-lang/chapel/blob/8ca23c39b7f97e3f1a30d6a5e16242d2559b9ec8/util/chplenv/chpl_llvm.py#L18), but unfortunately I got an error (somewhat deep) in the build process. The main options on my side are: - constraining the package version for `clang` (I hear this is not so simple, but may be good for robustness of the package in the long run) - personally downgrade my `clang` version and put temporary warnings on the package page (I really don't like this option) Build commands: ``` ./configure --prefix=/usr make ``` I'll be investigating the dependency constraints, but if anybody has suggestions that would be great too. ### Configuration Information - Output of `chpl --version`: `<currently erroring, was running 1.30 pre-release>` - Output of `$CHPL_HOME/util/printchplenv --anonymize`: - Back-end compiler and version, e.g. `gcc --version` or `clang --version`: `clang version 15.0.7`
True
Chapel does not yet support LLVM 15 - ### Summary of Problem Since `clang` version 15 has become the main version distributed by the Arch Linux repositories, Chapel has been unable to build since it permits versions between 11 and 14. I manage the Chapel packages for the Arch User Repository, and wanted to check in with the team on the best way to handle this. I optimistically tried to change [this line](https://github.com/chapel-lang/chapel/blob/8ca23c39b7f97e3f1a30d6a5e16242d2559b9ec8/util/chplenv/chpl_llvm.py#L18), but unfortunately I got an error (somewhat deep) in the build process. The main options on my side are: - constraining the package version for `clang` (I hear this is not so simple, but may be good for robustness of the package in the long run) - personally downgrade my `clang` version and put temporary warnings on the package page (I really don't like this option) Build commands: ``` ./configure --prefix=/usr make ``` I'll be investigating the dependency constraints, but if anybody has suggestions that would be great too. ### Configuration Information - Output of `chpl --version`: `<currently erroring, was running 1.30 pre-release>` - Output of `$CHPL_HOME/util/printchplenv --anonymize`: - Back-end compiler and version, e.g. `gcc --version` or `clang --version`: `clang version 15.0.7`
port
chapel does not yet support llvm summary of problem since clang version has become the main version distributed by the arch linux repositories chapel has been unable to build since it permits versions between and i manage the chapel packages for the arch user repository and wanted to check in with the team on the best way to handle this i optimistically tried to change but unfortunately i got an error somewhat deep in the build process the main options on my side are constraining the package version for clang i hear this is not so simple but may be good for robustness of the package in the long run personally downgrade my clang version and put temporary warnings on the package page i really don t like this option build commands configure prefix usr make i ll be investigating the dependency constraints but if anybody has suggestions that would be great too configuration information output of chpl version output of chpl home util printchplenv anonymize back end compiler and version e g gcc version or clang version clang version
1
653,693
21,610,692,062
IssuesEvent
2022-05-04 09:47:17
celo-org/celo-monorepo
https://api.github.com/repos/celo-org/celo-monorepo
closed
Add Forno API Key to ODIS
Priority: P0 Component: ODIS Component: Identity
Forno recently added an API key for rate limiting requests. We need to add API keys for the Combiner and Signers to prevent them from getting rate limited. Release TODO: - [ ] Test Combiner in staging - [ ] Test Signer in staging - [ ] Test Combiner in Alfajores - [ ] Test Signers in Alfajores - [ ] Release Combiner to mainnet - [ ] Release to mainnet cLabs signers - [ ] Release to mainnet partner signers (Note: Let's try to lump this with another change to limit overhead for partner signers, as they just upgraded to 1.1.9)
1.0
Add Forno API Key to ODIS - Forno recently added an API key for rate limiting requests. We need to add API keys for the Combiner and Signers to prevent them from getting rate limited. Release TODO: - [ ] Test Combiner in staging - [ ] Test Signer in staging - [ ] Test Combiner in Alfajores - [ ] Test Signers in Alfajores - [ ] Release Combiner to mainnet - [ ] Release to mainnet cLabs signers - [ ] Release to mainnet partner signers (Note: Let's try to lump this with another change to limit overhead for partner signers, as they just upgraded to 1.1.9)
non_port
add forno api key to odis forno recently added an api key for rate limiting requests we need to add api keys for the combiner and signers to prevent them from getting rate limited release todo test combiner in staging test signer in staging test combiner in alfajores test signers in alfajores release combiner to mainnet release to mainnet clabs signers release to mainnet partner signers note let s try to lump this with another change to limit overhead for partner signers as they just upgraded to
0
171,750
27,172,097,762
IssuesEvent
2023-02-17 20:27:13
dotnet/roslyn
https://api.github.com/repos/dotnet/roslyn
closed
Format align chain method calls in different lines
Area-IDE Feature Request Need Design Review IDE-Formatter
**Version Used**: **Steps to Reproduce**: select the following code and run ctrl + K F in VS ```c# buildCommand .ExecuteWithoutRestore() .Should() .Fail() .And .HaveStdOutContaining("NETSDK1004"); ``` **Expected Behavior**: ```c# buildCommand .ExecuteWithoutRestore() .Should() .Fail() .And .HaveStdOutContaining("NETSDK1004"); ``` **Actual Behavior**: No change
1.0
Format align chain method calls in different lines - **Version Used**: **Steps to Reproduce**: select the following code and run ctrl + K F in VS ```c# buildCommand .ExecuteWithoutRestore() .Should() .Fail() .And .HaveStdOutContaining("NETSDK1004"); ``` **Expected Behavior**: ```c# buildCommand .ExecuteWithoutRestore() .Should() .Fail() .And .HaveStdOutContaining("NETSDK1004"); ``` **Actual Behavior**: No change
non_port
format align chain method calls in different lines version used steps to reproduce select the following code and run ctrl k f in vs c buildcommand executewithoutrestore should fail and havestdoutcontaining expected behavior c buildcommand executewithoutrestore should fail and havestdoutcontaining actual behavior no change
0
241
4,793,268,113
IssuesEvent
2016-10-31 17:42:04
wahern/cqueues
https://api.github.com/repos/wahern/cqueues
closed
clock_getTime() with new Mac OS
packaging/portability
cqueues fails to make on Mac OS. It throws errors about clock_getTime(). I fixed it by removing this chunk of code in: `../src/cqueues.c` ``` #if __APPLE__ #include <time.h> /* struct timespec */ #include <errno.h> /* errno EINVAL */ #include <sys/time.h> /* TIMEVAL_TO_TIMESPEC struct timeval gettimeofday(3) */ #include <mach/mach_time.h> /* mach_timebase_info_data_t mach_timebase_info() mach_absolute_time() */ #define CLOCK_REALTIME 0 #define CLOCK_VIRTUAL 1 #define CLOCK_PROF 2 #define CLOCK_MONOTONIC 3 static mach_timebase_info_data_t clock_timebase = { .numer = 1, .denom = 1, }; /* clock_timebase */ void clock_gettime_init(void) __attribute__((constructor)); void clock_gettime_init(void) { if (mach_timebase_info(&clock_timebase) != KERN_SUCCESS) __builtin_abort(); } /* clock_gettime_init() */ static int clock_gettime(int clockid, struct timespec *ts) { switch (clockid) { case CLOCK_REALTIME: { struct timeval tv; if (0 != gettimeofday(&tv, 0)) return -1; TIMEVAL_TO_TIMESPEC(&tv, ts); return 0; } case CLOCK_MONOTONIC: { unsigned long long abt; abt = mach_absolute_time(); abt = abt * clock_timebase.numer / clock_timebase.denom; ts->tv_sec = abt / 1000000000UL; ts->tv_nsec = abt % 1000000000UL; return 0; } default: errno = EINVAL; return -1; } /* switch() */ } /* clock_gettime() */ #endif /* __APPLE__ */ ```
True
clock_getTime() with new Mac OS - cqueues fails to make on Mac OS. It throws errors about clock_getTime(). I fixed it by removing this chunk of code in: `../src/cqueues.c` ``` #if __APPLE__ #include <time.h> /* struct timespec */ #include <errno.h> /* errno EINVAL */ #include <sys/time.h> /* TIMEVAL_TO_TIMESPEC struct timeval gettimeofday(3) */ #include <mach/mach_time.h> /* mach_timebase_info_data_t mach_timebase_info() mach_absolute_time() */ #define CLOCK_REALTIME 0 #define CLOCK_VIRTUAL 1 #define CLOCK_PROF 2 #define CLOCK_MONOTONIC 3 static mach_timebase_info_data_t clock_timebase = { .numer = 1, .denom = 1, }; /* clock_timebase */ void clock_gettime_init(void) __attribute__((constructor)); void clock_gettime_init(void) { if (mach_timebase_info(&clock_timebase) != KERN_SUCCESS) __builtin_abort(); } /* clock_gettime_init() */ static int clock_gettime(int clockid, struct timespec *ts) { switch (clockid) { case CLOCK_REALTIME: { struct timeval tv; if (0 != gettimeofday(&tv, 0)) return -1; TIMEVAL_TO_TIMESPEC(&tv, ts); return 0; } case CLOCK_MONOTONIC: { unsigned long long abt; abt = mach_absolute_time(); abt = abt * clock_timebase.numer / clock_timebase.denom; ts->tv_sec = abt / 1000000000UL; ts->tv_nsec = abt % 1000000000UL; return 0; } default: errno = EINVAL; return -1; } /* switch() */ } /* clock_gettime() */ #endif /* __APPLE__ */ ```
port
clock gettime with new mac os cqueues fails to make on mac os it throws errors about clock gettime i fixed it by removing this chunk of code in src cqueues c if apple include struct timespec include errno einval include timeval to timespec struct timeval gettimeofday include mach timebase info data t mach timebase info mach absolute time define clock realtime define clock virtual define clock prof define clock monotonic static mach timebase info data t clock timebase numer denom clock timebase void clock gettime init void attribute constructor void clock gettime init void if mach timebase info clock timebase kern success builtin abort clock gettime init static int clock gettime int clockid struct timespec ts switch clockid case clock realtime struct timeval tv if gettimeofday tv return timeval to timespec tv ts return case clock monotonic unsigned long long abt abt mach absolute time abt abt clock timebase numer clock timebase denom ts tv sec abt ts tv nsec abt return default errno einval return switch clock gettime endif apple
1
480
6,963,712,408
IssuesEvent
2017-12-08 18:32:07
usnistgov/hiperc
https://api.github.com/repos/usnistgov/hiperc
closed
rethink BCs
enhancement portability
Use of `fp_t bc[2][2]` is opaque. Try simplifying, without losing generality if possible.
True
rethink BCs - Use of `fp_t bc[2][2]` is opaque. Try simplifying, without losing generality if possible.
port
rethink bcs use of fp t bc is opaque try simplifying without losing generality if possible
1
1,463
21,693,235,094
IssuesEvent
2022-05-09 17:21:36
damccorm/test-migration-target
https://api.github.com/repos/damccorm/test-migration-target
opened
Add support for timers in Spark portable streaming
P3 runner-spark improvement portability-spark
Add support for timely processing (using timers) for streaming on the portable Spark runner. Validates runner tests relying on timers (e.g. UsesTimersInParDo) should pass Imported from Jira [BEAM-10755](https://issues.apache.org/jira/browse/BEAM-10755). Original Jira may contain additional context. Reported by: annaqin.
True
Add support for timers in Spark portable streaming - Add support for timely processing (using timers) for streaming on the portable Spark runner. Validates runner tests relying on timers (e.g. UsesTimersInParDo) should pass Imported from Jira [BEAM-10755](https://issues.apache.org/jira/browse/BEAM-10755). Original Jira may contain additional context. Reported by: annaqin.
port
add support for timers in spark portable streaming add support for timely processing using timers for streaming on the portable spark runner validates runner tests relying on timers e g usestimersinpardo should pass imported from jira original jira may contain additional context reported by annaqin
1
68,365
3,286,721,280
IssuesEvent
2015-10-29 05:22:08
metamaps/metamaps_gen002
https://api.github.com/repos/metamaps/metamaps_gen002
opened
custom metacodes
ruby uservoice priority
Uservoice people have requested the ability to create their own metacodes. This functionality exists for admins, but isn't pretty. We need to discuss this more.
1.0
custom metacodes - Uservoice people have requested the ability to create their own metacodes. This functionality exists for admins, but isn't pretty. We need to discuss this more.
non_port
custom metacodes uservoice people have requested the ability to create their own metacodes this functionality exists for admins but isn t pretty we need to discuss this more
0
1,068
13,675,364,378
IssuesEvent
2020-09-29 12:35:21
openwall/john
https://api.github.com/repos/openwall/john
closed
md5crypt-opencl fails on NVIDIA with CL_INVALID_COMMAND_QUEUE
portability
```
~/work/extern/arch/aur/john-git » john --test --format=md5crypt-opencl
john: /opt/cuda/lib64/libOpenCL.so.1: no version information available (required by john)
Device 0: GeForce GTX TITAN
Benchmarking: md5crypt-opencl, crypt(3) $1$ [MD5 OpenCL]... OpenCL CL_INVALID_COMMAND_QUEUE error in opencl_cryptmd5_fmt_plug.c:227 - Error releasing memory mappings
```

john is built from git ([this AUR package](https://aur.archlinux.org/packages/john-git/), commit 7eeb2bfe1008a8a153daab4a71e9f0c391a17ff7). Build info:

```
john: /opt/cuda/lib64/libOpenCL.so.1: no version information available (required by john)
Version: 1.8.0-jumbo-1-5593-g7eeb2bfe1+
Build: linux-gnu 64-bit AVX-ac OMP
SIMD: AVX, interleaving: MD4:3 MD5:3 SHA1:1 SHA256:1 SHA512:1
System-wide exec: /usr/libexec/john
System-wide home: /usr/share/john
Private home: ~/.john
$JOHN is /usr/share/john/
Format interface version: 14
Max. number of reported tunable costs: 3
Rec file version: REC4
Charset file version: CHR3
CHARSET_MIN: 1 (0x01)
CHARSET_MAX: 255 (0xff)
CHARSET_LENGTH: 24
SALT_HASH_SIZE: 1048576
Max. Markov mode level: 400
Max. Markov mode password length: 30
gcc version: 6.3.1
GNU libc version: 2.25 (loaded: 2.25)
OpenCL headers version: 2.1
Crypto library: OpenSSL
OpenSSL library version: 0100020bf
OpenSSL 1.0.2k 26 Jan 2017
GMP library version: 6.1.2
File locking: fcntl()
fseek(): fseek
ftell(): ftell
fopen(): fopen
memmem(): System's
```

However, [the stable package (1.8.0.jumbo1)](https://www.archlinux.org/packages/community/x86_64/john/) fails this test too.

Output of `john --list=opencl-devices`:

```
john: /opt/cuda/lib64/libOpenCL.so.1: no version information available (required by john)
Platform #0 name: NVIDIA CUDA, version: OpenCL 1.2 CUDA 8.0.0
Device #0 (0) name: GeForce GTX TITAN
Device vendor: NVIDIA Corporation
Device type: GPU (LE)
Device version: OpenCL 1.2 CUDA
Driver version: 378.13 [recommended]
Native vector widths: char 1, short 1, int 1, long 1
Preferred vector width: char 1, short 1, int 1, long 1
Global Memory: 5.0 GB
Global Memory Cache: 224.2 KB
Local Memory: 48.0 KB (Local)
Max memory alloc. size: 1.0 GB
Max clock (MHz): 875
Profiling timer res.: 1000 ns
Max Work Group Size: 1024
Parallel compute cores: 14
CUDA cores: 2688 (14 x 192)
Speed index: 2352000
Warp size: 32
Max. GPRs/work-group: 65536
Compute capability: 3.5 (sm_35)
Kernel exec. timeout: no
NVML id: 0
PCI device topology: 05:00.0
PCI lanes: 16/16
Fan speed: 32%
Temperature: 47°C
Utilization: 0%
```

System is running Arch Linux x86_64 with kernel 4.10.8-1-ARCH and the 378.13-5 nvidia driver package. I noticed a few issues regarding md5crypt-opencl on AMD cards, but none on NVIDIA.
True
md5crypt-opencl fails on NVIDIA with CL_INVALID_COMMAND_QUEUE - ``` ~/work/extern/arch/aur/john-git » john --test --format=md5crypt-opencl john: /opt/cuda/lib64/libOpenCL.so.1: no version information available (required by john) Device 0: GeForce GTX TITAN Benchmarking: md5crypt-opencl, crypt(3) $1$ [MD5 OpenCL]... OpenCL CL_INVALID_COMMAND_QUEUE error in opencl_cryptmd5_fmt_plug.c:227 - Error releasing memory mappings ``` john is built from git ([this AUR package](https://aur.archlinux.org/packages/john-git/), commit 7eeb2bfe1008a8a153daab4a71e9f0c391a17ff7). Build info: ``` john: /opt/cuda/lib64/libOpenCL.so.1: no version information available (required by john) Version: 1.8.0-jumbo-1-5593-g7eeb2bfe1+ Build: linux-gnu 64-bit AVX-ac OMP SIMD: AVX, interleaving: MD4:3 MD5:3 SHA1:1 SHA256:1 SHA512:1 System-wide exec: /usr/libexec/john System-wide home: /usr/share/john Private home: ~/.john $JOHN is /usr/share/john/ Format interface version: 14 Max. number of reported tunable costs: 3 Rec file version: REC4 Charset file version: CHR3 CHARSET_MIN: 1 (0x01) CHARSET_MAX: 255 (0xff) CHARSET_LENGTH: 24 SALT_HASH_SIZE: 1048576 Max. Markov mode level: 400 Max. Markov mode password length: 30 gcc version: 6.3.1 GNU libc version: 2.25 (loaded: 2.25) OpenCL headers version: 2.1 Crypto library: OpenSSL OpenSSL library version: 0100020bf OpenSSL 1.0.2k 26 Jan 2017 GMP library version: 6.1.2 File locking: fcntl() fseek(): fseek ftell(): ftell fopen(): fopen memmem(): System's ``` However, [the stable package (1.8.0.jumbo1)](https://www.archlinux.org/packages/community/x86_64/john/) fails this test too. 
Output of `john --list=opencl-devices`: ``` john: /opt/cuda/lib64/libOpenCL.so.1: no version information available (required by john) Platform #0 name: NVIDIA CUDA, version: OpenCL 1.2 CUDA 8.0.0 Device #0 (0) name: GeForce GTX TITAN Device vendor: NVIDIA Corporation Device type: GPU (LE) Device version: OpenCL 1.2 CUDA Driver version: 378.13 [recommended] Native vector widths: char 1, short 1, int 1, long 1 Preferred vector width: char 1, short 1, int 1, long 1 Global Memory: 5.0 GB Global Memory Cache: 224.2 KB Local Memory: 48.0 KB (Local) Max memory alloc. size: 1.0 GB Max clock (MHz): 875 Profiling timer res.: 1000 ns Max Work Group Size: 1024 Parallel compute cores: 14 CUDA cores: 2688 (14 x 192) Speed index: 2352000 Warp size: 32 Max. GPRs/work-group: 65536 Compute capability: 3.5 (sm_35) Kernel exec. timeout: no NVML id: 0 PCI device topology: 05:00.0 PCI lanes: 16/16 Fan speed: 32% Temperature: 47°C Utilization: 0% ``` System is running Arch Linux x86_64 with kernel 4.10.8-1-ARCH and the 378.13-5 nvidia driver package. I noticed a few issues regarding md5crypt-opencl on AMD cards, but none on NVIDIA.
port
opencl fails on nvidia with cl invalid command queue work extern arch aur john git » john test format opencl john opt cuda libopencl so no version information available required by john device geforce gtx titan benchmarking opencl crypt opencl cl invalid command queue error in opencl fmt plug c error releasing memory mappings john is built from git commit build info john opt cuda libopencl so no version information available required by john version jumbo build linux gnu bit avx ac omp simd avx interleaving system wide exec usr libexec john system wide home usr share john private home john john is usr share john format interface version max number of reported tunable costs rec file version charset file version charset min charset max charset length salt hash size max markov mode level max markov mode password length gcc version gnu libc version loaded opencl headers version crypto library openssl openssl library version openssl jan gmp library version file locking fcntl fseek fseek ftell ftell fopen fopen memmem system s however fails this test too output of john list opencl devices john opt cuda libopencl so no version information available required by john platform name nvidia cuda version opencl cuda device name geforce gtx titan device vendor nvidia corporation device type gpu le device version opencl cuda driver version native vector widths char short int long preferred vector width char short int long global memory gb global memory cache kb local memory kb local max memory alloc size gb max clock mhz profiling timer res ns max work group size parallel compute cores cuda cores x speed index warp size max gprs work group compute capability sm kernel exec timeout no nvml id pci device topology pci lanes fan speed temperature °c utilization system is running arch linux with kernel arch and the nvidia driver package i noticed a few issues regarding opencl on amd cards but none on nvidia
1
400,833
11,781,486,375
IssuesEvent
2020-03-16 22:36:24
LBNL-ETA/BEDES-Manager
https://api.github.com/repos/LBNL-ETA/BEDES-Manager
closed
Update "Download Sample File" to download the correct sample csv file
high priority
In import-csv, "Download Sample File" downloads the previous (with incorrect headers) sample csv file. Fix it to download the latest sample csv file (with the updated headers)
1.0
Update "Download Sample File" to download the correct sample csv file - In import-csv, "Download Sample File" downloads the previous (with incorrect headers) sample csv file. Fix it to download the latest sample csv file (with the updated headers)
non_port
update download sample file to download the correct sample csv file in import csv download sample file downloads the previous with incorrect headers sample csv file fix it to download the latest sample csv file with the updated headers
0
1,343
19,058,578,530
IssuesEvent
2021-11-26 02:19:59
PCSX2/pcsx2
https://api.github.com/repos/PCSX2/pcsx2
closed
Static Analysis of PVS-Studio
Enhancement / Feature Request Portability
Hello, I had the chance to examine PCSX2 project with PVS-Studio Static Analyzer. Github doesn't support attaching Excel files, but here is a link from my Dropbox: https://www.dropbox.com/s/sswkcfqj9g1x6vq/pcsx2_suite_2013.xlsx?dl=0 Again the report of the analysis is in Excel format. It is definitely worth looking at!! Even if some of the reported code mistakes could be false positive, or something that the devs of PCSX2 overlooked. You can get get more information and a detailed explanation about a specific error Codes using the online documentation: http://www.viva64.com/en/Vxxxx replacing "xxxx" with the code number. The good thing about the report, it gives you the file name, line number and a brief message explaining the error, visit viva64 to get the full explanation. I have seen some possible memory leaks in the report or dangerous bit shifting. Note: I didn't do the whole solution of PCSX2 suite, only PCSX2 project. If anybody interested, I could upload that too. There is another static analysis which is free and also good called CppCheck, Again, if anybody is interested, I could upload a report of that too. I hope this will help. Regards, Rebel_X
True
Static Analysis of PVS-Studio - Hello, I had the chance to examine PCSX2 project with PVS-Studio Static Analyzer. Github doesn't support attaching Excel files, but here is a link from my Dropbox: https://www.dropbox.com/s/sswkcfqj9g1x6vq/pcsx2_suite_2013.xlsx?dl=0 Again the report of the analysis is in Excel format. It is definitely worth looking at!! Even if some of the reported code mistakes could be false positive, or something that the devs of PCSX2 overlooked. You can get get more information and a detailed explanation about a specific error Codes using the online documentation: http://www.viva64.com/en/Vxxxx replacing "xxxx" with the code number. The good thing about the report, it gives you the file name, line number and a brief message explaining the error, visit viva64 to get the full explanation. I have seen some possible memory leaks in the report or dangerous bit shifting. Note: I didn't do the whole solution of PCSX2 suite, only PCSX2 project. If anybody interested, I could upload that too. There is another static analysis which is free and also good called CppCheck, Again, if anybody is interested, I could upload a report of that too. I hope this will help. Regards, Rebel_X
port
static analysis of pvs studio hello i had the chance to examine project with pvs studio static analyzer github doesn t support attaching excel files but here is a link from my dropbox again the report of the analysis is in excel format it is definitely worth looking at even if some of the reported code mistakes could be false positive or something that the devs of overlooked you can get get more information and a detailed explanation about a specific error codes using the online documentation replacing xxxx with the code number the good thing about the report it gives you the file name line number and a brief message explaining the error visit to get the full explanation i have seen some possible memory leaks in the report or dangerous bit shifting note i didn t do the whole solution of suite only project if anybody interested i could upload that too there is another static analysis which is free and also good called cppcheck again if anybody is interested i could upload a report of that too i hope this will help regards rebel x
1
1,940
30,512,337,249
IssuesEvent
2023-07-18 22:05:48
alcionai/corso
https://api.github.com/repos/alcionai/corso
opened
Add service level isolation
supportability
### What happened?

While getting user info during backup, we try to [discover](https://github.com/alcionai/corso/blob/22f990a709b996a1f1783ec81ebb909430a79111/src/pkg/services/m365/api/users.go#L174C18-L174C18) all services which are enabled for the user. This approach has potential drawbacks. For example, assume that the user intends to an exchange backup. The backup may fail during service discovery if the user does not have a onedrive or if corso doesn't have file permissions. While we do handle such scenarios gracefully, we have noticed that graph api may change error messages on us, leading to backup failures.

Proposed changes:

1. Corso should only discover & enable requested services. If the user is asking to do exchange backups, we should not attempt onedrive/sharepoint discovery.
2. This can be taken to another level by forcing isolation within a service (e.g. calendar/mail/contacts in exchange).

### Corso Version?

Corso v0.10.0

### Where are you running Corso?

Linux

### Relevant log output

_No response_
True
Add service level isolation - ### What happened? While getting user info during backup, we try to [discover](https://github.com/alcionai/corso/blob/22f990a709b996a1f1783ec81ebb909430a79111/src/pkg/services/m365/api/users.go#L174C18-L174C18) all services which are enabled for the user. This approach has potential drawbacks. For example, assume that the user intends to an exchange backup. The backup may fail during service discovery if the user does not have a onedrive or if corso doesn't have file permissions. While we do handle such scenarios gracefully, we have noticed that graph api may change error messages on us, leading to backup failures. Proposed changes: 1. Corso should only discover & enable requested services. If the user is asking to do exchange backups, we should not attempt onedrive/sharepoint discovery. 2. This can be taken to another level by forcing isolation within a service (e.g. calendar/mail/contacts in exchange). ### Corso Version? Corso v0.10.0 ### Where are you running Corso? Linux ### Relevant log output _No response_
port
add service level isolation what happened while getting user info during backup we try to all services which are enabled for the user this approach has potential drawbacks for example assume that the user intends to an exchange backup the backup may fail during service discovery if the user does not have a onedrive or if corso doesn t have file permissions while we do handle such scenarios gracefully we have noticed that graph api may change error messages on us leading to backup failures proposed changes corso should only discover enable requested services if the user is asking to do exchange backups we should not attempt onedrive sharepoint discovery this can be taken to another level by forcing isolation within a service e g calendar mail contacts in exchange corso version corso where are you running corso linux relevant log output no response
1
1,779
26,174,511,338
IssuesEvent
2023-01-02 07:52:38
primefaces/primeng
https://api.github.com/repos/primefaces/primeng
closed
Tab key in p-dialog with p-InputNumber
Type: Bug LTS-PORTABLE
[x] bug report => Search github for a similar issue or PR before submitting
[ ] feature request => Please check if request is not on the roadmap already https://github.com/primefaces/primeng/wiki/Roadmap
[ ] support request => Please do not submit support request here, instead see http://forum.primefaces.org/viewforum.php?f=35

**Plunkr Case (Bug Reports)**
https://stackblitz.com/edit/github-dialog-tab?embed=1&file=src/app/app.component.html

**Current behavior**
* Click Button Show
* Set cursor on first input field
* Move cursor by pressing tab or shift+tab key
* Try to override or to correct values of the input fields

**Expected behavior**
The whole value of a input should be selected in a p-dialog, easier to override

* **Angular version:** 10.X
* **PrimeNG version:** 10.0.2 (Possibly any version)
* **Browser:** [all]
True
Tab key in p-dialog with p-InputNumber - [x] bug report => Search github for a similar issue or PR before submitting [ ] feature request => Please check if request is not on the roadmap already https://github.com/primefaces/primeng/wiki/Roadmap [ ] support request => Please do not submit support request here, instead see http://forum.primefaces.org/viewforum.php?f=35 **Plunkr Case (Bug Reports)** https://stackblitz.com/edit/github-dialog-tab?embed=1&file=src/app/app.component.html **Current behavior** * Click Button Show * Set cursor on first input field * Move cursor by pressing tab or shift+tab key * Try to override or to correct values of the input fields **Expected behavior** The whole value of a input should be selected in a p-dialog, easier to override * **Angular version:** 10.X * **PrimeNG version:** 10.0.2 (Possibly any version) * **Browser:** [all]
port
tab key in p dialog with p inputnumber bug report search github for a similar issue or pr before submitting feature request please check if request is not on the roadmap already support request please do not submit support request here instead see plunkr case bug reports current behavior click button show set cursor on first input field move cursor by pressing tab or shift tab key try to override or to correct values of the input fields expected behavior the whole value of a input should be selected in a p dialog easier to override angular version x primeng version possibly any version browser
1
1,467
21,694,545,716
IssuesEvent
2022-05-09 18:41:11
damccorm/test-migration-target
https://api.github.com/repos/damccorm/test-migration-target
opened
Rework dependency structure of Flink job server jar
P3 improvement runner-flink portability-flink
Enabling the strict dependency checker (BEAM-10961) revealed that we are unnecessarily making :runners:flink:1.x a compile dependency of :runners:flink:1.x:job-server. :runners:flink:1.x is not needed at compile time at all, so it can probably be a runtimeOnly dependency instead. Imported from Jira [BEAM-11664](https://issues.apache.org/jira/browse/BEAM-11664). Original Jira may contain additional context. Reported by: ibzib.
True
Rework dependency structure of Flink job server jar - Enabling the strict dependency checker (BEAM-10961) revealed that we are unnecessarily making :runners:flink:1.x a compile dependency of :runners:flink:1.x:job-server. :runners:flink:1.x is not needed at compile time at all, so it can probably be a runtimeOnly dependency instead. Imported from Jira [BEAM-11664](https://issues.apache.org/jira/browse/BEAM-11664). Original Jira may contain additional context. Reported by: ibzib.
port
rework dependency structure of flink job server jar enabling the strict dependency checker beam revealed that we are unnecessarily making runners flink x a compile dependency of runners flink x job server runners flink x is not needed at compile time at all so it can probably be a runtimeonly dependency instead imported from jira original jira may contain additional context reported by ibzib
1
1,933
30,347,684,811
IssuesEvent
2023-07-11 16:31:11
MicrosoftDocs/azure-docs
https://api.github.com/repos/MicrosoftDocs/azure-docs
closed
Need to remove section "Memory dump collection"
azure-supportability/svc triaged assigned-to-author doc-enhancement Pri1
[Enter feedback here] The section pertaining to "Memory dump collection" needs to be removed in this section. We are reviewing this information and will update this section at a later time.

---
#### Document Details

⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*

* ID: 43a60f94-0826-8cdf-04b3-c5f20d299720
* Version Independent ID: 7d1dcfe7-4e68-6b98-1003-9154e7a0b22d
* Content: [How to create an Azure support request - Azure supportability](https://learn.microsoft.com/en-us/azure/azure-portal/supportability/how-to-create-azure-support-request#memory-dump-collection)
* Content Source: [articles/azure-portal/supportability/how-to-create-azure-support-request.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/azure-portal/supportability/how-to-create-azure-support-request.md)
* Service: **azure-supportability**
* GitHub Login: @JnHs
* Microsoft Alias: **jenhayes**
True
Need to remove section "Memory dump collection" - [Enter feedback here] The section pertaining to "Memory dump collection" needs to be removed in this section. We are reviewing this information and will update this section at a later time. --- #### Document Details ⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.* * ID: 43a60f94-0826-8cdf-04b3-c5f20d299720 * Version Independent ID: 7d1dcfe7-4e68-6b98-1003-9154e7a0b22d * Content: [How to create an Azure support request - Azure supportability](https://learn.microsoft.com/en-us/azure/azure-portal/supportability/how-to-create-azure-support-request#memory-dump-collection) * Content Source: [articles/azure-portal/supportability/how-to-create-azure-support-request.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/azure-portal/supportability/how-to-create-azure-support-request.md) * Service: **azure-supportability** * GitHub Login: @JnHs * Microsoft Alias: **jenhayes**
port
need to remove section memory dump collection the section pertaining to memory dump collection needs to be removed in this section we are reviewing this information and will update this section at a later time document details ⚠ do not edit this section it is required for learn microsoft com ➟ github issue linking id version independent id content content source service azure supportability github login jnhs microsoft alias jenhayes
1
1,493
6,122,924,471
IssuesEvent
2017-06-23 02:02:42
Endogix/WebFormWeaver
https://api.github.com/repos/Endogix/WebFormWeaver
opened
Export form structure
architecture business logic feature presentation logic
Allow the user to export the form structure when they are finished editing the form, through a button near the bottom. Exporting will show a modal with the code - as optionally either HTML code, JSON, or XML. There should also be the option to specify an endpoint that this script can connect to through AJAX to save to a server, with another option to specify a function to call when the connection is complete (most likely to be a redirect function).

## Acceptance criteria

User should be able to:

- [ ] Export the form as HTML code
- [ ] Export the form as a JSON string
- [ ] Export the form as an XML string
- [ ] Export the form to an endpoint through AJAX (either JSON or XML)
- [ ] Configure a function in the options to call when then connection to the endpoint is complete
1.0
Export form structure - Allow the user to export the form structure when they are finished editing the form, through a button near the bottom. Exporting will show a modal with the code - as optionally either HTML code, JSON, or XML. There should also be the option to specify an endpoint that this script can connect to through AJAX to save to a server, with another option to specify a function to call when the connection is complete (most likely to be a redirect function). ## Acceptance criteria User should be able to: - [ ] Export the form as HTML code - [ ] Export the form as a JSON string - [ ] Export the form as an XML string - [ ] Export the form to an endpoint through AJAX (either JSON or XML) - [ ] Configure a function in the options to call when then connection to the endpoint is complete
non_port
export form structure allow the user to export the form structure when they are finished editing the form through a button near the bottom exporting will show a modal with the code as optionally either html code json or xml there should also be the option to specify an endpoint that this script can connect to through ajax to save to a server with another option to specify a function to call when the connection is complete most likely to be a redirect function acceptance criteria user should be able to export the form as html code export the form as a json string export the form as an xml string export the form to an endpoint through ajax either json or xml configure a function in the options to call when then connection to the endpoint is complete
0
415
6,575,568,093
IssuesEvent
2017-09-11 16:31:30
openucx/ucx
https://api.github.com/repos/openucx/ucx
closed
Compilation failure with clang 3.6.1
bug portability
> ./configure --prefix=$PWD/inst --disable-numa CC=clang && make -Bj

breaks down the compilation on master branch with the following output:

```
  CC       async/libucs_la-pipe.lo
  CC       async/libucs_la-thread.lo
  CC       config/libucs_la-global_opts.lo
config/global_opts.c:32:30: error: initializer overrides prior initialization of this subobject [-Werror,-Winitializer-overrides]
    .stats_dest = "",
                  ^~
config/global_opts.c:29:30: note: previous initialization is here
    .stats_dest = "",
                  ^~
config/global_opts.c:34:30: error: initializer overrides prior initialization of this subobject [-Werror,-Winitializer-overrides]
    .memtrack_dest = "",
                     ^~
config/global_opts.c:31:30: note: previous initialization is here
    .memtrack_dest = "",
                     ^~
2 errors generated.
make[2]: *** [config/libucs_la-global_opts.lo] Error 1
```

and

```
wireup/wireup.c:548:24: error: equality comparison with extraneous parentheses [-Werror,-Wparentheses-equality]
    if ((ep->cfg_index == new_cfg_index)) {
        ~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~
wireup/wireup.c:548:24: note: remove extraneous parentheses around the comparison to silence this warning
    if ((ep->cfg_index == new_cfg_index)) {
        ~              ^                ~
wireup/wireup.c:548:24: note: use '=' to turn this equality comparison into an assignment
    if ((ep->cfg_index == new_cfg_index)) {
```
True
Compilation failure with clang 3.6.1 - > ./configure --prefix=$PWD/inst --disable-numa CC=clang && make -Bj breaks down the compilation on master branch with the following output: ``` CC async/libucs_la-pipe.lo CC async/libucs_la-thread.lo CC config/libucs_la-global_opts.lo config/global_opts.c:32:30: error: initializer overrides prior initialization of this subobject [-Werror,-Winitializer-overrides] .stats_dest = "", ^~ config/global_opts.c:29:30: note: previous initialization is here .stats_dest = "", ^~ config/global_opts.c:34:30: error: initializer overrides prior initialization of this subobject [-Werror,-Winitializer-overrides] .memtrack_dest = "", ^~ config/global_opts.c:31:30: note: previous initialization is here .memtrack_dest = "", ^~ 2 errors generated. make[2]: *** [config/libucs_la-global_opts.lo] Error 1 ``` and ``` wireup/wireup.c:548:24: error: equality comparison with extraneous parentheses [-Werror,-Wparentheses-equality] if ((ep->cfg_index == new_cfg_index)) { ~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~ wireup/wireup.c:548:24: note: remove extraneous parentheses around the comparison to silence this warning if ((ep->cfg_index == new_cfg_index)) { ~ ^ ~ wireup/wireup.c:548:24: note: use '=' to turn this equality comparison into an assignment if ((ep->cfg_index == new_cfg_index)) { ```
port
compilation failure with clang configure prefix pwd inst disable numa cc clang make bj breaks down the compilation on master branch with the following output cc async libucs la pipe lo cc async libucs la thread lo cc config libucs la global opts lo config global opts c error initializer overrides prior initialization of this subobject stats dest config global opts c note previous initialization is here stats dest config global opts c error initializer overrides prior initialization of this subobject memtrack dest config global opts c note previous initialization is here memtrack dest errors generated make error and wireup wireup c error equality comparison with extraneous parentheses if ep cfg index new cfg index wireup wireup c note remove extraneous parentheses around the comparison to silence this warning if ep cfg index new cfg index wireup wireup c note use to turn this equality comparison into an assignment if ep cfg index new cfg index
1
1,654
23,804,439,341
IssuesEvent
2022-09-03 20:26:06
systemd/systemd
https://api.github.com/repos/systemd/systemd
reopened
TEST-29-PORTABLE is flaky under sanitizers
bug 🐛 tests portabled
### systemd version the issue has been seen with latest main ### Used distribution Arch Linux ### Linux kernel version used _No response_ ### CPU architectures issue was seen on _No response_ ### Component systemd-portabled, tests ### Expected behaviour you didn't see TEST-29-PORTABLE should pass reliably(ish). ### Unexpected behaviour you saw Recently I noticed an uptrend in TEST-29-PORTABLE related fails, mostly concentrated around failing `minimal-app0.service`: ``` [ 22.571421] systemd[1]: Starting minimal-app0.service... [ 22.653735] systemd[375]: Allocating context for crypt device /usr/share/minimal_0.verity. [ 22.654544] systemd[376]: Trying to open and read device /usr/share/minimal_0.verity with direct-io. [ 22.654817] systemd[375]: Trying to open and read device /usr/share/minimal_0.verity with direct-io. [ 22.657815] systemd[376]: Initialising device-mapper backend library. [ 22.657977] systemd[376]: Trying to load VERITY crypt type from device /usr/share/minimal_0.verity. [ 22.658121] systemd[1]: Sent message type=signal sender=n/a destination=n/a path=/org/freedesktop/systemd1/job/351 interface=org.freedesktop.DBus.Properties member=PropertiesChanged cookie=937 reply_cookie=0 signature=sa{sv}as error-name=n/a error-message=n/a [ 22.658283] systemd[376]: Crypto backend (OpenSSL 1.1.1q 5 Jul 2022) initialized in cryptsetup library version 2.4.3. [ 22.658407] systemd[376]: Detected kernel Linux 5.18.12-arch1-1 x86_64. 
[ 22.821073] kernel: device-mapper: uevent: version 1.0.3 [ 22.821315] kernel: device-mapper: ioctl: 4.46.0-ioctl (2022-02-22) initialised: [email protected] [ 22.825921] kernel: loop2: detected capacity change from 0 to 184 [ 22.827129] kernel: loop3: detected capacity change from 0 to 184 [ 22.767974] systemd[1]: sys-devices-virtual-block-dm\x2d0.device: Changed dead -> plugged [ 22.768109] systemd[1]: Sent message type=signal sender=n/a destination=n/a path=/org/freedesktop/systemd1 interface=org.freedesktop.systemd1.Manager member=UnitNew cookie=1060 reply_cookie=0 signature=so error-name=n/a error-message=n/a [ 22.768245] systemd[1]: Sent message type=signal sender=n/a destination=n/a path=/org/freedesktop/systemd1 interface=org.freedesktop.systemd1.Manager member=UnitNew cookie=1061 reply_cookie=0 signature=so error-name=n/a error-message=n/a [ 22.768369] systemd[1]: dev-disk-by\x2ddiskseq-17.device: Job 372 dev-disk-by\x2ddiskseq-17.device/nop finished, result=done [ 22.768522] systemd[1]: Sent message type=signal sender=n/a destination=n/a path=/org/freedesktop/systemd1 interface=org.freedesktop.systemd1.Manager member=JobRemoved cookie=1062 reply_cookie=0 signature=uoss error-name=n/a error-message=n/a [ 22.768677] systemd[1]: dev-loop3.device: Job 373 dev-loop3.device/nop finished, result=done [ 22.768808] systemd[1]: Sent message type=signal sender=n/a destination=n/a path=/org/freedesktop/systemd1 interface=org.freedesktop.systemd1.Manager member=JobRemoved cookie=1063 reply_cookie=0 signature=uoss error-name=n/a error-message=n/a ... 
[ 22.770481] systemd[1]: Sent message type=signal sender=n/a destination=n/a path=/org/freedesktop/systemd1/unit/dev_2ddisk_2dby_5cx2ddiskseq_2d17_2edevice interface=org.freedesktop.DBus.Properties member=PropertiesChanged cookie=1070 reply_cookie=0 signature=sa{sv}as error-name=n/a error-message=n/a [ 22.770637] systemd[376]: device-mapper: create ioctl on f01cff8db2a0f9ea70a3261ea6a34050d9377fa64831d98427569e8d94cfa567-verity CRYPT-VERITY-38f962cef7174fb4b132c46a4a8a6aa3-f01cff8db2a0f9ea70a3261ea6a34050d9377fa64831d98427569e8d94cfa567-verity failed: Device or resource busy [ 22.770805] systemd[376]: Udev cookie 0xd4dd849 (semid 1) decremented to 1 ... [ 22.973176] systemd[375]: Applying namespace mount on /run/systemd/unit-root/run/host/os-release [ 22.973301] systemd[376]: Successfully mounted /run/systemd/inaccessible/dir to /run/systemd/unit-root/run/credentials [ 22.973447] systemd[376]: Applying namespace mount on /run/systemd/unit-root/run/host/os-release [ 22.973587] systemd[375]: Followed source symlinks /etc/os-release → /usr/lib/os-release. [ 22.973735] systemd[375]: Bind-mounting /usr/lib/os-release on /run/systemd/unit-root/run/host/os-release (MS_BIND|MS_REC "")... [ 22.973861] systemd[375]: Failed to mount /usr/lib/os-release (type n/a) on /run/systemd/unit-root/run/host/os-release (MS_BIND|MS_REC ""): No such file or directory [ 22.973989] systemd[376]: Followed source symlinks /etc/os-release → /usr/lib/os-release. [ 22.974118] systemd[376]: Bind-mounting /usr/lib/os-release on /run/systemd/unit-root/run/host/os-release (MS_BIND|MS_REC "")... [ 22.974288] systemd[376]: Failed to mount /usr/lib/os-release (type n/a) on /run/systemd/unit-root/run/host/os-release (MS_BIND|MS_REC ""): No such file or directory [ 22.974418] systemd[375]: Bind-mounting /usr/lib/os-release on /run/systemd/unit-root/run/host/os-release (MS_BIND|MS_REC "")... 
[ 22.974551] systemd[376]: Failed to create destination mount point node '/run/systemd/unit-root/run/host/os-release': Operation not permitted [ 22.974701] systemd[375]: Successfully mounted /usr/lib/os-release to /run/systemd/unit-root/run/host/os-release [ 22.974831] systemd[376]: Failed to mount /usr/lib/os-release to /run/systemd/unit-root/run/host/os-release: No such file or directory [ 22.974965] systemd[375]: Applying namespace mount on /run/systemd/unit-root/run/systemd/incoming [ 22.975088] systemd[375]: Followed source symlinks /run/systemd/propagate/minimal-app0-foo.service → /run/systemd/propagate/minimal-app0-foo.service. [ 22.975220] systemd[376]: Releasing crypt device /dev/loop3 context. [ 22.975352] systemd[375]: Bind-mounting /run/systemd/propagate/minimal-app0-foo.service on /run/systemd/unit-root/run/systemd/incoming (MS_BIND "")... [ 22.975474] systemd[376]: Releasing device-mapper backend. [ 22.975600] systemd[375]: Successfully mounted /run/systemd/propagate/minimal-app0-foo.service to /run/systemd/unit-root/run/systemd/incoming [ 22.975736] systemd[375]: Applying namespace mount on /run/systemd/unit-root/sys [ 22.975856] systemd[375]: Bind-mounting /sys on /run/systemd/unit-root/sys (MS_BIND|MS_REC "")... [ 22.975973] systemd[375]: Applying namespace mount on /run/systemd/unit-root/tmp [ 22.976103] systemd[375]: Bind-mounting /tmp/systemd-private-aa79f66eaddb4df58ede18f5c18ba3bf-minimal-app0-foo.service-ANJ00G/tmp on /run/systemd/unit-root/tmp (MS_BIND|MS_REC "")... [ 22.976268] systemd[375]: Successfully mounted /tmp/systemd-private-aa79f66eaddb4df58ede18f5c18ba3bf-minimal-app0-foo.service-ANJ00G/tmp to /run/systemd/unit-root/tmp [ 22.976466] systemd[375]: Applying namespace mount on /run/systemd/unit-root/var/tmp [ 22.976601] systemd[375]: Bind-mounting /var/tmp/systemd-private-aa79f66eaddb4df58ede18f5c18ba3bf-minimal-app0-foo.service-bQZ4Ew/tmp on /run/systemd/unit-root/var/tmp (MS_BIND|MS_REC "")... 
[ 22.976764] systemd[375]: Successfully mounted /var/tmp/systemd-private-aa79f66eaddb4df58ede18f5c18ba3bf-minimal-app0-foo.service-bQZ4Ew/tmp to /run/systemd/unit-root/var/tmp [ 22.976881] systemd[375]: Remounted /run/systemd/unit-root/etc/machine-id. [ 22.977019] systemd[375]: Remounted /run/systemd/unit-root/etc/resolv.conf. [ 22.977137] systemd[375]: Remounted /run/systemd/unit-root/run/credentials. [ 22.977256] systemd[375]: Remounted /run/systemd/unit-root/run/host/os-release. [ 22.977380] systemd[375]: Remounted /run/systemd/unit-root/run/systemd/incoming. [ 22.978290] systemd[375]: Remounted /run/systemd/unit-root/proc. [ 22.979550] systemd[375]: Remounted /run/systemd/unit-root/run/credentials. [ 22.981318] systemd[375]: Remounted /run/systemd/unit-root/sys. [ 22.981501] systemd[375]: Remounted /run/systemd/unit-root/sys/fs/bpf. [ 22.981646] systemd[375]: Remounted /run/systemd/unit-root/sys/fs/pstore. [ 22.981786] systemd[375]: Remounted /run/systemd/unit-root/sys/kernel/config. [ 22.981921] systemd[375]: Remounted /run/systemd/unit-root/sys/fs/cgroup. [ 22.982050] systemd[375]: Remounted /run/systemd/unit-root/sys/kernel/tracing. [ 22.982180] systemd[375]: Remounted /run/systemd/unit-root/sys/kernel/security. [ 22.982314] systemd[375]: Remounted /run/systemd/unit-root/sys/kernel/debug. [ 22.982552] systemd[375]: Releasing crypt device /usr/share/minimal_0.verity context. [ 22.982719] systemd[375]: Releasing device-mapper backend. [ 22.982846] systemd[375]: Closing read only fd for /usr/share/minimal_0.verity. [ 22.983047] systemd[375]: Closed loop /dev/loop3 (/usr/share/minimal_0.verity). 
[ 22.986724] systemd[375]: minimal-app0-foo.service: Executing: cat /usr/lib/os-release [ 23.000841] systemd[376]: minimal-app0.service: Failed to set up mount namespacing: /run/systemd/unit-root/run/host/os-release: No such file or directory [ 23.001204] systemd[376]: minimal-app0.service: Failed at step NAMESPACE spawning cat: No such file or directory [ 23.009991] systemd[1]: systemd-journald.service: Received EPOLLHUP on stored fd 43 (stored), closing. [ 23.010264] systemd[1]: minimal-app0.service: Control group is empty. [ 23.010423] systemd[1]: Received SIGCHLD from PID 376 ((cat)). [ 23.010650] systemd[1]: Child 376 ((cat)) died (code=exited, status=226/NAMESPACE) [ 23.010804] systemd[1]: minimal-app0.service: Child 376 belongs to minimal-app0.service. [ 23.011028] systemd[1]: minimal-app0.service: Control process exited, code=exited, status=226/NAMESPACE [ 23.011234] systemd[1]: minimal-app0.service: Got final SIGCHLD for state start-pre. [ 23.011431] systemd[1]: minimal-app0.service: Failed with result 'exit-code'. ``` Full journals: * [systemd.journal.tar.gz](https://github.com/systemd/systemd/files/9213531/systemd.journal.tar.gz) * [systemd.journal.tar.gz](https://github.com/systemd/systemd/files/9213553/systemd.journal.tar.gz) So far I've seen this only in the CentOS CI sanitizer run, so I'm opening this as (not only) a tracker while I dig deeper. ### Steps to reproduce the problem _No response_ ### Additional program output to the terminal or log subsystem illustrating the issue _No response_
True
TEST-29-PORTABLE is flaky under sanitizers - ### systemd version the issue has been seen with latest main ### Used distribution Arch Linux ### Linux kernel version used _No response_ ### CPU architectures issue was seen on _No response_ ### Component systemd-portabled, tests ### Expected behaviour you didn't see TEST-29-PORTABLE should pass reliably(ish). ### Unexpected behaviour you saw Recently I noticed an uptrend in TEST-29-PORTABLE related fails, mostly concentrated around failing `minimal-app0.service`: ``` [ 22.571421] systemd[1]: Starting minimal-app0.service... [ 22.653735] systemd[375]: Allocating context for crypt device /usr/share/minimal_0.verity. [ 22.654544] systemd[376]: Trying to open and read device /usr/share/minimal_0.verity with direct-io. [ 22.654817] systemd[375]: Trying to open and read device /usr/share/minimal_0.verity with direct-io. [ 22.657815] systemd[376]: Initialising device-mapper backend library. [ 22.657977] systemd[376]: Trying to load VERITY crypt type from device /usr/share/minimal_0.verity. [ 22.658121] systemd[1]: Sent message type=signal sender=n/a destination=n/a path=/org/freedesktop/systemd1/job/351 interface=org.freedesktop.DBus.Properties member=PropertiesChanged cookie=937 reply_cookie=0 signature=sa{sv}as error-name=n/a error-message=n/a [ 22.658283] systemd[376]: Crypto backend (OpenSSL 1.1.1q 5 Jul 2022) initialized in cryptsetup library version 2.4.3. [ 22.658407] systemd[376]: Detected kernel Linux 5.18.12-arch1-1 x86_64. 
[ 22.821073] kernel: device-mapper: uevent: version 1.0.3 [ 22.821315] kernel: device-mapper: ioctl: 4.46.0-ioctl (2022-02-22) initialised: [email protected] [ 22.825921] kernel: loop2: detected capacity change from 0 to 184 [ 22.827129] kernel: loop3: detected capacity change from 0 to 184 [ 22.767974] systemd[1]: sys-devices-virtual-block-dm\x2d0.device: Changed dead -> plugged [ 22.768109] systemd[1]: Sent message type=signal sender=n/a destination=n/a path=/org/freedesktop/systemd1 interface=org.freedesktop.systemd1.Manager member=UnitNew cookie=1060 reply_cookie=0 signature=so error-name=n/a error-message=n/a [ 22.768245] systemd[1]: Sent message type=signal sender=n/a destination=n/a path=/org/freedesktop/systemd1 interface=org.freedesktop.systemd1.Manager member=UnitNew cookie=1061 reply_cookie=0 signature=so error-name=n/a error-message=n/a [ 22.768369] systemd[1]: dev-disk-by\x2ddiskseq-17.device: Job 372 dev-disk-by\x2ddiskseq-17.device/nop finished, result=done [ 22.768522] systemd[1]: Sent message type=signal sender=n/a destination=n/a path=/org/freedesktop/systemd1 interface=org.freedesktop.systemd1.Manager member=JobRemoved cookie=1062 reply_cookie=0 signature=uoss error-name=n/a error-message=n/a [ 22.768677] systemd[1]: dev-loop3.device: Job 373 dev-loop3.device/nop finished, result=done [ 22.768808] systemd[1]: Sent message type=signal sender=n/a destination=n/a path=/org/freedesktop/systemd1 interface=org.freedesktop.systemd1.Manager member=JobRemoved cookie=1063 reply_cookie=0 signature=uoss error-name=n/a error-message=n/a ... 
[ 22.770481] systemd[1]: Sent message type=signal sender=n/a destination=n/a path=/org/freedesktop/systemd1/unit/dev_2ddisk_2dby_5cx2ddiskseq_2d17_2edevice interface=org.freedesktop.DBus.Properties member=PropertiesChanged cookie=1070 reply_cookie=0 signature=sa{sv}as error-name=n/a error-message=n/a [ 22.770637] systemd[376]: device-mapper: create ioctl on f01cff8db2a0f9ea70a3261ea6a34050d9377fa64831d98427569e8d94cfa567-verity CRYPT-VERITY-38f962cef7174fb4b132c46a4a8a6aa3-f01cff8db2a0f9ea70a3261ea6a34050d9377fa64831d98427569e8d94cfa567-verity failed: Device or resource busy [ 22.770805] systemd[376]: Udev cookie 0xd4dd849 (semid 1) decremented to 1 ... [ 22.973176] systemd[375]: Applying namespace mount on /run/systemd/unit-root/run/host/os-release [ 22.973301] systemd[376]: Successfully mounted /run/systemd/inaccessible/dir to /run/systemd/unit-root/run/credentials [ 22.973447] systemd[376]: Applying namespace mount on /run/systemd/unit-root/run/host/os-release [ 22.973587] systemd[375]: Followed source symlinks /etc/os-release → /usr/lib/os-release. [ 22.973735] systemd[375]: Bind-mounting /usr/lib/os-release on /run/systemd/unit-root/run/host/os-release (MS_BIND|MS_REC "")... [ 22.973861] systemd[375]: Failed to mount /usr/lib/os-release (type n/a) on /run/systemd/unit-root/run/host/os-release (MS_BIND|MS_REC ""): No such file or directory [ 22.973989] systemd[376]: Followed source symlinks /etc/os-release → /usr/lib/os-release. [ 22.974118] systemd[376]: Bind-mounting /usr/lib/os-release on /run/systemd/unit-root/run/host/os-release (MS_BIND|MS_REC "")... [ 22.974288] systemd[376]: Failed to mount /usr/lib/os-release (type n/a) on /run/systemd/unit-root/run/host/os-release (MS_BIND|MS_REC ""): No such file or directory [ 22.974418] systemd[375]: Bind-mounting /usr/lib/os-release on /run/systemd/unit-root/run/host/os-release (MS_BIND|MS_REC "")... 
[ 22.974551] systemd[376]: Failed to create destination mount point node '/run/systemd/unit-root/run/host/os-release': Operation not permitted [ 22.974701] systemd[375]: Successfully mounted /usr/lib/os-release to /run/systemd/unit-root/run/host/os-release [ 22.974831] systemd[376]: Failed to mount /usr/lib/os-release to /run/systemd/unit-root/run/host/os-release: No such file or directory [ 22.974965] systemd[375]: Applying namespace mount on /run/systemd/unit-root/run/systemd/incoming [ 22.975088] systemd[375]: Followed source symlinks /run/systemd/propagate/minimal-app0-foo.service → /run/systemd/propagate/minimal-app0-foo.service. [ 22.975220] systemd[376]: Releasing crypt device /dev/loop3 context. [ 22.975352] systemd[375]: Bind-mounting /run/systemd/propagate/minimal-app0-foo.service on /run/systemd/unit-root/run/systemd/incoming (MS_BIND "")... [ 22.975474] systemd[376]: Releasing device-mapper backend. [ 22.975600] systemd[375]: Successfully mounted /run/systemd/propagate/minimal-app0-foo.service to /run/systemd/unit-root/run/systemd/incoming [ 22.975736] systemd[375]: Applying namespace mount on /run/systemd/unit-root/sys [ 22.975856] systemd[375]: Bind-mounting /sys on /run/systemd/unit-root/sys (MS_BIND|MS_REC "")... [ 22.975973] systemd[375]: Applying namespace mount on /run/systemd/unit-root/tmp [ 22.976103] systemd[375]: Bind-mounting /tmp/systemd-private-aa79f66eaddb4df58ede18f5c18ba3bf-minimal-app0-foo.service-ANJ00G/tmp on /run/systemd/unit-root/tmp (MS_BIND|MS_REC "")... [ 22.976268] systemd[375]: Successfully mounted /tmp/systemd-private-aa79f66eaddb4df58ede18f5c18ba3bf-minimal-app0-foo.service-ANJ00G/tmp to /run/systemd/unit-root/tmp [ 22.976466] systemd[375]: Applying namespace mount on /run/systemd/unit-root/var/tmp [ 22.976601] systemd[375]: Bind-mounting /var/tmp/systemd-private-aa79f66eaddb4df58ede18f5c18ba3bf-minimal-app0-foo.service-bQZ4Ew/tmp on /run/systemd/unit-root/var/tmp (MS_BIND|MS_REC "")... 
[ 22.976764] systemd[375]: Successfully mounted /var/tmp/systemd-private-aa79f66eaddb4df58ede18f5c18ba3bf-minimal-app0-foo.service-bQZ4Ew/tmp to /run/systemd/unit-root/var/tmp [ 22.976881] systemd[375]: Remounted /run/systemd/unit-root/etc/machine-id. [ 22.977019] systemd[375]: Remounted /run/systemd/unit-root/etc/resolv.conf. [ 22.977137] systemd[375]: Remounted /run/systemd/unit-root/run/credentials. [ 22.977256] systemd[375]: Remounted /run/systemd/unit-root/run/host/os-release. [ 22.977380] systemd[375]: Remounted /run/systemd/unit-root/run/systemd/incoming. [ 22.978290] systemd[375]: Remounted /run/systemd/unit-root/proc. [ 22.979550] systemd[375]: Remounted /run/systemd/unit-root/run/credentials. [ 22.981318] systemd[375]: Remounted /run/systemd/unit-root/sys. [ 22.981501] systemd[375]: Remounted /run/systemd/unit-root/sys/fs/bpf. [ 22.981646] systemd[375]: Remounted /run/systemd/unit-root/sys/fs/pstore. [ 22.981786] systemd[375]: Remounted /run/systemd/unit-root/sys/kernel/config. [ 22.981921] systemd[375]: Remounted /run/systemd/unit-root/sys/fs/cgroup. [ 22.982050] systemd[375]: Remounted /run/systemd/unit-root/sys/kernel/tracing. [ 22.982180] systemd[375]: Remounted /run/systemd/unit-root/sys/kernel/security. [ 22.982314] systemd[375]: Remounted /run/systemd/unit-root/sys/kernel/debug. [ 22.982552] systemd[375]: Releasing crypt device /usr/share/minimal_0.verity context. [ 22.982719] systemd[375]: Releasing device-mapper backend. [ 22.982846] systemd[375]: Closing read only fd for /usr/share/minimal_0.verity. [ 22.983047] systemd[375]: Closed loop /dev/loop3 (/usr/share/minimal_0.verity). 
[ 22.986724] systemd[375]: minimal-app0-foo.service: Executing: cat /usr/lib/os-release [ 23.000841] systemd[376]: minimal-app0.service: Failed to set up mount namespacing: /run/systemd/unit-root/run/host/os-release: No such file or directory [ 23.001204] systemd[376]: minimal-app0.service: Failed at step NAMESPACE spawning cat: No such file or directory [ 23.009991] systemd[1]: systemd-journald.service: Received EPOLLHUP on stored fd 43 (stored), closing. [ 23.010264] systemd[1]: minimal-app0.service: Control group is empty. [ 23.010423] systemd[1]: Received SIGCHLD from PID 376 ((cat)). [ 23.010650] systemd[1]: Child 376 ((cat)) died (code=exited, status=226/NAMESPACE) [ 23.010804] systemd[1]: minimal-app0.service: Child 376 belongs to minimal-app0.service. [ 23.011028] systemd[1]: minimal-app0.service: Control process exited, code=exited, status=226/NAMESPACE [ 23.011234] systemd[1]: minimal-app0.service: Got final SIGCHLD for state start-pre. [ 23.011431] systemd[1]: minimal-app0.service: Failed with result 'exit-code'. ``` Full journals: * [systemd.journal.tar.gz](https://github.com/systemd/systemd/files/9213531/systemd.journal.tar.gz) * [systemd.journal.tar.gz](https://github.com/systemd/systemd/files/9213553/systemd.journal.tar.gz) So far I've seen this only in the CentOS CI sanitizer run, so I'm opening this as (not only) a tracker while I dig deeper. ### Steps to reproduce the problem _No response_ ### Additional program output to the terminal or log subsystem illustrating the issue _No response_
port
test portable is flaky under sanitizers systemd version the issue has been seen with latest main used distribution arch linux linux kernel version used no response cpu architectures issue was seen on no response component systemd portabled tests expected behaviour you didn t see test portable should pass reliably ish unexpected behaviour you saw recently i noticed an uptrend in test portable related fails mostly concentrated around failing minimal service systemd starting minimal service systemd allocating context for crypt device usr share minimal verity systemd trying to open and read device usr share minimal verity with direct io systemd trying to open and read device usr share minimal verity with direct io systemd initialising device mapper backend library systemd trying to load verity crypt type from device usr share minimal verity systemd sent message type signal sender n a destination n a path org freedesktop job interface org freedesktop dbus properties member propertieschanged cookie reply cookie signature sa sv as error name n a error message n a systemd crypto backend openssl jul initialized in cryptsetup library version systemd detected kernel linux kernel device mapper uevent version kernel device mapper ioctl ioctl initialised dm devel redhat com kernel detected capacity change from to kernel detected capacity change from to systemd sys devices virtual block dm device changed dead plugged systemd sent message type signal sender n a destination n a path org freedesktop interface org freedesktop manager member unitnew cookie reply cookie signature so error name n a error message n a systemd sent message type signal sender n a destination n a path org freedesktop interface org freedesktop manager member unitnew cookie reply cookie signature so error name n a error message n a systemd dev disk by device job dev disk by device nop finished result done systemd sent message type signal sender n a destination n a path org freedesktop interface org freedesktop 
manager member jobremoved cookie reply cookie signature uoss error name n a error message n a systemd dev device job dev device nop finished result done systemd sent message type signal sender n a destination n a path org freedesktop interface org freedesktop manager member jobremoved cookie reply cookie signature uoss error name n a error message n a systemd sent message type signal sender n a destination n a path org freedesktop unit dev interface org freedesktop dbus properties member propertieschanged cookie reply cookie signature sa sv as error name n a error message n a systemd device mapper create ioctl on verity crypt verity verity failed device or resource busy systemd udev cookie semid decremented to systemd applying namespace mount on run systemd unit root run host os release systemd successfully mounted run systemd inaccessible dir to run systemd unit root run credentials systemd applying namespace mount on run systemd unit root run host os release systemd followed source symlinks etc os release → usr lib os release systemd bind mounting usr lib os release on run systemd unit root run host os release ms bind ms rec systemd failed to mount usr lib os release type n a on run systemd unit root run host os release ms bind ms rec no such file or directory systemd followed source symlinks etc os release → usr lib os release systemd bind mounting usr lib os release on run systemd unit root run host os release ms bind ms rec systemd failed to mount usr lib os release type n a on run systemd unit root run host os release ms bind ms rec no such file or directory systemd bind mounting usr lib os release on run systemd unit root run host os release ms bind ms rec systemd failed to create destination mount point node run systemd unit root run host os release operation not permitted systemd successfully mounted usr lib os release to run systemd unit root run host os release systemd failed to mount usr lib os release to run systemd unit root run host os release no 
such file or directory systemd applying namespace mount on run systemd unit root run systemd incoming systemd followed source symlinks run systemd propagate minimal foo service → run systemd propagate minimal foo service systemd releasing crypt device dev context systemd bind mounting run systemd propagate minimal foo service on run systemd unit root run systemd incoming ms bind systemd releasing device mapper backend systemd successfully mounted run systemd propagate minimal foo service to run systemd unit root run systemd incoming systemd applying namespace mount on run systemd unit root sys systemd bind mounting sys on run systemd unit root sys ms bind ms rec systemd applying namespace mount on run systemd unit root tmp systemd bind mounting tmp systemd private minimal foo service tmp on run systemd unit root tmp ms bind ms rec systemd successfully mounted tmp systemd private minimal foo service tmp to run systemd unit root tmp systemd applying namespace mount on run systemd unit root var tmp systemd bind mounting var tmp systemd private minimal foo service tmp on run systemd unit root var tmp ms bind ms rec systemd successfully mounted var tmp systemd private minimal foo service tmp to run systemd unit root var tmp systemd remounted run systemd unit root etc machine id systemd remounted run systemd unit root etc resolv conf systemd remounted run systemd unit root run credentials systemd remounted run systemd unit root run host os release systemd remounted run systemd unit root run systemd incoming systemd remounted run systemd unit root proc systemd remounted run systemd unit root run credentials systemd remounted run systemd unit root sys systemd remounted run systemd unit root sys fs bpf systemd remounted run systemd unit root sys fs pstore systemd remounted run systemd unit root sys kernel config systemd remounted run systemd unit root sys fs cgroup systemd remounted run systemd unit root sys kernel tracing systemd remounted run systemd unit root sys kernel 
security systemd remounted run systemd unit root sys kernel debug systemd releasing crypt device usr share minimal verity context systemd releasing device mapper backend systemd closing read only fd for usr share minimal verity systemd closed loop dev usr share minimal verity systemd minimal foo service executing cat usr lib os release systemd minimal service failed to set up mount namespacing run systemd unit root run host os release no such file or directory systemd minimal service failed at step namespace spawning cat no such file or directory systemd systemd journald service received epollhup on stored fd stored closing systemd minimal service control group is empty systemd received sigchld from pid cat systemd child cat died code exited status namespace systemd minimal service child belongs to minimal service systemd minimal service control process exited code exited status namespace systemd minimal service got final sigchld for state start pre systemd minimal service failed with result exit code full journals so far i ve seen this only in the centos ci sanitizer run so i m opening this as not only a tracker while i dig deeper steps to reproduce the problem no response additional program output to the terminal or log subsystem illustrating the issue no response
1

Dataset Card for "binary-10IQR-port"

More Information needed
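Comparing the `text_combine` and `text` fields in the preview rows above suggests the `text` column was produced by lowercasing the combined title/body, splitting on punctuation, and dropping any token that contains a digit (e.g. "Compilation failure with clang 3.6.1" becomes "compilation failure with clang"). The sketch below is an inference from the preview only — the dataset's actual preprocessing script is not shown, and some rows (e.g. the "→" arrows that survive in the systemd row) indicate the real pipeline differs slightly in edge cases:

```python
import re

def normalize(text: str) -> str:
    """Approximate the `text` column: lowercase, split on anything that is
    not a letter or digit, and drop tokens that contain digits.
    Inferred from the preview rows, not from published preprocessing code."""
    tokens = re.split(r"[^a-z0-9]+", text.lower())
    return " ".join(t for t in tokens if t and not any(c.isdigit() for c in t))
```

For example, `normalize("TEST-29-PORTABLE is flaky under sanitizers")` reproduces the "test portable is flaky under sanitizers" prefix seen in that row's `text` field.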

Downloads last month: 18

Collection including karths/binary-10IQR-port