Dataset Viewer
Auto-converted to Parquet
Columns (viewer type and summary: value range for numeric columns, string-length range for stringlengths, number of distinct values for stringclasses):

Column         Viewer type     Summary
Unnamed: 0     int64           0 – 832k
id             float64         2.49B – 32.1B
type           stringclasses   1 value
created_at     stringlengths   19 – 19
repo           stringlengths   7 – 112
repo_url       stringlengths   36 – 141
action         stringclasses   3 values
title          stringlengths   2 – 665
labels         stringlengths   4 – 554
body           stringlengths   3 – 235k
index          stringclasses   6 values
text_combine   stringlengths   96 – 235k
label          stringclasses   2 values
text           stringlengths   96 – 196k
binary_label   int64           0 – 1
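
Because the viewer exposes the split as auto-converted Parquet, the table can be read directly with the `datasets` library or pandas. A minimal loading sketch, assuming a hypothetical repository id and a single `train` split (neither is shown above):

```python
# Loading sketch. The dataset id below is a placeholder (assumption);
# substitute the repository id shown on the dataset page.
from datasets import load_dataset

ds = load_dataset("user/github-issues-infrastructure", split="train")

# Column names should match the schema above.
print(ds.column_names)
row = ds[0]
print(row["repo"], row["action"], row["label"], row["binary_label"])
```

The rows that follow are individual GitHub IssuesEvent records laid out with these columns.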

Unnamed: 0: 96,422
id: 20,017,081,280
type: IssuesEvent
created_at: 2022-02-01 13:10:23
repo: Regalis11/Barotrauma
repo_url: https://api.github.com/repos/Regalis11/Barotrauma
action: closed
title: Error while attempting to host campaign server
labels: Bug Duplicate Code Crash
body:
- [✓] I have searched the issue tracker to check if the issue has already been reported. **Description** Unable to host a campaign online server. I have tried verifying files as well as reinstalling. **Steps To Reproduce** - Create a server and click on the campaign mission type - hourglass loading cursor comes up - error message appears - get sent to the server browser page Happens every time **Version** v0.15.23.0 Windows 10 (can provide further specifications if needed) **Additional information** Every time this happens there are three files that fail to be validated. **Log:** Error while reading a message from server. {Object reference not set to an instance of an object.} at Barotrauma.MultiPlayerCampaignSetupUI.UpdateLoadMenu(IEnumerable`1 saveFiles) in <DEV>\Barotrauma\BarotraumaClient\ClientSource\Screens\CampaignSetupUI\MultiPlayerCampaignSetupUI.cs:line 267 at Barotrauma.MultiPlayerCampaignSetupUI..ctor(GUIComponent newGameContainer, GUIComponent loadGameContainer, IEnumerable`1 saveFiles) in <DEV>\Barotrauma\BarotraumaClient\ClientSource\Screens\CampaignSetupUI\MultiPlayerCampaignSetupUI.cs:line 194 at Barotrauma.MultiPlayerCampaign.StartCampaignSetup(IEnumerable`1 saveFiles) in <DEV>\Barotrauma\BarotraumaClient\ClientSource\GameSession\GameModes\MultiPlayerCampaign.cs:line 63 at Barotrauma.Networking.GameClient.ReadDataMessage(IReadMessage inc) in <DEV>\Barotrauma\BarotraumaClient\ClientSource\Networking\GameClient.cs:line 918 at Barotrauma.Networking.SteamP2POwnerPeer.HandleDataMessage(IReadMessage inc) in <DEV>\Barotrauma\BarotraumaClient\ClientSource\Networking\Primitives\Peers\SteamP2POwnerPeer.cs:line 0 at Barotrauma.Networking.SteamP2POwnerPeer.Update(Single deltaTime) in <DEV>\Barotrauma\BarotraumaClient\ClientSource\Networking\Primitives\Peers\SteamP2POwnerPeer.cs:line 227 at Barotrauma.Networking.GameClient.Update(Single deltaTime) in <DEV>\Barotrauma\BarotraumaClient\ClientSource\Networking\GameClient.cs:line 641
index: 1.0
text_combine:
Error while attempting to host campaign server - - [✓] I have searched the issue tracker to check if the issue has already been reported. **Description** Unable to host a campaign online server. I have tried verifying files as well as reinstalling. **Steps To Reproduce** - Create a server and click on the campaign mission type - hourglass loading cursor comes up - error message appears - get sent to the server browser page Happens every time **Version** v0.15.23.0 Windows 10 (can provide further specifications if needed) **Additional information** Every time this happens there are three files that fail to be validated. **Log:** Error while reading a message from server. {Object reference not set to an instance of an object.} at Barotrauma.MultiPlayerCampaignSetupUI.UpdateLoadMenu(IEnumerable`1 saveFiles) in <DEV>\Barotrauma\BarotraumaClient\ClientSource\Screens\CampaignSetupUI\MultiPlayerCampaignSetupUI.cs:line 267 at Barotrauma.MultiPlayerCampaignSetupUI..ctor(GUIComponent newGameContainer, GUIComponent loadGameContainer, IEnumerable`1 saveFiles) in <DEV>\Barotrauma\BarotraumaClient\ClientSource\Screens\CampaignSetupUI\MultiPlayerCampaignSetupUI.cs:line 194 at Barotrauma.MultiPlayerCampaign.StartCampaignSetup(IEnumerable`1 saveFiles) in <DEV>\Barotrauma\BarotraumaClient\ClientSource\GameSession\GameModes\MultiPlayerCampaign.cs:line 63 at Barotrauma.Networking.GameClient.ReadDataMessage(IReadMessage inc) in <DEV>\Barotrauma\BarotraumaClient\ClientSource\Networking\GameClient.cs:line 918 at Barotrauma.Networking.SteamP2POwnerPeer.HandleDataMessage(IReadMessage inc) in <DEV>\Barotrauma\BarotraumaClient\ClientSource\Networking\Primitives\Peers\SteamP2POwnerPeer.cs:line 0 at Barotrauma.Networking.SteamP2POwnerPeer.Update(Single deltaTime) in <DEV>\Barotrauma\BarotraumaClient\ClientSource\Networking\Primitives\Peers\SteamP2POwnerPeer.cs:line 227 at Barotrauma.Networking.GameClient.Update(Single deltaTime) in <DEV>\Barotrauma\BarotraumaClient\ClientSource\Networking\GameClient.cs:line 641
label: non_infrastructure
text:
error while attempting to host campaign server i have searched the issue tracker to check if the issue has already been reported description unable to host a campaign online server i have tried verifying files as well as reinstalling steps to reproduce create a server and click on the campaign mission type hourglass loading cursor comes up error message appears get sent to the server browser page happens every time version windows can provide further specifications if needed additional information every time this happens there are three files that fail to be validated log error while reading a message from server object reference not set to an instance of an object at barotrauma multiplayercampaignsetupui updateloadmenu ienumerable savefiles in barotrauma barotraumaclient clientsource screens campaignsetupui multiplayercampaignsetupui cs line at barotrauma multiplayercampaignsetupui ctor guicomponent newgamecontainer guicomponent loadgamecontainer ienumerable savefiles in barotrauma barotraumaclient clientsource screens campaignsetupui multiplayercampaignsetupui cs line at barotrauma multiplayercampaign startcampaignsetup ienumerable savefiles in barotrauma barotraumaclient clientsource gamesession gamemodes multiplayercampaign cs line at barotrauma networking gameclient readdatamessage ireadmessage inc in barotrauma barotraumaclient clientsource networking gameclient cs line at barotrauma networking handledatamessage ireadmessage inc in barotrauma barotraumaclient clientsource networking primitives peers cs line at barotrauma networking update single deltatime in barotrauma barotraumaclient clientsource networking primitives peers cs line at barotrauma networking gameclient update single deltatime in barotrauma barotraumaclient clientsource networking gameclient cs line
binary_label: 0
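
In the row above, `text_combine` is simply the title and body joined with " - ", and `text` looks like a lower-cased copy with digits, punctuation, and markup stripped and whitespace collapsed. The preprocessing is not documented on this page; a rough approximation of what the `text` column appears to contain (an assumption, not the dataset's actual pipeline) might be:

```python
import re
import string

def combine(title: str, body: str) -> str:
    # Mirrors the observed text_combine layout: "<title> - <body>".
    return f"{title} - {body}"

def normalize(raw: str) -> str:
    # Approximation (assumption): lowercase, drop digits and ASCII punctuation,
    # collapse whitespace. The real pipeline may differ (for example, some rows
    # keep non-ASCII symbols such as arrows and emoji).
    lowered = raw.lower()
    no_digits = re.sub(r"\d+", " ", lowered)
    no_punct = no_digits.translate(str.maketrans({c: " " for c in string.punctuation}))
    return re.sub(r"\s+", " ", no_punct).strip()

print(normalize(combine("Setup parent maven/gradle project",
                        "For the purposes of microservice development we need a parent project.")))
```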

Unnamed: 0: 3,270
id: 4,175,346,865
type: IssuesEvent
created_at: 2016-06-21 16:34:40
repo: jlongster/debugger.html
repo_url: https://api.github.com/repos/jlongster/debugger.html
action: closed
title: Add component unit tests
labels: infrastructure
body:
It would be nice to be able to write unit tests against our components. Tests would render components with fixture data, like storybook, and have assertions on the shape of the component and handler functions. We did some of this investigation work on tuesday: things to consider: + jsdom environment + shallow render
index: 1.0
text_combine:
Add component unit tests - It would be nice to be able to write unit tests against our components. Tests would render components with fixture data, like storybook, and have assertions on the shape of the component and handler functions. We did some of this investigation work on tuesday: things to consider: + jsdom environment + shallow render
label: infrastructure
text:
add component unit tests it would be nice to be able to write unit tests against our components tests would render components with fixture data like storybook and have assertions on the shape of the component and handler functions we did some of this investigation work on tuesday things to consider jsdom environment shallow render
binary_label: 1
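
In the rows shown here, `label` and `binary_label` move together: `infrastructure` rows carry `binary_label` 1 and `non_infrastructure` rows carry 0. A short pandas sketch for checking that mapping and the class balance (the Parquet path is a placeholder, not taken from this page):

```python
import pandas as pd

# Placeholder path (assumption): point this at the auto-converted Parquet file(s).
df = pd.read_parquet("data/train-00000-of-00001.parquet")

# Check the apparent mapping between the string label and the binary target.
print(df.groupby("label")["binary_label"].unique())

# Class balance of the binary target.
print(df["binary_label"].value_counts(normalize=True))
```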

Unnamed: 0: 24,753
id: 17,691,448,768
type: IssuesEvent
created_at: 2021-08-24 10:26:35
repo: wellcomecollection/platform
repo_url: https://api.github.com/repos/wellcomecollection/platform
action: closed
title: Clean up the miro-migration VHS
labels: 🚧 Infrastructure
body:
The data in that table is very out-of-date – it refers to objects in Miro buckets that no longer exist. We should archive the contents, then deleted the associated infrastructure.
index: 1.0
text_combine:
Clean up the miro-migration VHS - The data in that table is very out-of-date – it refers to objects in Miro buckets that no longer exist. We should archive the contents, then deleted the associated infrastructure.
label: infrastructure
text:
clean up the miro migration vhs the data in that table is very out of date – it refers to objects in miro buckets that no longer exist we should archive the contents then deleted the associated infrastructure
binary_label: 1

Unnamed: 0: 18,151
id: 12,811,874,946
type: IssuesEvent
created_at: 2020-07-04 01:59:41
repo: CodeForBaltimore/Bmore-Responsive
repo_url: https://api.github.com/repos/CodeForBaltimore/Bmore-Responsive
action: closed
title: Refactor Casbin db connection to remove SQL from logs
labels: duplicate infrastructure
body:
### Task Currently, the Casbin db connection defaults to a robust logging. We do not need this level of detail in the logs for production. ### Acceptance Criteria - [x] Casbin db connection implements options
index: 1.0
text_combine:
Refactor Casbin db connection to remove SQL from logs - ### Task Currently, the Casbin db connection defaults to a robust logging. We do not need this level of detail in the logs for production. ### Acceptance Criteria - [x] Casbin db connection implements options
label: infrastructure
text:
refactor casbin db connection to remove sql from logs task currently the casbin db connection defaults to a robust logging we do not need this level of detail in the logs for production acceptance criteria casbin db connection implements options
binary_label: 1

Unnamed: 0: 270,387
id: 8,459,564,122
type: IssuesEvent
created_at: 2018-10-22 16:16:49
repo: aeternity/elixir-node
repo_url: https://api.github.com/repos/aeternity/elixir-node
action: closed
title: Rox DB not working if Peristence GenServer crashes
labels: bug discussion low-priority
body:
So if the Persistence GenServer, crashes, it is being restarted by it's Supervisor. Problem is that the way RoxDB is designed, at least in the library that we use, we cannot open it again until it is closed (at least that's what I have found). But we cannot manually close the DB. By design it is made to be automatically closed when the BEAM VM garbage collects it: Quoting: `The database will automatically be closed when the BEAM VM releases it for garbage collection.` So if we try to open the RoxDB again, we get the following message: `{:error, "IO error: While lock file: /home/gspasov/1work/aeternity/elixir-node/test/LOCK: No locks available"}` This means that we cannot get the reference of the DB, not of the families, until the VM garbage collects it, i.e. until we restart the project. This faces us with the question of how do we deal with this issue. For me there are 2 options: - Go around the problem and figure out a workaround (which is not a solution in my opinion). For me this will be to use another GenServer to keep the state of the DB in 2 places, so if one of them crashes we still have the db and families references; - Maybe use another DB?
index: 1.0
text_combine:
Rox DB not working if Peristence GenServer crashes - So if the Persistence GenServer, crashes, it is being restarted by it's Supervisor. Problem is that the way RoxDB is designed, at least in the library that we use, we cannot open it again until it is closed (at least that's what I have found). But we cannot manually close the DB. By design it is made to be automatically closed when the BEAM VM garbage collects it: Quoting: `The database will automatically be closed when the BEAM VM releases it for garbage collection.` So if we try to open the RoxDB again, we get the following message: `{:error, "IO error: While lock file: /home/gspasov/1work/aeternity/elixir-node/test/LOCK: No locks available"}` This means that we cannot get the reference of the DB, not of the families, until the VM garbage collects it, i.e. until we restart the project. This faces us with the question of how do we deal with this issue. For me there are 2 options: - Go around the problem and figure out a workaround (which is not a solution in my opinion). For me this will be to use another GenServer to keep the state of the DB in 2 places, so if one of them crashes we still have the db and families references; - Maybe use another DB?
label: non_infrastructure
text:
rox db not working if peristence genserver crashes so if the persistence genserver crashes it is being restarted by it s supervisor problem is that the way roxdb is designed at least in the library that we use we cannot open it again until it is closed at least that s what i have found but we cannot manually close the db by design it is made to be automatically closed when the beam vm garbage collects it quoting the database will automatically be closed when the beam vm releases it for garbage collection so if we try to open the roxdb again we get the following message error io error while lock file home gspasov aeternity elixir node test lock no locks available this means that we cannot get the reference of the db not of the families until the vm garbage collects it i e until we restart the project this faces us with the question of how do we deal with this issue for me there are options go around the problem and figure out a workaround which is not a solution in my opinion for me this will be to use another genserver to keep the state of the db in places so if one of them crashes we still have the db and families references maybe use another db
binary_label: 0

Unnamed: 0: 172,303
id: 13,299,988,935
type: IssuesEvent
created_at: 2020-08-25 10:36:33
repo: mattermost/mattermost-server
repo_url: https://api.github.com/repos/mattermost/mattermost-server
action: closed
title: Write Cypress test: "MM-T385 Invite new user to closed team using email invite"
labels: Area/E2E Tests Difficulty/1:Easy Hackfest Help Wanted
body:
This is part of __Cypress Test Automation Hackfest 🚀__. Please read more at https://github.com/mattermost/mattermost-server/issues/15120. See our [end-to-end testing documentation](https://developers.mattermost.com/contribute/webapp/end-to-end-tests/) for reference. <article class="mb-32"><h1 class="text-6xl md:text-7xl lg:text-8xl font-bold tracking-tighter leading-tight md:leading-none mb-12 text-center md:text-left">MM-T385 Invite new user to closed team using email invite</h1><div class="max-w-2xl mx-auto"><div><h3>Steps </h3><ol><li>Ensure that Main Menu ➜ Team Settings ➜ Allow any user with an account on this server... is set to `No`</li><li>Ensure "Allow only users with a specific email domain to join this team" is blank (i.e. any email address can be invited)</li><li>Open Main Menu and click `Invite People`</li><li>Enter an email address you can access (test user may access email via inbucket)</li><li>Click `Invite Members`</li><li>Check your email, and open the email with subject line:</li><li>`[Mattermost] invited you to join Team</li><li>Open the `Join Team` link in a separate / incognito browser</li><li>Create a new account using the email address you sent the invite to</li></ol><h3>Test Data</h3><img src="https://smartbear-tm4j-prod-us-west-2-attachment-rich-text.s3.us-west-2.amazonaws.com/embedded-f3277290f945470c4add5d21ef3dc7ca7b74388fc7152bfb6b99ae58c66a95a8-1579118958795-2020-01-15_15-08-40.png" style="width: 175px;" class="fr-fil fr-dii"><img src="https://smartbear-tm4j-prod-us-west-2-attachment-rich-text.s3.us-west-2.amazonaws.com/embedded-f3277290f945470c4add5d21ef3dc7ca7b74388fc7152bfb6b99ae58c66a95a8-1579118985721-2020-01-15_15-07-48.png" style="width: 123px;" class="fr-fil fr-dii"><h3>Expected</h3>New user is viewing Town Square channel of that team and "Welcome to Mattermost" tutorial is displayed in the center channel<hr></div></div></article> **Test Folder:** `/cypress/integration/team_settings` **Test code arrangement:** ``` describe('Team Settings', () => { it('MM-T385 Invite new user to closed team using email invite', () => { // code }); }); ``` If you're interested, please comment here and come [join our "Contributors" community channel](https://community.mattermost.com/core/channels/tickets) on our daily build server, where you can discuss questions with community members and the Mattermost core team. For technical advice or questions, please [join our "Developers" community channel](https://community.mattermost.com/core/channels/developers). New contributors please see our [Developer's Guide](https://developers.mattermost.com/contribute/getting-started/).
index: 1.0
text_combine:
Write Cypress test: "MM-T385 Invite new user to closed team using email invite" - This is part of __Cypress Test Automation Hackfest 🚀__. Please read more at https://github.com/mattermost/mattermost-server/issues/15120. See our [end-to-end testing documentation](https://developers.mattermost.com/contribute/webapp/end-to-end-tests/) for reference. <article class="mb-32"><h1 class="text-6xl md:text-7xl lg:text-8xl font-bold tracking-tighter leading-tight md:leading-none mb-12 text-center md:text-left">MM-T385 Invite new user to closed team using email invite</h1><div class="max-w-2xl mx-auto"><div><h3>Steps </h3><ol><li>Ensure that Main Menu ➜ Team Settings ➜ Allow any user with an account on this server... is set to `No`</li><li>Ensure "Allow only users with a specific email domain to join this team" is blank (i.e. any email address can be invited)</li><li>Open Main Menu and click `Invite People`</li><li>Enter an email address you can access (test user may access email via inbucket)</li><li>Click `Invite Members`</li><li>Check your email, and open the email with subject line:</li><li>`[Mattermost] invited you to join Team</li><li>Open the `Join Team` link in a separate / incognito browser</li><li>Create a new account using the email address you sent the invite to</li></ol><h3>Test Data</h3><img src="https://smartbear-tm4j-prod-us-west-2-attachment-rich-text.s3.us-west-2.amazonaws.com/embedded-f3277290f945470c4add5d21ef3dc7ca7b74388fc7152bfb6b99ae58c66a95a8-1579118958795-2020-01-15_15-08-40.png" style="width: 175px;" class="fr-fil fr-dii"><img src="https://smartbear-tm4j-prod-us-west-2-attachment-rich-text.s3.us-west-2.amazonaws.com/embedded-f3277290f945470c4add5d21ef3dc7ca7b74388fc7152bfb6b99ae58c66a95a8-1579118985721-2020-01-15_15-07-48.png" style="width: 123px;" class="fr-fil fr-dii"><h3>Expected</h3>New user is viewing Town Square channel of that team and "Welcome to Mattermost" tutorial is displayed in the center channel<hr></div></div></article> **Test Folder:** `/cypress/integration/team_settings` **Test code arrangement:** ``` describe('Team Settings', () => { it('MM-T385 Invite new user to closed team using email invite', () => { // code }); }); ``` If you're interested, please comment here and come [join our "Contributors" community channel](https://community.mattermost.com/core/channels/tickets) on our daily build server, where you can discuss questions with community members and the Mattermost core team. For technical advice or questions, please [join our "Developers" community channel](https://community.mattermost.com/core/channels/developers). New contributors please see our [Developer's Guide](https://developers.mattermost.com/contribute/getting-started/).
label: non_infrastructure
text:
write cypress test mm invite new user to closed team using email invite this is part of cypress test automation hackfest 🚀 please read more at see our for reference mm invite new user to closed team using email invite steps ensure that main menu ➜ team settings ➜ allow any user with an account on this server is set to no ensure allow only users with a specific email domain to join this team is blank i e any email address can be invited open main menu and click invite people enter an email address you can access test user may access email via inbucket click invite members check your email and open the email with subject line invited you to join team open the join team link in a separate incognito browser create a new account using the email address you sent the invite to test data expected new user is viewing town square channel of that team and welcome to mattermost tutorial is displayed in the center channel test folder cypress integration team settings test code arrangement describe team settings it mm invite new user to closed team using email invite code if you re interested please comment here and come on our daily build server where you can discuss questions with community members and the mattermost core team for technical advice or questions please new contributors please see our
binary_label: 0

Unnamed: 0: 15,058
id: 11,310,078,055
type: IssuesEvent
created_at: 2020-01-19 17:12:59
repo: vlsidlyarevich/ideal-shop
repo_url: https://api.github.com/repos/vlsidlyarevich/ideal-shop
action: closed
title: Setup parent maven/gradle project
labels: infrastructure
body:
For the purposes of microservice development we need parent project which will hold our Spring cloud version and other libs/plugins. It can be placed in root of project BUT there should be no modules section because we want to have separated services to multiply all the advantages of using separated stuff.
index: 1.0
text_combine:
Setup parent maven/gradle project - For the purposes of microservice development we need parent project which will hold our Spring cloud version and other libs/plugins. It can be placed in root of project BUT there should be no modules section because we want to have separated services to multiply all the advantages of using separated stuff.
label: infrastructure
text:
setup parent maven gradle project for the purposes of microservice development we need parent project which will hold our spring cloud version and other libs plugins it can be placed in root of project but there should be no modules section because we want to have separated services to multiply all the advantages of using separated stuff
binary_label: 1

Unnamed: 0: 15,316
id: 11,456,621,820
type: IssuesEvent
created_at: 2020-02-06 21:38:18
repo: enarx/enarx
repo_url: https://api.github.com/repos/enarx/enarx
action: opened
title: pre-push tests run in the current working tree
labels: infrastructure
body:
This means we can get false positives and false negatives because we're evaluating code that isn't checked in.
index: 1.0
text_combine:
pre-push tests run in the current working tree - This means we can get false positives and false negatives because we're evaluating code that isn't checked in.
label: infrastructure
text:
pre push tests run in the current working tree this means we can get false positives and false negatives because we re evaluating code that isn t checked in
binary_label: 1

Unnamed: 0: 1,953
id: 3,440,217,428
type: IssuesEvent
created_at: 2015-12-14 13:38:50
repo: hackndev/zinc
repo_url: https://api.github.com/repos/hackndev/zinc
action: closed
title: Fix makefile to build cargoized examples
labels: infrastructure nightly fallout ready
body:
Makefile is currently broken from #318 and will not build examples as expected. The whole its existence is slightly questionable now, as it's basically pre and post-processing around cargo. Maybe we need to make a simple wrapper around cargo anyway (sounds like a reasonable option given how cargo isn't that much cross-build friendly)?
index: 1.0
text_combine:
Fix makefile to build cargoized examples - Makefile is currently broken from #318 and will not build examples as expected. The whole its existence is slightly questionable now, as it's basically pre and post-processing around cargo. Maybe we need to make a simple wrapper around cargo anyway (sounds like a reasonable option given how cargo isn't that much cross-build friendly)?
label: infrastructure
text:
fix makefile to build cargoized examples makefile is currently broken from and will not build examples as expected the whole its existence is slightly questionable now as it s basically pre and post processing around cargo maybe we need to make a simple wrapper around cargo anyway sounds like a reasonable option given how cargo isn t that much cross build friendly
binary_label: 1

Unnamed: 0: 13,313
id: 10,199,053,276
type: IssuesEvent
created_at: 2019-08-13 07:30:28
repo: npgsql/npgsql
repo_url: https://api.github.com/repos/npgsql/npgsql
action: closed
title: Move version prefix to directory build properties
labels: infrastructure
body:
All projects in the `src` directory should inherit `VersionPrefix` from the central place which is `Directory.Build.props`. The `bump.sh` script must be updated too.
index: 1.0
text_combine:
Move version prefix to directory build properties - All projects in the `src` directory should inherit `VersionPrefix` from the central place which is `Directory.Build.props`. The `bump.sh` script must be updated too.
label: infrastructure
text:
move version prefix to directory build properties all projects in the src directory should inherit versionprefix from the central place which is directory build props the bump sh script must be updated too
binary_label: 1

Unnamed: 0: 249,575
id: 26,954,447,098
type: IssuesEvent
created_at: 2023-02-08 14:01:58
repo: simplycubed/terraform-google-static-assets
repo_url: https://api.github.com/repos/simplycubed/terraform-google-static-assets
action: closed
title: CVE-2016-9123 (High) detected in github.com/docker/distribution-v2.8.1+incompatible - autoclosed
labels: security vulnerability
body:
## CVE-2016-9123 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>github.com/docker/distribution-v2.8.1+incompatible</b></p></summary> <p></p> <p>Library home page: <a href="https://proxy.golang.org/github.com/docker/distribution/@v/v2.8.1+incompatible.zip">https://proxy.golang.org/github.com/docker/distribution/@v/v2.8.1+incompatible.zip</a></p> <p> Dependency Hierarchy: - github.com/gruntwork-io/terratest-v0.40.17 (Root Library) - github.com/google/go-containerregistry-v0.9.0 - :x: **github.com/docker/distribution-v2.8.1+incompatible** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/simplycubed/terraform-google-static-assets/commit/e49e2f33b77657ce4ab7eac9abebafc4a1fd18ba">e49e2f33b77657ce4ab7eac9abebafc4a1fd18ba</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> go-jose before 1.0.5 suffers from a CBC-HMAC integer overflow on 32-bit architectures. An integer overflow could lead to authentication bypass for CBC-HMAC encrypted ciphertexts on 32-bit architectures. <p>Publish Date: 2017-03-28 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2016-9123>CVE-2016-9123</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: High - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://osv.dev/vulnerability/GO-2020-0009">https://osv.dev/vulnerability/GO-2020-0009</a></p> <p>Release Date: 2017-03-28</p> <p>Fix Resolution: v1.0.5</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
index: True
text_combine:
CVE-2016-9123 (High) detected in github.com/docker/distribution-v2.8.1+incompatible - autoclosed - ## CVE-2016-9123 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>github.com/docker/distribution-v2.8.1+incompatible</b></p></summary> <p></p> <p>Library home page: <a href="https://proxy.golang.org/github.com/docker/distribution/@v/v2.8.1+incompatible.zip">https://proxy.golang.org/github.com/docker/distribution/@v/v2.8.1+incompatible.zip</a></p> <p> Dependency Hierarchy: - github.com/gruntwork-io/terratest-v0.40.17 (Root Library) - github.com/google/go-containerregistry-v0.9.0 - :x: **github.com/docker/distribution-v2.8.1+incompatible** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/simplycubed/terraform-google-static-assets/commit/e49e2f33b77657ce4ab7eac9abebafc4a1fd18ba">e49e2f33b77657ce4ab7eac9abebafc4a1fd18ba</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> go-jose before 1.0.5 suffers from a CBC-HMAC integer overflow on 32-bit architectures. An integer overflow could lead to authentication bypass for CBC-HMAC encrypted ciphertexts on 32-bit architectures. <p>Publish Date: 2017-03-28 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2016-9123>CVE-2016-9123</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: High - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://osv.dev/vulnerability/GO-2020-0009">https://osv.dev/vulnerability/GO-2020-0009</a></p> <p>Release Date: 2017-03-28</p> <p>Fix Resolution: v1.0.5</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
label: non_infrastructure
text:
cve high detected in github com docker distribution incompatible autoclosed cve high severity vulnerability vulnerable library github com docker distribution incompatible library home page a href dependency hierarchy github com gruntwork io terratest root library github com google go containerregistry x github com docker distribution incompatible vulnerable library found in head commit a href found in base branch master vulnerability details go jose before suffers from a cbc hmac integer overflow on bit architectures an integer overflow could lead to authentication bypass for cbc hmac encrypted ciphertexts on bit architectures publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
binary_label: 0

Unnamed: 0: 132,305
id: 10,740,997,231
type: IssuesEvent
created_at: 2019-10-29 19:18:53
repo: OpenLiberty/open-liberty
repo_url: https://api.github.com/repos/OpenLiberty/open-liberty
action: opened
title: Test Failure: com.ibm.ws.threading.policy.PolicyExecutorTest.testGroupedSubmits
labels: team:Zombie Apocalypse test bug
body:
``` testGroupedSubmits:junit.framework.AssertionFailedError: 2019-10-26-16:50:02:473 The response did not contain [SUCCESS]. Full output is: ERROR: Caught exception attempting to call test method testGroupedSubmits on servlet web.PolicyExecutorServlet java.util.concurrent.ExecutionException: java.lang.IllegalStateException: Attempted arrival of unregistered party for java.util.concurrent.Phaser@86a90676[phase = 3 parties = 8 arrived = 8] at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:205) at web.PolicyExecutorServlet.testGroupedSubmits(PolicyExecutorServlet.java:1751) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at componenttest.app.FATServlet.doGet(FATServlet.java:71) at javax.servlet.http.HttpServlet.service(HttpServlet.java:687) at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1230) at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:729) at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:426) at com.ibm.ws.webcontainer.filter.WebAppFilterManager.invokeFilters(WebAppFilterManager.java:1218) at com.ibm.ws.webcontainer.filter.WebAppFilterManager.invokeFilters(WebAppFilterManager.java:1002) at com.ibm.ws.webcontainer.servlet.CacheServletWrapper.handleRequest(CacheServletWrapper.java:75) at com.ibm.ws.webcontainer.WebContainer.handleRequest(WebContainer.java:938) at com.ibm.ws.webcontainer.osgi.DynamicVirtualHost$2.run(DynamicVirtualHost.java:279) at com.ibm.ws.http.dispatcher.internal.channel.HttpDispatcherLink$TaskWrapper.run(HttpDispatcherLink.java:1136) at com.ibm.ws.http.dispat ``` This test failure occurs due to a subtle behavior of java.util.concurrent.Phaser. The test case relies upon phaser.arriveAndAwaitAdvance, which it falsely assumes to be atomic. JavaDoc, however, states that it is equivalent to awaitAdvance(arrive()). This is important because, with arrive being an independent operation from the advance, it becomes possible, upon reaching phase 3 for accumulating tasks from the previous group (the ones intended for phase 3) to overlap the arrive operations from those that are intended for phase 4. This means there is a timing window for more than 8 parties to attempt to arrive at phase 3, thus causing the failure: ``` java.lang.IllegalStateException: Attempted arrival of unregistered party for java.util.concurrent.Phaser@86a90676[phase = 3 parties = 8 arrived = 8] ``` Here is one way the problem can occur: 8 tasks from first group attempt to arriveAndWaitForAdvance at phase 0. After 6 exit the method, 2 (from this first group) can remain in progress. 8 tasks from the second group attempt to arriveAndWaitForAdvance for phase 1. The 2 from the first group and 4 from the second group exit the method, leaving 4 (from the second group) in progress. 8 tasks from the third group attempt to arriveAndWaitForAdvance for phase 2. The 4 from the second group and 2 from the third group exit the method, leaving 6 (from the third group) in progress. 8 tasks from the fourth group attempt to arriveAndWaitForAdvance for phase 3. 
The 6 from the third group exit the method, leaving all 8 (from the fourth group) in progress. 8 tasks from the fifth group attempt to arriveAndWaitForAdvance for phase 4, however, nothing has forced phase 3 to have ended at this point and so any number of these could attempt to arrive into phase 3 and fail due to extra unregistered parties. The simplest correction to the test that otherwise preserves its logic would be to eliminate the final group of submits such that there is no group 5 to make an unreliable attempt at a fourth phase, instead making 3 the final phase.
index: 1.0
text_combine:
Test Failure: com.ibm.ws.threading.policy.PolicyExecutorTest.testGroupedSubmits - ``` testGroupedSubmits:junit.framework.AssertionFailedError: 2019-10-26-16:50:02:473 The response did not contain [SUCCESS]. Full output is: ERROR: Caught exception attempting to call test method testGroupedSubmits on servlet web.PolicyExecutorServlet java.util.concurrent.ExecutionException: java.lang.IllegalStateException: Attempted arrival of unregistered party for java.util.concurrent.Phaser@86a90676[phase = 3 parties = 8 arrived = 8] at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:205) at web.PolicyExecutorServlet.testGroupedSubmits(PolicyExecutorServlet.java:1751) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at componenttest.app.FATServlet.doGet(FATServlet.java:71) at javax.servlet.http.HttpServlet.service(HttpServlet.java:687) at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1230) at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:729) at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:426) at com.ibm.ws.webcontainer.filter.WebAppFilterManager.invokeFilters(WebAppFilterManager.java:1218) at com.ibm.ws.webcontainer.filter.WebAppFilterManager.invokeFilters(WebAppFilterManager.java:1002) at com.ibm.ws.webcontainer.servlet.CacheServletWrapper.handleRequest(CacheServletWrapper.java:75) at com.ibm.ws.webcontainer.WebContainer.handleRequest(WebContainer.java:938) at com.ibm.ws.webcontainer.osgi.DynamicVirtualHost$2.run(DynamicVirtualHost.java:279) at com.ibm.ws.http.dispatcher.internal.channel.HttpDispatcherLink$TaskWrapper.run(HttpDispatcherLink.java:1136) at com.ibm.ws.http.dispat ``` This test failure occurs due to a subtle behavior of java.util.concurrent.Phaser. The test case relies upon phaser.arriveAndAwaitAdvance, which it falsely assumes to be atomic. JavaDoc, however, states that it is equivalent to awaitAdvance(arrive()). This is important because, with arrive being an independent operation from the advance, it becomes possible, upon reaching phase 3 for accumulating tasks from the previous group (the ones intended for phase 3) to overlap the arrive operations from those that are intended for phase 4. This means there is a timing window for more than 8 parties to attempt to arrive at phase 3, thus causing the failure: ``` java.lang.IllegalStateException: Attempted arrival of unregistered party for java.util.concurrent.Phaser@86a90676[phase = 3 parties = 8 arrived = 8] ``` Here is one way the problem can occur: 8 tasks from first group attempt to arriveAndWaitForAdvance at phase 0. After 6 exit the method, 2 (from this first group) can remain in progress. 8 tasks from the second group attempt to arriveAndWaitForAdvance for phase 1. The 2 from the first group and 4 from the second group exit the method, leaving 4 (from the second group) in progress. 8 tasks from the third group attempt to arriveAndWaitForAdvance for phase 2. The 4 from the second group and 2 from the third group exit the method, leaving 6 (from the third group) in progress. 8 tasks from the fourth group attempt to arriveAndWaitForAdvance for phase 3. 
The 6 from the third group exit the method, leaving all 8 (from the fourth group) in progress. 8 tasks from the fifth group attempt to arriveAndWaitForAdvance for phase 4, however, nothing has forced phase 3 to have ended at this point and so any number of these could attempt to arrive into phase 3 and fail due to extra unregistered parties. The simplest correction to the test that otherwise preserves its logic would be to eliminate the final group of submits such that there is no group 5 to make an unreliable attempt at a fourth phase, instead making 3 the final phase.
label: non_infrastructure
text:
test failure com ibm ws threading policy policyexecutortest testgroupedsubmits testgroupedsubmits junit framework assertionfailederror the response did not contain full output is error caught exception attempting to call test method testgroupedsubmits on servlet web policyexecutorservlet java util concurrent executionexception java lang illegalstateexception attempted arrival of unregistered party for java util concurrent phaser at java base java util concurrent futuretask report futuretask java at java base java util concurrent futuretask get futuretask java at web policyexecutorservlet testgroupedsubmits policyexecutorservlet java at java base jdk internal reflect nativemethodaccessorimpl native method at java base jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at java base jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at componenttest app fatservlet doget fatservlet java at javax servlet http httpservlet service httpservlet java at javax servlet http httpservlet service httpservlet java at com ibm ws webcontainer servlet servletwrapper service servletwrapper java at com ibm ws webcontainer servlet servletwrapper handlerequest servletwrapper java at com ibm ws webcontainer servlet servletwrapper handlerequest servletwrapper java at com ibm ws webcontainer filter webappfiltermanager invokefilters webappfiltermanager java at com ibm ws webcontainer filter webappfiltermanager invokefilters webappfiltermanager java at com ibm ws webcontainer servlet cacheservletwrapper handlerequest cacheservletwrapper java at com ibm ws webcontainer webcontainer handlerequest webcontainer java at com ibm ws webcontainer osgi dynamicvirtualhost run dynamicvirtualhost java at com ibm ws http dispatcher internal channel httpdispatcherlink taskwrapper run httpdispatcherlink java at com ibm ws http dispat this test failure occurs due to a subtle behavior of java util concurrent phaser the test case relies upon phaser arriveandawaitadvance which it falsely assumes to be atomic javadoc however states that it is equivalent to awaitadvance arrive this is important because with arrive being an independent operation from the advance it becomes possible upon reaching phase for accumulating tasks from the previous group the ones intended for phase to overlap the arrive operations from those that are intended for phase this means there is a timing window for more than parties to attempt to arrive at phase thus causing the failure java lang illegalstateexception attempted arrival of unregistered party for java util concurrent phaser here is one way the problem can occur tasks from first group attempt to arriveandwaitforadvance at phase after exit the method from this first group can remain in progress tasks from the second group attempt to arriveandwaitforadvance for phase the from the first group and from the second group exit the method leaving from the second group in progress tasks from the third group attempt to arriveandwaitforadvance for phase the from the second group and from the third group exit the method leaving from the third group in progress tasks from the fourth group attempt to arriveandwaitforadvance for phase the from the third group exit the method leaving all from the fourth group in progress tasks from the fifth group attempt to arriveandwaitforadvance for phase however nothing has forced phase to have ended at this point and so any number of these could attempt to arrive into phase and fail due to extra unregistered 
parties the simplest correction to the test that otherwise preserves its logic would be to eliminate the final group of submits such that there is no group to make an unreliable attempt at a fourth phase instead making the final phase
binary_label: 0

Unnamed: 0: 33,330
id: 27,392,187,434
type: IssuesEvent
created_at: 2023-02-28 16:58:57
repo: celestiaorg/test-infra
repo_url: https://api.github.com/repos/celestiaorg/test-infra
action: closed
title: testground/app/infra: Piping metrics from validators into influxdb
labels: enhancement testground infrastructure
body:
ATM, celestia-app/core has all the metrics necessary to analyse network behaviour from a validator's perspective. We need to find a way how to pipe all those emitted metrics into testground's influxDB for post-execution analysis
index: 1.0
text_combine:
testground/app/infra: Piping metrics from validators into influxdb - ATM, celestia-app/core has all the metrics necessary to analyse network behaviour from a validator's perspective. We need to find a way how to pipe all those emitted metrics into testground's influxDB for post-execution analysis
label: infrastructure
text:
testground app infra piping metrics from validators into influxdb atm celestia app core has all the metrics necessary to analyse network behaviour from a validator s perspective we need to find a way how to pipe all those emitted metrics into testground s influxdb for post execution analysis
binary_label: 1

Unnamed: 0: 17,446
id: 12,037,653,094
type: IssuesEvent
created_at: 2020-04-13 22:21:41
repo: geneontology/pipeline
repo_url: https://api.github.com/repos/geneontology/pipeline
action: closed
title: Pipeline fails on ecocyc sanity check
labels: bug (B: affects usability)
body:
Currently, due to crossing the a date watershed, ecocyc fails the Sanity I category, halting the pipeline. @pgaudet waiting for feedback from ecocyc about whether the dropped IEAs can be reduced. In the interim, just so things like testing and ontology releases can go forward, I'll be easing the restrictions on ecosys in sanity checks.
index: True
text_combine:
Pipeline fails on ecocyc sanity check - Currently, due to crossing the a date watershed, ecocyc fails the Sanity I category, halting the pipeline. @pgaudet waiting for feedback from ecocyc about whether the dropped IEAs can be reduced. In the interim, just so things like testing and ontology releases can go forward, I'll be easing the restrictions on ecosys in sanity checks.
label: non_infrastructure
text:
pipeline fails on ecocyc sanity check currently due to crossing the a date watershed ecocyc fails the sanity i category halting the pipeline pgaudet waiting for feedback from ecocyc about whether the dropped ieas can be reduced in the interim just so things like testing and ontology releases can go forward i ll be easing the restrictions on ecosys in sanity checks
binary_label: 0

Unnamed: 0: 11,356
id: 9,115,954,562
type: IssuesEvent
created_at: 2019-02-22 07:23:02
repo: askmench/mench-web-app
repo_url: https://api.github.com/repos/askmench/mench-web-app
action: closed
title: DB Time estimate in seconds
labels: DB/Server/Infrastructure
body:
Currently, time is stored in hours which causes some issues when rounding down. Need to convert all to seconds to remove rounding errors
index: 1.0
text_combine:
DB Time estimate in seconds - Currently, time is stored in hours which causes some issues when rounding down. Need to convert all to seconds to remove rounding errors
label: infrastructure
text:
db time estimate in seconds currently time is stored in hours which causes some issues when rounding down need to convert all to seconds to remove rounding errors
binary_label: 1

Unnamed: 0: 65,664
id: 12,652,433,675
type: IssuesEvent
created_at: 2020-06-17 03:34:26
repo: microsoft/Azure-Kinect-Sensor-SDK
repo_url: https://api.github.com/repos/microsoft/Azure-Kinect-Sensor-SDK
action: opened
title: Error E1696 cannot open source file "k4a/k4a.hpp" | green screen example
labels: Bug Code Sample Triage Needed
body:
When trying to build ALL_BUILD in the green screen project within Visual Studio 2019, I get the following error: `Error (active) E1696 cannot open source file "k4a/k4a.hpp"` I've tried: - Installing the Kinect Azure libraries via NuGet - Including a k4a folder in the project root with k4a.hpp inside, - Right clicking _ALL_BUILD → Properties → Configuration Properties → VC++ Directories_ and adding the path to k4a.hpp under _Include Directories_. **To Reproduce** 1. Use CMake GUI to configure and generate project files. 2. Open Project.sln 3. Right click ALL_BUILD in Solution Explorer 4. Click Build 5. Error appears in Error List **Desktop (please complete the following information):** - Windows 10 Version 1909 for x64 - Azure Kinect SDK v1.4.0
index: 1.0
text_combine:
Error E1696 cannot open source file "k4a/k4a.hpp" | green screen example - When trying to build ALL_BUILD in the green screen project within Visual Studio 2019, I get the following error: `Error (active) E1696 cannot open source file "k4a/k4a.hpp"` I've tried: - Installing the Kinect Azure libraries via NuGet - Including a k4a folder in the project root with k4a.hpp inside, - Right clicking _ALL_BUILD → Properties → Configuration Properties → VC++ Directories_ and adding the path to k4a.hpp under _Include Directories_. **To Reproduce** 1. Use CMake GUI to configure and generate project files. 2. Open Project.sln 3. Right click ALL_BUILD in Solution Explorer 4. Click Build 5. Error appears in Error List **Desktop (please complete the following information):** - Windows 10 Version 1909 for x64 - Azure Kinect SDK v1.4.0
label: non_infrastructure
text:
error cannot open source file hpp green screen example when trying to build all build in the green screen project within visual studio i get the following error error active cannot open source file hpp i ve tried installing the kinect azure libraries via nuget including a folder in the project root with hpp inside right clicking all build → properties → configuration properties → vc directories and adding the path to hpp under include directories to reproduce use cmake gui to configure and generate project files open project sln right click all build in solution explorer click build error appears in error list desktop please complete the following information windows version for azure kinect sdk
binary_label: 0

Unnamed: 0: 256,755
id: 19,457,376,086
type: IssuesEvent
created_at: 2021-12-23 01:47:10
repo: JosephJamesCoop/your-portland-itinerary
repo_url: https://api.github.com/repos/JosephJamesCoop/your-portland-itinerary
action: closed
title: Local Storage
labels: documentation enhancement
body:
incorporate client-side storage to store persistent data. Allow application to retain clients itinerary add ons or removals.
index: 1.0
text_combine:
Local Storage - incorporate client-side storage to store persistent data. Allow application to retain clients itinerary add ons or removals.
label: non_infrastructure
text:
local storage incorporate client side storage to store persistent data allow application to retain clients itinerary add ons or removals
binary_label: 0

Unnamed: 0: 245,530
id: 26,549,261,612
type: IssuesEvent
created_at: 2023-01-20 05:26:28
repo: nidhi7598/linux-3.0.35_CVE-2022-45934
repo_url: https://api.github.com/repos/nidhi7598/linux-3.0.35_CVE-2022-45934
action: opened
title: WS-2022-0018 (High) detected in linuxlinux-3.0.49
labels: security vulnerability
body:
## WS-2022-0018 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-3.0.49</b></p></summary> <p> <p>The Linux Kernel</p> <p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v3.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v3.x/?wsslib=linux</a></p> <p>Found in HEAD commit: <a href="https://github.com/nidhi7598/linux-3.0.35_CVE-2022-45934/commit/5e23b7f9d2dd0154edd54986754eecd5b5308571">5e23b7f9d2dd0154edd54986754eecd5b5308571</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/ipv4/af_inet.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> net: fix use-after-free in tw_timer_handler <p>Publish Date: 2022-01-11 <p>URL: <a href=https://github.com/gregkh/linux/commit/08eacbd141e2495d2fcdde84358a06c4f95cbb13>WS-2022-0018</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://osv.dev/vulnerability/GSD-2022-1000053">https://osv.dev/vulnerability/GSD-2022-1000053</a></p> <p>Release Date: 2022-01-11</p> <p>Fix Resolution: v5.15.13</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
index: True
text_combine:
WS-2022-0018 (High) detected in linuxlinux-3.0.49 - ## WS-2022-0018 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-3.0.49</b></p></summary> <p> <p>The Linux Kernel</p> <p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v3.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v3.x/?wsslib=linux</a></p> <p>Found in HEAD commit: <a href="https://github.com/nidhi7598/linux-3.0.35_CVE-2022-45934/commit/5e23b7f9d2dd0154edd54986754eecd5b5308571">5e23b7f9d2dd0154edd54986754eecd5b5308571</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/ipv4/af_inet.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> net: fix use-after-free in tw_timer_handler <p>Publish Date: 2022-01-11 <p>URL: <a href=https://github.com/gregkh/linux/commit/08eacbd141e2495d2fcdde84358a06c4f95cbb13>WS-2022-0018</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://osv.dev/vulnerability/GSD-2022-1000053">https://osv.dev/vulnerability/GSD-2022-1000053</a></p> <p>Release Date: 2022-01-11</p> <p>Fix Resolution: v5.15.13</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
label: non_infrastructure
text:
ws high detected in linuxlinux ws high severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch master vulnerable source files net af inet c vulnerability details net fix use after free in tw timer handler publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
binary_label: 0

Unnamed: 0: 463,378
id: 13,264,268,879
type: IssuesEvent
created_at: 2020-08-21 03:08:12
repo: mikeshardmind/SinbadCogs
repo_url: https://api.github.com/repos/mikeshardmind/SinbadCogs
action: closed
title: [V3 RSS] Multiple update improvement.
labels: Low priority blocked enhancement
body:
On high traffic feeds (ex. a reddit rss feed) there needs to be a way to collect multiple posts into one message from the bot. Should be optional to prevent it from being a formatting issue.
index: 1.0
text_combine:
[V3 RSS] Multiple update improvement. - On high traffic feeds (ex. a reddit rss feed) there needs to be a way to collect multiple posts into one message from the bot. Should be optional to prevent it from being a formatting issue.
label: non_infrastructure
text:
multiple update improvement on high traffic feeds ex a reddit rss feed there needs to be a way to collect multiple posts into one message from the bot should be optional to prevent it from being a formatting issue
binary_label: 0

Unnamed: 0: 35,579
id: 14,749,831,448
type: IssuesEvent
created_at: 2021-01-08 00:27:32
repo: Azure/azure-sdk-for-net
repo_url: https://api.github.com/repos/Azure/azure-sdk-for-net
action: closed
title: FrontDoor FrontendEndpoint update did report MetodNotAllowed
labels: App Services Mgmt Service Attention customer-reported needs-team-attention question
body:
We are working on a new platform. We wane use FrontDoor but for that, we need to automate the Certification Replace Process. I Downloaded the Preview SDK and try to implement that. Because it's all-new there is now Documentation or Examples available at the moment. I am not sure if I di something wrong or if it’s a bug or not implemented yet. I Would Really appreciate some help. The following code illustrates what I try to do. ``` var sp = new ServicePrincipalLoginInformation { ClientId = "xxxxxxx-xxxxxxxxx-xxxxxx", ClientSecret = "xxxxxxx-xxxxxxxxx-xxxxxx" }; var credentials = new AzureCredentials(sp, context.Config.Azure.TenantId, AzureEnvironment.AzureGlobalCloud); var client = new FrontDoorManagementClient(credentials) { SubscriptionId = "xxxxxxx-xxxxxxxxx-xxxxxx" }; //Getting all FrontDoor instances var list = await client.FrontDoors.ListWithHttpMessagesAsync(); //Select the FrontDoor instance var ff = list.Body.Single(e => e.FriendlyName == frontDoorName); //Select the FrontendEndpoint by Hostname var root = ff.FrontendEndpoints.Single(e => e.HostName == domain); //KeyVault ResourceID var id = "/subscriptions/xxxxxxx-xxxxxxxxx-xxxxxx/resourceGroups/xxxxxxxxx/providers/Microsoft.KeyVault/vaults/xxxxxxxxx"; //Clone the Endpoint and add KeyVault Certificate config var endpoint = new FrontendEndpoint( id: root.Id, hostName: root.HostName, sessionAffinityEnabledState: root.SessionAffinityEnabledState, webApplicationFirewallPolicyLink: root.WebApplicationFirewallPolicyLink, name: root.Name, sessionAffinityTtlSeconds: root.SessionAffinityTtlSeconds, customHttpsConfiguration: new CustomHttpsConfiguration { CertificateSource = "AzureKeyVault", Vault = new KeyVaultCertificateSourceParametersVault(id: id), SecretName = "XXX", SecretVersion = "XXX" }); //Update -- Call failed: Operation returned an invalid status code 'MethodNotAllowed' await client.FrontendEndpoints.CreateOrUpdateAsync(resourceGroup, frontDoorName, root.Name, endpoint); ```
2.0
FrontDoor FrontendEndpoint update did report MetodNotAllowed - We are working on a new platform. We wane use FrontDoor but for that, we need to automate the Certification Replace Process. I Downloaded the Preview SDK and try to implement that. Because it's all-new there is now Documentation or Examples available at the moment. I am not sure if I di something wrong or if it’s a bug or not implemented yet. I Would Really appreciate some help. The following code illustrates what I try to do. ``` var sp = new ServicePrincipalLoginInformation { ClientId = "xxxxxxx-xxxxxxxxx-xxxxxx", ClientSecret = "xxxxxxx-xxxxxxxxx-xxxxxx" }; var credentials = new AzureCredentials(sp, context.Config.Azure.TenantId, AzureEnvironment.AzureGlobalCloud); var client = new FrontDoorManagementClient(credentials) { SubscriptionId = "xxxxxxx-xxxxxxxxx-xxxxxx" }; //Getting all FrontDoor instances var list = await client.FrontDoors.ListWithHttpMessagesAsync(); //Select the FrontDoor instance var ff = list.Body.Single(e => e.FriendlyName == frontDoorName); //Select the FrontendEndpoint by Hostname var root = ff.FrontendEndpoints.Single(e => e.HostName == domain); //KeyVault ResourceID var id = "/subscriptions/xxxxxxx-xxxxxxxxx-xxxxxx/resourceGroups/xxxxxxxxx/providers/Microsoft.KeyVault/vaults/xxxxxxxxx"; //Clone the Endpoint and add KeyVault Certificate config var endpoint = new FrontendEndpoint( id: root.Id, hostName: root.HostName, sessionAffinityEnabledState: root.SessionAffinityEnabledState, webApplicationFirewallPolicyLink: root.WebApplicationFirewallPolicyLink, name: root.Name, sessionAffinityTtlSeconds: root.SessionAffinityTtlSeconds, customHttpsConfiguration: new CustomHttpsConfiguration { CertificateSource = "AzureKeyVault", Vault = new KeyVaultCertificateSourceParametersVault(id: id), SecretName = "XXX", SecretVersion = "XXX" }); //Update -- Call failed: Operation returned an invalid status code 'MethodNotAllowed' await client.FrontendEndpoints.CreateOrUpdateAsync(resourceGroup, frontDoorName, root.Name, endpoint); ```
non_infrastructure
frontdoor frontendendpoint update did report metodnotallowed we are working on a new platform we wane use frontdoor but for that we need to automate the certification replace process i downloaded the preview sdk and try to implement that because it s all new there is now documentation or examples available at the moment i am not sure if i di something wrong or if it’s a bug or not implemented yet i would really appreciate some help the following code illustrates what i try to do var sp new serviceprincipallogininformation clientid xxxxxxx xxxxxxxxx xxxxxx clientsecret xxxxxxx xxxxxxxxx xxxxxx var credentials new azurecredentials sp context config azure tenantid azureenvironment azureglobalcloud var client new frontdoormanagementclient credentials subscriptionid xxxxxxx xxxxxxxxx xxxxxx getting all frontdoor instances var list await client frontdoors listwithhttpmessagesasync select the frontdoor instance var ff list body single e e friendlyname frontdoorname select the frontendendpoint by hostname var root ff frontendendpoints single e e hostname domain keyvault resourceid var id subscriptions xxxxxxx xxxxxxxxx xxxxxx resourcegroups xxxxxxxxx providers microsoft keyvault vaults xxxxxxxxx clone the endpoint and add keyvault certificate config var endpoint new frontendendpoint id root id hostname root hostname sessionaffinityenabledstate root sessionaffinityenabledstate webapplicationfirewallpolicylink root webapplicationfirewallpolicylink name root name sessionaffinityttlseconds root sessionaffinityttlseconds customhttpsconfiguration new customhttpsconfiguration certificatesource azurekeyvault vault new keyvaultcertificatesourceparametersvault id id secretname xxx secretversion xxx update call failed operation returned an invalid status code methodnotallowed await client frontendendpoints createorupdateasync resourcegroup frontdoorname root name endpoint
0
19,770
5,932,256,796
IssuesEvent
2017-05-24 08:53:17
jtreml/f1ticker
https://api.github.com/repos/jtreml/f1ticker
opened
Visual Improvements
CodePlex
<b>juergentreml[CodePlex]</b> <br />Adjusted border colors for flyout window and gadget itself, Adjusted text size and inserted horizontal lines for spacing, Corrected bugs regarding content aligning and justifying in the gadget
1.0
Visual Improvements - <b>juergentreml[CodePlex]</b> <br />Adjusted border colors for flyout window and gadget itself, Adjusted text size and inserted horizontal lines for spacing, Corrected bugs regarding content aligning and justifying in the gadget
non_infrastructure
visual improvements juergentreml adjusted border colors for flyout window and gadget itself adjusted text size and inserted horizontal lines for spacing corrected bugs regarding content aligning and justifying in the gadget
0
230,157
18,508,006,794
IssuesEvent
2021-10-19 21:12:02
nbrugger-tgm/reactj
https://api.github.com/repos/nbrugger-tgm/reactj
closed
[CI] Add code-coverage with Codacy
testing
As Codacy analysis works better than code-climate i would like code-coverage reports to be sent to codacy Ref : https://docs.codacy.com/coverage-reporter/#generating-coverage Integrate the test reporting into `CircleCI` since the format there is easier than Github Actions
1.0
[CI] Add code-coverage with Codacy - As Codacy analysis works better than code-climate i would like code-coverage reports to be sent to codacy Ref : https://docs.codacy.com/coverage-reporter/#generating-coverage Integrate the test reporting into `CircleCI` since the format there is easier than Github Actions
non_infrastructure
add code coverage with codacy as codacy analysis works better than code climate i would like code coverage reports to be sent to codacy ref integrate the test reporting into circleci since the format there is easier than github actions
0
27,346
21,648,052,592
IssuesEvent
2022-05-06 06:01:28
dotnet/runtime
https://api.github.com/repos/dotnet/runtime
closed
Reevaluate tests/tests*OutsideWindows.txt files for .NET Core 2.0
test-enhancement area-Infrastructure-coreclr no-recent-activity backlog-cleanup-candidate
Make sure these files accurately reflect the state of tests in the tree. Some may be passing now due to netstandard2.0 work for example.
1.0
Reevaluate tests/tests*OutsideWindows.txt files for .NET Core 2.0 - Make sure these files accurately reflect the state of tests in the tree. Some may be passing now due to netstandard2.0 work for example.
infrastructure
reevaluate tests tests outsidewindows txt files for net core make sure these files accurately reflect the state of tests in the tree some may be passing now due to work for example
1
112,440
9,574,617,838
IssuesEvent
2019-05-07 02:36:56
codice/ddf
https://api.github.com/repos/codice/ddf
closed
Add unit tests for map settings, info, and context menu
:microscope: Test Improvements
<!-- Have you read DDF's Code of Conduct? By filing an Issue, you are expected to comply with it, including treating everyone with respect: https://github.com/codice/ddf/blob/master/.github/CODE_OF_CONDUCT.md Do you want to ask a question? Are you looking for support? The DDF Developers group - https://groups.google.com/forum/#!forum/ddf-developers is the best place for getting support. --> ### Description Add unit tests for map-settings/info/context-menu. #### Expected behavior: Unit tests for map-settings, map-info, map-context-menu run during build process. ### Version N/A ### Additional Information N/A
1.0
Add unit tests for map settings, info, and context menu - <!-- Have you read DDF's Code of Conduct? By filing an Issue, you are expected to comply with it, including treating everyone with respect: https://github.com/codice/ddf/blob/master/.github/CODE_OF_CONDUCT.md Do you want to ask a question? Are you looking for support? The DDF Developers group - https://groups.google.com/forum/#!forum/ddf-developers is the best place for getting support. --> ### Description Add unit tests for map-settings/info/context-menu. #### Expected behavior: Unit tests for map-settings, map-info, map-context-menu run during build process. ### Version N/A ### Additional Information N/A
non_infrastructure
add unit tests for map settings info and context menu have you read ddf s code of conduct by filing an issue you are expected to comply with it including treating everyone with respect do you want to ask a question are you looking for support the ddf developers group is the best place for getting support description add unit tests for map settings info context menu expected behavior unit tests for map settings map info map context menu run during build process version n a additional information n a
0
4,049
4,788,692,599
IssuesEvent
2016-10-30 18:03:12
LOZORD/xanadu
https://api.github.com/repos/LOZORD/xanadu
closed
Add testing
hacktoberfest help wanted infrastructure
Testings should be ran as `npm run test`. I'm enforcing (:sunglasses:) that we use Mocha, Chai, and Sinon for testing (as `devDependencies`). The testing directory `test/` structure should mimic exactly the structure of `dist/` (which mimics `src/` via Babel).
1.0
Add testing - Testings should be ran as `npm run test`. I'm enforcing (:sunglasses:) that we use Mocha, Chai, and Sinon for testing (as `devDependencies`). The testing directory `test/` structure should mimic exactly the structure of `dist/` (which mimics `src/` via Babel).
infrastructure
add testing testings should be ran as npm run test i m enforcing sunglasses that we use mocha chai and sinon for testing as devdependencies the testing directory test structure should mimic exactly the structure of dist which mimics src via babel
1
8,515
7,463,571,918
IssuesEvent
2018-04-01 07:30:22
RITlug/TigerOS
https://api.github.com/repos/RITlug/TigerOS
opened
Remove dash-to-dock repackaging from RITlug repositories and mirrors website
duplicate easyfix infrastructure priority:low
<!-- Thanks for filing a new issue on TigerOS! To help us help you, please use this template for filing your bug, feature request, or other topic. If you use this template, it helps the developers review your ticket and figure out the problem. If you don't use this template, we may close your issue as not enough information. --> # Summary Since repackaging dash-to-dock, an official RPM has been created for this package. Thus, our repackage is now no longer necessary. <!-- Choose the type of issue you are filing. You can choose one by typing [X] in one of the fields. For example, if a bug report, change the line below to… [X] Bug report --> * This issue is a… * [ ] Bug report * [ ] Feature request * [X] Other issue * [ ] Question <!-- Please read the wiki first! --> * **Describe the issue / feature in 1-2 sentences**: # Details The builder.ritlug.com website also currently hosts the repkg dash-to-dock. This can be removed due to the new [dash-to-dock](https://github.com/RITlug/tigeros-dash-to-dock "TigerOS dash-to-dock") package no longer needing this repackage. <!-- If you have other details to include, like screenshots, stacktraces, or something more detailed, please include it here! If you have a long stacktrace, DO NOT PASTE IT HERE! Please use Pastebin and add a link here. --> <!-- Phew, all done! Thank you so much for filing a new issue! We'll try to get back to you soon. -->
1.0
Remove dash-to-dock repackaging from RITlug repositories and mirrors website - <!-- Thanks for filing a new issue on TigerOS! To help us help you, please use this template for filing your bug, feature request, or other topic. If you use this template, it helps the developers review your ticket and figure out the problem. If you don't use this template, we may close your issue as not enough information. --> # Summary Since repackaging dash-to-dock, an official RPM has been created for this package. Thus, our repackage is now no longer necessary. <!-- Choose the type of issue you are filing. You can choose one by typing [X] in one of the fields. For example, if a bug report, change the line below to… [X] Bug report --> * This issue is a… * [ ] Bug report * [ ] Feature request * [X] Other issue * [ ] Question <!-- Please read the wiki first! --> * **Describe the issue / feature in 1-2 sentences**: # Details The builder.ritlug.com website also currently hosts the repkg dash-to-dock. This can be removed due to the new [dash-to-dock](https://github.com/RITlug/tigeros-dash-to-dock "TigerOS dash-to-dock") package no longer needing this repackage. <!-- If you have other details to include, like screenshots, stacktraces, or something more detailed, please include it here! If you have a long stacktrace, DO NOT PASTE IT HERE! Please use Pastebin and add a link here. --> <!-- Phew, all done! Thank you so much for filing a new issue! We'll try to get back to you soon. -->
infrastructure
remove dash to dock repackaging from ritlug repositories and mirrors website thanks for filing a new issue on tigeros to help us help you please use this template for filing your bug feature request or other topic if you use this template it helps the developers review your ticket and figure out the problem if you don t use this template we may close your issue as not enough information summary since repackaging dash to dock an official rpm has been created for this package thus our repackage is now no longer necessary choose the type of issue you are filing you can choose one by typing in one of the fields for example if a bug report change the line below to… bug report this issue is a… bug report feature request other issue question describe the issue feature in sentences details the builder ritlug com website also currently hosts the repkg dash to dock this can be removed due to the new tigeros dash to dock package no longer needing this repackage if you have other details to include like screenshots stacktraces or something more detailed please include it here if you have a long stacktrace do not paste it here please use pastebin and add a link here phew all done thank you so much for filing a new issue we ll try to get back to you soon
1
280,015
8,677,001,773
IssuesEvent
2018-11-30 15:38:16
DemocraciaEnRed/leyesabiertas-web
https://api.github.com/repos/DemocraciaEnRed/leyesabiertas-web
closed
Cambiar titulo, bajada y mail oficial
priority: high
- [x] El nombre de la plataforma debe ser Portal de Leyes Abiertas - [x] Bajada (texto debajo del titulo): Plataforma de intervención ciudadana en propuestas de ley - [x] Agregar el mail oficial en contacto e info estática
1.0
Cambiar titulo, bajada y mail oficial - - [x] El nombre de la plataforma debe ser Portal de Leyes Abiertas - [x] Bajada (texto debajo del titulo): Plataforma de intervención ciudadana en propuestas de ley - [x] Agregar el mail oficial en contacto e info estática
non_infrastructure
cambiar titulo bajada y mail oficial el nombre de la plataforma debe ser portal de leyes abiertas bajada texto debajo del titulo plataforma de intervención ciudadana en propuestas de ley agregar el mail oficial en contacto e info estática
0
770
2,891,875,529
IssuesEvent
2015-06-15 09:14:52
insieme/insieme
https://api.github.com/repos/insieme/insieme
opened
iPic3D integration tests
enhancement infrastructure
Make the iPic3D code ready for the integration testing framework, create separate task for it on the continuous integration server.
1.0
iPic3D integration tests - Make the iPic3D code ready for the integration testing framework, create separate task for it on the continuous integration server.
infrastructure
integration tests make the code ready for the integration testing framework create separate task for it on the continuous integration server
1
9,939
8,257,876,052
IssuesEvent
2018-09-13 07:19:26
raiden-network/raiden
https://api.github.com/repos/raiden-network/raiden
closed
Fix automatic deployment
P2 infrastructure
## Problem Definition During the last release we noticed that the automated release system doesn't work correctly. Needs to be fixed. Details in the [travis build](https://travis-ci.org/raiden-network/raiden/builds/405474632)
1.0
Fix automatic deployment - ## Problem Definition During the last release we noticed that the automated release system doesn't work correctly. Needs to be fixed. Details in the [travis build](https://travis-ci.org/raiden-network/raiden/builds/405474632)
infrastructure
fix automatic deployment problem definition during the last release we noticed that the automated release system doesn t work correctly needs to be fixed details in the
1
607,428
18,782,335,068
IssuesEvent
2021-11-08 08:29:24
code-ready/crc
https://api.github.com/repos/code-ready/crc
closed
[BUG] Unable to upgrade according the documentation with windows tray enabled
kind/bug priority/minor status/stale
### General information Tested on downstream environments ## CRC version ```bash CodeReady Containers version: 1.25.0+0e5748c8 OpenShift version: 4.7.5 (embedded in executable) ``` ## CRC config ```bash - consent-telemetry : no - enable-experimental-features : true ``` ## Host Operating System ```bash OS Name: Microsoft Windows 10 Pro OS Version: 10.0.19042 N/A Build 19042 ``` ### Steps to reproduce 1. crc config set enable-experimental-features true 2. crc setup 2. crc delete 3. trying to update crc binary ### Expected Binary can be updated with newer version according to the defined steps on [documentation](https://code-ready.github.io/crc/#upgrading-codeready-containers_gsg) ### Actual Can not copy the new binary (can not delete the previous one due to file lock) ![release125-windowstrayprocess](https://user-images.githubusercontent.com/1957899/114548219-c112e800-9c5f-11eb-897b-ec3f3786846f.png) In this scenario a cleanup command is required to destroy the dangling proces ```bash crc cleanup ``` ### Logs ```bash PS C:\Users\crcqe> crc setup INFO Checking if podman remote executable is cached INFO Checking if admin-helper executable is cached INFO Checking minimum RAM requirements INFO Checking if running in a shell with administrator rights INFO Checking Windows 10 release INFO Checking Windows edition INFO Checking if Hyper-V is installed and operational INFO Checking if user is a member of the Hyper-V Administrators group INFO Checking if Hyper-V service is enabled INFO Checking if the Hyper-V virtual switch exist INFO Found Virtual Switch to use: Default Switch INFO Checking if tray executable is present INFO Checking if CodeReady Containers daemon is installed INFO Installing CodeReady Containers daemon INFO Will run as admin: Create symlink to daemon batch file in start-up folder INFO Checking if tray is installed INFO Installing CodeReady Containers tray INFO Will run as admin: Create symlink to tray in start-up folder INFO Checking if CRC bundle is extracted in '$HOME/.crc' INFO Checking if C:\Users\crcqe\.crc\cache\crc_hyperv_4.7.5.crcbundle exists Your system is correctly setup for using CodeReady Containers, you can now run 'crc start' to start the OpenShift cluster PS C:\Users\crcqe> crc delete --log-level debug DEBU CodeReady Containers version: 1.25.0+0e5748c8 DEBU OpenShift version: 4.7.5 (embedded in executable) DEBU Running 'crc delete' DEBU Checking file: C:\Users\crcqe\.crc\machines\crc\.crc-exist Machine does not exist. Use 'crc start' to create it ```
1.0
[BUG] Unable to upgrade according the documentation with windows tray enabled - ### General information Tested on downstream environments ## CRC version ```bash CodeReady Containers version: 1.25.0+0e5748c8 OpenShift version: 4.7.5 (embedded in executable) ``` ## CRC config ```bash - consent-telemetry : no - enable-experimental-features : true ``` ## Host Operating System ```bash OS Name: Microsoft Windows 10 Pro OS Version: 10.0.19042 N/A Build 19042 ``` ### Steps to reproduce 1. crc config set enable-experimental-features true 2. crc setup 2. crc delete 3. trying to update crc binary ### Expected Binary can be updated with newer version according to the defined steps on [documentation](https://code-ready.github.io/crc/#upgrading-codeready-containers_gsg) ### Actual Can not copy the new binary (can not delete the previous one due to file lock) ![release125-windowstrayprocess](https://user-images.githubusercontent.com/1957899/114548219-c112e800-9c5f-11eb-897b-ec3f3786846f.png) In this scenario a cleanup command is required to destroy the dangling proces ```bash crc cleanup ``` ### Logs ```bash PS C:\Users\crcqe> crc setup INFO Checking if podman remote executable is cached INFO Checking if admin-helper executable is cached INFO Checking minimum RAM requirements INFO Checking if running in a shell with administrator rights INFO Checking Windows 10 release INFO Checking Windows edition INFO Checking if Hyper-V is installed and operational INFO Checking if user is a member of the Hyper-V Administrators group INFO Checking if Hyper-V service is enabled INFO Checking if the Hyper-V virtual switch exist INFO Found Virtual Switch to use: Default Switch INFO Checking if tray executable is present INFO Checking if CodeReady Containers daemon is installed INFO Installing CodeReady Containers daemon INFO Will run as admin: Create symlink to daemon batch file in start-up folder INFO Checking if tray is installed INFO Installing CodeReady Containers tray INFO Will run as admin: Create symlink to tray in start-up folder INFO Checking if CRC bundle is extracted in '$HOME/.crc' INFO Checking if C:\Users\crcqe\.crc\cache\crc_hyperv_4.7.5.crcbundle exists Your system is correctly setup for using CodeReady Containers, you can now run 'crc start' to start the OpenShift cluster PS C:\Users\crcqe> crc delete --log-level debug DEBU CodeReady Containers version: 1.25.0+0e5748c8 DEBU OpenShift version: 4.7.5 (embedded in executable) DEBU Running 'crc delete' DEBU Checking file: C:\Users\crcqe\.crc\machines\crc\.crc-exist Machine does not exist. Use 'crc start' to create it ```
non_infrastructure
unable to upgrade according the documentation with windows tray enabled general information tested on downstream environments crc version bash codeready containers version openshift version embedded in executable crc config bash consent telemetry no enable experimental features true host operating system bash os name microsoft windows pro os version n a build steps to reproduce crc config set enable experimental features true crc setup crc delete trying to update crc binary expected binary can be updated with newer version according to the defined steps on actual can not copy the new binary can not delete the previous one due to file lock in this scenario a cleanup command is required to destroy the dangling proces bash crc cleanup logs bash ps c users crcqe crc setup info checking if podman remote executable is cached info checking if admin helper executable is cached info checking minimum ram requirements info checking if running in a shell with administrator rights info checking windows release info checking windows edition info checking if hyper v is installed and operational info checking if user is a member of the hyper v administrators group info checking if hyper v service is enabled info checking if the hyper v virtual switch exist info found virtual switch to use default switch info checking if tray executable is present info checking if codeready containers daemon is installed info installing codeready containers daemon info will run as admin create symlink to daemon batch file in start up folder info checking if tray is installed info installing codeready containers tray info will run as admin create symlink to tray in start up folder info checking if crc bundle is extracted in home crc info checking if c users crcqe crc cache crc hyperv crcbundle exists your system is correctly setup for using codeready containers you can now run crc start to start the openshift cluster ps c users crcqe crc delete log level debug debu codeready containers version debu openshift version embedded in executable debu running crc delete debu checking file c users crcqe crc machines crc crc exist machine does not exist use crc start to create it
0
874
2,984,923,265
IssuesEvent
2015-07-18 13:48:04
hackndev/zinc
https://api.github.com/repos/hackndev/zinc
closed
Modify examples to be dedicated crates
cleanup infrastructure nightly fallout
As a followup to #330, we need to refactor all the examples to be dedicated crates. This also shows how hard zinc is to use for external users. I'd expect a zinc app to be just one more crate. It is unreasonable to expect the users to download zinc source and add a new "example" entry. * [x] [blink](https://github.com/farcaller/zinc/commit/23ba2d49d214d4f45e7ae2a14e2280072bab441d) in #318 * [x] blink_k20 * [x] blink_k20_isr * [x] blink_lpc17xx * [x] blink_pt * [x] blink_stm32f4 * [x] blink_stm32l1 * [x] blink_tiva_c * [x] bluenrg_stm32l1 * [x] dht22 * [x] empty * [x] lcd_tiva_c * [x] uart * [x] uart_tiva_c * [x] usart_stm32l1
1.0
Modify examples to be dedicated crates - As a followup to #330, we need to refactor all the examples to be dedicated crates. This also shows how hard zinc is to use for external users. I'd expect a zinc app to be just one more crate. It is unreasonable to expect the users to download zinc source and add a new "example" entry. * [x] [blink](https://github.com/farcaller/zinc/commit/23ba2d49d214d4f45e7ae2a14e2280072bab441d) in #318 * [x] blink_k20 * [x] blink_k20_isr * [x] blink_lpc17xx * [x] blink_pt * [x] blink_stm32f4 * [x] blink_stm32l1 * [x] blink_tiva_c * [x] bluenrg_stm32l1 * [x] dht22 * [x] empty * [x] lcd_tiva_c * [x] uart * [x] uart_tiva_c * [x] usart_stm32l1
infrastructure
modify examples to be dedicated crates as a followup to we need to refactor all the examples to be dedicated crates this also shows how hard zinc is to use for external users i d expect a zinc app to be just one more crate it is unreasonable to expect the users to download zinc source and add a new example entry in blink blink isr blink blink pt blink blink blink tiva c bluenrg empty lcd tiva c uart uart tiva c usart
1
14,952
3,907,998,169
IssuesEvent
2016-04-19 14:37:49
plk/biblatex
https://api.github.com/repos/plk/biblatex
closed
Add a "quick start" guide to the manual
documentation enhancement
Just documenting another item on the to-do list. Any suggestions for the format or content would be welcome here.
1.0
Add a "quick start" guide to the manual - Just documenting another item on the to-do list. Any suggestions for the format or content would be welcome here.
non_infrastructure
add a quick start guide to the manual just documenting another item on the to do list any suggestions for the format or content would be welcome here
0
4,133
4,836,653,200
IssuesEvent
2016-11-08 20:13:26
devtools-html/debugger.html
https://api.github.com/repos/devtools-html/debugger.html
closed
`npm run firefox` is not starting firefox with --start-debugger-server
infrastructure
When we upgraded selenium + geckodriver, we stopped passing _--start-debugger-server_ into the firefox command. I created this issue with `geckodriver` to follow up earlier today and will look into it tomorrow https://github.com/mozilla/geckodriver/issues/260. The solution should be a fairly simple api change.
1.0
`npm run firefox` is not starting firefox with --start-debugger-server - When we upgraded selenium + geckodriver, we stopped passing _--start-debugger-server_ into the firefox command. I created this issue with `geckodriver` to follow up earlier today and will look into it tomorrow https://github.com/mozilla/geckodriver/issues/260. The solution should be a fairly simple api change.
infrastructure
npm run firefox is not starting firefox with start debugger server when we upgraded selenium geckodriver we stopped passing start debugger server into the firefox command i created this issue with geckodriver to follow up earlier today and will look into it tomorrow the solution should be a fairly simple api change
1
7,864
7,114,538,065
IssuesEvent
2018-01-18 01:17:26
dotnet/corefx
https://api.github.com/repos/dotnet/corefx
closed
init-tools.cmd hangs when running out of disk space
area-Infrastructure enhancement
After open the cmd file,I just get the following info. Installing dotnet cli... I have checked the log file > Running init-tools.cmd > Installing 'https://dotnetcli.azureedge.net/dotnet/Sdk/2.0.0/dotnet-sdk-2.0.0-win-x64.zip' to 'D:\ChuckLu\Git\GitHub\dotnet\corefx\Tools\dotnetcli\dotnet-sdk-2.0.0-win-x64.zip' >
1.0
init-tools.cmd hangs when running out of disk space - After open the cmd file,I just get the following info. Installing dotnet cli... I have checked the log file > Running init-tools.cmd > Installing 'https://dotnetcli.azureedge.net/dotnet/Sdk/2.0.0/dotnet-sdk-2.0.0-win-x64.zip' to 'D:\ChuckLu\Git\GitHub\dotnet\corefx\Tools\dotnetcli\dotnet-sdk-2.0.0-win-x64.zip' >
infrastructure
init tools cmd hangs when running out of disk space after open the cmd file i just get the following info installing dotnet cli i have checked the log file running init tools cmd installing to d chucklu git github dotnet corefx tools dotnetcli dotnet sdk win zip
1
35,699
32,050,369,892
IssuesEvent
2023-09-23 13:30:30
IntelPython/dpctl
https://api.github.com/repos/IntelPython/dpctl
opened
Implement GH action to purge dppy/label/dev of old artifacts
infrastructure
The `dppy/label/dev` channel often runs out of space. It would be useful to also have a cron-scheduled GH action to purge old artifacts from the channel.
1.0
Implement GH action to purge dppy/label/dev of old artifacts - The `dppy/label/dev` channel often runs out of space. It would be useful to also have a cron-scheduled GH action to purge old artifacts from the channel.
infrastructure
implement gh action to purge dppy label dev of old artifacts the dppy label dev channel often runs out of space it would be useful to also have a cron scheduled gh action to purge old artifacts from the channel
1
6,523
6,495,665,193
IssuesEvent
2017-08-22 06:48:41
SatelliteQE/robottelo
https://api.github.com/repos/SatelliteQE/robottelo
closed
Turn RHEL image name constants into robottelo.properties' settings
6.1 6.2 6.3 enhancement Infrastructure RFE
We need to have it because of Vault Requests and we dont want to change the code seamlessly with evry RHEL dot release. Settings is the right way. Related changes in robottelo-ci are covered by https://github.com/SatelliteQE/robottelo-ci/issues/497
1.0
Turn RHEL image name constants into robottelo.properties' settings - We need to have it because of Vault Requests and we dont want to change the code seamlessly with evry RHEL dot release. Settings is the right way. Related changes in robottelo-ci are covered by https://github.com/SatelliteQE/robottelo-ci/issues/497
infrastructure
turn rhel image name constants into robottelo properties settings we need to have it because of vault requests and we dont want to change the code seamlessly with evry rhel dot release settings is the right way related changes in robottelo ci are covered by
1
8,346
7,349,200,502
IssuesEvent
2018-03-08 09:50:55
outcobra/outstanding-cobra
https://api.github.com/repos/outcobra/outstanding-cobra
opened
Security audit
M-C-backend M-C-infrastructure P-3-medium T-task
We should perform a quick security audit for our application. Including manual and automated testing (e.g. [Vega Report](https://subgraph.com/vega/index.en.html)). The servers are already being scanned weekly by OpenVAS/Greenbone and issues fixed accordingly.
1.0
Security audit - We should perform a quick security audit for our application. Including manual and automated testing (e.g. [Vega Report](https://subgraph.com/vega/index.en.html)). The servers are already being scanned weekly by OpenVAS/Greenbone and issues fixed accordingly.
infrastructure
security audit we should perform a quick security audit for our application including manual and automated testing e g the servers are already being scanned weekly by openvas greenbone and issues fixed accordingly
1
70,320
3,322,382,569
IssuesEvent
2015-11-09 14:17:16
ow2-proactive/studio
https://api.github.com/repos/ow2-proactive/studio
opened
Drag&Drop fails sometime if make it slowly.
priority:minor
Drag&Drop fails sometime if make it slowly. The tasks dropdown (id="task-menu") in the studio is closed every time the function isConnected (studio-client.js) is triggered. We should prevent this behaviour because when you don’t know the interface, you make it slowly and this is disturbing.
1.0
Drag&Drop fails sometime if make it slowly. - Drag&Drop fails sometime if make it slowly. The tasks dropdown (id="task-menu") in the studio is closed every time the function isConnected (studio-client.js) is triggered. We should prevent this behaviour because when you don’t know the interface, you make it slowly and this is disturbing.
non_infrastructure
drag drop fails sometime if make it slowly drag drop fails sometime if make it slowly the tasks dropdown id task menu in the studio is closed every time the function isconnected studio client js is triggered we should prevent this behaviour because when you don’t know the interface you make it slowly and this is disturbing
0
822,523
30,876,241,248
IssuesEvent
2023-08-03 14:27:41
etro-js/etro
https://api.github.com/repos/etro-js/etro
opened
Add `onDraw` option to `Movie.record()`
type:feature priority:medium
This optional user-provided callback should run at the end of every call to `Movie._render()`
1.0
Add `onDraw` option to `Movie.record()` - This optional user-provided callback should run at the end of every call to `Movie._render()`
non_infrastructure
add ondraw option to movie record this optional user provided callback should run at the end of every call to movie render
0
92,437
8,364,005,818
IssuesEvent
2018-10-03 21:20:37
bokeh/bokeh
https://api.github.com/repos/bokeh/bokeh
closed
verify_all() doesn't give information what failed
tag: component: tests type: bug
This is the output from `py.test`: ``` ================================================================= FAILURES ================================================================== _________________________________________________________ Test___all__.test___all__ _________________________________________________________ self = <bokeh._testing.util.api.verify_all.<locals>.Test___all__ object at 0x7f2107dbab70> def test___all__(self): if isinstance(module, string_types): mod = importlib.import_module(module) else: mod = module assert hasattr(mod, "__all__") > assert mod.__all__ == ALL E AssertionError bokeh/_testing/util/api.py:52: AssertionError ``` I don't know what's the origin of failure and what's the difference. Running py.test with `-vv` helps to establish the offending file. To fix this, either `test__all__` has to be implemented, so that it reports the use-site (not the implementation site), or assertions should have informative error messages.
1.0
verify_all() doesn't give information what failed - This is the output from `py.test`: ``` ================================================================= FAILURES ================================================================== _________________________________________________________ Test___all__.test___all__ _________________________________________________________ self = <bokeh._testing.util.api.verify_all.<locals>.Test___all__ object at 0x7f2107dbab70> def test___all__(self): if isinstance(module, string_types): mod = importlib.import_module(module) else: mod = module assert hasattr(mod, "__all__") > assert mod.__all__ == ALL E AssertionError bokeh/_testing/util/api.py:52: AssertionError ``` I don't know what's the origin of failure and what's the difference. Running py.test with `-vv` helps to establish the offending file. To fix this, either `test__all__` has to be implemented, so that it reports the use-site (not the implementation site), or assertions should have informative error messages.
non_infrastructure
verify all doesn t give information what failed this is the output from py test failures test all test all self test all object at def test all self if isinstance module string types mod importlib import module module else mod module assert hasattr mod all assert mod all all e assertionerror bokeh testing util api py assertionerror i don t know what s the origin of failure and what s the difference running py test with vv helps to establish the offending file to fix this either test all has to be implemented so that it reports the use site not the implementation site or assertions should have informative error messages
0
343
2,652,902,403
IssuesEvent
2015-03-16 19:58:03
mroth/emojitrack-web
https://api.github.com/repos/mroth/emojitrack-web
opened
admin pages bootstrap 3 transition
infrastructure
_From @mroth on March 27, 2014 0:4_ and redesign a little to be more legible on mobile, so i can check up on things remotely more effectively _Copied from original issue: mroth/emojitrack#28_
1.0
admin pages bootstrap 3 transition - _From @mroth on March 27, 2014 0:4_ and redesign a little to be more legible on mobile, so i can check up on things remotely more effectively _Copied from original issue: mroth/emojitrack#28_
infrastructure
admin pages bootstrap transition from mroth on march and redesign a little to be more legible on mobile so i can check up on things remotely more effectively copied from original issue mroth emojitrack
1
4,224
3,003,352,068
IssuesEvent
2015-07-24 23:05:23
ash-lang/ash
https://api.github.com/repos/ash-lang/ash
opened
Default constructor body and super-class constructor calls.
analysis code-gen grammar proposal
If a class uses a default constructor and its superclass has a non-empty constructor, one of the superclass constructors must be called. ``` class Person(name : String, age : int) class Student(name : String, age : int, year : int) : Person(name, age) ``` Add a `construct` keyword that allows a class with a default constructor to execute code when the default constructor is called and after the fields have been assigned. ``` class Person(name : String, age : int) { construct { println("My default constructor was called!") } }
1.0
Default constructor body and super-class constructor calls. - If a class uses a default constructor and its superclass has a non-empty constructor, one of the superclass constructors must be called. ``` class Person(name : String, age : int) class Student(name : String, age : int, year : int) : Person(name, age) ``` Add a `construct` keyword that allows a class with a default constructor to execute code when the default constructor is called and after the fields have been assigned. ``` class Person(name : String, age : int) { construct { println("My default constructor was called!") } }
non_infrastructure
default constructor body and super class constructor calls if a class uses a default constructor and its superclass has a non empty constructor one of the superclass constructors must be called class person name string age int class student name string age int year int person name age add a construct keyword that allows a class with a default constructor to execute code when the default constructor is called and after the fields have been assigned class person name string age int construct println my default constructor was called
0
398,551
27,200,762,735
IssuesEvent
2023-02-20 09:33:13
acikkaynak/afetharita-roadmap
https://api.github.com/repos/acikkaynak/afetharita-roadmap
opened
[ACT]: Documenting achievements, could have been better and problems sections
documentation action
## Description According to the decision that has been made at the [meeting](https://github.com/acikkaynak/afetharita-roadmap/blob/main/Notes/Meetings/20230219.md) documentation for the below three sections should have been completed. 1. In the short term - What did we achieve? 2. What could’ve done better? 3. What kind of problems we had? ## Items to Complete - [x] In the short term - What did we achieve? - [x] What could’ve done better? - [x] What kind of problems we had? ## Supporting Information (Optional) https://github.com/acikkaynak/afetharita-roadmap/wiki/Mapping-the-Disaster:-The-Story-of-Afet-Harita
1.0
[ACT]: Documenting achievements, could have been better and problems sections - ## Description According to the decision that has been made at the [meeting](https://github.com/acikkaynak/afetharita-roadmap/blob/main/Notes/Meetings/20230219.md) documentation for the below three sections should have been completed. 1. In the short term - What did we achieve? 2. What could’ve done better? 3. What kind of problems we had? ## Items to Complete - [x] In the short term - What did we achieve? - [x] What could’ve done better? - [x] What kind of problems we had? ## Supporting Information (Optional) https://github.com/acikkaynak/afetharita-roadmap/wiki/Mapping-the-Disaster:-The-Story-of-Afet-Harita
non_infrastructure
documenting achievements could have been better and problems sections description according to the decision that has been made at the documentation for the below three sections should have been completed in the short term what did we achieve what could’ve done better what kind of problems we had items to complete in the short term what did we achieve what could’ve done better what kind of problems we had supporting information optional
0
126,209
4,974,148,686
IssuesEvent
2016-12-06 04:50:21
kduske/TrenchBroom
https://api.github.com/repos/kduske/TrenchBroom
reopened
Copy Paste Operation Causes Grid Misalignment
bug Platform:All Priority:Medium
Steps to reproduce: 1) New map. 2) Create a 16 unit cube at the edge of the starter brush. 3) Copy the 16 unit cube. 4) Paste the 16 unit cube. The pasted cube will be misaligned and you will need to lower the grid size to position it flush against the starter brush. If you use the duplication operation, the new brush aligns just fine. TrenchBroom 2.0.0 Beta Build 2f3c498 RelWithDebInfo As always, ignore if already reported.
1.0
Copy Paste Operation Causes Grid Misalignment - Steps to reproduce: 1) New map. 2) Create a 16 unit cube at the edge of the starter brush. 3) Copy the 16 unit cube. 4) Paste the 16 unit cube. The pasted cube will be misaligned and you will need to lower the grid size to position it flush against the starter brush. If you use the duplication operation, the new brush aligns just fine. TrenchBroom 2.0.0 Beta Build 2f3c498 RelWithDebInfo As always, ignore if already reported.
non_infrastructure
copy paste operation causes grid misalignment steps to reproduce new map create a unit cube at the edge of the starter brush copy the unit cube paste the unit cube the pasted cube will be misaligned and you will need to lower the grid size to position it flush against the starter brush if you use the duplication operation the new brush aligns just fine trenchbroom beta build relwithdebinfo as always ignore if already reported
0
101,082
30,863,061,675
IssuesEvent
2023-08-03 05:44:56
vuejs/vitepress
https://api.github.com/repos/vuejs/vitepress
closed
outDir logic is too confusing now
bug build
### Describe the bug I'm trying to build a site in a custom folder and noticed several issues. My site is located in the folder `sites/mysite.com`. When I run following command in the root of my project: ``` npx vitepress build sites/mysite.com --outDir public ``` Instead of writing to ${workplaceFolder}/public it actually still resolves outDir relatively to sites/mySite.com so to make it working I need to use currently `../../public` or `$(pwd)/public` which are both too confusing because from CLI call it looks like i write to something above. My suggestion is that relative path needs to be resolved relatively to cwd, not a docs folder. But even like that what I find even more strange - this setting only impacts assets, while actual html pages are still located in the .vitepress/dist folder. Do you know how to fix that too? THanks! ### Reproduction Just create a nested project like sites/test.site and try to build it to a public/test.site folder in your root. ### Expected behavior - command like `vitepress build path/to/my/site --outDir public` resolves to a public folder in your root - not in the package. - html pages should be also built respectively to outDir parameter ### System Info ```sh System: OS: Linux 5.15 Debian GNU/Linux 11 (bullseye) 11 (bullseye) CPU: (12) x64 12th Gen Intel(R) Core(TM) i7-1265U Memory: 11.75 GB / 15.34 GB Container: Yes Shell: 5.1.4 - /bin/bash Binaries: Node: 20.3.1 - /usr/local/bin/node Yarn: 1.22.19 - /usr/local/bin/yarn npm: 9.6.7 - /usr/local/bin/npm pnpm: 8.6.6 - /usr/local/share/npm-global/bin/pnpm npmPackages: vitepress: ^1.0.0-beta.6 => 1.0.0-beta.6 ``` ### Additional context _No response_ ### Validations - [X] Check if you're on the [latest VitePress version](https://github.com/vuejs/vitepress/releases/latest). - [X] Follow our [Code of Conduct](https://vuejs.org/about/coc.html) - [X] Read the [docs](https://vitepress.dev). - [X] Check that there isn't already an issue that reports the same bug to avoid creating a duplicate.
1.0
outDir logic is too confusing now - ### Describe the bug I'm trying to build a site in a custom folder and noticed several issues. My site is located in the folder `sites/mysite.com`. When I run following command in the root of my project: ``` npx vitepress build sites/mysite.com --outDir public ``` Instead of writing to ${workplaceFolder}/public it actually still resolves outDir relatively to sites/mySite.com so to make it working I need to use currently `../../public` or `$(pwd)/public` which are both too confusing because from CLI call it looks like i write to something above. My suggestion is that relative path needs to be resolved relatively to cwd, not a docs folder. But even like that what I find even more strange - this setting only impacts assets, while actual html pages are still located in the .vitepress/dist folder. Do you know how to fix that too? THanks! ### Reproduction Just create a nested project like sites/test.site and try to build it to a public/test.site folder in your root. ### Expected behavior - command like `vitepress build path/to/my/site --outDir public` resolves to a public folder in your root - not in the package. - html pages should be also built respectively to outDir parameter ### System Info ```sh System: OS: Linux 5.15 Debian GNU/Linux 11 (bullseye) 11 (bullseye) CPU: (12) x64 12th Gen Intel(R) Core(TM) i7-1265U Memory: 11.75 GB / 15.34 GB Container: Yes Shell: 5.1.4 - /bin/bash Binaries: Node: 20.3.1 - /usr/local/bin/node Yarn: 1.22.19 - /usr/local/bin/yarn npm: 9.6.7 - /usr/local/bin/npm pnpm: 8.6.6 - /usr/local/share/npm-global/bin/pnpm npmPackages: vitepress: ^1.0.0-beta.6 => 1.0.0-beta.6 ``` ### Additional context _No response_ ### Validations - [X] Check if you're on the [latest VitePress version](https://github.com/vuejs/vitepress/releases/latest). - [X] Follow our [Code of Conduct](https://vuejs.org/about/coc.html) - [X] Read the [docs](https://vitepress.dev). - [X] Check that there isn't already an issue that reports the same bug to avoid creating a duplicate.
non_infrastructure
outdir logic is too confusing now describe the bug i m trying to build a site in a custom folder and noticed several issues my site is located in the folder sites mysite com when i run following command in the root of my project npx vitepress build sites mysite com outdir public instead of writing to workplacefolder public it actually still resolves outdir relatively to sites mysite com so to make it working i need to use currently public or pwd public which are both too confusing because from cli call it looks like i write to something above my suggestion is that relative path needs to be resolved relatively to cwd not a docs folder but even like that what i find even more strange this setting only impacts assets while actual html pages are still located in the vitepress dist folder do you know how to fix that too thanks reproduction just create a nested project like sites test site and try to build it to a public test site folder in your root expected behavior command like vitepress build path to my site outdir public resolves to a public folder in your root not in the package html pages should be also built respectively to outdir parameter system info sh system os linux debian gnu linux bullseye bullseye cpu gen intel r core tm memory gb gb container yes shell bin bash binaries node usr local bin node yarn usr local bin yarn npm usr local bin npm pnpm usr local share npm global bin pnpm npmpackages vitepress beta beta additional context no response validations check if you re on the follow our read the check that there isn t already an issue that reports the same bug to avoid creating a duplicate
0
43,781
7,064,997,277
IssuesEvent
2018-01-06 14:43:28
jekyll/jekyll
https://api.github.com/repos/jekyll/jekyll
closed
Header on jekyllrb.com doesn't link to new release 3.7.0
documentation
Hey, just noticed that the header still shows and links to the previous release `3.6.2`. Locally everything works fine, i assume the docs site just has to be regenerated? Is there a way to start a rebuild without a commit? <img width="1504" alt="screen shot 2018-01-06 at 11 23 28" src="https://user-images.githubusercontent.com/570608/34639234-1034d166-f2d4-11e7-8319-b7f526d053fe.png"> cc: @jekyll/documentation
1.0
Header on jekyllrb.com doesn't link to new release 3.7.0 - Hey, just noticed that the header still shows and links to the previous release `3.6.2`. Locally everything works fine, i assume the docs site just has to be regenerated? Is there a way to start a rebuild without a commit? <img width="1504" alt="screen shot 2018-01-06 at 11 23 28" src="https://user-images.githubusercontent.com/570608/34639234-1034d166-f2d4-11e7-8319-b7f526d053fe.png"> cc: @jekyll/documentation
non_infrastructure
header on jekyllrb com doesn t link to new release hey just noticed that the header still shows and links to the previous release locally everything works fine i assume the docs site just has to be regenerated is there a way to start a rebuild without a commit img width alt screen shot at src cc jekyll documentation
0
261,821
8,246,381,973
IssuesEvent
2018-09-11 12:47:14
dojot/dojot
https://api.github.com/repos/dojot/dojot
opened
GUI - Usability problem when creating a new flow
Priority:Medium Team:Frontend Type:Bug
The scroll bar does not reach the bottom of the screen. Some nodes are not shown (eg geofence). ![scroolbar_flow](https://user-images.githubusercontent.com/37310063/45360726-d5be0980-b5a6-11e8-9a34-725854d5d3eb.png) maximized window: ![screenshot_8](https://user-images.githubusercontent.com/37310063/45360761-f1291480-b5a6-11e8-84b8-3ff83fe34025.png) **Affected Version**: v0.3.0-beta1 (0.3.0-nightly_20180807)
1.0
GUI - Usability problem when creating a new flow - The scroll bar does not reach the bottom of the screen. Some nodes are not shown (eg geofence). ![scroolbar_flow](https://user-images.githubusercontent.com/37310063/45360726-d5be0980-b5a6-11e8-9a34-725854d5d3eb.png) maximized window: ![screenshot_8](https://user-images.githubusercontent.com/37310063/45360761-f1291480-b5a6-11e8-84b8-3ff83fe34025.png) **Affected Version**: v0.3.0-beta1 (0.3.0-nightly_20180807)
non_infrastructure
gui usability problem when creating a new flow the scroll bar does not reach the bottom of the screen some nodes are not shown eg geofence maximized window affected version nightly
0
30,109
24,546,214,076
IssuesEvent
2022-10-12 08:59:52
nf-core/tools
https://api.github.com/repos/nf-core/tools
closed
Make `check_up_to_date()` to check for subworkflows also.
enhancement infrastructure
### Description of feature The `check_up_to_date()` function in [modules_json.py](https://github.com/nf-core/tools/blob/dec66abe1c36a8975a952e1f80f045cab65bbf72/nf_core/modules/modules_json.py#L439) is only checking for modules. We need to update the function so it also checks `subworkflows`.
1.0
Make `check_up_to_date()` to check for subworkflows also. - ### Description of feature The `check_up_to_date()` function in [modules_json.py](https://github.com/nf-core/tools/blob/dec66abe1c36a8975a952e1f80f045cab65bbf72/nf_core/modules/modules_json.py#L439) is only checking for modules. We need to update the function so it also checks `subworkflows`.
infrastructure
make check up to date to check for subworkflows also description of feature the check up to date function in is only checking for modules we need to update the function so it also checks subworkflows
1
29,443
24,015,048,138
IssuesEvent
2022-09-14 23:12:16
dotnet/runtime
https://api.github.com/repos/dotnet/runtime
opened
[Mono][Codespace] Add mono desktop build optioins
area-Infrastructure-mono
Scenarios to add: - mono+libs - mono+libs /p:MonoEnableLlvm=true
1.0
[Mono][Codespace] Add mono desktop build optioins - Scenarios to add: - mono+libs - mono+libs /p:MonoEnableLlvm=true
infrastructure
add mono desktop build optioins scenarios to add mono libs mono libs p monoenablellvm true
1
34,542
30,114,621,282
IssuesEvent
2023-06-30 10:25:38
kuznia-rdzeni/coreblocks
https://api.github.com/repos/kuznia-rdzeni/coreblocks
opened
Synthesis benchmark for full core
infrastructure
Currently, only the basic core is measured. We should do this for full core also, maybe synthesize both.
1.0
Synthesis benchmark for full core - Currently, only the basic core is measured. We should do this for full core also, maybe synthesize both.
infrastructure
synthesis benchmark for full core currently only the basic core is measured we should do this for full core also maybe synthesize both
1
35,517
31,780,864,790
IssuesEvent
2023-09-12 17:19:59
finos/FDC3
https://api.github.com/repos/finos/FDC3
opened
Update tsdx version in repo to resolve 17 moderate vulnerabilities
help wanted good first issue api project infrastructure
### Area of Issue [x] API Upgrading tsdx to 0.13.3 would resolve 17 moderate vulnerabilities in the FDC3 repo - but is a breaking change. I'm not sure what upgrade steps are required. ### npm audit report ``` jsdom <=16.5.3 Severity: moderate Insufficient Granularity of Access Control in JSDom - https://github.com/advisories/GHSA-f4c9-cqv8-9v98 Depends on vulnerable versions of request Depends on vulnerable versions of request-promise-native Depends on vulnerable versions of tough-cookie fix available via `npm audit fix --force` Will install [email protected], which is a breaking change node_modules/jsdom jest-environment-jsdom 10.0.2 - 25.5.0 Depends on vulnerable versions of jsdom node_modules/jest-environment-jsdom jest-config 12.1.1-alpha.2935e14d - 25.5.4 Depends on vulnerable versions of @jest/test-sequencer Depends on vulnerable versions of jest-environment-jsdom Depends on vulnerable versions of jest-jasmine2 node_modules/jest-config jest-cli 12.1.1-alpha.2935e14d || 12.1.2-alpha.6230044c - 25.5.4 Depends on vulnerable versions of @jest/core Depends on vulnerable versions of jest-config node_modules/jest-cli jest 12.1.2-alpha.6230044c - 25.5.4 Depends on vulnerable versions of @jest/core Depends on vulnerable versions of jest-cli node_modules/jest tsdx >=0.14.0 Depends on vulnerable versions of jest node_modules/tsdx jest-runner 21.0.0-alpha.1 - 25.5.4 Depends on vulnerable versions of jest-config Depends on vulnerable versions of jest-jasmine2 Depends on vulnerable versions of jest-runtime node_modules/jest-runner @jest/test-sequencer <=25.5.4 Depends on vulnerable versions of jest-runner Depends on vulnerable versions of jest-runtime node_modules/@jest/test-sequencer jest-runtime 12.1.1-alpha.2935e14d - 25.5.4 Depends on vulnerable versions of jest-config node_modules/jest-runtime jest-jasmine2 24.2.0-alpha.0 - 25.5.4 Depends on vulnerable versions of jest-runtime node_modules/jest-jasmine2 node-notifier <8.0.1 Severity: moderate OS Command Injection in node-notifier - https://github.com/advisories/GHSA-5fw9-fq32-wv5p fix available via `npm audit fix --force` Will install [email protected], which is a breaking change node_modules/node-notifier @jest/reporters <=26.4.0 Depends on vulnerable versions of node-notifier node_modules/@jest/reporters @jest/core <=25.5.4 Depends on vulnerable versions of @jest/reporters Depends on vulnerable versions of jest-config Depends on vulnerable versions of jest-runner Depends on vulnerable versions of jest-runtime node_modules/@jest/core request * Severity: moderate Server-Side Request Forgery in Request - https://github.com/advisories/GHSA-p8p7-x288-28g6 Depends on vulnerable versions of tough-cookie fix available via `npm audit fix --force` Will install [email protected], which is a breaking change node_modules/request request-promise-core * Depends on vulnerable versions of request node_modules/request-promise-core request-promise-native >=1.0.0 Depends on vulnerable versions of request Depends on vulnerable versions of request-promise-core Depends on vulnerable versions of tough-cookie node_modules/request-promise-native tough-cookie <4.1.3 Severity: moderate tough-cookie Prototype Pollution vulnerability - https://github.com/advisories/GHSA-72xf-g2v4-qvf3 fix available via `npm audit fix --force` Will install [email protected], which is a breaking change node_modules/request-promise-native/node_modules/tough-cookie node_modules/request/node_modules/tough-cookie node_modules/tough-cookie 17 moderate severity vulnerabilities ```
1.0
Update tsdx version in repo to resolve 17 moderate vulnerabilities - ### Area of Issue [x] API Upgrading tsdx to 0.13.3 would resolve 17 moderate vulnerabilities in the FDC3 repo - but is a breaking change. I'm not sure what upgrade steps are required. ### npm audit report ``` jsdom <=16.5.3 Severity: moderate Insufficient Granularity of Access Control in JSDom - https://github.com/advisories/GHSA-f4c9-cqv8-9v98 Depends on vulnerable versions of request Depends on vulnerable versions of request-promise-native Depends on vulnerable versions of tough-cookie fix available via `npm audit fix --force` Will install [email protected], which is a breaking change node_modules/jsdom jest-environment-jsdom 10.0.2 - 25.5.0 Depends on vulnerable versions of jsdom node_modules/jest-environment-jsdom jest-config 12.1.1-alpha.2935e14d - 25.5.4 Depends on vulnerable versions of @jest/test-sequencer Depends on vulnerable versions of jest-environment-jsdom Depends on vulnerable versions of jest-jasmine2 node_modules/jest-config jest-cli 12.1.1-alpha.2935e14d || 12.1.2-alpha.6230044c - 25.5.4 Depends on vulnerable versions of @jest/core Depends on vulnerable versions of jest-config node_modules/jest-cli jest 12.1.2-alpha.6230044c - 25.5.4 Depends on vulnerable versions of @jest/core Depends on vulnerable versions of jest-cli node_modules/jest tsdx >=0.14.0 Depends on vulnerable versions of jest node_modules/tsdx jest-runner 21.0.0-alpha.1 - 25.5.4 Depends on vulnerable versions of jest-config Depends on vulnerable versions of jest-jasmine2 Depends on vulnerable versions of jest-runtime node_modules/jest-runner @jest/test-sequencer <=25.5.4 Depends on vulnerable versions of jest-runner Depends on vulnerable versions of jest-runtime node_modules/@jest/test-sequencer jest-runtime 12.1.1-alpha.2935e14d - 25.5.4 Depends on vulnerable versions of jest-config node_modules/jest-runtime jest-jasmine2 24.2.0-alpha.0 - 25.5.4 Depends on vulnerable versions of jest-runtime node_modules/jest-jasmine2 node-notifier <8.0.1 Severity: moderate OS Command Injection in node-notifier - https://github.com/advisories/GHSA-5fw9-fq32-wv5p fix available via `npm audit fix --force` Will install [email protected], which is a breaking change node_modules/node-notifier @jest/reporters <=26.4.0 Depends on vulnerable versions of node-notifier node_modules/@jest/reporters @jest/core <=25.5.4 Depends on vulnerable versions of @jest/reporters Depends on vulnerable versions of jest-config Depends on vulnerable versions of jest-runner Depends on vulnerable versions of jest-runtime node_modules/@jest/core request * Severity: moderate Server-Side Request Forgery in Request - https://github.com/advisories/GHSA-p8p7-x288-28g6 Depends on vulnerable versions of tough-cookie fix available via `npm audit fix --force` Will install [email protected], which is a breaking change node_modules/request request-promise-core * Depends on vulnerable versions of request node_modules/request-promise-core request-promise-native >=1.0.0 Depends on vulnerable versions of request Depends on vulnerable versions of request-promise-core Depends on vulnerable versions of tough-cookie node_modules/request-promise-native tough-cookie <4.1.3 Severity: moderate tough-cookie Prototype Pollution vulnerability - https://github.com/advisories/GHSA-72xf-g2v4-qvf3 fix available via `npm audit fix --force` Will install [email protected], which is a breaking change node_modules/request-promise-native/node_modules/tough-cookie node_modules/request/node_modules/tough-cookie 
node_modules/tough-cookie 17 moderate severity vulnerabilities ```
infrastructure
update tsdx version in repo to resolve moderate vulnerabilities area of issue api upgrading tsdx to would resolve moderate vulnerabilities in the repo but is a breaking change i m not sure what upgrade steps are required npm audit report jsdom severity moderate insufficient granularity of access control in jsdom depends on vulnerable versions of request depends on vulnerable versions of request promise native depends on vulnerable versions of tough cookie fix available via npm audit fix force will install tsdx which is a breaking change node modules jsdom jest environment jsdom depends on vulnerable versions of jsdom node modules jest environment jsdom jest config alpha depends on vulnerable versions of jest test sequencer depends on vulnerable versions of jest environment jsdom depends on vulnerable versions of jest node modules jest config jest cli alpha alpha depends on vulnerable versions of jest core depends on vulnerable versions of jest config node modules jest cli jest alpha depends on vulnerable versions of jest core depends on vulnerable versions of jest cli node modules jest tsdx depends on vulnerable versions of jest node modules tsdx jest runner alpha depends on vulnerable versions of jest config depends on vulnerable versions of jest depends on vulnerable versions of jest runtime node modules jest runner jest test sequencer depends on vulnerable versions of jest runner depends on vulnerable versions of jest runtime node modules jest test sequencer jest runtime alpha depends on vulnerable versions of jest config node modules jest runtime jest alpha depends on vulnerable versions of jest runtime node modules jest node notifier severity moderate os command injection in node notifier fix available via npm audit fix force will install tsdx which is a breaking change node modules node notifier jest reporters depends on vulnerable versions of node notifier node modules jest reporters jest core depends on vulnerable versions of jest reporters depends on vulnerable versions of jest config depends on vulnerable versions of jest runner depends on vulnerable versions of jest runtime node modules jest core request severity moderate server side request forgery in request depends on vulnerable versions of tough cookie fix available via npm audit fix force will install tsdx which is a breaking change node modules request request promise core depends on vulnerable versions of request node modules request promise core request promise native depends on vulnerable versions of request depends on vulnerable versions of request promise core depends on vulnerable versions of tough cookie node modules request promise native tough cookie severity moderate tough cookie prototype pollution vulnerability fix available via npm audit fix force will install tsdx which is a breaking change node modules request promise native node modules tough cookie node modules request node modules tough cookie node modules tough cookie moderate severity vulnerabilities
1
422,869
12,287,490,746
IssuesEvent
2020-05-09 12:27:18
googleapis/elixir-google-api
https://api.github.com/repos/googleapis/elixir-google-api
opened
Synthesis failed for Vision
api: vision autosynth failure priority: p1 type: bug
Hello! Autosynth couldn't regenerate Vision. :broken_heart: Here's the output from running `synth.py`: ``` 2020-05-09 05:22:11 [INFO] logs will be written to: /tmpfs/src/github/synthtool/logs/googleapis/elixir-google-api 2020-05-09 05:22:11,441 autosynth > logs will be written to: /tmpfs/src/github/synthtool/logs/googleapis/elixir-google-api Switched to branch 'autosynth-vision' 2020-05-09 05:22:13 [INFO] Running synthtool 2020-05-09 05:22:13,103 autosynth > Running synthtool 2020-05-09 05:22:13 [INFO] ['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'synthtool', '--metadata', 'clients/vision/synth.metadata', 'synth.py', '--'] 2020-05-09 05:22:13,104 autosynth > ['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'synthtool', '--metadata', 'clients/vision/synth.metadata', 'synth.py', '--'] 2020-05-09 05:22:13,314 synthtool > Executing /home/kbuilder/.cache/synthtool/elixir-google-api/synth.py. On branch autosynth-vision nothing to commit, working tree clean 2020-05-09 05:22:13,657 synthtool > Cloning https://github.com/googleapis/elixir-google-api.git. 2020-05-09 05:22:14,106 synthtool > Running: docker run --rm -v/home/kbuilder/.cache/synthtool/elixir-google-api:/workspace -v/var/run/docker.sock:/var/run/docker.sock -e USER_GROUP=1000:1000 -w /workspace gcr.io/cloud-devrel-public-resources/elixir19 scripts/generate_client.sh Vision 2020-05-09 05:22:18,091 synthtool > No files in sources /home/kbuilder/.cache/synthtool/elixir-google-api/clients were copied. Does the source contain files? Traceback (most recent call last): File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code exec(code, run_globals) File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 102, in <module> main() File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 829, in __call__ return self.main(*args, **kwargs) File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 782, in main rv = self.invoke(ctx) File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 1066, in invoke return ctx.invoke(self.callback, **ctx.params) File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 610, in invoke return callback(*args, **kwargs) File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 94, in main spec.loader.exec_module(synth_module) # type: ignore File "/tmpfs/src/github/synthtool/synthtool/metadata.py", line 180, in __exit__ write(self.metadata_file_path) File "/tmpfs/src/github/synthtool/synthtool/metadata.py", line 112, in write with open(outfile, "w") as fh: FileNotFoundError: [Errno 2] No such file or directory: 'clients/vision/synth.metadata' 2020-05-09 05:22:18 [ERROR] Synthesis failed 2020-05-09 05:22:18,120 autosynth > Synthesis failed Traceback (most recent call last): File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code exec(code, run_globals) File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 599, in <module> main() File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 471, in main return _inner_main(temp_dir) File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 549, in _inner_main ).synthesize(base_synth_log_path) File 
"/tmpfs/src/github/synthtool/autosynth/synthesizer.py", line 118, in synthesize synth_proc.check_returncode() # Raise an exception. File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/subprocess.py", line 389, in check_returncode self.stderr) subprocess.CalledProcessError: Command '['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'synthtool', '--metadata', 'clients/vision/synth.metadata', 'synth.py', '--', 'Vision']' returned non-zero exit status 1. ``` Google internal developers can see the full log [here](https://sponge/11ff3741-9158-4831-8681-fff828f77e1a).
1.0
Synthesis failed for Vision - Hello! Autosynth couldn't regenerate Vision. :broken_heart: Here's the output from running `synth.py`: ``` 2020-05-09 05:22:11 [INFO] logs will be written to: /tmpfs/src/github/synthtool/logs/googleapis/elixir-google-api 2020-05-09 05:22:11,441 autosynth > logs will be written to: /tmpfs/src/github/synthtool/logs/googleapis/elixir-google-api Switched to branch 'autosynth-vision' 2020-05-09 05:22:13 [INFO] Running synthtool 2020-05-09 05:22:13,103 autosynth > Running synthtool 2020-05-09 05:22:13 [INFO] ['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'synthtool', '--metadata', 'clients/vision/synth.metadata', 'synth.py', '--'] 2020-05-09 05:22:13,104 autosynth > ['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'synthtool', '--metadata', 'clients/vision/synth.metadata', 'synth.py', '--'] 2020-05-09 05:22:13,314 synthtool > Executing /home/kbuilder/.cache/synthtool/elixir-google-api/synth.py. On branch autosynth-vision nothing to commit, working tree clean 2020-05-09 05:22:13,657 synthtool > Cloning https://github.com/googleapis/elixir-google-api.git. 2020-05-09 05:22:14,106 synthtool > Running: docker run --rm -v/home/kbuilder/.cache/synthtool/elixir-google-api:/workspace -v/var/run/docker.sock:/var/run/docker.sock -e USER_GROUP=1000:1000 -w /workspace gcr.io/cloud-devrel-public-resources/elixir19 scripts/generate_client.sh Vision 2020-05-09 05:22:18,091 synthtool > No files in sources /home/kbuilder/.cache/synthtool/elixir-google-api/clients were copied. Does the source contain files? Traceback (most recent call last): File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code exec(code, run_globals) File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 102, in <module> main() File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 829, in __call__ return self.main(*args, **kwargs) File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 782, in main rv = self.invoke(ctx) File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 1066, in invoke return ctx.invoke(self.callback, **ctx.params) File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 610, in invoke return callback(*args, **kwargs) File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 94, in main spec.loader.exec_module(synth_module) # type: ignore File "/tmpfs/src/github/synthtool/synthtool/metadata.py", line 180, in __exit__ write(self.metadata_file_path) File "/tmpfs/src/github/synthtool/synthtool/metadata.py", line 112, in write with open(outfile, "w") as fh: FileNotFoundError: [Errno 2] No such file or directory: 'clients/vision/synth.metadata' 2020-05-09 05:22:18 [ERROR] Synthesis failed 2020-05-09 05:22:18,120 autosynth > Synthesis failed Traceback (most recent call last): File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code exec(code, run_globals) File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 599, in <module> main() File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 471, in main return _inner_main(temp_dir) File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 549, in _inner_main 
).synthesize(base_synth_log_path) File "/tmpfs/src/github/synthtool/autosynth/synthesizer.py", line 118, in synthesize synth_proc.check_returncode() # Raise an exception. File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/subprocess.py", line 389, in check_returncode self.stderr) subprocess.CalledProcessError: Command '['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'synthtool', '--metadata', 'clients/vision/synth.metadata', 'synth.py', '--', 'Vision']' returned non-zero exit status 1. ``` Google internal developers can see the full log [here](https://sponge/11ff3741-9158-4831-8681-fff828f77e1a).
non_infrastructure
synthesis failed for vision hello autosynth couldn t regenerate vision broken heart here s the output from running synth py logs will be written to tmpfs src github synthtool logs googleapis elixir google api autosynth logs will be written to tmpfs src github synthtool logs googleapis elixir google api switched to branch autosynth vision running synthtool autosynth running synthtool autosynth synthtool executing home kbuilder cache synthtool elixir google api synth py on branch autosynth vision nothing to commit working tree clean synthtool cloning synthtool running docker run rm v home kbuilder cache synthtool elixir google api workspace v var run docker sock var run docker sock e user group w workspace gcr io cloud devrel public resources scripts generate client sh vision synthtool no files in sources home kbuilder cache synthtool elixir google api clients were copied does the source contain files traceback most recent call last file home kbuilder pyenv versions lib runpy py line in run module as main main mod spec file home kbuilder pyenv versions lib runpy py line in run code exec code run globals file tmpfs src github synthtool synthtool main py line in main file tmpfs src github synthtool env lib site packages click core py line in call return self main args kwargs file tmpfs src github synthtool env lib site packages click core py line in main rv self invoke ctx file tmpfs src github synthtool env lib site packages click core py line in invoke return ctx invoke self callback ctx params file tmpfs src github synthtool env lib site packages click core py line in invoke return callback args kwargs file tmpfs src github synthtool synthtool main py line in main spec loader exec module synth module type ignore file tmpfs src github synthtool synthtool metadata py line in exit write self metadata file path file tmpfs src github synthtool synthtool metadata py line in write with open outfile w as fh filenotfounderror no such file or directory clients vision synth metadata synthesis failed autosynth synthesis failed traceback most recent call last file home kbuilder pyenv versions lib runpy py line in run module as main main mod spec file home kbuilder pyenv versions lib runpy py line in run code exec code run globals file tmpfs src github synthtool autosynth synth py line in main file tmpfs src github synthtool autosynth synth py line in main return inner main temp dir file tmpfs src github synthtool autosynth synth py line in inner main synthesize base synth log path file tmpfs src github synthtool autosynth synthesizer py line in synthesize synth proc check returncode raise an exception file home kbuilder pyenv versions lib subprocess py line in check returncode self stderr subprocess calledprocesserror command returned non zero exit status google internal developers can see the full log
0
160,282
6,085,827,064
IssuesEvent
2017-06-17 18:21:45
ReikaKalseki/Reika_Mods_Issues
https://api.github.com/repos/ReikaKalseki/Reika_Mods_Issues
closed
[Chromaticraft] 17c. Infinite Liquids via Teleportation Pump
Bug ChromatiCraft Exploit High Priority
Hi, i found an exploit for infinite fluids: an example for Liquid Chroma: Step 1: Place a Liquid Chroma Puddle in the world. Step 2: Register them by the teleportation Pump. Step 3: Collect the Chroma Puddle back in the Bucket. Step 4: Pump the registered Chroma via teleportation pump. Step 5: Profit greets
1.0
[Chromaticraft] 17c. Infinite Liquids via Teleportation Pump - Hi, i found an exploit for infinite fluids: an example for Liquid Chroma: Step 1: Place a Liquid Chroma Puddle in the world. Step 2: Register them by the teleportation Pump. Step 3: Collect the Chroma Puddle back in the Bucket. Step 4: Pump the registered Chroma via teleportation pump. Step 5: Profit greets
non_infrastructure
infinite liquids via teleportation pump hi i found an exploit for infinite fluids an example for liquid chroma step place a liquid chroma puddle in the world step register them by the teleportation pump step collect the chroma puddle back in the bucket step pump the registered chroma via teleportation pump step profit greets
0
19,796
13,458,495,178
IssuesEvent
2020-09-09 10:44:12
telerik/kendo-themes
https://api.github.com/repos/telerik/kendo-themes
closed
SUGGESTION: Add .browserlistrc for our apps to comply with
Enhancement infrastructure
Bootstrap provides a .browserlistrc at https://github.com/twbs/bootstrap/blob/v4.3.1/.browserslistrc ``` # https://github.com/browserslist/browserslist#readme >= 1% last 1 major version not dead Chrome >= 45 Firefox >= 38 Edge >= 12 Explorer >= 10 iOS >= 9 Safari >= 9 Android >= 4.4 Opera >= 30 ``` This makes it very clear what browser versions Bootstrap supports at any time. It would be nice to have a .browserlistrc to copy into our apps in order to make sure our apps do not pretend to support browsers that kendo-themes do not support.
1.0
SUGGESTION: Add .browserlistrc for our apps to comply with - Bootstrap provides a .browserlistrc at https://github.com/twbs/bootstrap/blob/v4.3.1/.browserslistrc ``` # https://github.com/browserslist/browserslist#readme >= 1% last 1 major version not dead Chrome >= 45 Firefox >= 38 Edge >= 12 Explorer >= 10 iOS >= 9 Safari >= 9 Android >= 4.4 Opera >= 30 ``` This makes it very clear what browser versions Bootstrap supports at any time. It would be nice to have a .browserlistrc to copy into our apps in order to make sure our apps do not pretend to support browsers that kendo-themes do not support.
infrastructure
suggestion add browserlistrc for our apps to comply with bootstrap provides a browserlistrc at last major version not dead chrome firefox edge explorer ios safari android opera this makes it very clear what browser versions bootstrap supports at any time it would be nice to have a browserlistrc to copy into our apps in order to make sure our apps do not pretend to support browsers that kendo themes do not support
1
449,974
31,879,500,444
IssuesEvent
2023-09-16 07:37:00
gak112/DearjobTesting2
https://api.github.com/repos/gak112/DearjobTesting2
closed
Bug ; DEAR JOB WEB ; Staffing Consultancy ; Home> Hot List >Add Hot List ;Error in Experience
documentation invalid
Action :- In experience place holder it is accepting more than 100 Expected Output :- It should accept more than 100 years Actual Output :- Accepting more than 100 yrs ![image](https://github.com/gak112/DearjobTesting2/assets/143584640/7ed14634-0697-4b7e-bfa2-327d93abfbe6)
1.0
Bug ; DEAR JOB WEB ; Staffing Consultancy ; Home> Hot List >Add Hot List ;Error in Experience - Action :- In experience place holder it is accepting more than 100 Expected Output :- It should accept more than 100 years Actual Output :- Accepting more than 100 yrs ![image](https://github.com/gak112/DearjobTesting2/assets/143584640/7ed14634-0697-4b7e-bfa2-327d93abfbe6)
non_infrastructure
bug dear job web staffing consultancy home hot list add hot list error in experience action in experience place holder it is accepting more than expected output it should accept more than years actual output accepting more than yrs
0
42,187
17,081,900,625
IssuesEvent
2021-07-08 06:50:20
ctripcorp/apollo
https://api.github.com/repos/ctripcorp/apollo
closed
当apollo服务端宕机后,不影响应用正常使用
area/client area/configservice kind/question stale
**你的特性请求和某个问题有关吗?请描述** 我们公司的apollo支持很多业务使用,有个业务访问量太大,数据量也大,导致server 宕机 这样全公司的net java服务都出错了,影响太大了 **清晰简洁地描述一下你希望的解决方案** 希望当server宕机后,因为服务器下载了服务端缓存(放在opt文件夹下的),这样可以优先使用本地的缓存运行,仅当服务端跟客户端通信连接正常 才进行更新本地缓存的操作
1.0
当apollo服务端宕机后,不影响应用正常使用 - **你的特性请求和某个问题有关吗?请描述** 我们公司的apollo支持很多业务使用,有个业务访问量太大,数据量也大,导致server 宕机 这样全公司的net java服务都出错了,影响太大了 **清晰简洁地描述一下你希望的解决方案** 希望当server宕机后,因为服务器下载了服务端缓存(放在opt文件夹下的),这样可以优先使用本地的缓存运行,仅当服务端跟客户端通信连接正常 才进行更新本地缓存的操作
non_infrastructure
当apollo服务端宕机后,不影响应用正常使用 你的特性请求和某个问题有关吗?请描述 我们公司的apollo支持很多业务使用,有个业务访问量太大,数据量也大,导致server 宕机 这样全公司的net java服务都出错了,影响太大了 清晰简洁地描述一下你希望的解决方案 希望当server宕机后,因为服务器下载了服务端缓存(放在opt文件夹下的),这样可以优先使用本地的缓存运行,仅当服务端跟客户端通信连接正常 才进行更新本地缓存的操作
0
21,366
14,541,224,116
IssuesEvent
2020-12-15 14:19:22
google/web-stories-wp
https://api.github.com/repos/google/web-stories-wp
closed
Karma: ensure Google Fonts are loaded for tests and snapshots
P2 Pod: WP & Infra Type: Infrastructure Type: Task
<!-- NOTE: For help requests, support questions, or general feedback, please use the WordPress.org forums instead: https://wordpress.org/support/plugin/web-stories/ --> ## Task Description Percy often fails because the web fonts haven't been loaded completely, showing everything in Times New Roman. Let's make this more robust.
1.0
Karma: ensure Google Fonts are loaded for tests and snapshots - <!-- NOTE: For help requests, support questions, or general feedback, please use the WordPress.org forums instead: https://wordpress.org/support/plugin/web-stories/ --> ## Task Description Percy often fails because the web fonts haven't been loaded completely, showing everything in Times New Roman. Let's make this more robust.
infrastructure
karma ensure google fonts are loaded for tests and snapshots task description percy often fails because the web fonts haven t been loaded completely showing everything in times new roman let s make this more robust
1
243,318
20,377,963,768
IssuesEvent
2022-02-21 17:34:48
weaveworks/tf-controller
https://api.github.com/repos/weaveworks/tf-controller
closed
Add a test case for writing output with dots in its name to a secret
kind/enhancement area/testing
Need a test case to make sure that outputs containing dots, like the following, in their names are allowed: ```yaml apiVersion: infra.contrib.fluxcd.io/v1alpha1 kind: Terraform metadata: name: master-key-tf namespace: app-01 spec: interval: 1h path: ./_artifacts/10-zz-terraform writeOutputsToSecret: name: age outputs: - age.agekey ```
1.0
Add a test case for writing output with dots in its name to a secret - Need a test case to make sure that outputs containing dots, like the following, in their names are allowed: ```yaml apiVersion: infra.contrib.fluxcd.io/v1alpha1 kind: Terraform metadata: name: master-key-tf namespace: app-01 spec: interval: 1h path: ./_artifacts/10-zz-terraform writeOutputsToSecret: name: age outputs: - age.agekey ```
non_infrastructure
add a test case for writing output with dots in its name to a secret need a test case to make sure that outputs containing dots like the following in their names are allowed yaml apiversion infra contrib fluxcd io kind terraform metadata name master key tf namespace app spec interval path artifacts zz terraform writeoutputstosecret name age outputs age agekey
0
232,196
25,565,421,526
IssuesEvent
2022-11-30 13:59:00
hygieia/hygieia-whitesource-collector
https://api.github.com/repos/hygieia/hygieia-whitesource-collector
closed
CVE-2020-14062 (High) detected in jackson-databind-2.8.11.3.jar - autoclosed
wontfix security vulnerability
## CVE-2020-14062 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.8.11.3.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: /pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.11.3/jackson-databind-2.8.11.3.jar</p> <p> Dependency Hierarchy: - core-3.15.42.jar (Root Library) - spring-boot-starter-web-1.5.22.RELEASE.jar - :x: **jackson-databind-2.8.11.3.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/hygieia/hygieia-whitesource-collector/commit/4b5ed1d2f3030d721692ff4f980e8d2467fde19b">4b5ed1d2f3030d721692ff4f980e8d2467fde19b</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> FasterXML jackson-databind 2.x before 2.9.10.5 mishandles the interaction between serialization gadgets and typing, related to com.sun.org.apache.xalan.internal.lib.sql.JNDIConnectionPool (aka xalan2). <p>Publish Date: 2020-06-14 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-14062>CVE-2020-14062</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14062">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14062</a></p> <p>Release Date: 2020-06-14</p> <p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.10.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2020-14062 (High) detected in jackson-databind-2.8.11.3.jar - autoclosed - ## CVE-2020-14062 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.8.11.3.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: /pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.11.3/jackson-databind-2.8.11.3.jar</p> <p> Dependency Hierarchy: - core-3.15.42.jar (Root Library) - spring-boot-starter-web-1.5.22.RELEASE.jar - :x: **jackson-databind-2.8.11.3.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/hygieia/hygieia-whitesource-collector/commit/4b5ed1d2f3030d721692ff4f980e8d2467fde19b">4b5ed1d2f3030d721692ff4f980e8d2467fde19b</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> FasterXML jackson-databind 2.x before 2.9.10.5 mishandles the interaction between serialization gadgets and typing, related to com.sun.org.apache.xalan.internal.lib.sql.JNDIConnectionPool (aka xalan2). <p>Publish Date: 2020-06-14 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-14062>CVE-2020-14062</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14062">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14062</a></p> <p>Release Date: 2020-06-14</p> <p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.10.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_infrastructure
cve high detected in jackson databind jar autoclosed cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy core jar root library spring boot starter web release jar x jackson databind jar vulnerable library found in head commit a href found in base branch main vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to com sun org apache xalan internal lib sql jndiconnectionpool aka publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind step up your open source security game with mend
0
4,126
4,822,665,965
IssuesEvent
2016-11-05 23:52:14
LOZORD/xanadu
https://api.github.com/repos/LOZORD/xanadu
opened
No slow tests!
enhancement infrastructure
None of the Mocha tests should be slow (yellow). We should try to remove any explicit `this.slow` calls, if possible.
1.0
No slow tests! - None of the Mocha tests should be slow (yellow). We should try to remove any explicit `this.slow` calls, if possible.
infrastructure
no slow tests none of the mocha tests should be slow yellow we should try to remove any explicit this slow calls if possible
1
28,541
23,323,408,254
IssuesEvent
2022-08-08 18:38:12
opensearch-project/k-NN
https://api.github.com/repos/opensearch-project/k-NN
opened
Add OSB workload that can run indexing and querying in parallel
Infrastructure
## Description One common benchmark question that arises for the plugin is how does indexing impact querying performance. OpenSearch benchmarks has the ability to run a workload that executes 2 tasks in parallel. We should add a new workload to [our extensions](https://github.com/opensearch-project/k-NN/tree/main/benchmarks/osb) that will allow users to benchmark plugin performance for a configurable indexing and querying throughput. The workload should be broken up into the following operations: 1. Create a configurable k-NN index. We should be able to create an index from a model or not. 2. Ingest a base set of documents into the index 3. Warmup the index for querying workload 4. In parallel, index a set of documents at a configurable throughput and run a set of queries at a configurable throughput. Further, we should be able to compare the numbers against existing benchmarks when there are no parallel operations going on. ## Links 1. [OSB Workload Schema](https://github.com/opensearch-project/opensearch-benchmark/blob/main/osbenchmark/resources/workload-schema.json)
1.0
Add OSB workload that can run indexing and querying in parallel - ## Description One common benchmark question that arises for the plugin is how does indexing impact querying performance. OpenSearch benchmarks has the ability to run a workload that executes 2 tasks in parallel. We should add a new workload to [our extensions](https://github.com/opensearch-project/k-NN/tree/main/benchmarks/osb) that will allow users to benchmark plugin performance for a configurable indexing and querying throughput. The workload should be broken up into the following operations: 1. Create a configurable k-NN index. We should be able to create an index from a model or not. 2. Ingest a base set of documents into the index 3. Warmup the index for querying workload 4. In parallel, index a set of documents at a configurable throughput and run a set of queries at a configurable throughput. Further, we should be able to compare the numbers against existing benchmarks when there are no parallel operations going on. ## Links 1. [OSB Workload Schema](https://github.com/opensearch-project/opensearch-benchmark/blob/main/osbenchmark/resources/workload-schema.json)
infrastructure
add osb workload that can run indexing and querying in parallel description one common benchmark question that arises for the plugin is how does indexing impact querying performance opensearch benchmarks has the ability to run a workload that executes tasks in parallel we should add a new workload to that will allow users to benchmark plugin performance for a configurable indexing and querying throughput the workload should be broken up into the following operations create a configurable k nn index we should be able to create an index from a model or not ingest a base set of documents into the index warmup the index for querying workload in parallel index a set of documents at a configurable throughput and run a set of queries at a configurable throughput further we should be able to compare the numbers against existing benchmarks when there are no parallel operations going on links
1
630,450
20,109,538,108
IssuesEvent
2022-02-07 13:54:33
googleapis/python-spanner-django
https://api.github.com/repos/googleapis/python-spanner-django
closed
Incompatible with google-cloud-spanner 3.12 ->> UserWarning: The `rowcount` property is non-operational | Cannot update/delete model objects
type: bug priority: p2 api: spanner
#### Environment details - Programming language: Python - OS: MacOS Big Sur 11.4 - Language runtime version: 3.8.9 - Package version: Django 3.2.2 and 3.2.9 with django-cloud-spanner 3.0.0 and google-cloud spanner 3.12.0 #### Steps to reproduce 1. Follow the "from scratch" documentation as outlined in the readme to the letter 2. Instantiate and save a model, e.g.: `a = MyModel.objects.create(name="testname")` 3. Try to delete the model with: `a.delete()` 4. This will trigger the following error: ``` /Users/me/.pyenv/versions/3.8.9/envs/spannertest/lib/python3.8/site-packages/django/db/backends/utils.py:22: UserWarning: The `rowcount` property is non-operational. Request resulting rows are streamed by the `fetch*()` methods and can't be counted before they are all streamed. cursor_attr = getattr(self.cursor, attr) Traceback (most recent call last): File "<console>", line 1, in <module> File "/Users/me/.pyenv/versions/3.8.9/envs/spannertest/lib/python3.8/site-packages/django/db/models/base.py", line 954, in delete return collector.delete() File "/Users/me/.pyenv/versions/3.8.9/envs/spannertest/lib/python3.8/site-packages/django/db/models/deletion.py", line 396, in delete count = sql.DeleteQuery(model).delete_batch([instance.pk], self.using) File "/Users/me/.pyenv/versions/3.8.9/envs/spannertest/lib/python3.8/site-packages/django/db/models/sql/subqueries.py", line 43, in delete_batch num_deleted += self.do_query(self.get_meta().db_table, self.where, using=using) TypeError: unsupported operand type(s) for +=: 'int' and 'NoneType' ``` A similar error will be raised when trying to update the model: `a.name = "new name" & a.save()`.
1.0
Incompatible with google-cloud-spanner 3.12 ->> UserWarning: The `rowcount` property is non-operational | Cannot update/delete model objects - #### Environment details - Programming language: Python - OS: MacOS Big Sur 11.4 - Language runtime version: 3.8.9 - Package version: Django 3.2.2 and 3.2.9 with django-cloud-spanner 3.0.0 and google-cloud spanner 3.12.0 #### Steps to reproduce 1. Follow the "from scratch" documentation as outlined in the readme to the letter 2. Instantiate and save a model, e.g.: `a = MyModel.objects.create(name="testname")` 3. Try to delete the model with: `a.delete()` 4. This will trigger the following error: ``` /Users/me/.pyenv/versions/3.8.9/envs/spannertest/lib/python3.8/site-packages/django/db/backends/utils.py:22: UserWarning: The `rowcount` property is non-operational. Request resulting rows are streamed by the `fetch*()` methods and can't be counted before they are all streamed. cursor_attr = getattr(self.cursor, attr) Traceback (most recent call last): File "<console>", line 1, in <module> File "/Users/me/.pyenv/versions/3.8.9/envs/spannertest/lib/python3.8/site-packages/django/db/models/base.py", line 954, in delete return collector.delete() File "/Users/me/.pyenv/versions/3.8.9/envs/spannertest/lib/python3.8/site-packages/django/db/models/deletion.py", line 396, in delete count = sql.DeleteQuery(model).delete_batch([instance.pk], self.using) File "/Users/me/.pyenv/versions/3.8.9/envs/spannertest/lib/python3.8/site-packages/django/db/models/sql/subqueries.py", line 43, in delete_batch num_deleted += self.do_query(self.get_meta().db_table, self.where, using=using) TypeError: unsupported operand type(s) for +=: 'int' and 'NoneType' ``` A similar error will be raised when trying to update the model: `a.name = "new name" & a.save()`.
non_infrastructure
incompatible with google cloud spanner userwarning the rowcount property is non operational cannot update delete model objects environment details programming language python os macos big sur language runtime version package version django and with django cloud spanner and google cloud spanner steps to reproduce follow the from scratch documentation as outlined in the readme to the letter instantiate and save a model e g a mymodel objects create name testname try to delete the model with a delete this will trigger the following error users me pyenv versions envs spannertest lib site packages django db backends utils py userwarning the rowcount property is non operational request resulting rows are streamed by the fetch methods and can t be counted before they are all streamed cursor attr getattr self cursor attr traceback most recent call last file line in file users me pyenv versions envs spannertest lib site packages django db models base py line in delete return collector delete file users me pyenv versions envs spannertest lib site packages django db models deletion py line in delete count sql deletequery model delete batch self using file users me pyenv versions envs spannertest lib site packages django db models sql subqueries py line in delete batch num deleted self do query self get meta db table self where using using typeerror unsupported operand type s for int and nonetype a similar error will be raised when trying to update the model a name new name a save
0
26,212
19,726,077,215
IssuesEvent
2022-01-13 20:06:16
department-of-veterans-affairs/va.gov-team
https://api.github.com/repos/department-of-veterans-affairs/va.gov-team
opened
[Vets-API] Perform load testing in EKS dev cluster
operations devops infrastructure eks
## Description Run load testing to make sure that the Vets-API is stable and performant in the EKS dev cluster. ## Technical notes - This may have been done already with EKS in general... but, it should now be done on the dev-api.va.gov application in the EKS - The load test should be equivalent to real-world production traffic volumes ## Tasks - [ ] _What things need to happen?_ ## Acceptance Criteria - [ ] Load test has been performed against dev-api.va.gov and results recorded
1.0
[Vets-API] Perform load testing in EKS dev cluster - ## Description Run load testing to make sure that the Vets-API is stable and performant in the EKS dev cluster. ## Technical notes - This may have been done already with EKS in general... but, it should now be done on the dev-api.va.gov application in the EKS - The load test should be equivalent to real-world production traffic volumes ## Tasks - [ ] _What things need to happen?_ ## Acceptance Criteria - [ ] Load test has been performed against dev-api.va.gov and results recorded
infrastructure
perform load testing in eks dev cluster description run load testing to make sure that the vets api is stable and performant in the eks dev cluster technical notes this may have been done already with eks in general but it should now be done on the dev api va gov application in the eks the load test should be equivalent to real world production traffic volumes tasks what things need to happen acceptance criteria load test has been performed against dev api va gov and results recorded
1
21,378
14,542,245,288
IssuesEvent
2020-12-15 15:29:34
robotology/QA
https://api.github.com/repos/robotology/QA
closed
Qt & Yarp + iCub failed to link on Windows 10
infrastructure software
@gregoire-pointeau commented on [Tue Jun 21 2016](https://github.com/robotology/yarp/issues/808) Hello, My installation was working on windows 10 until an automatic update last Friday (17 June). Since I have the following problem when I launch any GUI (`yarpmanager`, `iCubGui`...) I have the following error message: for `yarpmanager`: > `?setSelectionModel@QListWidget@@UAEXPAVQItemSelectionModel@@@Z process entry point not found in the library link library C:\Robot\yarp\build\bin\Release\yarpmanager.exe` for `iCubGui`: > `?getProcAddress@QOpenGLContext@@QBEP6AXXZPBD@Z process entry point not found in the library link library C:\Robot\robotology\Qt\5.7\msvc2013\bin\Qt5OpenGL.dll` I reinstalled Qt, re-pull everything, and rebuilt several times everything. My configuration: - Windows 10 - MVS 12 2013 - Cmake 3.5.2 - Qt 5.7 `Qt5_DIR: C:\Robot\robotology\Qt\5.7\msvc2013\lib\cmake` INCLUDE has: `C:\Robot\robotology\Qt\5.7\msvc2013\include` PATH has: `C:\Robot\robotology\Qt\5.7\msvc2013\lib` `C:\Robot\robotology\Qt\5.7\msvc2013\bin` Does anyone had a similar problem using windows 10? Thanks --- @gregoire-pointeau commented on [Tue Jul 26 2016](https://github.com/robotology/yarp/issues/808#issuecomment-235257339) Any update on it anyone ? --- @drdanz commented on [Tue Jul 26 2016](https://github.com/robotology/yarp/issues/808#issuecomment-235403730) @gregoire-pointeau I'm sorry, I'm not a Windows user, perhaps @randaz81, @pattacini or @mbrunettini saw something similar? Are you sure that you don't have more than one qt5 installation in your path? I've seen strange behaviours on windows with recent versions of CMake that include qt5 dlls in its path. Anyway it looks to me something related to your setup, not a bug in yarp, therefore I'm closing this, please reopen it if you find out that the bug is actually in YARP, or open a new one in the robotology/QA if you need more support with this issue.
1.0
Qt & Yarp + iCub failed to link on Windows 10 - @gregoire-pointeau commented on [Tue Jun 21 2016](https://github.com/robotology/yarp/issues/808) Hello, My installation was working on windows 10 until an automatic update last Friday (17 June). Since I have the following problem when I launch any GUI (`yarpmanager`, `iCubGui`...) I have the following error message: for `yarpmanager`: > `?setSelectionModel@QListWidget@@UAEXPAVQItemSelectionModel@@@Z process entry point not found in the library link library C:\Robot\yarp\build\bin\Release\yarpmanager.exe` for `iCubGui`: > `?getProcAddress@QOpenGLContext@@QBEP6AXXZPBD@Z process entry point not found in the library link library C:\Robot\robotology\Qt\5.7\msvc2013\bin\Qt5OpenGL.dll` I reinstalled Qt, re-pull everything, and rebuilt several times everything. My configuration: - Windows 10 - MVS 12 2013 - Cmake 3.5.2 - Qt 5.7 `Qt5_DIR: C:\Robot\robotology\Qt\5.7\msvc2013\lib\cmake` INCLUDE has: `C:\Robot\robotology\Qt\5.7\msvc2013\include` PATH has: `C:\Robot\robotology\Qt\5.7\msvc2013\lib` `C:\Robot\robotology\Qt\5.7\msvc2013\bin` Does anyone had a similar problem using windows 10? Thanks --- @gregoire-pointeau commented on [Tue Jul 26 2016](https://github.com/robotology/yarp/issues/808#issuecomment-235257339) Any update on it anyone ? --- @drdanz commented on [Tue Jul 26 2016](https://github.com/robotology/yarp/issues/808#issuecomment-235403730) @gregoire-pointeau I'm sorry, I'm not a Windows user, perhaps @randaz81, @pattacini or @mbrunettini saw something similar? Are you sure that you don't have more than one qt5 installation in your path? I've seen strange behaviours on windows with recent versions of CMake that include qt5 dlls in its path. Anyway it looks to me something related to your setup, not a bug in yarp, therefore I'm closing this, please reopen it if you find out that the bug is actually in YARP, or open a new one in the robotology/QA if you need more support with this issue.
infrastructure
qt yarp icub failed to link on windows gregoire pointeau commented on hello my installation was working on windows until an automatic update last friday june since i have the following problem when i launch any gui yarpmanager icubgui i have the following error message for yarpmanager setselectionmodel qlistwidget uaexpavqitemselectionmodel z process entry point not found in the library link library c robot yarp build bin release yarpmanager exe for icubgui getprocaddress qopenglcontext z process entry point not found in the library link library c robot robotology qt bin dll i reinstalled qt re pull everything and rebuilt several times everything my configuration windows mvs cmake qt dir c robot robotology qt lib cmake include has c robot robotology qt include path has c robot robotology qt lib c robot robotology qt bin does anyone had a similar problem using windows thanks gregoire pointeau commented on any update on it anyone drdanz commented on gregoire pointeau i m sorry i m not a windows user perhaps pattacini or mbrunettini saw something similar are you sure that you don t have more than one installation in your path i ve seen strange behaviours on windows with recent versions of cmake that include dlls in its path anyway it looks to me something related to your setup not a bug in yarp therefore i m closing this please reopen it if you find out that the bug is actually in yarp or open a new one in the robotology qa if you need more support with this issue
1
37,039
9,942,177,933
IssuesEvent
2019-07-03 13:22:07
gpac/gpac
https://api.github.com/repos/gpac/gpac
closed
Support OpenJPEG 2
build feature-request player (mp4client/osmo)
Debian bug: https://bugs.debian.org/826814 OpenJPEG 1 is about to be removed from Debian so the OpenJPEG code in GPAC needs to be ported to OpenJPEG 2, or the JPEG2000 reader will have to be disabled in Debian (and probably other downstreams when they start removing OpenJPEG 1).
1.0
Support OpenJPEG 2 - Debian bug: https://bugs.debian.org/826814 OpenJPEG 1 is about to be removed from Debian so the OpenJPEG code in GPAC needs to be ported to OpenJPEG 2, or the JPEG2000 reader will have to be disabled in Debian (and probably other downstreams when they start removing OpenJPEG 1).
non_infrastructure
support openjpeg debian bug openjpeg is about to be removed from debian so the openjpeg code in gpac needs to be ported to openjpeg or the reader will have to be disabled in debian and probably other downstreams when they start removing openjpeg
0
25,072
18,075,603,801
IssuesEvent
2021-09-21 09:31:46
fremtind/jokul
https://api.github.com/repos/fremtind/jokul
closed
Bygddtiden på portalen kryper mot 2 minutter
👷‍♂️ CI and deployment 🚇 infrastructure 👽portal github_actions
**Feilbeskrivelse** Siden portalen er grunnsteinen i alt vi lager, så begynner det å bli litt ubehagelig lang ventetid, både i utvikling og på CI-serveren. **Forventet oppførsel** Vi burde være litt raskere. Vi kan gjøre statiske optimaliseringer av bildene, isteden for å bruke sharp-transformeren for bildene, det burde shave vesentlige deler av bygget. Vi kan splitte byggene, så vi kan bygge pakkene våre i en action, for å tilgjengeliggjøre assetene for andre actions, dermed trenger ikke portalbygge å bygge noe annet enn portalen.
1.0
Bygddtiden på portalen kryper mot 2 minutter - **Feilbeskrivelse** Siden portalen er grunnsteinen i alt vi lager, så begynner det å bli litt ubehagelig lang ventetid, både i utvikling og på CI-serveren. **Forventet oppførsel** Vi burde være litt raskere. Vi kan gjøre statiske optimaliseringer av bildene, isteden for å bruke sharp-transformeren for bildene, det burde shave vesentlige deler av bygget. Vi kan splitte byggene, så vi kan bygge pakkene våre i en action, for å tilgjengeliggjøre assetene for andre actions, dermed trenger ikke portalbygge å bygge noe annet enn portalen.
infrastructure
bygddtiden på portalen kryper mot minutter feilbeskrivelse siden portalen er grunnsteinen i alt vi lager så begynner det å bli litt ubehagelig lang ventetid både i utvikling og på ci serveren forventet oppførsel vi burde være litt raskere vi kan gjøre statiske optimaliseringer av bildene isteden for å bruke sharp transformeren for bildene det burde shave vesentlige deler av bygget vi kan splitte byggene så vi kan bygge pakkene våre i en action for å tilgjengeliggjøre assetene for andre actions dermed trenger ikke portalbygge å bygge noe annet enn portalen
1
End of preview.

Dataset Card for "binary-10IQR-infrastructure"

More Information needed
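
Since the card above is still a placeholder, here is a minimal, unofficial sketch of how this dataset could be loaded and inspected with the Hugging Face `datasets` library. Two assumptions are baked in: the Hub repository id is `karths/binary-10IQR-infrastructure` (taken from the collection reference at the bottom of this page), and the hosted configuration exposes at least one split. Column names are discovered at runtime rather than hard-coded; the `binary_label` name used in the last step is likewise an assumption inferred from the 0/1 values that close each preview row and should be replaced with whatever the printed schema actually contains.

```python
# Minimal usage sketch (not part of the official card).
# Assumptions: the dataset is hosted on the Hugging Face Hub under the id
# "karths/binary-10IQR-infrastructure" and exposes at least one split.
from collections import Counter

from datasets import load_dataset

ds = load_dataset("karths/binary-10IQR-infrastructure")

split_name = next(iter(ds))      # whichever split the hosted config provides
split = ds[split_name]

print(split_name, split.num_rows)
print(split.column_names)        # inspect the actual schema before relying on it

# The preview rows end in 0/1 values, suggesting an integer label column;
# "binary_label" is an assumed name -- swap in the real one from the schema.
if "binary_label" in split.column_names:
    print(Counter(split["binary_label"]))  # rough class balance
```

Discovering the split and column names at runtime keeps the sketch robust if the hosted schema differs from what the preview suggests.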

Collection including karths/binary-10IQR-infrastructure