Dataset Viewer
Auto-converted to Parquet
| Column | Type | Stats |
| --- | --- | --- |
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | string | 1 value |
| created_at | string | lengths 19 to 19 |
| repo | string | lengths 7 to 73 |
| repo_url | string | lengths 36 to 102 |
| action | string | 3 values |
| title | string | lengths 1 to 535 |
| labels | string | lengths 4 to 356 |
| body | string | lengths 4 to 178k |
| index | string | 7 values |
| text_combine | string | lengths 96 to 178k |
| label | string | 2 values |
| text | string | lengths 96 to 174k |
| binary_label | int64 | 0 to 1 |
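
Because the split is auto-converted to Parquet, it can be pulled programmatically with the `datasets` library. Below is a minimal loading sketch; the repository id is a placeholder (use the dataset id shown on this page), and the split name `train` is assumed.

```python
# Minimal sketch: load the auto-converted Parquet data via the `datasets` library.
# "org/github-issues-automation" is a placeholder repository id, not the real one.
from datasets import load_dataset

ds = load_dataset("org/github-issues-automation", split="train")
print(ds.column_names)                       # should match the schema above
print(ds[0]["title"], ds[0]["label"], ds[0]["binary_label"])
```

The underlying Parquet files can also be read directly with `pandas.read_parquet` if pulling in `datasets` is not desired.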

Example rows:

Row (Unnamed: 0 = 2,550)
id: 12,264,080,325
type: IssuesEvent
created_at: 2020-05-07 03:04:50
repo: bandprotocol/bandchain
repo_url: https://api.github.com/repos/bandprotocol/bandchain
action: closed
title: Stress test Wenchang operations
labels: automation chain
body:
Using BigDipper as the base explorer, test interacting with Wenchang testnet and report. - [x] Send money - [x] Spam sending money from 100 accounts to the network concurrently - [x] Delegate - [x] Withdraw delegation - [x] Double sign and get jailed - [x] Apply for validator then keep it down forever (and verify that it will get slashed and jail in 1 day)f - [x] All proposal related messages
index: 1.0
text_combine:
Stress test Wenchang operations - Using BigDipper as the base explorer, test interacting with Wenchang testnet and report. - [x] Send money - [x] Spam sending money from 100 accounts to the network concurrently - [x] Delegate - [x] Withdraw delegation - [x] Double sign and get jailed - [x] Apply for validator then keep it down forever (and verify that it will get slashed and jail in 1 day)f - [x] All proposal related messages
label: automation
text:
stress test wenchang operations using bigdipper as the base explorer test interacting with wenchang testnet and report send money spam sending money from accounts to the network concurrently delegate withdraw delegation double sign and get jailed apply for validator then keep it down forever and verify that it will get slashed and jail in day f all proposal related messages
binary_label: 1
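
Comparing columns within the row above, `text_combine` looks like the `title` and `body` joined with " - ", and `text` looks like a cleaned, lowercased version of `text_combine` with URLs, HTML, digits, punctuation, and markdown checkboxes stripped. That is an inference from the examples, not a documented preprocessing step; a rough sketch of that kind of cleaning, under those assumptions:

```python
import re

def clean(text_combine: str) -> str:
    """Approximate (assumed) cleaning that maps text_combine to text."""
    t = re.sub(r"https?://\S+", " ", text_combine)  # drop URLs
    t = re.sub(r"<[^>]+>", " ", t)                  # drop inline HTML tags
    t = re.sub(r"[^A-Za-z]+", " ", t)               # keep letters only (drops digits and punctuation)
    return re.sub(r"\s+", " ", t).strip().lower()
```

It will not reproduce the stored `text` byte-for-byte (the dataset also drops checkbox markers such as `[x]`), but it captures the visible transformation.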

Row (Unnamed: 0 = 86,261)
id: 3,704,395,261
type: IssuesEvent
created_at: 2016-02-29 23:59:22
repo: SpeedCurve-Metrics/SpeedCurve
repo_url: https://api.github.com/repos/SpeedCurve-Metrics/SpeedCurve
action: closed
title: [Benchmark] Filmstrip not refreshed when switching between templates
labels: priority medium status accepted type bug
body:
In the "Benchmark" section, when switching between templates, the filmstrip view is greyed and not refreshed. Reloading the page refreshes the filmstrip but is annoying :). <img width="1271" alt="screen shot 2015-12-04 at 10 46 28 am" src="https://cloud.githubusercontent.com/assets/2169585/11586580/655f581a-9a74-11e5-8d87-7ab1a7e3bfad.png">
index: 1.0
text_combine:
[Benchmark] Filmstrip not refreshed when switching between templates - In the "Benchmark" section, when switching between templates, the filmstrip view is greyed and not refreshed. Reloading the page refreshes the filmstrip but is annoying :). <img width="1271" alt="screen shot 2015-12-04 at 10 46 28 am" src="https://cloud.githubusercontent.com/assets/2169585/11586580/655f581a-9a74-11e5-8d87-7ab1a7e3bfad.png">
label: non_automation
text:
filmstrip not refreshed when switching between templates in the benchmark section when switching between templates the filmstrip view is greyed and not refreshed reloading the page refreshes the filmstrip but is annoying img width alt screen shot at am src
binary_label: 0
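
The first two rows also show how `label` and `binary_label` line up: `automation` maps to 1 and `non_automation` to 0. Assuming those are the only two classes (the schema lists 2 values for `label`), the encoding is a one-line lookup:

```python
# Assumed mapping of the two label classes to binary_label,
# consistent with every example row shown on this page.
LABEL_TO_BINARY = {"automation": 1, "non_automation": 0}

def to_binary(label: str) -> int:
    return LABEL_TO_BINARY[label]
```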

Row (Unnamed: 0 = 113,274)
id: 17,117,946,552
type: IssuesEvent
created_at: 2021-07-11 18:51:59
repo: turkdevops/design-language-website
repo_url: https://api.github.com/repos/turkdevops/design-language-website
action: opened
title: CVE-2020-28469 (High) detected in glob-parent-3.1.0.tgz, glob-parent-5.1.1.tgz
labels: security vulnerability
body:
## CVE-2020-28469 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>glob-parent-3.1.0.tgz</b>, <b>glob-parent-5.1.1.tgz</b></p></summary> <p> <details><summary><b>glob-parent-3.1.0.tgz</b></p></summary> <p>Strips glob magic from a string to provide the parent directory path</p> <p>Library home page: <a href="https://registry.npmjs.org/glob-parent/-/glob-parent-3.1.0.tgz">https://registry.npmjs.org/glob-parent/-/glob-parent-3.1.0.tgz</a></p> <p>Path to dependency file: design-language-website/package.json</p> <p>Path to vulnerable library: design-language-website/node_modules/glob-parent</p> <p> Dependency Hierarchy: - gatsby-2.32.12.tgz (Root Library) - webpack-dev-server-3.11.2.tgz - chokidar-2.1.8.tgz - :x: **glob-parent-3.1.0.tgz** (Vulnerable Library) </details> <details><summary><b>glob-parent-5.1.1.tgz</b></p></summary> <p>Extract the non-magic parent path from a glob string.</p> <p>Library home page: <a href="https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.1.tgz">https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.1.tgz</a></p> <p>Path to dependency file: design-language-website/package.json</p> <p>Path to vulnerable library: design-language-website/node_modules/glob-parent</p> <p> Dependency Hierarchy: - eslint-7.10.0.tgz (Root Library) - :x: **glob-parent-5.1.1.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/turkdevops/design-language-website/commit/187b6c70cc572cc46890f19fe80fcaddc53857c4">187b6c70cc572cc46890f19fe80fcaddc53857c4</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> This affects the package glob-parent before 5.1.2. The enclosure regex used to check for strings ending in enclosure containing path separator. <p>Publish Date: 2021-06-03 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28469>CVE-2020-28469</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28469">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28469</a></p> <p>Release Date: 2021-06-03</p> <p>Fix Resolution: glob-parent - 5.1.2</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
index: True
text_combine:
CVE-2020-28469 (High) detected in glob-parent-3.1.0.tgz, glob-parent-5.1.1.tgz - ## CVE-2020-28469 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>glob-parent-3.1.0.tgz</b>, <b>glob-parent-5.1.1.tgz</b></p></summary> <p> <details><summary><b>glob-parent-3.1.0.tgz</b></p></summary> <p>Strips glob magic from a string to provide the parent directory path</p> <p>Library home page: <a href="https://registry.npmjs.org/glob-parent/-/glob-parent-3.1.0.tgz">https://registry.npmjs.org/glob-parent/-/glob-parent-3.1.0.tgz</a></p> <p>Path to dependency file: design-language-website/package.json</p> <p>Path to vulnerable library: design-language-website/node_modules/glob-parent</p> <p> Dependency Hierarchy: - gatsby-2.32.12.tgz (Root Library) - webpack-dev-server-3.11.2.tgz - chokidar-2.1.8.tgz - :x: **glob-parent-3.1.0.tgz** (Vulnerable Library) </details> <details><summary><b>glob-parent-5.1.1.tgz</b></p></summary> <p>Extract the non-magic parent path from a glob string.</p> <p>Library home page: <a href="https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.1.tgz">https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.1.tgz</a></p> <p>Path to dependency file: design-language-website/package.json</p> <p>Path to vulnerable library: design-language-website/node_modules/glob-parent</p> <p> Dependency Hierarchy: - eslint-7.10.0.tgz (Root Library) - :x: **glob-parent-5.1.1.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/turkdevops/design-language-website/commit/187b6c70cc572cc46890f19fe80fcaddc53857c4">187b6c70cc572cc46890f19fe80fcaddc53857c4</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> This affects the package glob-parent before 5.1.2. The enclosure regex used to check for strings ending in enclosure containing path separator. <p>Publish Date: 2021-06-03 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28469>CVE-2020-28469</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28469">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28469</a></p> <p>Release Date: 2021-06-03</p> <p>Fix Resolution: glob-parent - 5.1.2</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
label: non_automation
text:
cve high detected in glob parent tgz glob parent tgz cve high severity vulnerability vulnerable libraries glob parent tgz glob parent tgz glob parent tgz strips glob magic from a string to provide the parent directory path library home page a href path to dependency file design language website package json path to vulnerable library design language website node modules glob parent dependency hierarchy gatsby tgz root library webpack dev server tgz chokidar tgz x glob parent tgz vulnerable library glob parent tgz extract the non magic parent path from a glob string library home page a href path to dependency file design language website package json path to vulnerable library design language website node modules glob parent dependency hierarchy eslint tgz root library x glob parent tgz vulnerable library found in head commit a href found in base branch master vulnerability details this affects the package glob parent before the enclosure regex used to check for strings ending in enclosure containing path separator publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution glob parent step up your open source security game with whitesource
binary_label: 0

Row (Unnamed: 0 = 2,847)
id: 12,702,212,671
type: IssuesEvent
created_at: 2020-06-22 19:43:53
repo: submariner-io/submariner
repo_url: https://api.github.com/repos/submariner-io/submariner
action: closed
title: "report-dir" argument can be removed (Ginkgo has --reportFile option)
labels: automation enhancement
body:
"report-dir" argument for specifying junit tests output directory - can be removed, including any references that uses it (also in Docs): https://github.com/submariner-io/submariner/blob/06332b91b193c0ab362e7f0a96cd715b8556acd5/test/e2e/framework/test_context.go#L40 Ginkgo has this feature already, for example --ginkgo.reportFile ${WORKDIR}/e2e_junit_result.xml
index: 1.0
text_combine:
"report-dir" argument can be removed (Ginkgo has --reportFile option) - "report-dir" argument for specifying junit tests output directory - can be removed, including any references that uses it (also in Docs): https://github.com/submariner-io/submariner/blob/06332b91b193c0ab362e7f0a96cd715b8556acd5/test/e2e/framework/test_context.go#L40 Ginkgo has this feature already, for example --ginkgo.reportFile ${WORKDIR}/e2e_junit_result.xml
label: automation
text:
report dir argument can be removed ginkgo has reportfile option report dir argument for specifying junit tests output directory can be removed including any references that uses it also in docs ginkgo has this feature already for example ginkgo reportfile workdir junit result xml
binary_label: 1

Row (Unnamed: 0 = 1,571)
id: 10,344,432,220
type: IssuesEvent
created_at: 2019-09-04 11:12:47
repo: elastic/apm-server
repo_url: https://api.github.com/repos/elastic/apm-server
action: closed
title: [Automation][apm-ci] Reorder parallel execution of stages
labels: automation ci enhancement
body:
Let's run more parallel stages to get a quick cycle and reduce the waste of time when running in sequential stages. Besides, let's ensure the windows stage doesn't populate its failures to the pipeline but the stage. A pipeline is a set of stages. ![image](https://user-images.githubusercontent.com/2871786/63518458-fb5acf00-c4e8-11e9-8655-2c9a5bf81bac.png)
index: 1.0
text_combine:
[Automation][apm-ci] Reorder parallel execution of stages - Let's run more parallel stages to get a quick cycle and reduce the waste of time when running in sequential stages. Besides, let's ensure the windows stage doesn't populate its failures to the pipeline but the stage. A pipeline is a set of stages. ![image](https://user-images.githubusercontent.com/2871786/63518458-fb5acf00-c4e8-11e9-8655-2c9a5bf81bac.png)
label: automation
text:
reorder parallel execution of stages let s run more parallel stages to get a quick cycle and reduce the waste of time when running in sequential stages besides let s ensure the windows stage doesn t populate its failures to the pipeline but the stage a pipeline is a set of stages
binary_label: 1

Row (Unnamed: 0 = 8,268)
id: 26,586,423,169
type: IssuesEvent
created_at: 2023-01-23 02:03:33
repo: Project-Herophilus/idaas-connect-automation
repo_url: https://api.github.com/repos/Project-Herophilus/idaas-connect-automation
action: closed
title: Deploying More iDaaS Connect Sub Modules
labels: automation cloud native
body:
For the next round of deployments lets deploy the following iDaaS-Connect submodules: - FHIR - EDI - Third-Party - Cloud - CMS Interoperability
index: 1.0
text_combine:
Deploying More iDaaS Connect Sub Modules - For the next round of deployments lets deploy the following iDaaS-Connect submodules: - FHIR - EDI - Third-Party - Cloud - CMS Interoperability
label: automation
text:
deploying more idaas connect sub modules for the next round of deployments lets deploy the following idaas connect submodules fhir edi third party cloud cms interoperability
binary_label: 1

Row (Unnamed: 0 = 9,875)
id: 7,021,923,945
type: IssuesEvent
created_at: 2017-12-22 08:01:58
repo: Elgg/Elgg
repo_url: https://api.github.com/repos/Elgg/Elgg
action: closed
title: Saving metadata and all changes automatically in destructor as default policy. (Trac #4597)
labels: engine feature performance
body:
_Original ticket http://trac.elgg.org/ticket/4597 on 42465681-08-10 by trac user srokap, assigned to unknown._ Elgg version: 1.8 Previously discussed here: https://docs.google.com/document/d/1NrxIj4YOTjNbeXDGW3tpz2lNvaRwL2NDPBNd7TgRfFk/edit?disco=AAAAAEr6svk This is actually a bit of logic change. Motivation is to reduce writing calls as much as possible to make life easier for deployments with single master and multiple read replicas. This change allows us to hopefully make single call to DB. We also may change metadata multiple times (increment?) without additional cost - we save final version. It's tricky because we may sometimes want to make immediate write, but i think this could be made by some explicit call (ElggEntity->save(params)?) instead of default policy. I also remember Cash speaking something about making writes to DB as late as possible, it would follow the same path. We might consider saving metadata and all changes automatically in destructor. We tried such concept successfully. Note that also some related bugs were fixed in PHP: https://bugs.php.net/bug.php?id=30210
index: True
text_combine:
Saving metadata and all changes automatically in destructor as default policy. (Trac #4597) - _Original ticket http://trac.elgg.org/ticket/4597 on 42465681-08-10 by trac user srokap, assigned to unknown._ Elgg version: 1.8 Previously discussed here: https://docs.google.com/document/d/1NrxIj4YOTjNbeXDGW3tpz2lNvaRwL2NDPBNd7TgRfFk/edit?disco=AAAAAEr6svk This is actually a bit of logic change. Motivation is to reduce writing calls as much as possible to make life easier for deployments with single master and multiple read replicas. This change allows us to hopefully make single call to DB. We also may change metadata multiple times (increment?) without additional cost - we save final version. It's tricky because we may sometimes want to make immediate write, but i think this could be made by some explicit call (ElggEntity->save(params)?) instead of default policy. I also remember Cash speaking something about making writes to DB as late as possible, it would follow the same path. We might consider saving metadata and all changes automatically in destructor. We tried such concept successfully. Note that also some related bugs were fixed in PHP: https://bugs.php.net/bug.php?id=30210
label: non_automation
text:
saving metadata and all changes automatically in destructor as default policy trac original ticket on by trac user srokap assigned to unknown elgg version previously discussed here this is actually a bit of logic change motivation is to reduce writing calls as much as possible to make life easier for deployments with single master and multiple read replicas this change allows us to hopefully make single call to db we also may change metadata multiple times increment without additional cost we save final version it s tricky because we may sometimes want to make immediate write but i think this could be made by some explicit call elggentity save params instead of default policy i also remember cash speaking something about making writes to db as late as possible it would follow the same path we might consider saving metadata and all changes automatically in destructor we tried such concept successfully note that also some related bugs were fixed in php
binary_label: 0

Row (Unnamed: 0 = 90,936)
id: 10,703,811,015
type: IssuesEvent
created_at: 2019-10-24 10:19:11
repo: theodo/falco
repo_url: https://api.github.com/repos/theodo/falco
action: closed
title: Docs repository should be migrated to this repo
labels: documentation
body:
In order to make Docs edits / PR more easy, and to keep a single repo to keep track of doc issues/PRs, the Docs repo (currently at https://github.com/theodo/getfal.co) should be migrated under a `docs/` folder in this very repo.
index: 1.0
text_combine:
Docs repository should be migrated to this repo - In order to make Docs edits / PR more easy, and to keep a single repo to keep track of doc issues/PRs, the Docs repo (currently at https://github.com/theodo/getfal.co) should be migrated under a `docs/` folder in this very repo.
label: non_automation
text:
docs repository should be migrated to this repo in order to make docs edits pr more easy and to keep a single repo to keep track of doc issues prs the docs repo currently at should be migrated under a docs folder in this very repo
binary_label: 0

Row (Unnamed: 0 = 8,829)
id: 27,172,304,905
type: IssuesEvent
created_at: 2023-02-17 20:39:22
repo: OneDrive/onedrive-api-docs
repo_url: https://api.github.com/repos/OneDrive/onedrive-api-docs
action: closed
title: Concurrent createUploadSession requests failing
labels: type:bug status:backlogged area:Throttling automation:Closed
body:
#749 ## Category - [ ] Question - [ ] Documentation issue - [X] Bug #### Expected or Desired Behavior I have been using `createUploadSessions` in SPO for months now and it has worked perfectly for uploading a large chunk of files. What I normally do is that I spin up 40 concurrent requests, start uploading the file chunks, and start new sessions once any of the previous ones is finished. This has worked fine until now, the sessions were created, chunks were created and finally the files were created in OneDrive. #### Observed Behavior What I'm seeing now that I receive an `invalidRequest` response after creating a bunch of requests and it seems like only a handful of files will get uploaded completely. I can make it work by only creating one session, finishing it, and creating another session. However, this is considerably slower that what it used to be when I was able to upload multiple files concurrently. ``` method: 'POST', path: '/sites/root/drive/root:%2F<snip>.pdf:/createUploadSession', responseBody: '{"error":{"code":"invalidRequest","message":"Invalid request","innerError":{"date":"2020-09-06T14:53:11","request-id":"dff3cfa9-54e8-4eb5-b108-bf0dec0b04e6"}}}' ``` If this is a new rate limit being applied, I believe the error code should be changed to something more meaningful or understandable. #### Steps to Reproduce Create a bunch of upload sessions concurrently and start uploading chunks. The specific code I'm using is located here: https://github.com/turist-cloud/ship/tree/master/packages/ship-board - `src/upload-files.ts` - `src/fetch-graph-api.ts` [ ]: http://aka.ms/onedrive-api-issues [x]: http://aka.ms/onedrive-api-issues
index: 1.0
text_combine:
Concurrent createUploadSession requests failing - #749 ## Category - [ ] Question - [ ] Documentation issue - [X] Bug #### Expected or Desired Behavior I have been using `createUploadSessions` in SPO for months now and it has worked perfectly for uploading a large chunk of files. What I normally do is that I spin up 40 concurrent requests, start uploading the file chunks, and start new sessions once any of the previous ones is finished. This has worked fine until now, the sessions were created, chunks were created and finally the files were created in OneDrive. #### Observed Behavior What I'm seeing now that I receive an `invalidRequest` response after creating a bunch of requests and it seems like only a handful of files will get uploaded completely. I can make it work by only creating one session, finishing it, and creating another session. However, this is considerably slower that what it used to be when I was able to upload multiple files concurrently. ``` method: 'POST', path: '/sites/root/drive/root:%2F<snip>.pdf:/createUploadSession', responseBody: '{"error":{"code":"invalidRequest","message":"Invalid request","innerError":{"date":"2020-09-06T14:53:11","request-id":"dff3cfa9-54e8-4eb5-b108-bf0dec0b04e6"}}}' ``` If this is a new rate limit being applied, I believe the error code should be changed to something more meaningful or understandable. #### Steps to Reproduce Create a bunch of upload sessions concurrently and start uploading chunks. The specific code I'm using is located here: https://github.com/turist-cloud/ship/tree/master/packages/ship-board - `src/upload-files.ts` - `src/fetch-graph-api.ts` [ ]: http://aka.ms/onedrive-api-issues [x]: http://aka.ms/onedrive-api-issues
label: automation
text:
concurrent createuploadsession requests failing category question documentation issue bug expected or desired behavior i have been using createuploadsessions in spo for months now and it has worked perfectly for uploading a large chunk of files what i normally do is that i spin up concurrent requests start uploading the file chunks and start new sessions once any of the previous ones is finished this has worked fine until now the sessions were created chunks were created and finally the files were created in onedrive observed behavior what i m seeing now that i receive an invalidrequest response after creating a bunch of requests and it seems like only a handful of files will get uploaded completely i can make it work by only creating one session finishing it and creating another session however this is considerably slower that what it used to be when i was able to upload multiple files concurrently method post path sites root drive root pdf createuploadsession responsebody error code invalidrequest message invalid request innererror date request id if this is a new rate limit being applied i believe the error code should be changed to something more meaningful or understandable steps to reproduce create a bunch of upload sessions concurrently and start uploading chunks the specific code i m using is located here src upload files ts src fetch graph api ts
binary_label: 1

Row (Unnamed: 0 = 334,207)
id: 24,408,612,613
type: IssuesEvent
created_at: 2022-10-05 10:13:14
repo: insightsengineering/tern.mmrm
repo_url: https://api.github.com/repos/insightsengineering/tern.mmrm
action: closed
title: Clean up README
labels: documentation good first issue SP1 high priority
body:
To do: - [x] Update according to major refactoring - [x] Explain clearly how the `mmrm` and `tern.mmrm` packages relate to each other - [x] also give guidance when to use which
index: 1.0
text_combine:
Clean up README - To do: - [x] Update according to major refactoring - [x] Explain clearly how the `mmrm` and `tern.mmrm` packages relate to each other - [x] also give guidance when to use which
label: non_automation
text:
clean up readme to do update according to major refactoring explain clearly how the mmrm and tern mmrm packages relate to each other also give guidance when to use which
binary_label: 0

Row (Unnamed: 0 = 169,516)
id: 13,150,174,823
type: IssuesEvent
created_at: 2020-08-09 09:57:49
repo: Rocologo/MobHunting
repo_url: https://api.github.com/repos/Rocologo/MobHunting
action: closed
title: Error on /mh acheivements
labels: Fixed - To be tested
body:
When I do /mh acheivements in game I get this console error: https://paste.gg/p/Momshroom/24caaca7b64143b89b35d9148c211b05 MobHunting version 7.5.0 Paper 370 (1.15.2)
index: 1.0
text_combine:
Error on /mh acheivements - When I do /mh acheivements in game I get this console error: https://paste.gg/p/Momshroom/24caaca7b64143b89b35d9148c211b05 MobHunting version 7.5.0 Paper 370 (1.15.2)
label: non_automation
text:
error on mh acheivements when i do mh acheivements in game i get this console error mobhunting version paper
binary_label: 0

Row (Unnamed: 0 = 1,318)
id: 9,905,481,353
type: IssuesEvent
created_at: 2019-06-27 11:42:44
repo: elastic/apm-server
repo_url: https://api.github.com/repos/elastic/apm-server
action: closed
title: Deal with failing ci test for saved objects in Kibana
labels: [zube]: In Review automation
body:
A ci check called _Check Kibana Object updated_ is run on every PR and on push to master. This test runs a command in APM Server to create the Kibana index pattern based on the ES template, and then checks if the created index pattern is in sync with the one bundled in Kibana for APM. It test fails on following occasions: - updating libbeat changes inherited fields for the apm-server leading to changes in the Kibana index pattern - changing fields directly in apm server leading to changes in the Kibana index pattern - changes in Kibana touching the stored objects (e.g. moving the files around). Since field changes requires a PR in APM Server and Kibana to be merged at the same time, a lot of PRs fail related to this test, although not directly related. We should discuss how to improve this situation on a CI level. A few options: (1) In case only this stage fails we could mark the build as instable. (2) Run the test as a separate check outside of the `pr-merge`. (3) Trigger the test only if something in `_meta/fields.common.yml` or in `_beats/libbeat/_meta` changed and on pushes to release branches and master. I suggest to apply option (3), and maybe also option (2) to give a better overview on the PR what is failing.
index: 1.0
text_combine:
Deal with failing ci test for saved objects in Kibana - A ci check called _Check Kibana Object updated_ is run on every PR and on push to master. This test runs a command in APM Server to create the Kibana index pattern based on the ES template, and then checks if the created index pattern is in sync with the one bundled in Kibana for APM. It test fails on following occasions: - updating libbeat changes inherited fields for the apm-server leading to changes in the Kibana index pattern - changing fields directly in apm server leading to changes in the Kibana index pattern - changes in Kibana touching the stored objects (e.g. moving the files around). Since field changes requires a PR in APM Server and Kibana to be merged at the same time, a lot of PRs fail related to this test, although not directly related. We should discuss how to improve this situation on a CI level. A few options: (1) In case only this stage fails we could mark the build as instable. (2) Run the test as a separate check outside of the `pr-merge`. (3) Trigger the test only if something in `_meta/fields.common.yml` or in `_beats/libbeat/_meta` changed and on pushes to release branches and master. I suggest to apply option (3), and maybe also option (2) to give a better overview on the PR what is failing.
label: automation
text:
deal with failing ci test for saved objects in kibana a ci check called check kibana object updated is run on every pr and on push to master this test runs a command in apm server to create the kibana index pattern based on the es template and then checks if the created index pattern is in sync with the one bundled in kibana for apm it test fails on following occasions updating libbeat changes inherited fields for the apm server leading to changes in the kibana index pattern changing fields directly in apm server leading to changes in the kibana index pattern changes in kibana touching the stored objects e g moving the files around since field changes requires a pr in apm server and kibana to be merged at the same time a lot of prs fail related to this test although not directly related we should discuss how to improve this situation on a ci level a few options in case only this stage fails we could mark the build as instable run the test as a separate check outside of the pr merge trigger the test only if something in meta fields common yml or in beats libbeat meta changed and on pushes to release branches and master i suggest to apply option and maybe also option to give a better overview on the pr what is failing
binary_label: 1

Row (Unnamed: 0 = 74,831)
id: 3,448,883,569
type: IssuesEvent
created_at: 2015-12-16 10:46:45
repo: weaveworks/weave
repo_url: https://api.github.com/repos/weaveworks/weave
action: closed
title: work with dockers on domain sockets other than unix:///var/run/docker
labels: chore [component/proxy] [component/router] {priority/high}
body:
`weave launch` is not detecting docker socket if `DOCKER_HOST` is set to non-default unix socket Docker daemon is listening on `unix:///var/run/docker-real.sock` and `$DOCKER_HOST=unix:///var/run/docker-real.sock` `docker` commands works fine as expected. But `weave lauch` returns `Cannot connect to the Docker daemon. Is 'docker -d' running on this host?` **_Note:_** _This is to achieve something similar here https://github.com/rancher/rancher/issues/2398 to integrate weave into Rancher_
index: 1.0
text_combine:
work with dockers on domain sockets other than unix:///var/run/docker - `weave launch` is not detecting docker socket if `DOCKER_HOST` is set to non-default unix socket Docker daemon is listening on `unix:///var/run/docker-real.sock` and `$DOCKER_HOST=unix:///var/run/docker-real.sock` `docker` commands works fine as expected. But `weave lauch` returns `Cannot connect to the Docker daemon. Is 'docker -d' running on this host?` **_Note:_** _This is to achieve something similar here https://github.com/rancher/rancher/issues/2398 to integrate weave into Rancher_
label: non_automation
text:
work with dockers on domain sockets other than unix var run docker weave launch is not detecting docker socket if docker host is set to non default unix socket docker daemon is listening on unix var run docker real sock and docker host unix var run docker real sock docker commands works fine as expected but weave lauch returns cannot connect to the docker daemon is docker d running on this host note this is to achieve something similar here to integrate weave into rancher
binary_label: 0

Row (Unnamed: 0 = 415)
id: 6,304,022,138
type: IssuesEvent
created_at: 2017-07-21 15:00:29
repo: blackbaud/skyux2
repo_url: https://api.github.com/repos/blackbaud/skyux2
action: closed
title: Run skyux visual tests through a skyux page
labels: automation
body:
Currently we are running our visual regression tests by using webpack to serve some component fixtures up locally, and then use the local Browserstack tunnel to test using multiple browsers. This has a couple of drawbacks: - The Browserstack local tunnel can be flakey and disconnect randomly at times - Serving up our files with webpack doesn't allow us to have as many tests running in parallel, because they start slowing down to the point of failure as we add more. - Our visual tests are not being run in a environment similar to our users (SKY UX host/builder/etc) To solve this, we should find a way to build our visual tests as a SKY UX app, which our visual tests will then hit remotely.
index: 1.0
text_combine:
Run skyux visual tests through a skyux page - Currently we are running our visual regression tests by using webpack to serve some component fixtures up locally, and then use the local Browserstack tunnel to test using multiple browsers. This has a couple of drawbacks: - The Browserstack local tunnel can be flakey and disconnect randomly at times - Serving up our files with webpack doesn't allow us to have as many tests running in parallel, because they start slowing down to the point of failure as we add more. - Our visual tests are not being run in a environment similar to our users (SKY UX host/builder/etc) To solve this, we should find a way to build our visual tests as a SKY UX app, which our visual tests will then hit remotely.
label: automation
text:
run skyux visual tests through a skyux page currently we are running our visual regression tests by using webpack to serve some component fixtures up locally and then use the local browserstack tunnel to test using multiple browsers this has a couple of drawbacks the browserstack local tunnel can be flakey and disconnect randomly at times serving up our files with webpack doesn t allow us to have as many tests running in parallel because they start slowing down to the point of failure as we add more our visual tests are not being run in a environment similar to our users sky ux host builder etc to solve this we should find a way to build our visual tests as a sky ux app which our visual tests will then hit remotely
binary_label: 1

Row (Unnamed: 0 = 110,837)
id: 24,015,635,194
type: IssuesEvent
created_at: 2022-09-15 00:10:46
repo: qhy040404/Library-One-Tap-Android
repo_url: https://api.github.com/repos/qhy040404/Library-One-Tap-Android
action: closed
title: Rewrite AboutActivity to use partial chrome
labels: enhancement large code low priority UI / UX external
body:
### Enhancement propose Better UX ### Solution ![Screenshot_2022-09-15-00-20-06-760_com github android](https://user-images.githubusercontent.com/45379733/190209109-8119bf53-87b0-46b1-9924-722b13a9cc0e.jpg) ### Additional info _No response_
index: 1.0
text_combine:
Rewrite AboutActivity to use partial chrome - ### Enhancement propose Better UX ### Solution ![Screenshot_2022-09-15-00-20-06-760_com github android](https://user-images.githubusercontent.com/45379733/190209109-8119bf53-87b0-46b1-9924-722b13a9cc0e.jpg) ### Additional info _No response_
label: non_automation
text:
rewrite aboutactivity to use partial chrome enhancement propose better ux solution additional info no response
binary_label: 0

Row (Unnamed: 0 = 3,469)
id: 13,790,468,198
type: IssuesEvent
created_at: 2020-10-09 10:28:35
repo: eventespresso/barista
repo_url: https://api.github.com/repos/eventespresso/barista
action: closed
title: Rename ALL `barista-prod` Branches to `barista`
labels: C: automation & deployment ⚙️ D: Packages 📦 P3: med priority 😐 T: task 🧹
body:
Originally we thought there might also be the need for other barista branches like `barista-dev` in other repos but that doesn't look to be the case now so let's just simplify the naming for now (cuz I'm a lazy typist and that extra `-prod` is an unacceptable burden)
index: 1.0
text_combine:
Rename ALL `barista-prod` Branches to `barista` - Originally we thought there might also be the need for other barista branches like `barista-dev` in other repos but that doesn't look to be the case now so let's just simplify the naming for now (cuz I'm a lazy typist and that extra `-prod` is an unacceptable burden)
label: automation
text:
rename all barista prod branches to barista originally we thought there might also be the need for other barista branches like barista dev in other repos but that doesn t look to be the case now so let s just simplify the naming for now cuz i m a lazy typist and that extra prod is an unacceptable burden
binary_label: 1

Row (Unnamed: 0 = 90,451)
id: 15,856,158,066
type: IssuesEvent
created_at: 2021-04-08 01:39:53
repo: heholek/practical-aspnetcore
repo_url: https://api.github.com/repos/heholek/practical-aspnetcore
action: opened
title: CVE-2019-0564 (High) detected in microsoft.aspnetcore.app.2.1.1.nupkg
labels: security vulnerability
body:
## CVE-2019-0564 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>microsoft.aspnetcore.app.2.1.1.nupkg</b></p></summary> <p>Microsoft.AspNetCore.App</p> <p>Library home page: <a href="https://api.nuget.org/packages/microsoft.aspnetcore.app.2.1.1.nupkg">https://api.nuget.org/packages/microsoft.aspnetcore.app.2.1.1.nupkg</a></p> <p>Path to dependency file: practical-aspnetcore/projects/localization-5/localization-5.csproj</p> <p>Path to vulnerable library: practical-aspnetcore/projects/localization-5/localization-5.csproj,practical-aspnetcore/projects/localization-6/localization-6.csproj</p> <p> Dependency Hierarchy: - :x: **microsoft.aspnetcore.app.2.1.1.nupkg** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A denial of service vulnerability exists when ASP.NET Core improperly handles web requests, aka "ASP.NET Core Denial of Service Vulnerability." This affects ASP.NET Core 2.1. This CVE ID is unique from CVE-2019-0548. <p>Publish Date: 2019-01-08 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-0564>CVE-2019-0564</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/aspnet/Announcements/issues/334">https://github.com/aspnet/Announcements/issues/334</a></p> <p>Release Date: 2019-01-08</p> <p>Fix Resolution: Microsoft.AspNetCore.WebSockets - 2.1.7,2.2.1;Microsoft.AspNetCore.Server.Kestrel.Core - 2.1.7;System.Net.WebSockets.WebSocketProtocol - 4.5.3;Microsoft.NETCore.App - 2.1.7,2.2.1;Microsoft.AspNetCore.App - 2.1.7,2.2.1;Microsoft.AspNetCore.All - 2.1.7,2.2.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
index: True
text_combine:
CVE-2019-0564 (High) detected in microsoft.aspnetcore.app.2.1.1.nupkg - ## CVE-2019-0564 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>microsoft.aspnetcore.app.2.1.1.nupkg</b></p></summary> <p>Microsoft.AspNetCore.App</p> <p>Library home page: <a href="https://api.nuget.org/packages/microsoft.aspnetcore.app.2.1.1.nupkg">https://api.nuget.org/packages/microsoft.aspnetcore.app.2.1.1.nupkg</a></p> <p>Path to dependency file: practical-aspnetcore/projects/localization-5/localization-5.csproj</p> <p>Path to vulnerable library: practical-aspnetcore/projects/localization-5/localization-5.csproj,practical-aspnetcore/projects/localization-6/localization-6.csproj</p> <p> Dependency Hierarchy: - :x: **microsoft.aspnetcore.app.2.1.1.nupkg** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A denial of service vulnerability exists when ASP.NET Core improperly handles web requests, aka "ASP.NET Core Denial of Service Vulnerability." This affects ASP.NET Core 2.1. This CVE ID is unique from CVE-2019-0548. <p>Publish Date: 2019-01-08 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-0564>CVE-2019-0564</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/aspnet/Announcements/issues/334">https://github.com/aspnet/Announcements/issues/334</a></p> <p>Release Date: 2019-01-08</p> <p>Fix Resolution: Microsoft.AspNetCore.WebSockets - 2.1.7,2.2.1;Microsoft.AspNetCore.Server.Kestrel.Core - 2.1.7;System.Net.WebSockets.WebSocketProtocol - 4.5.3;Microsoft.NETCore.App - 2.1.7,2.2.1;Microsoft.AspNetCore.App - 2.1.7,2.2.1;Microsoft.AspNetCore.All - 2.1.7,2.2.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
label: non_automation
text:
cve high detected in microsoft aspnetcore app nupkg cve high severity vulnerability vulnerable library microsoft aspnetcore app nupkg microsoft aspnetcore app library home page a href path to dependency file practical aspnetcore projects localization localization csproj path to vulnerable library practical aspnetcore projects localization localization csproj practical aspnetcore projects localization localization csproj dependency hierarchy x microsoft aspnetcore app nupkg vulnerable library vulnerability details a denial of service vulnerability exists when asp net core improperly handles web requests aka asp net core denial of service vulnerability this affects asp net core this cve id is unique from cve publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution microsoft aspnetcore websockets microsoft aspnetcore server kestrel core system net websockets websocketprotocol microsoft netcore app microsoft aspnetcore app microsoft aspnetcore all step up your open source security game with whitesource
binary_label: 0

Row (Unnamed: 0 = 61,037)
id: 14,599,420,677
type: IssuesEvent
created_at: 2020-12-21 04:08:27
repo: doamatto/phone-passcode-gen
repo_url: https://api.github.com/repos/doamatto/phone-passcode-gen
action: closed
title: CVE-2019-6284 (Medium) detected in opennmsopennms-source-26.0.0-1, node-sass-4.14.1.tgz
labels: security vulnerability
body:
## CVE-2019-6284 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>opennmsopennms-source-26.0.0-1</b>, <b>node-sass-4.14.1.tgz</b></p></summary> <p> <details><summary><b>node-sass-4.14.1.tgz</b></p></summary> <p>Wrapper around libsass</p> <p>Library home page: <a href="https://registry.npmjs.org/node-sass/-/node-sass-4.14.1.tgz">https://registry.npmjs.org/node-sass/-/node-sass-4.14.1.tgz</a></p> <p>Path to dependency file: phone-passcode-gen/package.json</p> <p>Path to vulnerable library: phone-passcode-gen/node_modules/gulp-sass/node_modules/node-sass/package.json</p> <p> Dependency Hierarchy: - gulp-sass-4.1.0.tgz (Root Library) - :x: **node-sass-4.14.1.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/doamatto/phone-passcode-gen/commit/9ddf2695e14fb4e1ed3b0dcbb49693b394383c4e">9ddf2695e14fb4e1ed3b0dcbb49693b394383c4e</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In LibSass 3.5.5, a heap-based buffer over-read exists in Sass::Prelexer::alternatives in prelexer.hpp. <p>Publish Date: 2019-01-14 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-6284>CVE-2019-6284</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-6284">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-6284</a></p> <p>Release Date: 2019-08-06</p> <p>Fix Resolution: LibSass - 3.6.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
index: True
text_combine:
CVE-2019-6284 (Medium) detected in opennmsopennms-source-26.0.0-1, node-sass-4.14.1.tgz - ## CVE-2019-6284 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>opennmsopennms-source-26.0.0-1</b>, <b>node-sass-4.14.1.tgz</b></p></summary> <p> <details><summary><b>node-sass-4.14.1.tgz</b></p></summary> <p>Wrapper around libsass</p> <p>Library home page: <a href="https://registry.npmjs.org/node-sass/-/node-sass-4.14.1.tgz">https://registry.npmjs.org/node-sass/-/node-sass-4.14.1.tgz</a></p> <p>Path to dependency file: phone-passcode-gen/package.json</p> <p>Path to vulnerable library: phone-passcode-gen/node_modules/gulp-sass/node_modules/node-sass/package.json</p> <p> Dependency Hierarchy: - gulp-sass-4.1.0.tgz (Root Library) - :x: **node-sass-4.14.1.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/doamatto/phone-passcode-gen/commit/9ddf2695e14fb4e1ed3b0dcbb49693b394383c4e">9ddf2695e14fb4e1ed3b0dcbb49693b394383c4e</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In LibSass 3.5.5, a heap-based buffer over-read exists in Sass::Prelexer::alternatives in prelexer.hpp. <p>Publish Date: 2019-01-14 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-6284>CVE-2019-6284</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-6284">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-6284</a></p> <p>Release Date: 2019-08-06</p> <p>Fix Resolution: LibSass - 3.6.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
label: non_automation
text:
cve medium detected in opennmsopennms source node sass tgz cve medium severity vulnerability vulnerable libraries opennmsopennms source node sass tgz node sass tgz wrapper around libsass library home page a href path to dependency file phone passcode gen package json path to vulnerable library phone passcode gen node modules gulp sass node modules node sass package json dependency hierarchy gulp sass tgz root library x node sass tgz vulnerable library found in head commit a href found in base branch master vulnerability details in libsass a heap based buffer over read exists in sass prelexer alternatives in prelexer hpp publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution libsass step up your open source security game with whitesource
binary_label: 0

Row (Unnamed: 0 = 1,919)
id: 11,097,189,215
type: IssuesEvent
created_at: 2019-12-16 12:51:02
repo: wazuh/wazuh-qa
repo_url: https://api.github.com/repos/wazuh/wazuh-qa
action: opened
title: FIM v2.0: Analysisd Integration tests: Error messages
labels: automation component/fim
body:
## Description This issue covers the integration test for bad formated messages handling by analysisd. We will treat analysisd as a black box that receives integrity events by its input Unix socket, checking that the correct output is forwarded to the desired socket (simulating Wazuh DB). Twelve use cases have been defined to check that the FIM event messages are handled properly. These cases should be implemented in the same test. - [ ] No `timestamp` in a FIM scan message. - [ ] No `type` in a FIM message - [ ] Empty `type` in an event message. - [ ] Incorrect `type` in an event message. - [ ] The JSON in a DB sync message cannot be parsed. - [ ] The item `component` cannot be parsed as a string in a DB sync message. - [ ] The item `type` cannot be parsed as a string in a DB sync message. - [ ] The item `type` is unknown in a DB sync message. - [ ] No `data` field in a DB sync message. **Input location** The input location for all checks is the analysisd socket: `/var/ossec/queue/ossec/queue` **Output location** The output location for all checks is `ossec.log` file: `/var/ossec/logs/ossec.log` ## No `timestamp` in a FIM scan message **Input message**: `8:[001] (vm-ubuntu-agent) 192.168.57.2->syscheck:{"type":"scan_end","data":{}}` **Output message**: `No such member \"timestamp\" in FIM scan info event.` ## No `type` in a FIM message **Input message**: `8:[001] (vm-ubuntu-agent) 192.168.57.2->syscheck:{"data":{"timestamp":1575442712}}` **Output message**: `Invalid FIM event` ## Empty `type` in an event message **Input message**: `8:[001] (vm-ubuntu-agent) 192.168.57.2->syscheck:{"type":"event","data":{"path":"/home/test/file","mode":"real-time","type":"NULL","timestamp":1575421671,"attributes":{"type":"file","size":5,"perm":"rw-r--r--","uid":"0","gid":"0","user_name":"root","group_name":"root","inode":125,"mtime":1575421671,"hash_md5":"7be8ec9774fc128d067782134fbc37eb","hash_sha1":"fb2eae5ad4a1116a536c16147e2cd7ae2c2cceb7","hash_sha256":"ab7d3920a57dca347cc8a62ad2c6c61ff8d0aa6d8e974e6a4803686532e980b7","checksum":"00eaef78d06924374cb291957a1f63e224d76320"},"changed_attributes":["size","mtime","md5","sha1","sha256"],"old_attributes":{"type":"file","size":18,"perm":"rw-r--r--","uid":"0","gid":"0","user_name":"root","group_name":"root","inode":125,"mtime":1575416596,"hash_md5":"a3ee12884966cb2512805d2500361913","hash_sha1":"e6e8a61093715af1e4f2a3c0618ce014f0d94fde","hash_sha256":"79abb1429c39589bb7a923abe0fe076268f38d3bffb40909490b530f109de85a","checksum":"a02381378af3739e81bea813c1ff6e3d0027498d"}}} ` **Output message**: `No member 'type' in Syscheck JSON payload` ## Incorrect event `type` in an event message **Input message**: `8:[001] (vm-ubuntu-agent) 
192.168.57.2->syscheck:{"type":"event","data":{"path":"/home/test/file","mode":"real-time","type":"other","timestamp":1575421671,"attributes":{"type":"file","size":5,"perm":"rw-r--r--","uid":"0","gid":"0","user_name":"root","group_name":"root","inode":125,"mtime":1575421671,"hash_md5":"7be8ec9774fc128d067782134fbc37eb","hash_sha1":"fb2eae5ad4a1116a536c16147e2cd7ae2c2cceb7","hash_sha256":"ab7d3920a57dca347cc8a62ad2c6c61ff8d0aa6d8e974e6a4803686532e980b7","checksum":"00eaef78d06924374cb291957a1f63e224d76320"},"changed_attributes":["size","mtime","md5","sha1","sha256"],"old_attributes":{"type":"file","size":18,"perm":"rw-r--r--","uid":"0","gid":"0","user_name":"root","group_name":"root","inode":125,"mtime":1575416596,"hash_md5":"a3ee12884966cb2512805d2500361913","hash_sha1":"e6e8a61093715af1e4f2a3c0618ce014f0d94fde","hash_sha256":"79abb1429c39589bb7a923abe0fe076268f38d3bffb40909490b530f109de85a","checksum":"a02381378af3739e81bea813c1ff6e3d0027498d"}}} ` **Output message**: `Invalid 'type' value 'incorrect_value' in JSON payload.` ## The JSON in a DB sync message cannot be parsed **Input message**: `5:[001] (vm-test-agent) 192.168.57.2->syscheck:{{"component":"syscheck","type":"integrity_check_global","data":{"id": 1575421330,"begin":"/home/test/file","end":"/home/test/file2","checksum":"6bdaf5656029544cf0d08e7c4f4feceb0c45853c"}} ` **Output message**: `dbsync: Cannot parse JSON: %s", lf->log` ## The item `component` cannot be parsed as a string in a DB sync message **Input message**: `5:[001] (vm-test-agent) 192.168.57.2->syscheck:{"type":"integrity_check_global","data":{"id": 1575421330,"begin":"/home/test/file","end":"/home/test/file2","checksum":"6bdaf5656029544cf0d08e7c4f4feceb0c45853c"}}` **Output message**: `dbsync: Corrupt message: cannot get component member.` ## The item `type` cannot be parsed as a string in a DB sync message **Input message**: `5:[001] (vm-test-agent) 192.168.57.2->syscheck:{"component":"syscheck","data":{"id": 1575421330,"begin":"/home/test/file","end":"/home/test/file2","checksum":"6bdaf5656029544cf0d08e7c4f4feceb0c45853c"}} ` **Output message**: `dbsync: Corrupt message: cannot get type member.` ## No `data` field in a DB sync message **Input message**: `5:[001] (vm-test-agent) 192.168.57.2->syscheck:{"component":"syscheck","type":"integrity_check_global","":{"id": 1575421330,"begin":"/home/test/file","end":"/home/test/file2","checksum":"6bdaf5656029544cf0d08e7c4f4feceb0c45853c"}} ` **Output message**: `dbsync: Corrupt message: cannot get data member.`
index: 1.0
text_combine:
FIM v2.0: Analysisd Integration tests: Error messages - ## Description This issue covers the integration test for bad formated messages handling by analysisd. We will treat analysisd as a black box that receives integrity events by its input Unix socket, checking that the correct output is forwarded to the desired socket (simulating Wazuh DB). Twelve use cases have been defined to check that the FIM event messages are handled properly. These cases should be implemented in the same test. - [ ] No `timestamp` in a FIM scan message. - [ ] No `type` in a FIM message - [ ] Empty `type` in an event message. - [ ] Incorrect `type` in an event message. - [ ] The JSON in a DB sync message cannot be parsed. - [ ] The item `component` cannot be parsed as a string in a DB sync message. - [ ] The item `type` cannot be parsed as a string in a DB sync message. - [ ] The item `type` is unknown in a DB sync message. - [ ] No `data` field in a DB sync message. **Input location** The input location for all checks is the analysisd socket: `/var/ossec/queue/ossec/queue` **Output location** The output location for all checks is `ossec.log` file: `/var/ossec/logs/ossec.log` ## No `timestamp` in a FIM scan message **Input message**: `8:[001] (vm-ubuntu-agent) 192.168.57.2->syscheck:{"type":"scan_end","data":{}}` **Output message**: `No such member \"timestamp\" in FIM scan info event.` ## No `type` in a FIM message **Input message**: `8:[001] (vm-ubuntu-agent) 192.168.57.2->syscheck:{"data":{"timestamp":1575442712}}` **Output message**: `Invalid FIM event` ## Empty `type` in an event message **Input message**: `8:[001] (vm-ubuntu-agent) 192.168.57.2->syscheck:{"type":"event","data":{"path":"/home/test/file","mode":"real-time","type":"NULL","timestamp":1575421671,"attributes":{"type":"file","size":5,"perm":"rw-r--r--","uid":"0","gid":"0","user_name":"root","group_name":"root","inode":125,"mtime":1575421671,"hash_md5":"7be8ec9774fc128d067782134fbc37eb","hash_sha1":"fb2eae5ad4a1116a536c16147e2cd7ae2c2cceb7","hash_sha256":"ab7d3920a57dca347cc8a62ad2c6c61ff8d0aa6d8e974e6a4803686532e980b7","checksum":"00eaef78d06924374cb291957a1f63e224d76320"},"changed_attributes":["size","mtime","md5","sha1","sha256"],"old_attributes":{"type":"file","size":18,"perm":"rw-r--r--","uid":"0","gid":"0","user_name":"root","group_name":"root","inode":125,"mtime":1575416596,"hash_md5":"a3ee12884966cb2512805d2500361913","hash_sha1":"e6e8a61093715af1e4f2a3c0618ce014f0d94fde","hash_sha256":"79abb1429c39589bb7a923abe0fe076268f38d3bffb40909490b530f109de85a","checksum":"a02381378af3739e81bea813c1ff6e3d0027498d"}}} ` **Output message**: `No member 'type' in Syscheck JSON payload` ## Incorrect event `type` in an event message **Input message**: `8:[001] (vm-ubuntu-agent) 
192.168.57.2->syscheck:{"type":"event","data":{"path":"/home/test/file","mode":"real-time","type":"other","timestamp":1575421671,"attributes":{"type":"file","size":5,"perm":"rw-r--r--","uid":"0","gid":"0","user_name":"root","group_name":"root","inode":125,"mtime":1575421671,"hash_md5":"7be8ec9774fc128d067782134fbc37eb","hash_sha1":"fb2eae5ad4a1116a536c16147e2cd7ae2c2cceb7","hash_sha256":"ab7d3920a57dca347cc8a62ad2c6c61ff8d0aa6d8e974e6a4803686532e980b7","checksum":"00eaef78d06924374cb291957a1f63e224d76320"},"changed_attributes":["size","mtime","md5","sha1","sha256"],"old_attributes":{"type":"file","size":18,"perm":"rw-r--r--","uid":"0","gid":"0","user_name":"root","group_name":"root","inode":125,"mtime":1575416596,"hash_md5":"a3ee12884966cb2512805d2500361913","hash_sha1":"e6e8a61093715af1e4f2a3c0618ce014f0d94fde","hash_sha256":"79abb1429c39589bb7a923abe0fe076268f38d3bffb40909490b530f109de85a","checksum":"a02381378af3739e81bea813c1ff6e3d0027498d"}}} ` **Output message**: `Invalid 'type' value 'incorrect_value' in JSON payload.` ## The JSON in a DB sync message cannot be parsed **Input message**: `5:[001] (vm-test-agent) 192.168.57.2->syscheck:{{"component":"syscheck","type":"integrity_check_global","data":{"id": 1575421330,"begin":"/home/test/file","end":"/home/test/file2","checksum":"6bdaf5656029544cf0d08e7c4f4feceb0c45853c"}} ` **Output message**: `dbsync: Cannot parse JSON: %s", lf->log` ## The item `component` cannot be parsed as a string in a DB sync message **Input message**: `5:[001] (vm-test-agent) 192.168.57.2->syscheck:{"type":"integrity_check_global","data":{"id": 1575421330,"begin":"/home/test/file","end":"/home/test/file2","checksum":"6bdaf5656029544cf0d08e7c4f4feceb0c45853c"}}` **Output message**: `dbsync: Corrupt message: cannot get component member.` ## The item `type` cannot be parsed as a string in a DB sync message **Input message**: `5:[001] (vm-test-agent) 192.168.57.2->syscheck:{"component":"syscheck","data":{"id": 1575421330,"begin":"/home/test/file","end":"/home/test/file2","checksum":"6bdaf5656029544cf0d08e7c4f4feceb0c45853c"}} ` **Output message**: `dbsync: Corrupt message: cannot get type member.` ## No `data` field in a DB sync message **Input message**: `5:[001] (vm-test-agent) 192.168.57.2->syscheck:{"component":"syscheck","type":"integrity_check_global","":{"id": 1575421330,"begin":"/home/test/file","end":"/home/test/file2","checksum":"6bdaf5656029544cf0d08e7c4f4feceb0c45853c"}} ` **Output message**: `dbsync: Corrupt message: cannot get data member.`
automation
fim analysisd integration tests error messages description this issue covers the integration test for bad formated messages handling by analysisd we will treat analysisd as a black box that receives integrity events by its input unix socket checking that the correct output is forwarded to the desired socket simulating wazuh db twelve use cases have been defined to check that the fim event messages are handled properly these cases should be implemented in the same test no timestamp in a fim scan message no type in a fim message empty type in an event message incorrect type in an event message the json in a db sync message cannot be parsed the item component cannot be parsed as a string in a db sync message the item type cannot be parsed as a string in a db sync message the item type is unknown in a db sync message no data field in a db sync message input location the input location for all checks is the analysisd socket var ossec queue ossec queue output location the output location for all checks is ossec log file var ossec logs ossec log no timestamp in a fim scan message input message vm ubuntu agent syscheck type scan end data output message no such member timestamp in fim scan info event no type in a fim message input message vm ubuntu agent syscheck data timestamp output message invalid fim event empty type in an event message input message vm ubuntu agent syscheck type event data path home test file mode real time type null timestamp attributes type file size perm rw r r uid gid user name root group name root inode mtime hash hash hash checksum changed attributes old attributes type file size perm rw r r uid gid user name root group name root inode mtime hash hash hash checksum output message no member type in syscheck json payload incorrect event type in an event message input message vm ubuntu agent syscheck type event data path home test file mode real time type other timestamp attributes type file size perm rw r r uid gid user name root group name root inode mtime hash hash hash checksum changed attributes old attributes type file size perm rw r r uid gid user name root group name root inode mtime hash hash hash checksum output message invalid type value incorrect value in json payload the json in a db sync message cannot be parsed input message vm test agent syscheck component syscheck type integrity check global data id begin home test file end home test checksum output message dbsync cannot parse json s lf log the item component cannot be parsed as a string in a db sync message input message vm test agent syscheck type integrity check global data id begin home test file end home test checksum output message dbsync corrupt message cannot get component member the item type cannot be parsed as a string in a db sync message input message vm test agent syscheck component syscheck data id begin home test file end home test checksum output message dbsync corrupt message cannot get type member no data field in a db sync message input message vm test agent syscheck component syscheck type integrity check global id begin home test file end home test checksum output message dbsync corrupt message cannot get data member
1
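The integration-test record above gives everything needed to drive a single case end to end: a raw event string, the analysisd input socket `/var/ossec/queue/ossec/queue`, and the error line expected in `/var/ossec/logs/ossec.log`. Below is a minimal sketch of one such check in Python, assuming the queue is a Unix datagram socket and that polling the log file is an acceptable way to observe the output; neither detail is stated in the record itself.

```python
import socket
import time

ANALYSISD_SOCKET = "/var/ossec/queue/ossec/queue"   # input location named in the test case
OSSEC_LOG = "/var/ossec/logs/ossec.log"             # output location named in the test case


def send_event(payload: str) -> None:
    """Send one raw event string to the analysisd Unix socket (datagram socket assumed)."""
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
    try:
        sock.sendto(payload.encode(), ANALYSISD_SOCKET)
    finally:
        sock.close()


def wait_for_log(expected: str, timeout: float = 10.0) -> bool:
    """Poll ossec.log until the expected error message shows up or the timeout expires."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        with open(OSSEC_LOG, errors="replace") as handle:
            if expected in handle.read():
                return True
        time.sleep(0.5)
    return False


if __name__ == "__main__":
    # "No timestamp in a FIM scan message" case from the record above.
    send_event('8:[001] (vm-ubuntu-agent) 192.168.57.2->syscheck:{"type":"scan_end","data":{}}')
    assert wait_for_log('No such member "timestamp" in FIM scan info event.')
```

The same two helpers cover the remaining cases by swapping in the other input strings and expected messages listed in the record.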
594,990
18,058,638,619
IssuesEvent
2021-09-20 11:30:17
ita-social-projects/horondi_client_fe
https://api.github.com/repos/ita-social-projects/horondi_client_fe
closed
[News] 403 Forbidden error message shown
bug priority: medium cline part
**Environment:** macOS Big Sur 11.4, Firefox 89.0 **Reproducible:** always **Build found:** 44d1c1b **Pre-conditions:** 1. Go to https://horondi-front-staging.azurewebsites.net/ as a user 2. Open the console **Description** **Steps to reproduce:** 1. Go to the News page 2. Pay attention to error message in the console **Actual result:** '403 Forbidden' error message shown on the News page. **Expected result:** The user should get all the information from the News page. <img width="1440" alt="403 Forbidden" src="https://user-images.githubusercontent.com/62054774/121446562-716d4600-c99c-11eb-881f-90a3046e251f.png"> [User story] #50 Ad-hoc
1.0
[News] 403 Forbidden error message shown - **Environment:** macOS Big Sur 11.4, Firefox 89.0 **Reproducible:** always **Build found:** 44d1c1b **Pre-conditions:** 1. Go to https://horondi-front-staging.azurewebsites.net/ as a user 2. Open the console **Description** **Steps to reproduce:** 1. Go to the News page 2. Pay attention to error message in the console **Actual result:** '403 Forbidden' error message shown on the News page. **Expected result:** The user should get all the information from the News page. <img width="1440" alt="403 Forbidden" src="https://user-images.githubusercontent.com/62054774/121446562-716d4600-c99c-11eb-881f-90a3046e251f.png"> [User story] #50 Ad-hoc
non_automation
forbidden error message shown environment macos big sur firefox reproducible always build found pre conditions go to as a user open the console description steps to reproduce go to the news page pay attention to error message in the console actual result forbidden error message shown on the news page expected result the user should get all the information from the news page img width alt forbidden src ad hoc
0
133,527
12,543,554,587
IssuesEvent
2020-06-05 15:44:38
databrokerglobal/dxc
https://api.github.com/repos/databrokerglobal/dxc
closed
Make demo environment on Heroku for JTech
Priority: Medium documentation enhancement
1. Make a separate branch where we remove the local directory checking for demo purposes 2. Deploy on Heroku
1.0
Make demo environment on Heroku for JTech - 1. Make a separate branch where we remove the local directory checking for demo purposes 2. Deploy on Heroku
non_automation
make demo environment on heroku for jtech make a separate branch where we remove the local directory checking for demo purposes deploy on heroku
0
36,516
7,976,290,756
IssuesEvent
2018-07-17 12:13:23
primefaces/primefaces
https://api.github.com/repos/primefaces/primefaces
closed
FileUpload: auto upload bug
defect
If user selects many files to upload and during upload process presses `x` to remove file while it's not uploaded yet - any files after it are not uploaded and javascript errors occurre: ``` TypeError: g.row is null fileupload.js.xhtml:1:24907 TypeError: a is null fileupload.js.xhtml:1:29116 ``` After this error upload doesn't work anymore. I noticed, then auto upload is off - `x` buttons are disabled when upload begins, but not in auto upload mode. `auto="false"`: ![auto-false](https://cloud.githubusercontent.com/assets/19680461/16116809/a1b290c8-33d7-11e6-948c-4e8a7957e451.JPG) `auto="true"`: ![auto-true](https://cloud.githubusercontent.com/assets/19680461/16116810/a1ddd2d8-33d7-11e6-82fb-fe726eab9de1.JPG)
1.0
FileUpload: auto upload bug - If user selects many files to upload and during upload process presses `x` to remove file while it's not uploaded yet - any files after it are not uploaded and javascript errors occurre: ``` TypeError: g.row is null fileupload.js.xhtml:1:24907 TypeError: a is null fileupload.js.xhtml:1:29116 ``` After this error upload doesn't work anymore. I noticed, then auto upload is off - `x` buttons are disabled when upload begins, but not in auto upload mode. `auto="false"`: ![auto-false](https://cloud.githubusercontent.com/assets/19680461/16116809/a1b290c8-33d7-11e6-948c-4e8a7957e451.JPG) `auto="true"`: ![auto-true](https://cloud.githubusercontent.com/assets/19680461/16116810/a1ddd2d8-33d7-11e6-82fb-fe726eab9de1.JPG)
non_automation
fileupload auto upload bug if user selects many files to upload and during upload process presses x to remove file while it s not uploaded yet any files after it are not uploaded and javascript errors occurre typeerror g row is null fileupload js xhtml typeerror a is null fileupload js xhtml after this error upload doesn t work anymore i noticed then auto upload is off x buttons are disabled when upload begins but not in auto upload mode auto false auto true
0
3,092
13,063,544,294
IssuesEvent
2020-07-30 16:41:29
elastic/apm-integration-testing
https://api.github.com/repos/elastic/apm-integration-testing
closed
--no-XXXXbeat options does not disable beats when you use it with --all
[zube]: Backlog automation subtask
If you run the following command the docker-compose file will have beats for running and it should not `scripts/compose.py start master --no-kibana --no-heartbeat --no-metricbeat --no-filebeat --all` related to https://github.com/elastic/apm-integration-testing/pull/476
1.0
--no-XXXXbeat options does not disable beats when you use it with --all - If you run the following command the docker-compose file will have beats for running and it should not `scripts/compose.py start master --no-kibana --no-heartbeat --no-metricbeat --no-filebeat --all` related to https://github.com/elastic/apm-integration-testing/pull/476
automation
no xxxxbeat options does not disable beats when you use it with all if you run the following command the docker compose file will have beats for running and it should not scripts compose py start master no kibana no heartbeat no metricbeat no filebeat all related to
1
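The record above boils down to a property of the generated compose file: services disabled with `--no-...` flags must not appear even when `--all` is passed. A small regression check along those lines is sketched below; the output file name `docker-compose.yml` and the substring matching against service names are assumptions for illustration, not how the project's own tests are written.

```python
import yaml  # PyYAML

DISABLED = {"kibana", "heartbeat", "metricbeat", "filebeat"}  # services passed as --no-... flags


def assert_services_absent(compose_path: str = "docker-compose.yml") -> None:
    """Fail if any service disabled on the command line still shows up in the compose file."""
    with open(compose_path) as handle:
        compose = yaml.safe_load(handle) or {}
    services = set(compose.get("services", {}))
    leaked = {name for name in services if any(flag in name for flag in DISABLED)}
    if leaked:
        raise AssertionError(f"disabled services still present: {sorted(leaked)}")


if __name__ == "__main__":
    assert_services_absent()
```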
72,025
18,975,887,470
IssuesEvent
2021-11-20 01:28:16
orbeon/orbeon-forms
https://api.github.com/repos/orbeon/orbeon-forms
opened
Delete publish form definition improvements
Module: Form Runner Module: Form Builder
Following #3597, some improvements would be welcome: - Admin page: ability to delete all existing data - Form Builder: when publishing a form definition, it would be nice to tell user if there is no published form definition BUT there exists data (if the form definition has been deleted), as that data might be incompatible
1.0
Delete publish form definition improvements - Following #3597, some improvements would be welcome: - Admin page: ability to delete all existing data - Form Builder: when publishing a form definition, it would be nice to tell user if there is no published form definition BUT there exists data (if the form definition has been deleted), as that data might be incompatible
non_automation
delete publish form definition improvements following some improvements would be welcome admin page ability to delete all existing data form builder when publishing a form definition it would be nice to tell user if there is no published form definition but there exists data if the form definition has been deleted as that data might be incompatible
0
324,499
9,904,702,201
IssuesEvent
2019-06-27 09:45:50
kubernetes-sigs/cluster-api-provider-gcp
https://api.github.com/repos/kubernetes-sigs/cluster-api-provider-gcp
closed
[FR] authentication with GCP
lifecycle/rotten priority/important-soon
Currently the authentication is done via cloud service account. Allow authentication similar to that in https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/providers/gce/gce.go#L305
1.0
[FR] authentication with GCP - Currently the authentication is done via cloud service account. Allow authentication similar to that in https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/providers/gce/gce.go#L305
non_automation
authentication with gcp currently the authentication is done via cloud service account allow authentication similar to that in
0
272,326
29,795,008,577
IssuesEvent
2023-06-16 01:03:48
billmcchesney1/hadoop
https://api.github.com/repos/billmcchesney1/hadoop
closed
CVE-2020-11023 (Medium) detected in multiple libraries - autoclosed
Mend: dependency security vulnerability
## CVE-2020-11023 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jquery-3.3.1.js</b>, <b>jquery-3.4.1.min.js</b>, <b>jquery-3.3.1.tgz</b>, <b>jquery-1.8.1.min.js</b></p></summary> <p> <details><summary><b>jquery-3.3.1.js</b></p></summary> <p>JavaScript library for DOM operations</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.js</a></p> <p>Path to dependency file: /hadoop-tools/hadoop-sls/src/main/html/showSimulationTrace.html</p> <p>Path to vulnerable library: /hadoop-tools/hadoop-sls/src/main/html/js/thirdparty/jquery.js,/hadoop-tools/hadoop-sls/src/main/html/js/thirdparty/jquery.js</p> <p> Dependency Hierarchy: - :x: **jquery-3.3.1.js** (Vulnerable Library) </details> <details><summary><b>jquery-3.4.1.min.js</b></p></summary> <p>JavaScript library for DOM operations</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.4.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/3.4.1/jquery.min.js</a></p> <p>Path to vulnerable library: /hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/target/classes/webapps/static/jquery/jquery-3.4.1.min.js,/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/jquery-3.4.1.min.js,/hadoop-hdfs-project/hadoop-hdfs/target/webapps/static/jquery-3.4.1.min.js,/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jquery/jquery-3.4.1.min.js,/hadoop-hdfs-project/hadoop-hdfs/target/test-classes/webapps/static/jquery-3.4.1.min.js</p> <p> Dependency Hierarchy: - :x: **jquery-3.4.1.min.js** (Vulnerable Library) </details> <details><summary><b>jquery-3.3.1.tgz</b></p></summary> <p>JavaScript library for DOM operations</p> <p>Library home page: <a href="https://registry.npmjs.org/jquery/-/jquery-3.3.1.tgz">https://registry.npmjs.org/jquery/-/jquery-3.3.1.tgz</a></p> <p>Path to dependency file: /hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/package.json</p> <p>Path to vulnerable library: /hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/node_modules/jquery/package.json</p> <p> Dependency Hierarchy: - :x: **jquery-3.3.1.tgz** (Vulnerable Library) </details> <details><summary><b>jquery-1.8.1.min.js</b></p></summary> <p>JavaScript library for DOM operations</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js</a></p> <p>Path to dependency file: /hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/redeyed/examples/browser/index.html</p> <p>Path to vulnerable library: /hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/redeyed/examples/browser/index.html,/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/bower/lib/node_modules/redeyed/examples/browser/index.html</p> <p> Dependency Hierarchy: - :x: **jquery-1.8.1.min.js** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/billmcchesney1/hadoop/commit/6dcd8400219941dcbd7fb0f6b980cc2c6a2a6b0a">6dcd8400219941dcbd7fb0f6b980cc2c6a2a6b0a</a></p> <p>Found in base branch: <b>trunk</b></p> </p> </details> <p></p> <details><summary><img 
src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> In jQuery versions greater than or equal to 1.0.3 and before 3.5.0, passing HTML containing <option> elements from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0. <p>Publish Date: 2020-04-29 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-11023>CVE-2020-11023</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/jquery/jquery/security/advisories/GHSA-jpcq-cgw6-v4j6,https://github.com/rails/jquery-rails/blob/master/CHANGELOG.md#440">https://github.com/jquery/jquery/security/advisories/GHSA-jpcq-cgw6-v4j6,https://github.com/rails/jquery-rails/blob/master/CHANGELOG.md#440</a></p> <p>Release Date: 2020-04-29</p> <p>Fix Resolution: 3.5.0</p> </p> </details> <p></p> *** <!-- REMEDIATE-OPEN-PR-START --> - [ ] Check this box to open an automated fix PR <!-- REMEDIATE-OPEN-PR-END -->
True
CVE-2020-11023 (Medium) detected in multiple libraries - autoclosed - ## CVE-2020-11023 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jquery-3.3.1.js</b>, <b>jquery-3.4.1.min.js</b>, <b>jquery-3.3.1.tgz</b>, <b>jquery-1.8.1.min.js</b></p></summary> <p> <details><summary><b>jquery-3.3.1.js</b></p></summary> <p>JavaScript library for DOM operations</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.js</a></p> <p>Path to dependency file: /hadoop-tools/hadoop-sls/src/main/html/showSimulationTrace.html</p> <p>Path to vulnerable library: /hadoop-tools/hadoop-sls/src/main/html/js/thirdparty/jquery.js,/hadoop-tools/hadoop-sls/src/main/html/js/thirdparty/jquery.js</p> <p> Dependency Hierarchy: - :x: **jquery-3.3.1.js** (Vulnerable Library) </details> <details><summary><b>jquery-3.4.1.min.js</b></p></summary> <p>JavaScript library for DOM operations</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.4.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/3.4.1/jquery.min.js</a></p> <p>Path to vulnerable library: /hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/target/classes/webapps/static/jquery/jquery-3.4.1.min.js,/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/jquery-3.4.1.min.js,/hadoop-hdfs-project/hadoop-hdfs/target/webapps/static/jquery-3.4.1.min.js,/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jquery/jquery-3.4.1.min.js,/hadoop-hdfs-project/hadoop-hdfs/target/test-classes/webapps/static/jquery-3.4.1.min.js</p> <p> Dependency Hierarchy: - :x: **jquery-3.4.1.min.js** (Vulnerable Library) </details> <details><summary><b>jquery-3.3.1.tgz</b></p></summary> <p>JavaScript library for DOM operations</p> <p>Library home page: <a href="https://registry.npmjs.org/jquery/-/jquery-3.3.1.tgz">https://registry.npmjs.org/jquery/-/jquery-3.3.1.tgz</a></p> <p>Path to dependency file: /hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/package.json</p> <p>Path to vulnerable library: /hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/node_modules/jquery/package.json</p> <p> Dependency Hierarchy: - :x: **jquery-3.3.1.tgz** (Vulnerable Library) </details> <details><summary><b>jquery-1.8.1.min.js</b></p></summary> <p>JavaScript library for DOM operations</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js</a></p> <p>Path to dependency file: /hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/redeyed/examples/browser/index.html</p> <p>Path to vulnerable library: /hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/redeyed/examples/browser/index.html,/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/bower/lib/node_modules/redeyed/examples/browser/index.html</p> <p> Dependency Hierarchy: - :x: **jquery-1.8.1.min.js** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/billmcchesney1/hadoop/commit/6dcd8400219941dcbd7fb0f6b980cc2c6a2a6b0a">6dcd8400219941dcbd7fb0f6b980cc2c6a2a6b0a</a></p> <p>Found in base 
branch: <b>trunk</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> In jQuery versions greater than or equal to 1.0.3 and before 3.5.0, passing HTML containing <option> elements from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0. <p>Publish Date: 2020-04-29 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-11023>CVE-2020-11023</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/jquery/jquery/security/advisories/GHSA-jpcq-cgw6-v4j6,https://github.com/rails/jquery-rails/blob/master/CHANGELOG.md#440">https://github.com/jquery/jquery/security/advisories/GHSA-jpcq-cgw6-v4j6,https://github.com/rails/jquery-rails/blob/master/CHANGELOG.md#440</a></p> <p>Release Date: 2020-04-29</p> <p>Fix Resolution: 3.5.0</p> </p> </details> <p></p> *** <!-- REMEDIATE-OPEN-PR-START --> - [ ] Check this box to open an automated fix PR <!-- REMEDIATE-OPEN-PR-END -->
non_automation
cve medium detected in multiple libraries autoclosed cve medium severity vulnerability vulnerable libraries jquery js jquery min js jquery tgz jquery min js jquery js javascript library for dom operations library home page a href path to dependency file hadoop tools hadoop sls src main html showsimulationtrace html path to vulnerable library hadoop tools hadoop sls src main html js thirdparty jquery js hadoop tools hadoop sls src main html js thirdparty jquery js dependency hierarchy x jquery js vulnerable library jquery min js javascript library for dom operations library home page a href path to vulnerable library hadoop yarn project hadoop yarn hadoop yarn common target classes webapps static jquery jquery min js hadoop hdfs project hadoop hdfs src main webapps static jquery min js hadoop hdfs project hadoop hdfs target webapps static jquery min js hadoop yarn project hadoop yarn hadoop yarn common src main resources webapps static jquery jquery min js hadoop hdfs project hadoop hdfs target test classes webapps static jquery min js dependency hierarchy x jquery min js vulnerable library jquery tgz javascript library for dom operations library home page a href path to dependency file hadoop yarn project hadoop yarn hadoop yarn applications hadoop yarn applications catalog hadoop yarn applications catalog webapp package json path to vulnerable library hadoop yarn project hadoop yarn hadoop yarn applications hadoop yarn applications catalog hadoop yarn applications catalog webapp node modules jquery package json dependency hierarchy x jquery tgz vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file hadoop yarn project hadoop yarn hadoop yarn ui src main webapp node modules redeyed examples browser index html path to vulnerable library hadoop yarn project hadoop yarn hadoop yarn ui src main webapp node modules redeyed examples browser index html hadoop yarn project hadoop yarn hadoop yarn ui src main webapp node modules bower lib node modules redeyed examples browser index html dependency hierarchy x jquery min js vulnerable library found in head commit a href found in base branch trunk vulnerability details in jquery versions greater than or equal to and before passing html containing elements from untrusted sources even after sanitizing it to one of jquery s dom manipulation methods i e html append and others may execute untrusted code this problem is patched in jquery publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution check this box to open an automated fix pr
0
8,713
27,172,157,287
IssuesEvent
2023-02-17 20:30:30
OneDrive/onedrive-api-docs
https://api.github.com/repos/OneDrive/onedrive-api-docs
closed
Permission Denied when using Graph API service to call Sharepoint with an Azure AD Guest account
type:bug status:investigating automation:Closed
My app is using Azure AD as an entry point to access both Sharepoint and website. Good Case Scenario: I login as an AD user, the app runs as it should. I can use both Graph Api and PNP SP to retrieve data from Sharepoint. Issue: If an external user (i.e. gmail, yahoo accounts) is used, the Graph Api throws permission denied error. I added the account on both the Azure AD and added it to the Sharepoint users. If I login to Sharepoint manually as an external user, the site will run perfectly fine. My guess is that the token that Graph API uses does not have the correct permissions to consume Sharepoint services. Can you please help? #### Category - [ ] Question - [ ] Documentation issue - [x] Bug
1.0
Permission Denied when using Graph API service to call Sharepoint with an Azure AD Guest account - My app is using Azure AD as an entry point to access both Sharepoint and website. Good Case Scenario: I login as an AD user, the app runs as it should. I can use both Graph Api and PNP SP to retrieve data from Sharepoint. Issue: If an external user (i.e. gmail, yahoo accounts) is used, the Graph Api throws permission denied error. I added the account on both the Azure AD and added it to the Sharepoint users. If I login to Sharepoint manually as an external user, the site will run perfectly fine. My guess is that the token that Graph API uses does not have the correct permissions to consume Sharepoint services. Can you please help? #### Category - [ ] Question - [ ] Documentation issue - [x] Bug
automation
permission denied when using graph api service to call sharepoint with an azure ad guest account my app is using azure ad as an entry point to access both sharepoint and website good case scenario i login as an ad user the app runs as it should i can use both graph api and pnp sp to retrieve data from sharepoint issue if an external user i e gmail yahoo accounts is used the graph api throws permission denied error i added the account on both the azure ad and added it to the sharepoint users if i login to sharepoint manually as an external user the site will run perfectly fine my guess is that the token that graph api uses does not have the correct permissions to consume sharepoint services can you please help category question documentation issue bug
1
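For a permission failure like the one described above, a useful first step is to look at the raw Graph response for the token being used, since a 403 body normally names the missing permission or the blocked resource. The sketch below uses the MSAL Python client-credentials flow purely as a diagnostic stand-in; the issue itself concerns a delegated token for an Azure AD guest user, and the tenant, client id, secret and endpoint here are placeholders, not values from the report.

```python
import msal
import requests

TENANT_ID = "<tenant-id>"        # placeholders for illustration only
CLIENT_ID = "<app-client-id>"
CLIENT_SECRET = "<app-secret>"

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)
result = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])
if "access_token" not in result:
    raise RuntimeError(result.get("error_description", "token request failed"))

# Inspect the raw Graph response; a 403 body usually names the missing permission.
resp = requests.get(
    "https://graph.microsoft.com/v1.0/sites/root",
    headers={"Authorization": f"Bearer {result['access_token']}"},
)
print(resp.status_code, resp.text[:500])
```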
1,799
10,789,898,892
IssuesEvent
2019-11-05 12:59:12
spacemeshos/go-spacemesh
https://api.github.com/repos/spacemeshos/go-spacemesh
closed
persist database to volume storage in k8s
Recovery & Shutdown TN-1.0 automation
# Overview / Motivation Our pods in k8s running spacemesh allocate files for the database, this database keeps growing (it is the mesh), k8s treats this storage as part of the pod memory, means if we have limits on memory we'll eventually reach them no matter what. we need to attach a persistent storage volume to the pod and save the database there. # The Task TODO: Clearly describe the issue requirements here... # Implementation Notes TODO: Add links to relevant resources, specs, related issues, etc... # Contribution Guidelines Important: Issue assignment to developers will be by the order of their application and proficiency level according to the tasks complexity. We will not assign tasks to developers who have'nt introduced themselves on our Gitter [dev channel](https://gitter.im/spacemesh-os/Lobby) 1. Introduce yourself on go-spacemesh [dev chat channel](https://gitter.im/spacemesh-os/Lobby) - ask our team any question you may have about this task 2. Fork branch `develop` to your own repo and work in your repo 3. You must document all methods, enums and types with [godoc comments](https://blog.golang.org/godoc-documenting-go-code) 4. You must write go unit tests for all types and methods when submitting a component, and integration tests if you submit a feature 5. When ready for code review, submit a PR from your repo back to branch `develop` 6. Attach relevant issue to PR
1.0
persist database to volume storage in k8s - # Overview / Motivation Our pods in k8s running spacemesh allocate files for the database, this database keeps growing (it is the mesh), k8s treats this storage as part of the pod memory, means if we have limits on memory we'll eventually reach them no matter what. we need to attach a persistent storage volume to the pod and save the database there. # The Task TODO: Clearly describe the issue requirements here... # Implementation Notes TODO: Add links to relevant resources, specs, related issues, etc... # Contribution Guidelines Important: Issue assignment to developers will be by the order of their application and proficiency level according to the tasks complexity. We will not assign tasks to developers who have'nt introduced themselves on our Gitter [dev channel](https://gitter.im/spacemesh-os/Lobby) 1. Introduce yourself on go-spacemesh [dev chat channel](https://gitter.im/spacemesh-os/Lobby) - ask our team any question you may have about this task 2. Fork branch `develop` to your own repo and work in your repo 3. You must document all methods, enums and types with [godoc comments](https://blog.golang.org/godoc-documenting-go-code) 4. You must write go unit tests for all types and methods when submitting a component, and integration tests if you submit a feature 5. When ready for code review, submit a PR from your repo back to branch `develop` 6. Attach relevant issue to PR
automation
persist database to volume storage in overview motivation our pods in running spacemesh allocate files for the database this database keeps growing it is the mesh treats this storage as part of the pod memory means if we have limits on memory we ll eventually reach them no matter what we need to attach a persistent storage volume to the pod and save the database there the task todo clearly describe the issue requirements here implementation notes todo add links to relevant resources specs related issues etc contribution guidelines important issue assignment to developers will be by the order of their application and proficiency level according to the tasks complexity we will not assign tasks to developers who have nt introduced themselves on our gitter introduce yourself on go spacemesh ask our team any question you may have about this task fork branch develop to your own repo and work in your repo you must document all methods enums and types with you must write go unit tests for all types and methods when submitting a component and integration tests if you submit a feature when ready for code review submit a pr from your repo back to branch develop attach relevant issue to pr
1
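The record above is essentially a configuration task: give the pod a persistent volume and keep the growing mesh database on it, instead of in pod-local storage that counts against the pod's limits. A rough sketch of creating such a claim with the official Kubernetes Python client is below; the claim name, size, namespace and mount point are illustrative guesses, and in practice this would normally be expressed directly in the pod or StatefulSet manifest rather than created imperatively.

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when run inside the cluster
core = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="spacemesh-db"),  # illustrative name
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),  # size is a guess
    ),
)
core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)

# The pod spec then references the claim via volumes[].persistentVolumeClaim.claimName
# and mounts it at the database directory with a matching volumeMounts[] entry.
```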
5,422
19,564,591,404
IssuesEvent
2022-01-03 21:32:53
mozilla-mobile/fenix
https://api.github.com/repos/mozilla-mobile/fenix
closed
Add hand curated parameter files for testing taskgraph changes locally
eng:automation needs:triage
This will help folks who are making changes to taskgraph test that they aren't making unexpected changes to other contexts (like a release graph). These files currently live here: https://hg.mozilla.org/build/braindump/file/tip/taskcluster/taskgraph-diff/params-fenix But having them in the actual repo is much more convenient. The standard spot we're placing them is in `taskcluster/test/params`.
1.0
Add hand curated parameter files for testing taskgraph changes locally - This will help folks who are making changes to taskgraph test that they aren't making unexpected changes to other contexts (like a release graph). These files currently live here: https://hg.mozilla.org/build/braindump/file/tip/taskcluster/taskgraph-diff/params-fenix But having them in the actual repo is much more convenient. The standard spot we're placing them is in `taskcluster/test/params`.
automation
add hand curated parameter files for testing taskgraph changes locally this will help folks who are making changes to taskgraph test that they aren t making unexpected changes to other contexts like a release graph these files currently live here but having them in the actual repo is much more convenient the standard spot we re placing them is in taskcluster test params
1
28,951
2,712,595,355
IssuesEvent
2015-04-09 14:40:31
HeinrichReimer/material-drawer
https://api.github.com/repos/HeinrichReimer/material-drawer
closed
CloseDrawer Lag
bug low priority question
Hey, first of all thanks for this great library, I just have 2 small problems: - I start new activites in the OnItemClickListener for each item and want to close the drawer beforehand. Unfortunately the animation isnt finished when the new activity is started and it get stuck for a brief moment. Is it possible to "wait" for the animation to finish? I'm not sure if im doing something wrong, here is the code for one item: ```java drawer.addFixedItem( new DrawerItem() .setImage(getResources().getDrawable(R.drawable.ic_format_line_spacing_grey600_48dp)) .setTextPrimary(getString(R.string.drawer_sixth_item)) .setTextSecondary(getString(R.string.drawer_sixth_description)) .setOnItemClickListener(new DrawerItem.OnItemClickListener() { @Override public void onClick(DrawerItem drawerItem, int i, int position) { drawerLayout.closeDrawer(drawer); intent = new Intent(getApplicationContext(), Swipe.class); intent.putExtra("toGo", 0); startActivity(intent); } }) ); ``` What is a good practice to get the drawer in every fragment or activity? Right now I created a base class and let every activity extend it, is this a good idea? - Debug Log is getting spammed with methodcalls: ``` 03-02 15:56:53.298 21730-21730/com.example D/DrawerView﹕ DrawerView() 03-02 15:56:53.298 21730-21730/com.example D/DrawerView﹕ init() 03-02 15:56:53.308 21730-21730/com.example D/DrawerView﹕ findViews() 03-02 15:56:53.308 21730-21730/com.example D/DrawerView﹕ updateProfile() 03-02 15:56:53.308 21730-21730/com.example D/DrawerView﹕ updateList() 03-02 15:56:53.308 21730-21730/com.example D/DrawerView﹕ updateListSpacing() 03-02 15:56:53.308 21730-21730/com.example D/DrawerView﹕ updateFixedList() 03-02 15:56:53.308 21730-21730/com.example D/DrawerView﹕ updateListSpacing() 03-02 15:56:53.318 21730-21730/com.example D/DrawerView﹕ updateProfile() 03-02 15:56:53.318 21730-21730/com.example D/DrawerView﹕ updateFixedList() 03-02 15:56:53.318 21730-21730/com.example D/DrawerView﹕ updateListSpacing() 03-02 15:56:53.318 21730-21730/com.example D/DrawerView﹕ updateList() 03-02 15:56:53.328 21730-21730/com.example D/DrawerView﹕ updateListSpacing() 03-02 15:56:53.358 21730-21730/com.example D/DrawerView﹕ updateFixedList() 03-02 15:56:53.358 21730-21730/com.example D/DrawerView﹕ updateListSpacing() 03-02 15:56:53.378 21730-21730/com.example D/DrawerView﹕ updateList() 03-02 15:56:53.378 21730-21730/com.example D/DrawerView﹕ updateListSpacing() 03-02 15:56:53.418 21730-21730/com.example D/DrawerView﹕ updateFixedList() 03-02 15:56:53.418 21730-21730/com.example D/DrawerView﹕ updateListSpacing() 03-02 15:56:53.438 21730-21730/com.example D/DrawerView﹕ updateList() 03-02 15:56:53.438 21730-21730/com.example D/DrawerView﹕ updateListSpacing() 03-02 15:56:53.498 21730-21730/com.example D/DrawerView﹕ updateFixedList() 03-02 15:56:53.498 21730-21730/com.example D/DrawerView﹕ updateListSpacing() 03-02 15:56:53.568 21730-21730/com.example D/DrawerView﹕ updateFixedList() 03-02 15:56:53.568 21730-21730/com.example D/DrawerView﹕ updateListSpacing() 03-02 15:56:53.628 21730-21730/com.example D/DrawerView﹕ updateFixedList() 03-02 15:56:53.628 21730-21730/com.example D/DrawerView﹕ updateListSpacing() ``` Thanks for any help!
1.0
CloseDrawer Lag - Hey, first of all thanks for this great library, I just have 2 small problems: - I start new activites in the OnItemClickListener for each item and want to close the drawer beforehand. Unfortunately the animation isnt finished when the new activity is started and it get stuck for a brief moment. Is it possible to "wait" for the animation to finish? I'm not sure if im doing something wrong, here is the code for one item: ```java drawer.addFixedItem( new DrawerItem() .setImage(getResources().getDrawable(R.drawable.ic_format_line_spacing_grey600_48dp)) .setTextPrimary(getString(R.string.drawer_sixth_item)) .setTextSecondary(getString(R.string.drawer_sixth_description)) .setOnItemClickListener(new DrawerItem.OnItemClickListener() { @Override public void onClick(DrawerItem drawerItem, int i, int position) { drawerLayout.closeDrawer(drawer); intent = new Intent(getApplicationContext(), Swipe.class); intent.putExtra("toGo", 0); startActivity(intent); } }) ); ``` What is a good practice to get the drawer in every fragment or activity? Right now I created a base class and let every activity extend it, is this a good idea? - Debug Log is getting spammed with methodcalls: ``` 03-02 15:56:53.298 21730-21730/com.example D/DrawerView﹕ DrawerView() 03-02 15:56:53.298 21730-21730/com.example D/DrawerView﹕ init() 03-02 15:56:53.308 21730-21730/com.example D/DrawerView﹕ findViews() 03-02 15:56:53.308 21730-21730/com.example D/DrawerView﹕ updateProfile() 03-02 15:56:53.308 21730-21730/com.example D/DrawerView﹕ updateList() 03-02 15:56:53.308 21730-21730/com.example D/DrawerView﹕ updateListSpacing() 03-02 15:56:53.308 21730-21730/com.example D/DrawerView﹕ updateFixedList() 03-02 15:56:53.308 21730-21730/com.example D/DrawerView﹕ updateListSpacing() 03-02 15:56:53.318 21730-21730/com.example D/DrawerView﹕ updateProfile() 03-02 15:56:53.318 21730-21730/com.example D/DrawerView﹕ updateFixedList() 03-02 15:56:53.318 21730-21730/com.example D/DrawerView﹕ updateListSpacing() 03-02 15:56:53.318 21730-21730/com.example D/DrawerView﹕ updateList() 03-02 15:56:53.328 21730-21730/com.example D/DrawerView﹕ updateListSpacing() 03-02 15:56:53.358 21730-21730/com.example D/DrawerView﹕ updateFixedList() 03-02 15:56:53.358 21730-21730/com.example D/DrawerView﹕ updateListSpacing() 03-02 15:56:53.378 21730-21730/com.example D/DrawerView﹕ updateList() 03-02 15:56:53.378 21730-21730/com.example D/DrawerView﹕ updateListSpacing() 03-02 15:56:53.418 21730-21730/com.example D/DrawerView﹕ updateFixedList() 03-02 15:56:53.418 21730-21730/com.example D/DrawerView﹕ updateListSpacing() 03-02 15:56:53.438 21730-21730/com.example D/DrawerView﹕ updateList() 03-02 15:56:53.438 21730-21730/com.example D/DrawerView﹕ updateListSpacing() 03-02 15:56:53.498 21730-21730/com.example D/DrawerView﹕ updateFixedList() 03-02 15:56:53.498 21730-21730/com.example D/DrawerView﹕ updateListSpacing() 03-02 15:56:53.568 21730-21730/com.example D/DrawerView﹕ updateFixedList() 03-02 15:56:53.568 21730-21730/com.example D/DrawerView﹕ updateListSpacing() 03-02 15:56:53.628 21730-21730/com.example D/DrawerView﹕ updateFixedList() 03-02 15:56:53.628 21730-21730/com.example D/DrawerView﹕ updateListSpacing() ``` Thanks for any help!
non_automation
closedrawer lag hey first of all thanks for this great library i just have small problems i start new activites in the onitemclicklistener for each item and want to close the drawer beforehand unfortunately the animation isnt finished when the new activity is started and it get stuck for a brief moment is it possible to wait for the animation to finish i m not sure if im doing something wrong here is the code for one item java drawer addfixeditem new draweritem setimage getresources getdrawable r drawable ic format line spacing settextprimary getstring r string drawer sixth item settextsecondary getstring r string drawer sixth description setonitemclicklistener new draweritem onitemclicklistener override public void onclick draweritem draweritem int i int position drawerlayout closedrawer drawer intent new intent getapplicationcontext swipe class intent putextra togo startactivity intent what is a good practice to get the drawer in every fragment or activity right now i created a base class and let every activity extend it is this a good idea debug log is getting spammed with methodcalls com example d drawerview﹕ drawerview com example d drawerview﹕ init com example d drawerview﹕ findviews com example d drawerview﹕ updateprofile com example d drawerview﹕ updatelist com example d drawerview﹕ updatelistspacing com example d drawerview﹕ updatefixedlist com example d drawerview﹕ updatelistspacing com example d drawerview﹕ updateprofile com example d drawerview﹕ updatefixedlist com example d drawerview﹕ updatelistspacing com example d drawerview﹕ updatelist com example d drawerview﹕ updatelistspacing com example d drawerview﹕ updatefixedlist com example d drawerview﹕ updatelistspacing com example d drawerview﹕ updatelist com example d drawerview﹕ updatelistspacing com example d drawerview﹕ updatefixedlist com example d drawerview﹕ updatelistspacing com example d drawerview﹕ updatelist com example d drawerview﹕ updatelistspacing com example d drawerview﹕ updatefixedlist com example d drawerview﹕ updatelistspacing com example d drawerview﹕ updatefixedlist com example d drawerview﹕ updatelistspacing com example d drawerview﹕ updatefixedlist com example d drawerview﹕ updatelistspacing thanks for any help
0
2,034
11,296,524,170
IssuesEvent
2020-01-17 02:11:01
StoneCypher/fsl
https://api.github.com/repos/StoneCypher/fsl
opened
Add stale watchdog
Automation Chore Cleanup Research material Tooling needed
https://github.com/actions/stale Figure out how to add this to the workflow Also remember to configure close to 3,650,000 days (we only want the label)
1.0
Add stale watchdog - https://github.com/actions/stale Figure out how to add this to the workflow Also remember to configure close to 3,650,000 days (we only want the label)
automation
add stale watchdog figure out how to add this to the workflow also remember to configure close to days we only want the label
1
1,572
10,346,472,562
IssuesEvent
2019-09-04 15:20:12
ASL-LEX/asl-lex
https://api.github.com/repos/ASL-LEX/asl-lex
closed
Make a top level python script to pre-generate edge lists
automation
- [ ] Script should import PyND - [ ] script should run PyND for a configurable list of features Will update the criteria once Naomi sends those
1.0
Make a top level python script to pre-generate edge lists - - [ ] Script should import PyND - [ ] script should run PyND for a configurable list of features Will update the criteria once Naomi sends those
automation
make a top level python script to pre generate edge lists script should import pynd script should run pynd for a configurable list of features will update the criteria once naomi sends those
1
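The record above only pins down two requirements: the script imports PyND and runs it over a configurable list of features, with the exact criteria still to come. The sketch below therefore wires up just the configurable part; `import pynd` and `pynd.run(...)` are hypothetical placeholders standing in for whatever entry point PyND actually exposes.

```python
import argparse

import pynd  # hypothetical import name; adjust to the actual PyND package


def main() -> None:
    parser = argparse.ArgumentParser(description="Pre-generate edge lists with PyND.")
    parser.add_argument("--features", nargs="+", required=True,
                        help="feature names to compute edge lists for")
    parser.add_argument("--out-dir", default="edge_lists",
                        help="directory to write the generated edge lists to")
    args = parser.parse_args()

    for feature in args.features:
        # pynd.run(...) is a stand-in for the real PyND entry point.
        pynd.run(feature=feature, output_dir=args.out_dir)


if __name__ == "__main__":
    main()
```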
4,436
16,542,140,236
IssuesEvent
2021-05-27 18:15:31
rancher-sandbox/cOS-toolkit
https://api.github.com/repos/rancher-sandbox/cOS-toolkit
closed
ci: docker-build test fails for unavailable space
automation bug
Seems we run out of space in GH workers when building from the docker image **cos-toolkit version:** N/A **CPU architecture, OS, and Version:** N/A **Describe the bug** ``` 📦 build/golang-1.16.4+3 🐋 Generating 'package' image from raccos/fedora:builder-b3dec7ea9a4bb0531b15ad057fa45532 as raccos/fedora:41e31bc8233e5547b495f66497770089de58af8579f65cc58917883eaba8a3dc with build steps 🐋 Downloaded image: raccos/fedora:41e31bc8233e5547b495f66497770089de58af8579f65cc58917883eaba8a3dc 📦 build/golang-1.16.4+3 🔨 Generating delta Error: while resolving multi-stage images: failed building multi-stage image: Failed compiling build/golang-1.16.4+3: Error met while generating delta: Could not generate changes from layers: Error met while unpacking dst image raccos/fedora:41e31bc8233e5547b495f66497770089de58af8579f65cc58917883eaba8a3dc: failed while extracting rootfs for raccos/fedora:41e31bc8233e5547b495f66497770089de58af8579f65cc58917883eaba8a3dc: Failed exporting image: write /var/tmp/luet/extraction907140095/dst930592905/tmprootfs313814694/.docker_temp_510053849: no space left on device ``` **To Reproduce** <!-- Steps to reproduce the behavior, including the luet command used --> **Expected behavior** Successful build **Logs** https://github.com/rancher-sandbox/cOS-toolkit/runs/2659492726 **Additional context** <!-- Add any other context about the problem here. -->
1.0
ci: docker-build test fails for unavailable space - Seems we run out of space in GH workers when building from the docker image **cos-toolkit version:** N/A **CPU architecture, OS, and Version:** N/A **Describe the bug** ``` 📦 build/golang-1.16.4+3 🐋 Generating 'package' image from raccos/fedora:builder-b3dec7ea9a4bb0531b15ad057fa45532 as raccos/fedora:41e31bc8233e5547b495f66497770089de58af8579f65cc58917883eaba8a3dc with build steps 🐋 Downloaded image: raccos/fedora:41e31bc8233e5547b495f66497770089de58af8579f65cc58917883eaba8a3dc 📦 build/golang-1.16.4+3 🔨 Generating delta Error: while resolving multi-stage images: failed building multi-stage image: Failed compiling build/golang-1.16.4+3: Error met while generating delta: Could not generate changes from layers: Error met while unpacking dst image raccos/fedora:41e31bc8233e5547b495f66497770089de58af8579f65cc58917883eaba8a3dc: failed while extracting rootfs for raccos/fedora:41e31bc8233e5547b495f66497770089de58af8579f65cc58917883eaba8a3dc: Failed exporting image: write /var/tmp/luet/extraction907140095/dst930592905/tmprootfs313814694/.docker_temp_510053849: no space left on device ``` **To Reproduce** <!-- Steps to reproduce the behavior, including the luet command used --> **Expected behavior** Successful build **Logs** https://github.com/rancher-sandbox/cOS-toolkit/runs/2659492726 **Additional context** <!-- Add any other context about the problem here. -->
automation
ci docker build test fails for unavailable space seems we run out of space in gh workers when building from the docker image cos toolkit version n a cpu architecture os and version n a describe the bug 📦 build golang 🐋 generating package image from raccos fedora builder as raccos fedora with build steps 🐋 downloaded image raccos fedora 📦 build golang 🔨 generating delta error while resolving multi stage images failed building multi stage image failed compiling build golang error met while generating delta could not generate changes from layers error met while unpacking dst image raccos fedora failed while extracting rootfs for raccos fedora failed exporting image write var tmp luet docker temp no space left on device to reproduce expected behavior successful build logs additional context
1
30,914
11,860,123,272
IssuesEvent
2020-03-25 14:26:47
BrianMcDonaldWS/genie
https://api.github.com/repos/BrianMcDonaldWS/genie
opened
CVE-2019-0201 (Medium) detected in zookeeper-3.4.12.jar
security vulnerability
## CVE-2019-0201 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>zookeeper-3.4.12.jar</b></p></summary> <p></p> <p>Path to dependency file: /tmp/ws-scm/genie/genie-ui/build.gradle</p> <p>Path to vulnerable library: /root/.gradle/caches/modules-2/files-2.1/org.apache.zookeeper/zookeeper/3.4.12/cc9c95b358202be355af8abddeb6105f089b1a8c/zookeeper-3.4.12.jar,/root/.gradle/caches/modules-2/files-2.1/org.apache.zookeeper/zookeeper/3.4.12/cc9c95b358202be355af8abddeb6105f089b1a8c/zookeeper-3.4.12.jar</p> <p> Dependency Hierarchy: - spring-integration-zookeeper-5.2.2.RELEASE.jar (Root Library) - curator-recipes-4.0.1.jar - curator-framework-4.0.1.jar - curator-client-4.0.1.jar - :x: **zookeeper-3.4.12.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/BrianMcDonaldWS/genie/commit/568866fb6e52bc93c68e71b643c3271128773566">568866fb6e52bc93c68e71b643c3271128773566</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> An issue is present in Apache ZooKeeper 1.0.0 to 3.4.13 and 3.5.0-alpha to 3.5.4-beta. ZooKeeper’s getACL() command doesn’t check any permission when retrieves the ACLs of the requested node and returns all information contained in the ACL Id field as plaintext string. DigestAuthenticationProvider overloads the Id field with the hash value that is used for user authentication. As a consequence, if Digest Authentication is in use, the unsalted hash value will be disclosed by getACL() request for unauthenticated or unprivileged users. <p>Publish Date: 2019-05-23 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-0201>CVE-2019-0201</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://zookeeper.apache.org/security.html">https://zookeeper.apache.org/security.html</a></p> <p>Release Date: 2019-05-23</p> <p>Fix Resolution: 3.4.14, 3.5.5</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.apache.zookeeper","packageName":"zookeeper","packageVersion":"3.4.12","isTransitiveDependency":true,"dependencyTree":"org.springframework.integration:spring-integration-zookeeper:5.2.2.RELEASE;org.apache.curator:curator-recipes:4.0.1;org.apache.curator:curator-framework:4.0.1;org.apache.curator:curator-client:4.0.1;org.apache.zookeeper:zookeeper:3.4.12","isMinimumFixVersionAvailable":true,"minimumFixVersion":"3.4.14, 3.5.5"}],"vulnerabilityIdentifier":"CVE-2019-0201","vulnerabilityDetails":"An issue is present in Apache ZooKeeper 1.0.0 to 3.4.13 and 3.5.0-alpha to 3.5.4-beta. ZooKeeper’s getACL() command doesn’t check any permission when retrieves the ACLs of the requested node and returns all information contained in the ACL Id field as plaintext string. DigestAuthenticationProvider overloads the Id field with the hash value that is used for user authentication. As a consequence, if Digest Authentication is in use, the unsalted hash value will be disclosed by getACL() request for unauthenticated or unprivileged users.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-0201","cvss3Severity":"medium","cvss3Score":"5.9","cvss3Metrics":{"A":"None","AC":"High","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
True
CVE-2019-0201 (Medium) detected in zookeeper-3.4.12.jar - ## CVE-2019-0201 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>zookeeper-3.4.12.jar</b></p></summary> <p></p> <p>Path to dependency file: /tmp/ws-scm/genie/genie-ui/build.gradle</p> <p>Path to vulnerable library: /root/.gradle/caches/modules-2/files-2.1/org.apache.zookeeper/zookeeper/3.4.12/cc9c95b358202be355af8abddeb6105f089b1a8c/zookeeper-3.4.12.jar,/root/.gradle/caches/modules-2/files-2.1/org.apache.zookeeper/zookeeper/3.4.12/cc9c95b358202be355af8abddeb6105f089b1a8c/zookeeper-3.4.12.jar</p> <p> Dependency Hierarchy: - spring-integration-zookeeper-5.2.2.RELEASE.jar (Root Library) - curator-recipes-4.0.1.jar - curator-framework-4.0.1.jar - curator-client-4.0.1.jar - :x: **zookeeper-3.4.12.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/BrianMcDonaldWS/genie/commit/568866fb6e52bc93c68e71b643c3271128773566">568866fb6e52bc93c68e71b643c3271128773566</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> An issue is present in Apache ZooKeeper 1.0.0 to 3.4.13 and 3.5.0-alpha to 3.5.4-beta. ZooKeeper’s getACL() command doesn’t check any permission when retrieves the ACLs of the requested node and returns all information contained in the ACL Id field as plaintext string. DigestAuthenticationProvider overloads the Id field with the hash value that is used for user authentication. As a consequence, if Digest Authentication is in use, the unsalted hash value will be disclosed by getACL() request for unauthenticated or unprivileged users. <p>Publish Date: 2019-05-23 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-0201>CVE-2019-0201</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://zookeeper.apache.org/security.html">https://zookeeper.apache.org/security.html</a></p> <p>Release Date: 2019-05-23</p> <p>Fix Resolution: 3.4.14, 3.5.5</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.apache.zookeeper","packageName":"zookeeper","packageVersion":"3.4.12","isTransitiveDependency":true,"dependencyTree":"org.springframework.integration:spring-integration-zookeeper:5.2.2.RELEASE;org.apache.curator:curator-recipes:4.0.1;org.apache.curator:curator-framework:4.0.1;org.apache.curator:curator-client:4.0.1;org.apache.zookeeper:zookeeper:3.4.12","isMinimumFixVersionAvailable":true,"minimumFixVersion":"3.4.14, 3.5.5"}],"vulnerabilityIdentifier":"CVE-2019-0201","vulnerabilityDetails":"An issue is present in Apache ZooKeeper 1.0.0 to 3.4.13 and 3.5.0-alpha to 3.5.4-beta. ZooKeeper’s getACL() command doesn’t check any permission when retrieves the ACLs of the requested node and returns all information contained in the ACL Id field as plaintext string. DigestAuthenticationProvider overloads the Id field with the hash value that is used for user authentication. As a consequence, if Digest Authentication is in use, the unsalted hash value will be disclosed by getACL() request for unauthenticated or unprivileged users.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-0201","cvss3Severity":"medium","cvss3Score":"5.9","cvss3Metrics":{"A":"None","AC":"High","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
non_automation
cve medium detected in zookeeper jar cve medium severity vulnerability vulnerable library zookeeper jar path to dependency file tmp ws scm genie genie ui build gradle path to vulnerable library root gradle caches modules files org apache zookeeper zookeeper zookeeper jar root gradle caches modules files org apache zookeeper zookeeper zookeeper jar dependency hierarchy spring integration zookeeper release jar root library curator recipes jar curator framework jar curator client jar x zookeeper jar vulnerable library found in head commit a href vulnerability details an issue is present in apache zookeeper to and alpha to beta zookeeper’s getacl command doesn’t check any permission when retrieves the acls of the requested node and returns all information contained in the acl id field as plaintext string digestauthenticationprovider overloads the id field with the hash value that is used for user authentication as a consequence if digest authentication is in use the unsalted hash value will be disclosed by getacl request for unauthenticated or unprivileged users publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails an issue is present in apache zookeeper to and alpha to beta zookeeper’s getacl command doesn’t check any permission when retrieves the acls of the requested node and returns all information contained in the acl id field as plaintext string digestauthenticationprovider overloads the id field with the hash value that is used for user authentication as a consequence if digest authentication is in use the unsalted hash value will be disclosed by getacl request for unauthenticated or unprivileged users vulnerabilityurl
0
5,546
20,031,617,376
IssuesEvent
2022-02-02 07:01:55
MicrosoftDocs/azure-docs
https://api.github.com/repos/MicrosoftDocs/azure-docs
closed
Update of the information
automation/svc triaged cxp doc-enhancement Pri2
I would suggest the following update to the information on this age. 4. In your Log Analytics workspace, select Computer Groups from the left-hand menu. 5. From Computer Groups in the right-hand pane, the Saved groups tab is shown by default. 6. From the table, click the icon Run query to the right of the item MicrosoftDefaultComputerGroup. 7. In the query editor, change from Tables to Functions. Find the Updates_MicrosoftDefaultComputerGroup and click on it and hold the mouse cursor over it which will show more options, click on the load the function code. 8. The review the code and find the UUID for the machine. Remove the UUID for the machine and repeat the steps for any other machines you want to remove. #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 9a94d637-558c-b26e-a1de-c4381aa6783c * Version Independent ID: d8c47851-0ac5-3932-e1e1-e224285e7476 * Content: [Remove machines from Azure Automation Update Management](https://docs.microsoft.com/en-us/azure/automation/update-management/remove-vms?tabs=azure-vm) * Content Source: [articles/automation/update-management/remove-vms.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/automation/update-management/remove-vms.md) * Service: **automation** * GitHub Login: @SGSneha * Microsoft Alias: **v-ssudhir**
1.0
Update of the information - I would suggest the following update to the information on this age. 4. In your Log Analytics workspace, select Computer Groups from the left-hand menu. 5. From Computer Groups in the right-hand pane, the Saved groups tab is shown by default. 6. From the table, click the icon Run query to the right of the item MicrosoftDefaultComputerGroup. 7. In the query editor, change from Tables to Functions. Find the Updates_MicrosoftDefaultComputerGroup and click on it and hold the mouse cursor over it which will show more options, click on the load the function code. 8. The review the code and find the UUID for the machine. Remove the UUID for the machine and repeat the steps for any other machines you want to remove. #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 9a94d637-558c-b26e-a1de-c4381aa6783c * Version Independent ID: d8c47851-0ac5-3932-e1e1-e224285e7476 * Content: [Remove machines from Azure Automation Update Management](https://docs.microsoft.com/en-us/azure/automation/update-management/remove-vms?tabs=azure-vm) * Content Source: [articles/automation/update-management/remove-vms.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/automation/update-management/remove-vms.md) * Service: **automation** * GitHub Login: @SGSneha * Microsoft Alias: **v-ssudhir**
automation
update of the information i would suggest the following update to the information on this age in your log analytics workspace select computer groups from the left hand menu from computer groups in the right hand pane the saved groups tab is shown by default from the table click the icon run query to the right of the item microsoftdefaultcomputergroup in the query editor change from tables to functions find the updates microsoftdefaultcomputergroup and click on it and hold the mouse cursor over it which will show more options click on the load the function code the review the code and find the uuid for the machine remove the uuid for the machine and repeat the steps for any other machines you want to remove document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation github login sgsneha microsoft alias v ssudhir
1
105,764
9,100,680,243
IssuesEvent
2019-02-20 09:12:48
humera987/FXLabs-Test-Automation
https://api.github.com/repos/humera987/FXLabs-Test-Automation
closed
Test : ApiV1ProjectsIdNewAutocodeconfigPostAutocodeconfiguseraAllowAbact3positive
test
Project : Test Job : Default Env : Default Category : null Tags : null Severity : null Region : US_WEST Result : fail Status Code : 500 Headers : {} Endpoint : http://13.56.210.25/api/v1/api/v1/projects//new/autocodeconfig Request : { "createdBy" : "", "createdDate" : "", "genPolicy" : "None", "generators" : [ { "abacResources" : [ { "createBody" : "WqtCiOB7", "createEndpoint" : "WqtCiOB7", "createUserAuth" : "WqtCiOB7", "createdBy" : "", "createdDate" : "", "deleteEndpoint" : "WqtCiOB7", "enumValues" : "WqtCiOB7", "generatorId" : "WqtCiOB7", "id" : "", "inactive" : false, "initScriptName" : "WqtCiOB7", "lock" : false, "modifiedBy" : "", "modifiedDate" : "", "resourceName" : "WqtCiOB7", "scripts" : [ { "body" : "WqtCiOB7", "deleteEndPoint" : "WqtCiOB7", "endpoint" : "WqtCiOB7", "resourceName" : "WqtCiOB7", "scriptName" : "WqtCiOB7", "scriptType" : "WqtCiOB7", "sequence" : "851820671", "userAuth" : "WqtCiOB7", "validationScript" : false } ], "typeThreeCreateEndpoint" : "WqtCiOB7", "validations" : [ { "body" : "WqtCiOB7", "endpoint" : "WqtCiOB7", "inactive" : false, "lock" : false, "path" : "WqtCiOB7", "userAuth" : "WqtCiOB7", "validationType" : "WqtCiOB7" } ], "version" : "" } ], "assertionDescription" : "WqtCiOB7", "assertions" : [ "WqtCiOB7" ], "assertionsText" : "WqtCiOB7", "authors" : "WqtCiOB7", "category" : "Null_Value", "coverageMultiplier" : "851820671", "currentScripts" : "851820671", "database" : { "name" : "WqtCiOB7", "version" : "" }, "displayHeaderDescription" : "WqtCiOB7", "displayHeaderLabel" : "WqtCiOB7", "expectedScripts" : "851820671", "fixHours" : "WqtCiOB7", "id" : "", "inactive" : false, "matches" : [ { "allowRoles" : "WqtCiOB7", "bodyProperties" : "WqtCiOB7", "denyRoles" : "WqtCiOB7", "id" : "", "methods" : "WqtCiOB7", "name" : "WqtCiOB7", "pathPatterns" : "WqtCiOB7", "queryParams" : "WqtCiOB7", "resourceSamples" : "WqtCiOB7", "value" : "WqtCiOB7" } ], "newlyAdded" : false, "project" : { "account" : { "accountType" : "Http", "createdBy" : "", "createdDate" : "", "id" : "", "inactive" : false, "modifiedBy" : "", "modifiedDate" : "", "name" : "WqtCiOB7", "org" : { "createdBy" : "", "createdDate" : "", "id" : "", "inactive" : false, "modifiedBy" : "", "modifiedDate" : "", "name" : "WqtCiOB7", "version" : "" }, "region" : "WqtCiOB7", "version" : "" }, "autoGenSuites" : "851820671", "branch" : "WqtCiOB7", "createdBy" : "", "createdDate" : "", "description" : "WqtCiOB7", "genPolicy" : "None", "id" : "", "inactive" : false, "isFileLoad" : "WqtCiOB7", "issueTracker" : { "account" : "WqtCiOB7", "accountType" : "GitLab", "id" : "", "name" : "WqtCiOB7", "projectKey" : "WqtCiOB7", "url" : "WqtCiOB7" }, "lastCommit" : "WqtCiOB7", "lastSync" : null, "licenses" : [ "WqtCiOB7" ], "modifiedBy" : "", "modifiedDate" : "", "name" : "WqtCiOB7", "notifications" : [ { "account" : "WqtCiOB7", "channel" : "WqtCiOB7", "id" : "", "name" : "WqtCiOB7", "to" : "WqtCiOB7" } ], "openAPISpec" : "WqtCiOB7", "openText" : "WqtCiOB7", "org" : { "createdBy" : "", "createdDate" : "", "id" : "", "inactive" : false, "modifiedBy" : "", "modifiedDate" : "", "name" : "WqtCiOB7", "version" : "" }, "props" : null, "url" : "WqtCiOB7", "version" : "" }, "sequenceOrder" : "851820671", "severity" : "Minor", "tags" : [ "WqtCiOB7" ], "type" : "WqtCiOB7" } ], "id" : "", "inactive" : false, "modifiedBy" : "", "modifiedDate" : "", "openAPISpec" : "WqtCiOB7", "project" : { "account" : { "accountType" : "Http", "createdBy" : "", "createdDate" : "", "id" : "", "inactive" : false, "modifiedBy" : "", "modifiedDate" : "", "name" 
: "WqtCiOB7", "org" : { "createdBy" : "", "createdDate" : "", "id" : "", "inactive" : false, "modifiedBy" : "", "modifiedDate" : "", "name" : "WqtCiOB7", "version" : "" }, "region" : "WqtCiOB7", "version" : "" }, "autoGenSuites" : "851820671", "branch" : "WqtCiOB7", "createdBy" : "", "createdDate" : "", "description" : "WqtCiOB7", "genPolicy" : "None", "id" : "", "inactive" : false, "isFileLoad" : "WqtCiOB7", "issueTracker" : { "account" : "WqtCiOB7", "accountType" : "GitLab", "id" : "", "name" : "WqtCiOB7", "projectKey" : "WqtCiOB7", "url" : "WqtCiOB7" }, "lastCommit" : "WqtCiOB7", "lastSync" : null, "licenses" : [ "WqtCiOB7" ], "modifiedBy" : "", "modifiedDate" : "", "name" : "WqtCiOB7", "notifications" : [ { "account" : "WqtCiOB7", "channel" : "WqtCiOB7", "id" : "", "name" : "WqtCiOB7", "to" : "WqtCiOB7" } ], "openAPISpec" : "WqtCiOB7", "openText" : "WqtCiOB7", "org" : { "createdBy" : "", "createdDate" : "", "id" : "", "inactive" : false, "modifiedBy" : "", "modifiedDate" : "", "name" : "WqtCiOB7", "version" : "" }, "props" : null, "url" : "WqtCiOB7", "version" : "" }, "version" : "" } Response : I/O error on POST request for "http://13.56.210.25/api/v1/api/v1/projects/new/autocodeconfig": Read timed out; nested exception is java.net.SocketTimeoutException: Read timed out Logs : Assertion [@StatusCode == 401 OR @StatusCode == 403 OR @Response.errors == true] resolved-to [500 == 401 OR 500 == 403 OR == true] result [Failed] --- FX Bot ---
1.0
Test : ApiV1ProjectsIdNewAutocodeconfigPostAutocodeconfiguseraAllowAbact3positive - Project : Test Job : Default Env : Default Category : null Tags : null Severity : null Region : US_WEST Result : fail Status Code : 500 Headers : {} Endpoint : http://13.56.210.25/api/v1/api/v1/projects//new/autocodeconfig Request : { "createdBy" : "", "createdDate" : "", "genPolicy" : "None", "generators" : [ { "abacResources" : [ { "createBody" : "WqtCiOB7", "createEndpoint" : "WqtCiOB7", "createUserAuth" : "WqtCiOB7", "createdBy" : "", "createdDate" : "", "deleteEndpoint" : "WqtCiOB7", "enumValues" : "WqtCiOB7", "generatorId" : "WqtCiOB7", "id" : "", "inactive" : false, "initScriptName" : "WqtCiOB7", "lock" : false, "modifiedBy" : "", "modifiedDate" : "", "resourceName" : "WqtCiOB7", "scripts" : [ { "body" : "WqtCiOB7", "deleteEndPoint" : "WqtCiOB7", "endpoint" : "WqtCiOB7", "resourceName" : "WqtCiOB7", "scriptName" : "WqtCiOB7", "scriptType" : "WqtCiOB7", "sequence" : "851820671", "userAuth" : "WqtCiOB7", "validationScript" : false } ], "typeThreeCreateEndpoint" : "WqtCiOB7", "validations" : [ { "body" : "WqtCiOB7", "endpoint" : "WqtCiOB7", "inactive" : false, "lock" : false, "path" : "WqtCiOB7", "userAuth" : "WqtCiOB7", "validationType" : "WqtCiOB7" } ], "version" : "" } ], "assertionDescription" : "WqtCiOB7", "assertions" : [ "WqtCiOB7" ], "assertionsText" : "WqtCiOB7", "authors" : "WqtCiOB7", "category" : "Null_Value", "coverageMultiplier" : "851820671", "currentScripts" : "851820671", "database" : { "name" : "WqtCiOB7", "version" : "" }, "displayHeaderDescription" : "WqtCiOB7", "displayHeaderLabel" : "WqtCiOB7", "expectedScripts" : "851820671", "fixHours" : "WqtCiOB7", "id" : "", "inactive" : false, "matches" : [ { "allowRoles" : "WqtCiOB7", "bodyProperties" : "WqtCiOB7", "denyRoles" : "WqtCiOB7", "id" : "", "methods" : "WqtCiOB7", "name" : "WqtCiOB7", "pathPatterns" : "WqtCiOB7", "queryParams" : "WqtCiOB7", "resourceSamples" : "WqtCiOB7", "value" : "WqtCiOB7" } ], "newlyAdded" : false, "project" : { "account" : { "accountType" : "Http", "createdBy" : "", "createdDate" : "", "id" : "", "inactive" : false, "modifiedBy" : "", "modifiedDate" : "", "name" : "WqtCiOB7", "org" : { "createdBy" : "", "createdDate" : "", "id" : "", "inactive" : false, "modifiedBy" : "", "modifiedDate" : "", "name" : "WqtCiOB7", "version" : "" }, "region" : "WqtCiOB7", "version" : "" }, "autoGenSuites" : "851820671", "branch" : "WqtCiOB7", "createdBy" : "", "createdDate" : "", "description" : "WqtCiOB7", "genPolicy" : "None", "id" : "", "inactive" : false, "isFileLoad" : "WqtCiOB7", "issueTracker" : { "account" : "WqtCiOB7", "accountType" : "GitLab", "id" : "", "name" : "WqtCiOB7", "projectKey" : "WqtCiOB7", "url" : "WqtCiOB7" }, "lastCommit" : "WqtCiOB7", "lastSync" : null, "licenses" : [ "WqtCiOB7" ], "modifiedBy" : "", "modifiedDate" : "", "name" : "WqtCiOB7", "notifications" : [ { "account" : "WqtCiOB7", "channel" : "WqtCiOB7", "id" : "", "name" : "WqtCiOB7", "to" : "WqtCiOB7" } ], "openAPISpec" : "WqtCiOB7", "openText" : "WqtCiOB7", "org" : { "createdBy" : "", "createdDate" : "", "id" : "", "inactive" : false, "modifiedBy" : "", "modifiedDate" : "", "name" : "WqtCiOB7", "version" : "" }, "props" : null, "url" : "WqtCiOB7", "version" : "" }, "sequenceOrder" : "851820671", "severity" : "Minor", "tags" : [ "WqtCiOB7" ], "type" : "WqtCiOB7" } ], "id" : "", "inactive" : false, "modifiedBy" : "", "modifiedDate" : "", "openAPISpec" : "WqtCiOB7", "project" : { "account" : { "accountType" : "Http", "createdBy" : "", "createdDate" 
: "", "id" : "", "inactive" : false, "modifiedBy" : "", "modifiedDate" : "", "name" : "WqtCiOB7", "org" : { "createdBy" : "", "createdDate" : "", "id" : "", "inactive" : false, "modifiedBy" : "", "modifiedDate" : "", "name" : "WqtCiOB7", "version" : "" }, "region" : "WqtCiOB7", "version" : "" }, "autoGenSuites" : "851820671", "branch" : "WqtCiOB7", "createdBy" : "", "createdDate" : "", "description" : "WqtCiOB7", "genPolicy" : "None", "id" : "", "inactive" : false, "isFileLoad" : "WqtCiOB7", "issueTracker" : { "account" : "WqtCiOB7", "accountType" : "GitLab", "id" : "", "name" : "WqtCiOB7", "projectKey" : "WqtCiOB7", "url" : "WqtCiOB7" }, "lastCommit" : "WqtCiOB7", "lastSync" : null, "licenses" : [ "WqtCiOB7" ], "modifiedBy" : "", "modifiedDate" : "", "name" : "WqtCiOB7", "notifications" : [ { "account" : "WqtCiOB7", "channel" : "WqtCiOB7", "id" : "", "name" : "WqtCiOB7", "to" : "WqtCiOB7" } ], "openAPISpec" : "WqtCiOB7", "openText" : "WqtCiOB7", "org" : { "createdBy" : "", "createdDate" : "", "id" : "", "inactive" : false, "modifiedBy" : "", "modifiedDate" : "", "name" : "WqtCiOB7", "version" : "" }, "props" : null, "url" : "WqtCiOB7", "version" : "" }, "version" : "" } Response : I/O error on POST request for "http://13.56.210.25/api/v1/api/v1/projects/new/autocodeconfig": Read timed out; nested exception is java.net.SocketTimeoutException: Read timed out Logs : Assertion [@StatusCode == 401 OR @StatusCode == 403 OR @Response.errors == true] resolved-to [500 == 401 OR 500 == 403 OR == true] result [Failed] --- FX Bot ---
non_automation
test project test job default env default category null tags null severity null region us west result fail status code headers endpoint request createdby createddate genpolicy none generators abacresources createbody createendpoint createuserauth createdby createddate deleteendpoint enumvalues generatorid id inactive false initscriptname lock false modifiedby modifieddate resourcename scripts body deleteendpoint endpoint resourcename scriptname scripttype sequence userauth validationscript false typethreecreateendpoint validations body endpoint inactive false lock false path userauth validationtype version assertiondescription assertions assertionstext authors category null value coveragemultiplier currentscripts database name version displayheaderdescription displayheaderlabel expectedscripts fixhours id inactive false matches allowroles bodyproperties denyroles id methods name pathpatterns queryparams resourcesamples value newlyadded false project account accounttype http createdby createddate id inactive false modifiedby modifieddate name org createdby createddate id inactive false modifiedby modifieddate name version region version autogensuites branch createdby createddate description genpolicy none id inactive false isfileload issuetracker account accounttype gitlab id name projectkey url lastcommit lastsync null licenses modifiedby modifieddate name notifications account channel id name to openapispec opentext org createdby createddate id inactive false modifiedby modifieddate name version props null url version sequenceorder severity minor tags type id inactive false modifiedby modifieddate openapispec project account accounttype http createdby createddate id inactive false modifiedby modifieddate name org createdby createddate id inactive false modifiedby modifieddate name version region version autogensuites branch createdby createddate description genpolicy none id inactive false isfileload issuetracker account accounttype gitlab id name projectkey url lastcommit lastsync null licenses modifiedby modifieddate name notifications account channel id name to openapispec opentext org createdby createddate id inactive false modifiedby modifieddate name version props null url version version response i o error on post request for read timed out nested exception is java net sockettimeoutexception read timed out logs assertion resolved to result fx bot
0
1,009
12,179,383,253
IssuesEvent
2020-04-28 10:34:49
rook/rook
https://api.github.com/repos/rook/rook
closed
Convert the Ceph Cluster controller to the controller-runtime
ceph - feature reliability
**Is this a bug report or feature request?** * Feature Request **What should the feature do:** Convert the [CephCluster controller](https://github.com/rook/rook/blob/master/pkg/operator/ceph/cluster/controller.go) to be managed with the controller-runtime. Currently Rook only has a simple watch in an informer as seen [here](https://github.com/rook/rook/blob/master/pkg/operator/k8sutil/customresource.go#L54). **What is use case behind this feature:** The controller runtime will improve reliability of the operator in several areas: - Events can be re-queued if failed or the operator is not able to complete the operation - Exponential backoff is provided automatically for re-queued events - Waiting for the next event does not need to block on the current event if it is taking a long time and the event can be re-queued. Several controllers in Rook are using the controller runtime. For examples, see the [pool controller](https://github.com/rook/rook/blob/master/pkg/operator/ceph/pool/controller.go) or [disruption budget](https://github.com/rook/rook/blob/master/pkg/operator/ceph/disruption/clusterdisruption/reconcile.go) controller.
True
Convert the Ceph Cluster controller to the controller-runtime - **Is this a bug report or feature request?** * Feature Request **What should the feature do:** Convert the [CephCluster controller](https://github.com/rook/rook/blob/master/pkg/operator/ceph/cluster/controller.go) to be managed with the controller-runtime. Currently Rook only has a simple watch in an informer as seen [here](https://github.com/rook/rook/blob/master/pkg/operator/k8sutil/customresource.go#L54). **What is use case behind this feature:** The controller runtime will improve reliability of the operator in several areas: - Events can be re-queued if failed or the operator is not able to complete the operation - Exponential backoff is provided automatically for re-queued events - Waiting for the next event does not need to block on the current event if it is taking a long time and the event can be re-queued. Several controllers in Rook are using the controller runtime. For examples, see the [pool controller](https://github.com/rook/rook/blob/master/pkg/operator/ceph/pool/controller.go) or [disruption budget](https://github.com/rook/rook/blob/master/pkg/operator/ceph/disruption/clusterdisruption/reconcile.go) controller.
non_automation
convert the ceph cluster controller to the controller runtime is this a bug report or feature request feature request what should the feature do convert the to be managed with the controller runtime currently rook only has a simple watch in an informer as seen what is use case behind this feature the controller runtime will improve reliability of the operator in several areas events can be re queued if failed or the operator is not able to complete the operation exponential backoff is provided automatically for re queued events waiting for the next event does not need to block on the current event if it is taking a long time and the event can be re queued several controllers in rook are using the controller runtime for examples see the or controller
0
617,468
19,358,763,011
IssuesEvent
2021-12-16 00:55:39
UC-Davis-molecular-computing/scadnano
https://api.github.com/repos/UC-Davis-molecular-computing/scadnano
closed
domain names move when switching orientation of strand
bug high priority closed in dev
Take a strand with domain labels: ![image](https://user-images.githubusercontent.com/19274365/91330628-90d51280-e77e-11ea-8cb1-5af084311201.png) Drag it to reverse its orientation: ![image](https://user-images.githubusercontent.com/19274365/91330654-992d4d80-e77e-11ea-8bf9-1ee2b8bcdf7b.png) The domain labels should stay in the same order 5' to 3', but they have reversed (since they are in the same "screen order" but now the strand is pointing the other way. See also issue #654, which is a similar issue (but on the design with that issue, this issue does not show up.)
1.0
domain names move when switching orientation of strand - Take a strand with domain labels: ![image](https://user-images.githubusercontent.com/19274365/91330628-90d51280-e77e-11ea-8cb1-5af084311201.png) Drag it to reverse its orientation: ![image](https://user-images.githubusercontent.com/19274365/91330654-992d4d80-e77e-11ea-8bf9-1ee2b8bcdf7b.png) The domain labels should stay in the same order 5' to 3', but they have reversed (since they are in the same "screen order" but now the strand is pointing the other way. See also issue #654, which is a similar issue (but on the design with that issue, this issue does not show up.)
non_automation
domain names move when switching orientation of strand take a strand with domain labels drag it to reverse its orientation the domain labels should stay in the same order to but they have reversed since they are in the same screen order but now the strand is pointing the other way see also issue which is a similar issue but on the design with that issue this issue does not show up
0
5,442
19,604,874,410
IssuesEvent
2022-01-06 08:07:27
tikv/tikv
https://api.github.com/repos/tikv/tikv
closed
tikv have not logs saved in k8s
type/bug severity/major found/automation
## Bug Report <!-- Thanks for your bug report! Don't worry if you can't fill out all the sections. --> ### What version of TiKV are you using? / # ./tikv-server -V TiKV Release Version: 5.4.0-alpha Edition: Community Git Commit Hash: 99b3436 Git Commit Branch: heads/refs/tags/v5.4.0-nightly UTC Build Time: 2022-01-04 01:15:55 Rust Version: rustc 1.56.0-nightly (2faabf579 2021-07-27) Enable Features: jemalloc mem-profiling portable sse test-engines-rocksdb cloud-aws cloud-gcp cloud-azure Profile: dist_release ### What operating system and CPU are you using? 8core 16G ### Steps to reproduce no matter ### What did you expect? tikv logs can be saved ### What did happened? tikv have not logs saved in k8s ![image](https://user-images.githubusercontent.com/84712107/148193798-f0491102-200e-4b5e-af83-3e7cf6f1f2b6.png)
1.0
tikv have not logs saved in k8s - ## Bug Report <!-- Thanks for your bug report! Don't worry if you can't fill out all the sections. --> ### What version of TiKV are you using? / # ./tikv-server -V TiKV Release Version: 5.4.0-alpha Edition: Community Git Commit Hash: 99b3436 Git Commit Branch: heads/refs/tags/v5.4.0-nightly UTC Build Time: 2022-01-04 01:15:55 Rust Version: rustc 1.56.0-nightly (2faabf579 2021-07-27) Enable Features: jemalloc mem-profiling portable sse test-engines-rocksdb cloud-aws cloud-gcp cloud-azure Profile: dist_release ### What operating system and CPU are you using? 8core 16G ### Steps to reproduce no matter ### What did you expect? tikv logs can be saved ### What did happened? tikv have not logs saved in k8s ![image](https://user-images.githubusercontent.com/84712107/148193798-f0491102-200e-4b5e-af83-3e7cf6f1f2b6.png)
automation
tikv have not logs saved in bug report what version of tikv are you using tikv server v tikv release version alpha edition community git commit hash git commit branch heads refs tags nightly utc build time rust version rustc nightly enable features jemalloc mem profiling portable sse test engines rocksdb cloud aws cloud gcp cloud azure profile dist release what operating system and cpu are you using steps to reproduce no matter what did you expect tikv logs can be saved what did happened tikv have not logs saved in
1
4,779
17,461,992,914
IssuesEvent
2021-08-06 11:52:46
iGEM-Engineering/iGEM-distribution
https://api.github.com/repos/iGEM-Engineering/iGEM-distribution
opened
Detect twins
automation
Some parts are likely to be submitted that will be twins of other parts with different names but the same sequence. We should automatically search for twins.
1.0
Detect twins - Some parts are likely to be submitted that will be twins of other parts with different names but the same sequence. We should automatically search for twins.
automation
detect twins some parts are likely to be submitted that will be twins of other parts with different names but the same sequence we should automatically search for twins
1
9,705
30,305,902,687
IssuesEvent
2023-07-10 09:27:19
litentry/litentry-parachain
https://api.github.com/repos/litentry/litentry-parachain
closed
Create a script/GHA to tell if sidechain on staging works
I3-high D6-automation
### Context It's possible that we get error notifications from the staging-sidechain but it still functions. Before we restart it, it's better to test if "it still works" in the first place. We need a script/GHA for that, similar to ts-test but more light-weighted and accurate. --- :heavy_check_mark: Please set appropriate **labels** and **assignees** if applicable.
1.0
Create a script/GHA to tell if sidechain on staging works - ### Context It's possible that we get error notifications from the staging-sidechain but it still functions. Before we restart it, it's better to test if "it still works" in the first place. We need a script/GHA for that, similar to ts-test but more light-weighted and accurate. --- :heavy_check_mark: Please set appropriate **labels** and **assignees** if applicable.
automation
create a script gha to tell if sidechain on staging works context it s possible that we get error notifications from the staging sidechain but it still functions before we restart it it s better to test if it still works in the first place we need a script gha for that similar to ts test but more light weighted and accurate heavy check mark please set appropriate labels and assignees if applicable
1
735,083
25,378,400,605
IssuesEvent
2022-11-21 15:41:50
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
www.msn.com - design is broken
browser-firefox priority-critical engine-gecko
<!-- @browser: Firefox 107.0 --> <!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:107.0) Gecko/20100101 Firefox/107.0 --> <!-- @reported_with: addon-reporter-firefox --> <!-- @public_url: https://github.com/webcompat/web-bugs/issues/114405 --> **URL**: https://www.msn.com/en-gb/news/world/meet-sergei-shoigu-russia-s-minister-of-defense-and-possible-successor-to-putin/ss-AAUS24n?cvid=a02f3578ae1540b8bb158c8a9636917c#image=2 **Browser / Version**: Firefox 107.0 **Operating System**: Windows 10 **Tested Another Browser**: Yes Edge **Problem type**: Design is broken **Description**: Items are misaligned **Steps to Reproduce**: The design is shifted to the right compared with Edge <details> <summary>View the screenshot</summary> <img alt="Screenshot" src="https://webcompat.com/uploads/2022/11/0a03e9c5-1f0e-46db-8015-e82a7f975eed.jpg"> </details> <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
1.0
www.msn.com - design is broken - <!-- @browser: Firefox 107.0 --> <!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:107.0) Gecko/20100101 Firefox/107.0 --> <!-- @reported_with: addon-reporter-firefox --> <!-- @public_url: https://github.com/webcompat/web-bugs/issues/114405 --> **URL**: https://www.msn.com/en-gb/news/world/meet-sergei-shoigu-russia-s-minister-of-defense-and-possible-successor-to-putin/ss-AAUS24n?cvid=a02f3578ae1540b8bb158c8a9636917c#image=2 **Browser / Version**: Firefox 107.0 **Operating System**: Windows 10 **Tested Another Browser**: Yes Edge **Problem type**: Design is broken **Description**: Items are misaligned **Steps to Reproduce**: The design is shifted to the right compared with Edge <details> <summary>View the screenshot</summary> <img alt="Screenshot" src="https://webcompat.com/uploads/2022/11/0a03e9c5-1f0e-46db-8015-e82a7f975eed.jpg"> </details> <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
non_automation
design is broken url browser version firefox operating system windows tested another browser yes edge problem type design is broken description items are misaligned steps to reproduce the design is shifted to the right compared with edge view the screenshot img alt screenshot src browser configuration none from with ❤️
0
7,384
24,755,789,626
IssuesEvent
2022-10-21 17:35:22
o3de/o3de
https://api.github.com/repos/o3de/o3de
closed
test_InstantiatePrefab_LevelPrefab fails on Linux
kind/bug priority/major kind/automation feature/prefabs
**Describe the bug** test_InstantiatePrefab_LevelPrefab fails on Linux ``` [2022-10-21T07:06:43.565Z] E [editor_test.log] EXCEPTION raised: [2022-10-21T07:06:43.565Z] E [editor_test.log] Traceback (most recent call last): [2022-10-21T07:06:43.565Z] E [editor_test.log] File "/data/workspace/o3de/AutomatedTesting/Gem/PythonTests/EditorPythonTestTools/editor_python_test_tools/utils.py", line 328, in start_test [2022-10-21T07:06:43.565Z] E [editor_test.log] test_function() [2022-10-21T07:06:43.565Z] E [editor_test.log] File "/data/workspace/o3de/AutomatedTesting/Gem/PythonTests/Prefab/tests/instantiate_prefab/InstantiatePrefab_LevelPrefab.py", line 30, in InstantiatePrefab_LevelPrefab [2022-10-21T07:06:43.565Z] E [editor_test.log] test_level_prefab = Prefab.get_prefab(test_level_prefab_path) [2022-10-21T07:06:43.565Z] E [editor_test.log] File "/data/workspace/o3de/AutomatedTesting/Gem/PythonTests/EditorPythonTestTools/editor_python_test_tools/prefab_utils.py", line 201, in get_prefab [2022-10-21T07:06:43.565Z] E [editor_test.log] assert Prefab.prefab_exists(file_path), f"Attempted to get a prefab \"{file_path}\" that doesn't exist" [2022-10-21T07:06:43.565Z] E [editor_test.log] AssertionError: Attempted to get a prefab "levels/prefab/QuitOnSuccessfulSpawn/QuitOnSuccessfulSpawn.prefab" that doesn't exist [2022-10-21T07:06:43.565Z] E [editor_test.log] Test result: FAILURE ``` **Failed Jenkins Job Information:** [The name of the job that failed, job build number, and code snippit of the failure.](https://jenkins.build.o3de.org/blue/organizations/jenkins/O3DE_periodic-incremental-daily/detail/development/137/pipeline/797) **Additional context** Looks to be due to a casing issue with the prefab file path: ``` test_level_prefab_path = os.path.join("levels", "prefab", "QuitOnSuccessfulSpawn", "QuitOnSuccessfulSpawn.prefab") ```
1.0
test_InstantiatePrefab_LevelPrefab fails on Linux - **Describe the bug** test_InstantiatePrefab_LevelPrefab fails on Linux ``` [2022-10-21T07:06:43.565Z] E [editor_test.log] EXCEPTION raised: [2022-10-21T07:06:43.565Z] E [editor_test.log] Traceback (most recent call last): [2022-10-21T07:06:43.565Z] E [editor_test.log] File "/data/workspace/o3de/AutomatedTesting/Gem/PythonTests/EditorPythonTestTools/editor_python_test_tools/utils.py", line 328, in start_test [2022-10-21T07:06:43.565Z] E [editor_test.log] test_function() [2022-10-21T07:06:43.565Z] E [editor_test.log] File "/data/workspace/o3de/AutomatedTesting/Gem/PythonTests/Prefab/tests/instantiate_prefab/InstantiatePrefab_LevelPrefab.py", line 30, in InstantiatePrefab_LevelPrefab [2022-10-21T07:06:43.565Z] E [editor_test.log] test_level_prefab = Prefab.get_prefab(test_level_prefab_path) [2022-10-21T07:06:43.565Z] E [editor_test.log] File "/data/workspace/o3de/AutomatedTesting/Gem/PythonTests/EditorPythonTestTools/editor_python_test_tools/prefab_utils.py", line 201, in get_prefab [2022-10-21T07:06:43.565Z] E [editor_test.log] assert Prefab.prefab_exists(file_path), f"Attempted to get a prefab \"{file_path}\" that doesn't exist" [2022-10-21T07:06:43.565Z] E [editor_test.log] AssertionError: Attempted to get a prefab "levels/prefab/QuitOnSuccessfulSpawn/QuitOnSuccessfulSpawn.prefab" that doesn't exist [2022-10-21T07:06:43.565Z] E [editor_test.log] Test result: FAILURE ``` **Failed Jenkins Job Information:** [The name of the job that failed, job build number, and code snippit of the failure.](https://jenkins.build.o3de.org/blue/organizations/jenkins/O3DE_periodic-incremental-daily/detail/development/137/pipeline/797) **Additional context** Looks to be due to a casing issue with the prefab file path: ``` test_level_prefab_path = os.path.join("levels", "prefab", "QuitOnSuccessfulSpawn", "QuitOnSuccessfulSpawn.prefab") ```
automation
test instantiateprefab levelprefab fails on linux describe the bug test instantiateprefab levelprefab fails on linux e exception raised e traceback most recent call last e file data workspace automatedtesting gem pythontests editorpythontesttools editor python test tools utils py line in start test e test function e file data workspace automatedtesting gem pythontests prefab tests instantiate prefab instantiateprefab levelprefab py line in instantiateprefab levelprefab e test level prefab prefab get prefab test level prefab path e file data workspace automatedtesting gem pythontests editorpythontesttools editor python test tools prefab utils py line in get prefab e assert prefab prefab exists file path f attempted to get a prefab file path that doesn t exist e assertionerror attempted to get a prefab levels prefab quitonsuccessfulspawn quitonsuccessfulspawn prefab that doesn t exist e test result failure failed jenkins job information additional context looks to be due to a casing issue with the prefab file path test level prefab path os path join levels prefab quitonsuccessfulspawn quitonsuccessfulspawn prefab
1
2,905
12,754,313,341
IssuesEvent
2020-06-28 04:31:52
chavarera/python-mini-projects
https://api.github.com/repos/chavarera/python-mini-projects
closed
Add Watermark on Set of images
Automation
**Adding watermark to multiple images using one command.** Ask the user for the input of specific folder containing images and watermark image `Enter Folder Path : E:\Bootstrap\-hotel\redplanet\redplanet\images` `Enter Watermark Path : E:\python\image watermark\watermark.png` The output should be in the same folder `output/filename`
1.0
Add Watermark on Set of images - **Adding watermark to multiple images using one command.** Ask the user for the input of specific folder containing images and watermark image `Enter Folder Path : E:\Bootstrap\-hotel\redplanet\redplanet\images` `Enter Watermark Path : E:\python\image watermark\watermark.png` The output should be in the same folder `output/filename`
automation
add watermark on set of images adding watermark to multiple images using one command ask the user for the input of specific folder containing images and watermark image enter folder path e bootstrap hotel redplanet redplanet images enter watermark path e python image watermark watermark png the output should be in the same folder output filename
1
53,026
7,803,352,325
IssuesEvent
2018-06-10 22:48:22
vitessio/vitess
https://api.github.com/repos/vitessio/vitess
closed
Guide to use VItess on AWS kubernetes
P3 Type: Documentation
Hi, We want to move of Amazon RDS and use Vitess on kubernetes. I am not able to find any documentation for that. Please provide any pointer to use Vitess in AWS Kube.
1.0
Guide to use VItess on AWS kubernetes - Hi, We want to move of Amazon RDS and use Vitess on kubernetes. I am not able to find any documentation for that. Please provide any pointer to use Vitess in AWS Kube.
non_automation
guide to use vitess on aws kubernetes hi we want to move of amazon rds and use vitess on kubernetes i am not able to find any documentation for that please provide any pointer to use vitess in aws kube
0
45,721
2,938,844,454
IssuesEvent
2015-07-01 13:24:40
moneymanagerex/android-money-manager-ex
https://api.github.com/repos/moneymanagerex/android-money-manager-ex
closed
Investigate automatic Dropbox sync, possible cause of exceptions
priority
The automatic Dropbox sync could be causing the torrent of Illegal State exceptions. Requires detailed investigation. DropboxServiceIntent, method downloadFile.
1.0
Investigate automatic Dropbox sync, possible cause of exceptions - The automatic Dropbox sync could be causing the torrent of Illegal State exceptions. Requires detailed investigation. DropboxServiceIntent, method downloadFile.
non_automation
investigate automatic dropbox sync possible cause of exceptions the automatic dropbox sync could be causing the torrent of illegal state exceptions requires detailed investigation dropboxserviceintent method downloadfile
0
9,778
4,641,460,267
IssuesEvent
2016-09-30 04:59:10
debugworkbench/hydragon
https://api.github.com/repos/debugworkbench/hydragon
closed
Consider replacing DefinitelyTyped typings
build Status: Pending Type: Cleanup
Seems like https://github.com/typings/typings claims to work with proper external module based typings instead of just ambient external module typings. I'm not entirely sure how it manages to work with TypeScript's node-like module resolution, but that should be easy enough to test with the typings at https://github.com/typings/typed-source-map If it works as claimed it would be nice to convert the Electron typings over to the proper external module d.ts format.
1.0
Consider replacing DefinitelyTyped typings - Seems like https://github.com/typings/typings claims to work with proper external module based typings instead of just ambient external module typings. I'm not entirely sure how it manages to work with TypeScript's node-like module resolution, but that should be easy enough to test with the typings at https://github.com/typings/typed-source-map If it works as claimed it would be nice to convert the Electron typings over to the proper external module d.ts format.
non_automation
consider replacing definitelytyped typings seems like claims to work with proper external module based typings instead of just ambient external module typings i m not entirely sure how it manages to work with typescript s node like module resolution but that should be easy enough to test with the typings at if it works as claimed it would be nice to convert the electron typings over to the proper external module d ts format
0
367,024
25,715,205,500
IssuesEvent
2022-12-07 09:50:50
zcash/secant-android-wallet
https://api.github.com/repos/zcash/secant-android-wallet
opened
Testing documentation update
documentation enhancement
## Is your feature request related to a problem? Please describe. We'd like to have our approach to testing better documented. ## Describe the solution you'd like Ideally, it'd be one `docs/testing/Testing.md` file, which outlines possibly all corners of how we test the app: - automated tests (unit x instrumented) - integration tests - manual tests - tests run on CI - benchmark tests - screenshot tests - etc.
1.0
Testing documentation update - ## Is your feature request related to a problem? Please describe. We'd like to have our approach to testing better documented. ## Describe the solution you'd like Ideally, it'd be one `docs/testing/Testing.md` file, which outlines possibly all corners of how we test the app: - automated tests (unit x instrumented) - integration tests - manual tests - tests run on CI - benchmark tests - screenshot tests - etc.
non_automation
testing documentation update is your feature request related to a problem please describe we d like to have our approach to testing better documented describe the solution you d like ideally it d be one docs testing testing md file which outlines possibly all corners of how we test the app automated tests unit x instrumented integration tests manual tests tests run on ci benchmark tests screenshot tests etc
0
750,809
26,218,549,951
IssuesEvent
2023-01-04 13:05:25
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
www.homedepot.ca - see bug description
browser-firefox priority-normal engine-gecko
<!-- @browser: Firefox 108.0 --> <!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:108.0) Gecko/20100101 Firefox/108.0 --> <!-- @reported_with: unknown --> <!-- @public_url: https://github.com/webcompat/web-bugs/issues/116295 --> **URL**: https://www.homedepot.ca/checkout **Browser / Version**: Firefox 108.0 **Operating System**: Windows 10 **Tested Another Browser**: Yes Edge **Problem type**: Something else **Description**: Get "Unknown Error" when trying to checkout on Firefox **Steps to Reproduce**: When trying to checkout on Firefox I get a message that says "Unknown Error". This error doesn't show up on Chrome-based browsers. <details> <summary>View the screenshot</summary> <img alt="Screenshot" src="https://webcompat.com/uploads/2023/1/31c3787e-b1ee-4028-9481-715dc83ea342.jpg"> </details> <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
1.0
www.homedepot.ca - see bug description - <!-- @browser: Firefox 108.0 --> <!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:108.0) Gecko/20100101 Firefox/108.0 --> <!-- @reported_with: unknown --> <!-- @public_url: https://github.com/webcompat/web-bugs/issues/116295 --> **URL**: https://www.homedepot.ca/checkout **Browser / Version**: Firefox 108.0 **Operating System**: Windows 10 **Tested Another Browser**: Yes Edge **Problem type**: Something else **Description**: Get "Unknown Error" when trying to checkout on Firefox **Steps to Reproduce**: When trying to checkout on Firefox I get a message that says "Unknown Error". This error doesn't show up on Chrome-based browsers. <details> <summary>View the screenshot</summary> <img alt="Screenshot" src="https://webcompat.com/uploads/2023/1/31c3787e-b1ee-4028-9481-715dc83ea342.jpg"> </details> <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
non_automation
see bug description url browser version firefox operating system windows tested another browser yes edge problem type something else description get unknown error when trying to checkout on firefox steps to reproduce when trying to checkout on firefox i get a message that says unknown error this error doesn t show up on chrome based browsers view the screenshot img alt screenshot src browser configuration none from with ❤️
0
2,005
11,256,337,208
IssuesEvent
2020-01-12 15:35:01
spacemeshos/go-spacemesh
https://api.github.com/repos/spacemeshos/go-spacemesh
closed
Monitoring system
Epic automation monitoring
# The Motivation During testnet and mainnet we would want to collect information that will allow us to track different flows in the system and detect problems and trends without having to rely on data coming from the nodes # The Requirement In general, add functionality to gather this information from a tap on the network. Few examples of info that we would like to collect - 1. Histogram of node versions (can be gathered from handshake messages) 2. Participating in Hare committees 3. PoET usages
1.0
Monitoring system - # The Motivation During testnet and mainnet we would want to collect information that will allow us to track different flows in the system and detect problems and trends without having to rely on data coming from the nodes # The Requirement In general, add functionality to gather this information from a tap on the network. Few examples of info that we would like to collect - 1. Histogram of node versions (can be gathered from handshake messages) 2. Participating in Hare committees 3. PoET usages
automation
monitoring system the motivation during testnet and mainnet we would want to collect information that will allow us to track different flows in the system and detect problems and trends without having to rely on data coming from the nodes the requirement in general add functionality to gather this information from a tap on the network few examples of info that we would like to collect histogram of node versions can be gathered from handshake messages participating in hare committees poet usages
1
3,941
15,014,667,312
IssuesEvent
2021-02-01 07:02:43
MISP/MISP
https://api.github.com/repos/MISP/MISP
closed
MISP Automation , not working properly. event wise data is not getting downloaded.
T: support automation
Hello, I am trying to automate the process of suricata rules export . I am trying this API format : https://[misp url]/events/nids/[format]/download/[eventid]/[frame]/[tags]/[from]/[to]/[last] my final API would be, let say if I want to export just for event 6: https://[misp url]/events/nids/suricata/download/6 the above event wise api is not working for any specific event id, it is exporting all the rules from all events. even when I am trying to export all the suricata rules with the api: https://[misp url]/events/nids/suricata/download it is leaving my eventa 6, 4, 1207.. to download the suricata rule for. means it is not completed. though these evens contains IDS published attributes. please let me have a solution here.
1.0
MISP Automation , not working properly. event wise data is not getting downloaded. - Hello, I am trying to automate the process of suricata rules export . I am trying this API format : https://[misp url]/events/nids/[format]/download/[eventid]/[frame]/[tags]/[from]/[to]/[last] my final API would be, let say if I want to export just for event 6: https://[misp url]/events/nids/suricata/download/6 the above event wise api is not working for any specific event id, it is exporting all the rules from all events. even when I am trying to export all the suricata rules with the api: https://[misp url]/events/nids/suricata/download it is leaving my eventa 6, 4, 1207.. to download the suricata rule for. means it is not completed. though these evens contains IDS published attributes. please let me have a solution here.
automation
misp automation not working properly event wise data is not getting downloaded hello i am trying to automate the process of suricata rules export i am trying this api format https events nids download my final api would be let say if i want to export just for event https events nids suricata download the above event wise api is not working for any specific event id it is exporting all the rules from all events even when i am trying to export all the suricata rules with the api https events nids suricata download it is leaving my eventa to download the suricata rule for means it is not completed though these evens contains ids published attributes please let me have a solution here
1

Dataset Card for "binary-10IQR-automation"

More Information needed
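In the meantime, here is a minimal sketch of loading this dataset with the Hugging Face datasets library. The hub id is taken from the collection path shown on this page; the split name and column names are assumptions, so inspect the returned object before relying on them.

from datasets import load_dataset

# Load the labeled GitHub-issue records from the Hugging Face Hub.
# "karths/binary-10IQR-automation" is the collection path shown on this page.
ds = load_dataset("karths/binary-10IQR-automation")

# The split name (e.g. "train") and the column names are assumptions here --
# print the DatasetDict to see what is actually available.
print(ds)

first_split = next(iter(ds))   # first available split name
print(ds[first_split][0])      # one record: issue text plus its automation label

Printing the DatasetDict shows the real splits, columns, and row counts, so nothing beyond the hub id has to be guessed.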
