Technical Debt and its Types Datasets
Collection · 24 items
| Column | Type | Range / classes |
|---|---|---|
| Unnamed: 0 | int64 | 1 – 832k |
| id | float64 | 2.49B – 32.1B |
| type | string | 1 class |
| created_at | string | length 19 |
| repo | string | length 7 – 112 |
| repo_url | string | length 36 – 141 |
| action | string | 3 classes |
| title | string | length 3 – 438 |
| labels | string | length 4 – 308 |
| body | string | length 7 – 254k |
| index | string | 7 classes |
| text_combine | string | length 96 – 254k |
| label | string | 2 classes |
| text | string | length 96 – 246k |
| binary_label | int64 | 0 or 1 |
Row 4,265 · id 21,280,249,914 · IssuesEvent · 2022-04-14 00:29:34 · closed
Repo: aws/aws-lambda-builders (https://api.github.com/repos/aws/aws-lambda-builders)
Title: JavaGradleWorkflow fails with Gradle <3.5; Cannot convert the provided notation to a File or URI
Labels: area/workflow/java_gradle, maintainer/need-followup

**Description:**
When I run `sam build` on my Java project with Gradle 3.4.1, it always fails.
This appears to be due to https://github.com/awslabs/aws-lambda-builders/blob/develop/aws_lambda_builders/workflows/java_gradle/resources/lambda-build-init.gradle#L18 setting the project `buildDir` property to a `Path`, which older Gradle versions cannot convert. Adding `.toFile()` to this line appears to fix the problem.
It appears that Gradle 3.5+ supports resolving `Path` values.
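The one-line change described above can be sketched as follows. The variable name `buildDirPath` is assumed for illustration; only the `.toFile()` call itself comes from this report:

```groovy
// lambda-build-init.gradle (sketch): the scratch build directory arrives
// as a java.nio.file.Path. Gradle < 3.5 cannot convert a Path to a File,
// so hand it a java.io.File instead.

// Fails on Gradle 3.4.x: "Cannot convert the provided notation to a File or URI"
// buildDir = buildDirPath

// Accepted by all Gradle versions, since File is an explicitly supported notation
buildDir = buildDirPath.toFile()
```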
**Steps to reproduce the issue:**
1. `sam init --runtime java8 --dependency-manager gradle --name gradlebuildtest`
2. `cd gradlebuildtest/`
3. `(cd HelloWorldFunction && sdk use gradle 3.4.1 && gradle wrapper) && sam build`
**Observed result:**
```
Using gradle version 3.4.1 in this shell.
:wrapper
BUILD SUCCESSFUL
Total time: 1.31 secs
Building resource 'HelloWorldFunction'
Running JavaGradleWorkflow:GradleBuild
Build Failed
Error: JavaGradleWorkflow:GradleBuild - Gradle Failed: FAILURE: Build failed with an exception.
* What went wrong:
A problem occurred configuring root project 'HelloWorldFunction'.
> Cannot convert the provided notation to a File or URI: /var/folders/p1/w3tg9xz54lj_4xgwj60tffb126v92f/T/tmptseldu67/4abbb2503507efca0fbeaf9d14459fc8cdd6af90/build.
The following types/formats are supported:
- A String or CharSequence path, for example 'src/main/java' or '/usr/include'.
- A String or CharSequence URI, for example 'file:/usr/include'.
- A File instance.
- A URI or URL instance.
* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.
mac-dcarr:gradlebuildtest dcarr$
```
**Expected result:**
```
mac-dcarr:gradlebuildtest dcarr$ (cd HelloWorldFunction && sdk use gradle 5.6.3 && gradle wrapper) && sam build
Using gradle version 5.6.3 in this shell.
BUILD SUCCESSFUL in 727ms
1 actionable task: 1 executed
Building resource 'HelloWorldFunction'
Running JavaGradleWorkflow:GradleBuild
Running JavaGradleWorkflow:CopyArtifacts
Build Succeeded
Built Artifacts : .aws-sam/build
Built Template : .aws-sam/build/template.yaml
Commands you can use next
=========================
[*] Invoke Function: sam local invoke
[*] Package: sam package --s3-bucket <yourbucket>
mac-dcarr:gradlebuildtest dcarr$
```
**Additional environment details (Ex: Windows, Mac, Amazon Linux etc)**
Scripts were run on Mac OS X 10.14.6, using https://sdkman.io/ for Gradle version management.
| True | main | 1 |
Row 4,612 · id 23,879,130,572 · IssuesEvent · 2022-09-07 22:28:31 · closed
Repo: aws/aws-sam-cli (https://api.github.com/repos/aws/aws-sam-cli)
Title: Feature request: AWS::LanguageExtensions support
Labels: type/feature, maintainer/need-followup

### Describe your idea/feature/enhancement
I wish SAM CLI would handle the new AWS::LanguageExtensions transform as specified in the documentation: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-languageextension-transform.html
Example template:
```yml
AWSTemplateFormatVersion: 2010-09-09
Transform:
- AWS::LanguageExtensions
- AWS::Serverless-2016-10-31
Parameters:
Environment:
Type: String
Default: dev
AllowedValues:
- dev
- prod
Conditions:
IsProd: !Equals [!Ref Environment, prod]
Resources:
Bucket:
Type: AWS::S3::Bucket
DeletionPolicy: !If [IsProd, Retain, Delete]
UpdateReplacePolicy: !If [IsProd, Retain, Delete]
```
`AWS CLI`, version 2.7.7 succeeds with:
```
❯ aws cloudformation deploy --stack-name test-language-extension --template-file sam/test-template.yml
Waiting for changeset to be created..
Waiting for stack create/update to complete
Successfully created/updated stack - test-language-extension
```
`SAM CLI`, version 1.55.0 fails with:
```
sam deploy --stack-name test-language-extension --template-file sam/test-template.yml
Traceback (most recent call last):
[... redacted stack trace ...]
File "/usr/local/Cellar/aws-sam-cli/1.55.0/libexec/lib/python3.8/site-packages/samcli/lib/samlib/wrapper.py", line 70, in run_plugins
raise InvalidSamDocumentException(
samcli.commands.validate.lib.exceptions.InvalidSamDocumentException: [InvalidTemplateException('Every DeletionPolicy member must be a string.')] Every DeletionPolicy member must be a string.
```
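A hedged sketch of why the validator trips: after YAML parsing, `!If [IsProd, Retain, Delete]` becomes a dict (`{"Fn::If": [...]}`), so a check that demands a literal string for `DeletionPolicy` rejects it, even though the declared `AWS::LanguageExtensions` transform would resolve it to a string before deployment. All names below are illustrative, not SAM CLI's actual code:

```python
# Illustrative only: mimics the strict check that produces the error above,
# and the relaxed check a LanguageExtensions-aware validator could apply.

LANGUAGE_EXTENSIONS = "AWS::LanguageExtensions"

def deletion_policy_ok(value, transforms):
    # Without the transform, CloudFormation requires a literal string
    # such as "Retain" or "Delete".
    if isinstance(value, str):
        return True
    # With AWS::LanguageExtensions declared, intrinsics like
    # {"Fn::If": [...]} are resolved before the policy is enforced,
    # so a dict form could be tolerated at validation time.
    if LANGUAGE_EXTENSIONS in transforms and isinstance(value, dict):
        return True
    return False

transforms = ["AWS::LanguageExtensions", "AWS::Serverless-2016-10-31"]
policy = {"Fn::If": ["IsProd", "Retain", "Delete"]}  # what the YAML !If parses to

print(deletion_policy_ok(policy, transforms))                       # True
print(deletion_policy_ok(policy, ["AWS::Serverless-2016-10-31"]))   # False
```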
### Proposal
Implement the same changes as `aws-cli` and `cfn-lint`
For reference, the releases/announcements from both tools:
- https://github.com/aws-cloudformation/cfn-lint/compare/v0.62.0..v0.63.0
- https://github.com/aws/aws-cli/issues/3825#issuecomment-1231608258
Things to consider:
The order of the transforms is important and using multiple transforms is currently broken in `cfn-lint` https://github.com/aws-cloudformation/cfn-lint/issues/2346
### Additional Details
No additional details

| True | main | 1 |
Row 1,720 · id 6,574,483,942 · IssuesEvent · 2017-09-11 13:03:43 · closed
Repo: ansible/ansible-modules-core (https://api.github.com/repos/ansible/ansible-modules-core)
Title: Ansible apt ignore cache_valid_time value
Labels: affects_2.2, bug_report, waiting_on_maintainer

<!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
apt
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.2.0.0
config file = /home/vagrant/my/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
ansible.cfg:
[defaults]
hostfile = hosts
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
OS you are running Ansible from: Ubuntu 16.04
OS you are managing: Ubuntu 16.04
##### SUMMARY
<!--- Explain the problem briefly -->
After upgrading to Ansible 2.2 I always get changes in the apt module because it ignores the **cache_valid_time** value.
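For context, the documented intent of `cache_valid_time` is: skip `apt-get update` when the cache was refreshed within the given number of seconds. A minimal sketch of that behavior, not the apt module's actual code (the stamp path is the conventional one written by update-notifier and is an assumption here):

```python
# Minimal sketch of the cache_valid_time staleness check; illustrative only.
import os
import time

APT_STAMP = "/var/lib/apt/periodic/update-success-stamp"

def cache_is_stale(cache_valid_time, stamp=APT_STAMP):
    """Return True if `apt-get update` should run again."""
    try:
        mtime = os.path.getmtime(stamp)
    except OSError:
        # Stamp missing: the cache age is unknown, so refresh.
        return True
    return (time.time() - mtime) > cache_valid_time

# With cache_valid_time: 3600, a second playbook run within the hour
# should find the cache fresh and report no change.
```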
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
```
test.yml:
---
- hosts: localvm
become: yes
tasks:
- name: Only run "update_cache=yes" if the last one is more than 3600 seconds ago
apt:
update_cache: yes
cache_valid_time: 3600
vagrant@ans-contrl:~/my$ ansible-playbook test.yml -vvv
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
Update apt cache on first run, skip updating cache on second run.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
Always changes.
<!--- Paste verbatim command output between quotes below -->
```
vagrant@ans-contrl:~/my$ ansible-playbook test.yml -vvv
Using /home/vagrant/my/ansible.cfg as config file
PLAYBOOK: test.yml *************************************************************
1 plays in test.yml
PLAY [localvm] *****************************************************************
TASK [setup] *******************************************************************
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/system/setup.py
<192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant
<192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.60.4 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1478178800.59-26361197346329 `" && echo ansible-tmp-1478178800.59-26361197346329="` echo $HOME/.ansible/tmp/ansible-tmp-1478178800.59-26361197346329 `" ) && sleep 0'"'"''
<192.168.60.4> PUT /tmp/tmpGz1Eb9 TO /home/vagrant/.ansible/tmp/ansible-tmp-1478178800.59-26361197346329/setup.py
<192.168.60.4> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r '[192.168.60.4]'
<192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant
<192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.60.4 '/bin/sh -c '"'"'chmod u+x /home/vagrant/.ansible/tmp/ansible-tmp-1478178800.59-26361197346329/ /home/vagrant/.ansible/tmp/ansible-tmp-1478178800.59-26361197346329/setup.py && sleep 0'"'"''
<192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant
<192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r -tt 192.168.60.4 '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-bblyfpmawwxwihkyhdzgsrwimfkjlzuk; /usr/bin/python /home/vagrant/.ansible/tmp/ansible-tmp-1478178800.59-26361197346329/setup.py; rm -rf "/home/vagrant/.ansible/tmp/ansible-tmp-1478178800.59-26361197346329/" > /dev/null 2>&1'"'"'"'"'"'"'"'"' && sleep 0'"'"''
ok: [192.168.60.4]
TASK [Only run "update_cache=yes" if the last one is more than 3600 seconds ago] ***
task path: /home/vagrant/my/test.yml:6
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/packaging/os/apt.py
<192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant
<192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.60.4 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1478178801.29-209769775274469 `" && echo ansible-tmp-1478178801.29-209769775274469="` echo $HOME/.ansible/tmp/ansible-tmp-1478178801.29-209769775274469 `" ) && sleep 0'"'"''
<192.168.60.4> PUT /tmp/tmpb8HOiL TO /home/vagrant/.ansible/tmp/ansible-tmp-1478178801.29-209769775274469/apt.py
<192.168.60.4> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r '[192.168.60.4]'
<192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant
<192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.60.4 '/bin/sh -c '"'"'chmod u+x /home/vagrant/.ansible/tmp/ansible-tmp-1478178801.29-209769775274469/ /home/vagrant/.ansible/tmp/ansible-tmp-1478178801.29-209769775274469/apt.py && sleep 0'"'"''
<192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant
<192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r -tt 192.168.60.4 '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-soyskgemfitdsrhonujvdopjieqzexmq; /usr/bin/python /home/vagrant/.ansible/tmp/ansible-tmp-1478178801.29-209769775274469/apt.py; rm -rf "/home/vagrant/.ansible/tmp/ansible-tmp-1478178801.29-209769775274469/" > /dev/null 2>&1'"'"'"'"'"'"'"'"' && sleep 0'"'"''
changed: [192.168.60.4] => {
"cache_update_time": 1478170123,
"cache_updated": true,
"changed": true,
"invocation": {
"module_args": {
"allow_unauthenticated": false,
"autoremove": false,
"cache_valid_time": 3600,
"deb": null,
"default_release": null,
"dpkg_options": "force-confdef,force-confold",
"force": false,
"install_recommends": null,
"only_upgrade": false,
"package": null,
"purge": false,
"state": "present",
"update_cache": true,
"upgrade": null
},
"module_name": "apt"
}
}
PLAY RECAP *********************************************************************
192.168.60.4 : ok=2 changed=1 unreachable=0 failed=0
vagrant@ans-contrl:~/my$ ansible-playbook test.yml -vvv
Using /home/vagrant/my/ansible.cfg as config file
PLAYBOOK: test.yml *************************************************************
1 plays in test.yml
PLAY [localvm] *****************************************************************
TASK [setup] *******************************************************************
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/system/setup.py
<192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant
<192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.60.4 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1478178871.45-218992397586023 `" && echo ansible-tmp-1478178871.45-218992397586023="` echo $HOME/.ansible/tmp/ansible-tmp-1478178871.45-218992397586023 `" ) && sleep 0'"'"''
<192.168.60.4> PUT /tmp/tmpv9o0e3 TO /home/vagrant/.ansible/tmp/ansible-tmp-1478178871.45-218992397586023/setup.py
<192.168.60.4> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r '[192.168.60.4]'
<192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant
<192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.60.4 '/bin/sh -c '"'"'chmod u+x /home/vagrant/.ansible/tmp/ansible-tmp-1478178871.45-218992397586023/ /home/vagrant/.ansible/tmp/ansible-tmp-1478178871.45-218992397586023/setup.py && sleep 0'"'"''
<192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant
<192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r -tt 192.168.60.4 '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-lwuttqhzswvnqlvkfcbraivcbuceisuz; /usr/bin/python /home/vagrant/.ansible/tmp/ansible-tmp-1478178871.45-218992397586023/setup.py; rm -rf "/home/vagrant/.ansible/tmp/ansible-tmp-1478178871.45-218992397586023/" > /dev/null 2>&1'"'"'"'"'"'"'"'"' && sleep 0'"'"''
ok: [192.168.60.4]
TASK [Only run "update_cache=yes" if the last one is more than 3600 seconds ago] ***
task path: /home/vagrant/my/test.yml:6
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/packaging/os/apt.py
<192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant
<192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.60.4 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1478178872.37-148384000832646 `" && echo ansible-tmp-1478178872.37-148384000832646="` echo $HOME/.ansible/tmp/ansible-tmp-1478178872.37-148384000832646 `" ) && sleep 0'"'"''
<192.168.60.4> PUT /tmp/tmp3rCfzf TO /home/vagrant/.ansible/tmp/ansible-tmp-1478178872.37-148384000832646/apt.py
<192.168.60.4> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r '[192.168.60.4]'
<192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant
<192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.60.4 '/bin/sh -c '"'"'chmod u+x /home/vagrant/.ansible/tmp/ansible-tmp-1478178872.37-148384000832646/ /home/vagrant/.ansible/tmp/ansible-tmp-1478178872.37-148384000832646/apt.py && sleep 0'"'"''
<192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant
<192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r -tt 192.168.60.4 '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-xhjxsxornuzelhyvlsiksuindfcmjlpx; /usr/bin/python /home/vagrant/.ansible/tmp/ansible-tmp-1478178872.37-148384000832646/apt.py; rm -rf "/home/vagrant/.ansible/tmp/ansible-tmp-1478178872.37-148384000832646/" > /dev/null 2>&1'"'"'"'"'"'"'"'"' && sleep 0'"'"''
changed: [192.168.60.4] => {
"cache_update_time": 1478170123,
"cache_updated": true,
"changed": true,
"invocation": {
"module_args": {
"allow_unauthenticated": false,
"autoremove": false,
"cache_valid_time": 3600,
"deb": null,
"default_release": null,
"dpkg_options": "force-confdef,force-confold",
"force": false,
"install_recommends": null,
"only_upgrade": false,
"package": null,
"purge": false,
"state": "present",
"update_cache": true,
"upgrade": null
},
"module_name": "apt"
}
}
PLAY RECAP *********************************************************************
192.168.60.4 : ok=2 changed=1 unreachable=0 failed=0
```
It seems **cache_update_time** wasn't updated.

| True |
<192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant
<192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.60.4 '/bin/sh -c '"'"'chmod u+x /home/vagrant/.ansible/tmp/ansible-tmp-1478178801.29-209769775274469/ /home/vagrant/.ansible/tmp/ansible-tmp-1478178801.29-209769775274469/apt.py && sleep 0'"'"''
<192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant
<192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r -tt 192.168.60.4 '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-soyskgemfitdsrhonujvdopjieqzexmq; /usr/bin/python /home/vagrant/.ansible/tmp/ansible-tmp-1478178801.29-209769775274469/apt.py; rm -rf "/home/vagrant/.ansible/tmp/ansible-tmp-1478178801.29-209769775274469/" > /dev/null 2>&1'"'"'"'"'"'"'"'"' && sleep 0'"'"''
changed: [192.168.60.4] => {
"cache_update_time": 1478170123,
"cache_updated": true,
"changed": true,
"invocation": {
"module_args": {
"allow_unauthenticated": false,
"autoremove": false,
"cache_valid_time": 3600,
"deb": null,
"default_release": null,
"dpkg_options": "force-confdef,force-confold",
"force": false,
"install_recommends": null,
"only_upgrade": false,
"package": null,
"purge": false,
"state": "present",
"update_cache": true,
"upgrade": null
},
"module_name": "apt"
}
}
PLAY RECAP *********************************************************************
192.168.60.4 : ok=2 changed=1 unreachable=0 failed=0
vagrant@ans-contrl:~/my$ ansible-playbook test.yml -vvv
Using /home/vagrant/my/ansible.cfg as config file
PLAYBOOK: test.yml *************************************************************
1 plays in test.yml
PLAY [localvm] *****************************************************************
TASK [setup] *******************************************************************
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/system/setup.py
<192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant
<192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.60.4 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1478178871.45-218992397586023 `" && echo ansible-tmp-1478178871.45-218992397586023="` echo $HOME/.ansible/tmp/ansible-tmp-1478178871.45-218992397586023 `" ) && sleep 0'"'"''
<192.168.60.4> PUT /tmp/tmpv9o0e3 TO /home/vagrant/.ansible/tmp/ansible-tmp-1478178871.45-218992397586023/setup.py
<192.168.60.4> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r '[192.168.60.4]'
<192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant
<192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.60.4 '/bin/sh -c '"'"'chmod u+x /home/vagrant/.ansible/tmp/ansible-tmp-1478178871.45-218992397586023/ /home/vagrant/.ansible/tmp/ansible-tmp-1478178871.45-218992397586023/setup.py && sleep 0'"'"''
<192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant
<192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r -tt 192.168.60.4 '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-lwuttqhzswvnqlvkfcbraivcbuceisuz; /usr/bin/python /home/vagrant/.ansible/tmp/ansible-tmp-1478178871.45-218992397586023/setup.py; rm -rf "/home/vagrant/.ansible/tmp/ansible-tmp-1478178871.45-218992397586023/" > /dev/null 2>&1'"'"'"'"'"'"'"'"' && sleep 0'"'"''
ok: [192.168.60.4]
TASK [Only run "update_cache=yes" if the last one is more than 3600 seconds ago] ***
task path: /home/vagrant/my/test.yml:6
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/packaging/os/apt.py
<192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant
<192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.60.4 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1478178872.37-148384000832646 `" && echo ansible-tmp-1478178872.37-148384000832646="` echo $HOME/.ansible/tmp/ansible-tmp-1478178872.37-148384000832646 `" ) && sleep 0'"'"''
<192.168.60.4> PUT /tmp/tmp3rCfzf TO /home/vagrant/.ansible/tmp/ansible-tmp-1478178872.37-148384000832646/apt.py
<192.168.60.4> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r '[192.168.60.4]'
<192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant
<192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.60.4 '/bin/sh -c '"'"'chmod u+x /home/vagrant/.ansible/tmp/ansible-tmp-1478178872.37-148384000832646/ /home/vagrant/.ansible/tmp/ansible-tmp-1478178872.37-148384000832646/apt.py && sleep 0'"'"''
<192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant
<192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r -tt 192.168.60.4 '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-xhjxsxornuzelhyvlsiksuindfcmjlpx; /usr/bin/python /home/vagrant/.ansible/tmp/ansible-tmp-1478178872.37-148384000832646/apt.py; rm -rf "/home/vagrant/.ansible/tmp/ansible-tmp-1478178872.37-148384000832646/" > /dev/null 2>&1'"'"'"'"'"'"'"'"' && sleep 0'"'"''
changed: [192.168.60.4] => {
"cache_update_time": 1478170123,
"cache_updated": true,
"changed": true,
"invocation": {
"module_args": {
"allow_unauthenticated": false,
"autoremove": false,
"cache_valid_time": 3600,
"deb": null,
"default_release": null,
"dpkg_options": "force-confdef,force-confold",
"force": false,
"install_recommends": null,
"only_upgrade": false,
"package": null,
"purge": false,
"state": "present",
"update_cache": true,
"upgrade": null
},
"module_name": "apt"
}
}
PLAY RECAP *********************************************************************
192.168.60.4 : ok=2 changed=1 unreachable=0 failed=0
```
It seems **cache_update_time** wasn't updated. | main | 1
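The row above concerns Ansible's apt module ignoring `cache_valid_time`: with `update_cache: yes` and `cache_valid_time: 3600`, the second run within the hour should be skipped, yet both runs report `changed`. The intended skip logic amounts to comparing the age of APT's update stamp file against `cache_valid_time`; a minimal sketch of that check (the stamp path and exact comparison are assumptions, not the module's verbatim source):

```python
import os
import time

def apt_cache_is_stale(stamp_path, cache_valid_time):
    """Roughly the decision the apt module is expected to make: refresh the
    cache only when APT's update stamp (commonly
    /var/lib/apt/periodic/update-success-stamp) is older than
    cache_valid_time seconds. Sketch only, not the module's actual source."""
    if not os.path.exists(stamp_path):
        return True  # cache never updated -> treat as stale
    age = time.time() - os.path.getmtime(stamp_path)
    return age > cache_valid_time
```

If the stamp is fresher than `cache_valid_time`, the module should report `changed: false` instead of refreshing the cache again — which is exactly what the report says 2.2.0.0 fails to do.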
273,306 | 23,745,130,865 | IssuesEvent | 2022-08-31 15:21:02 | Kong/gateway-operator | https://api.github.com/repos/Kong/gateway-operator | closed | make run target not working when running on GKE | bug area/tests priority/low | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
The `make run` target seems not to work when the cluster is running on GKE
```console
$ make run
/Users/[email protected]/git/gateway-operator/bin/controller-gen rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases
/Users/[email protected]/git/gateway-operator/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."
go fmt ./...
go vet ./...
/Users/[email protected]/git/gateway-operator/bin/kustomize build config/crd | kubectl apply -f -
customresourcedefinition.apiextensions.k8s.io/controlplanes.gateway-operator.konghq.com created
customresourcedefinition.apiextensions.k8s.io/dataplanes.gateway-operator.konghq.com created
customresourcedefinition.apiextensions.k8s.io/gatewayconfigurations.gateway-operator.konghq.com created
kubectl kustomize https://github.com/kubernetes-sigs/gateway-api.git/config/crd?ref=main | kubectl apply -f -
customresourcedefinition.apiextensions.k8s.io/gatewayclasses.gateway.networking.k8s.io created
customresourcedefinition.apiextensions.k8s.io/gateways.gateway.networking.k8s.io created
customresourcedefinition.apiextensions.k8s.io/httproutes.gateway.networking.k8s.io created
CONTROLLER_DEVELOPMENT_MODE=true go run ./main.go --no-leader-election
INFO: development mode has been enabled
INFO: leader election has been disabled
1.65732348903128e+09 ERROR Failed to get API Group-Resources {"error": "no Auth Provider found for name \"gcp\""}
sigs.k8s.io/controller-runtime/pkg/cluster.New
/Users/[email protected]/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/cluster/cluster.go:160
sigs.k8s.io/controller-runtime/pkg/manager.New
/Users/[email protected]/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/manager/manager.go:322
github.com/kong/gateway-operator/internal/manager.Run
/Users/[email protected]/git/gateway-operator/internal/manager/run.go:84
main.main
/Users/[email protected]/git/gateway-operator/main.go:71
runtime.main
/opt/homebrew/Cellar/go/1.18.3/libexec/src/runtime/proc.go:250
unable to start manager: no Auth Provider found for name "gcp"
exit status 1
make: *** [run] Error 1
```
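The `unable to start manager: no Auth Provider found for name "gcp"` failure is characteristic of client-go's auth-provider registry: providers register themselves as an import side effect, so a binary that never imports the GCP plugin (in older client-go versions, typically a blank import of `k8s.io/client-go/plugin/pkg/client/auth/gcp` or the umbrella `.../auth` package) has no `"gcp"` entry to look up. A sketch of that registry pattern in Python (illustrative only — not client-go's actual code):

```python
_AUTH_PROVIDERS = {}  # name -> factory, filled in by plugin modules at import time

def register_auth_provider(name, factory):
    """Called by each auth plugin when its module is imported."""
    _AUTH_PROVIDERS[name] = factory

def get_auth_provider(name):
    """Look up a provider by name; fails like client-go does when the
    plugin was never imported."""
    try:
        return _AUTH_PROVIDERS[name]
    except KeyError:
        raise RuntimeError('no Auth Provider found for name "%s"' % name)

# A plugin module performs registration as an import side effect
# (client-go does the Go equivalent in an init() function):
register_auth_provider("oidc", lambda: "oidc-provider")
```

In Go the registration happens in the plugin package's `init()`, which is why the usual fix is just adding the blank import to `main.go`.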
### Expected Behavior
`make run` should start the operator locally against the configured GKE cluster.
### Steps To Reproduce
```markdown
1. Set up a Google Cloud (GKE) cluster
2. Run the `make run` command
```
### Kong Ingress Controller version
_No response_
### Kubernetes version
_No response_
### Anything else?
_No response_ | 1.0 | non_main | 0
3,911 | 17,466,074,614 | IssuesEvent | 2021-08-06 17:01:50 | synthesized-io/fairlens | https://api.github.com/repos/synthesized-io/fairlens | closed | Updated README and documentation | category:repository-maintainance | What things do we have left to do here?
Updated:
- [x] Write short 2-3 tutorials based on either COMPAS, German Credit, Adult, or LSAC datasets.
- [x] Include a fairness scorer use case in README.
- [x] Polishing overview and quickstart.
- [x] Include contribution guides in docs | True | main | 1
49,228 | 13,445,714,683 | IssuesEvent | 2020-09-08 11:52:12 | chaitanya00/aem-wknd | https://api.github.com/repos/chaitanya00/aem-wknd | opened | CVE-2018-16490 (High) detected in mpath-0.1.1.tgz | security vulnerability | ## CVE-2018-16490 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mpath-0.1.1.tgz</b></p></summary>
<p>{G,S}et object values using MongoDB path notation</p>
<p>Library home page: <a href="https://registry.npmjs.org/mpath/-/mpath-0.1.1.tgz">https://registry.npmjs.org/mpath/-/mpath-0.1.1.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/aem-wknd/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/aem-wknd/node_modules/mpath/package.json</p>
<p>
Dependency Hierarchy:
- mongoose-4.2.4.tgz (Root Library)
- :x: **mpath-0.1.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/chaitanya00/aem-wknd/commit/3f4c2902a45eb04bc7915c408df14545aa90511c">3f4c2902a45eb04bc7915c408df14545aa90511c</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A prototype pollution vulnerability was found in module mpath <0.5.1 that allows an attacker to inject arbitrary properties onto Object.prototype.
<p>Publish Date: 2019-02-01
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-16490>CVE-2018-16490</a></p>
</p>
</details>
<p></p>
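On the fix itself: mpath 0.5.1 addressed this by letting callers block dangerous path components, so a path such as `__proto__.polluted` can no longer write through to `Object.prototype`. mpath is a JavaScript library, but the path-walk-plus-blocklist idea is easy to model; a Python sketch follows (names and behavior are illustrative, not mpath's API):

```python
BLOCKED_KEYS = {"__proto__", "constructor", "prototype"}  # JS-specific danger keys

def set_path(obj, path, value):
    """Set a nested value by dotted path, refusing components that would let
    an attacker climb out of the target object -- the class of bug behind
    CVE-2018-16490. Illustrative sketch, not mpath's actual code."""
    parts = path.split(".")
    for key in parts[:-1]:
        if key in BLOCKED_KEYS:
            raise ValueError("refusing unsafe path component: " + key)
        obj = obj.setdefault(key, {})
    last = parts[-1]
    if last in BLOCKED_KEYS:
        raise ValueError("refusing unsafe path component: " + last)
    obj[last] = value
```

A JavaScript guard must reject `constructor` and `prototype` as well as `__proto__`, since either can be chained to reach `Object.prototype`.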
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://hackerone.com/reports/390860">https://hackerone.com/reports/390860</a></p>
<p>Release Date: 2019-02-01</p>
<p>Fix Resolution: 0.5.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | non_main | 0
313,959 | 26,965,363,115 | IssuesEvent | 2023-02-08 21:50:13 | bcgov/zeva | https://api.github.com/repos/bcgov/zeva | closed | ZEVA - BCeID user unable to access their report once a reassessment is issued | Bug High Tested :) | **Describe the Bug**
For a BCeID user, when they have been issued a reassessment by government they can no longer access their associated supplementary report anymore - the link to the page appears to be being changed.
**Expected Behaviour**
A BCEID ZEVA user should always be able to access a supplementary report that they have created.
**Actual Behaviour**
The BCeID ZEVA user cannot access their supplementary report after a reassessment has been issued.
**Implications**
The BCeID user is denied access to important information in the app that they should have access to.
**Steps To Reproduce**
Steps to reproduce the behaviour:
User/Role: BCeID ZEVA User
1. After government has issued a reassessment to a supplier
2. Go to Compliance Reporting
3. Click on Reassessment and then click on the tab to view the supplementary report
4. See that you are unable to view the supplementary report.
**Acceptance Criteria**
Given that I am a BCeID user, when I am issued a reassessment, then I should still be able to view the supplementary report associated with the reassessment.
**Development Checklist**
(1) Maybe a frontend issue; see SupplementaryContainer.js first
| 1.0 | ZEVA - BCeID user unable to access their report once a reassessment is issued - **Describe the Bug**
For a BCeID user, when they have been issued a reassessment by government they can no longer access their associated supplementary report anymore - the link to the page appears to be being changed.
**Expected Behaviour**
A BCEID ZEVA user should always be able to access a supplementary report that they have created.
**Actual Behaviour**
The BCeID ZEVA user cannot access their supplementary report after a reassessment has been issued.
**Implications**
The BCeID user is denied access to important information in the app that they should have access to.
**Steps To Reproduce**
Steps to reproduce the behaviour:
User/Role: BCeID ZEVA User
1. After government has issued a reassessment to a supplier
2. Go to Compliance Reporting
3. Click on Reassessment and then click on the tab to view the supplementary report
4. See that you are unable to view the supplementary report.
**Acceptance Criteria**
Given that I am a BCeID user, when I am issued a reassessment, then I should still be able to view the supplementary report associated with the reassessment.
**Development Checklist**
(1) Maybe a frontend issue; see SupplementaryContainer.js first
| non_main | zeva bceid user unable to access their report once a reassessment is issued describe the bug for a bceid user when they have been issued a reassessment by government they can no linger access their associated supplementary report anymore the link to the page appears to be being changed expected behaviour a bceid zeva user should always be able to access a supplementary report that they have created actual behaviour the bceid zeva user cannot access their supplementary report after a reassessment has been issued implications the bceid user is denied access to important information in the app that they should have access to steps to reproduce steps to reproduce the behaviour user role bceid zeva user after government has issued a reassessment to a supplier go to compliance reporting click on reassessment and then click on the tab to view the supplementary report see that you are unable to view the supplementary report acceptance criteria given that i am a bceid user when i am issued a reassessment then i should still be able to view the supplementary report associated with the reassessment development checklist maybe a fronted issue see supplementarycontainer js first | 0 |
542 | 3,956,186,647 | IssuesEvent | 2016-04-30 01:45:43 | citp/coniks-ref-implementation | https://api.github.com/repos/citp/coniks-ref-implementation | closed | Refactor server | maintainability server | Modularize the server a bit more and update any remaining terminology from older versions of the paper. | True | Refactor server - Modularize the server a bit more and update any remaining terminology from older versions of the paper. | main | refactor server modularize the server a bit more and update any remaining terminology from older versions of the paper | 1 |
5,213 | 26,464,341,680 | IssuesEvent | 2023-01-16 21:18:26 | bazelbuild/intellij | https://api.github.com/repos/bazelbuild/intellij | closed | Flag --incompatible_disable_starlark_host_transitions will break IntelliJ Plugin Google in Bazel 7.0 | type: bug product: IntelliJ topic: bazel awaiting-maintainer | Incompatible flag `--incompatible_disable_starlark_host_transitions` will be enabled by default in the next major release (Bazel 7.0), thus breaking IntelliJ Plugin Google. Please migrate to fix this and unblock the flip of this flag.
The flag is documented here: [bazelbuild/bazel#17032](https://github.com/bazelbuild/bazel/issues/17032).
Please check the following CI builds for build and test results:
- [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154a-ec33-4c2b-a275-f8aa54ada99f)
- [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154a-ec30-4823-b959-41c7eec9a0e9)
- [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154a-ec3c-49d1-b4af-3c286c89b3a4)
- [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154a-ec39-4a25-afea-07841244a923)
- [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154a-ec3f-4c95-8328-9b5d8ef7ddf7)
Never heard of incompatible flags before? We have [documentation](https://docs.bazel.build/versions/master/backward-compatibility.html) that explains everything.
If you have any questions, please file an issue in https://github.com/bazelbuild/continuous-integration. | True | Flag --incompatible_disable_starlark_host_transitions will break IntelliJ Plugin Google in Bazel 7.0 - Incompatible flag `--incompatible_disable_starlark_host_transitions` will be enabled by default in the next major release (Bazel 7.0), thus breaking IntelliJ Plugin Google. Please migrate to fix this and unblock the flip of this flag.
The flag is documented here: [bazelbuild/bazel#17032](https://github.com/bazelbuild/bazel/issues/17032).
Please check the following CI builds for build and test results:
- [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154a-ec33-4c2b-a275-f8aa54ada99f)
- [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154a-ec30-4823-b959-41c7eec9a0e9)
- [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154a-ec3c-49d1-b4af-3c286c89b3a4)
- [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154a-ec39-4a25-afea-07841244a923)
- [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154a-ec3f-4c95-8328-9b5d8ef7ddf7)
Never heard of incompatible flags before? We have [documentation](https://docs.bazel.build/versions/master/backward-compatibility.html) that explains everything.
If you have any questions, please file an issue in https://github.com/bazelbuild/continuous-integration. | main | flag incompatible disable starlark host transitions will break intellij plugin google in bazel incompatible flag incompatible disable starlark host transitions will be enabled by default in the next major release bazel thus breaking intellij plugin google please migrate to fix this and unblock the flip of this flag the flag is documented here please check the following ci builds for build and test results never heard of incompatible flags before we have that explains everything if you have any questions please file an issue in | 1 |
733,425 | 25,305,604,245 | IssuesEvent | 2022-11-17 13:58:33 | googleapis/java-spanner-jdbc | https://api.github.com/repos/googleapis/java-spanner-jdbc | closed | JVM crash on version 2.5.3 and 2.5.4 | priority: p2 api: spanner | Hi guys,
I raised https://github.com/googleapis/java-spanner-jdbc/issues/657 and you very promptly got onto the issue and upgraded dependencies to resolve the vulnerability. Now there's a very weird thing happening.
Using google-cloud-spanner-jdbc 2.5.3 or 2.5.4 causes a big old JVM crash. 2.5.2 works fine.
#### Environment details
- Alpine 3.14.3
- Corretto JDK OpenJDK Runtime Environment Corretto-11.0.13.8.1 (build 11.0.13+8-LTS)
- Running containerised in GCP Cloud Run
- Error in google-cloud-spanner-jdbc 2.5.3 + 2.5.4
#### Stacktrace
- (Condensed because it's huge. Extracted from err file dump)
```
"Internal exceptions (20 events):"
"Event: 9.471 Thread 0x00003e3f01312800 Exception <a 'java/security/PrivilegedActionException'{0x00000007eb834d70}> (0x00000007eb834d70) thrown at [src/hotspot/share/prims/jvm.cpp, line 1304]"
"Event: 9.471 Thread 0x00003e3f01312800 Exception <a 'sun/nio/fs/UnixException'{0x00000007eb835ee0}> (0x00000007eb835ee0) thrown at [src/hotspot/share/prims/jni.cpp, line 616]"
"Event: 9.471 Thread 0x00003e3f01312800 Exception <a 'java/security/PrivilegedActionException'{0x00000007eb836ce8}> (0x00000007eb836ce8) thrown at [src/hotspot/share/prims/jvm.cpp, line 1304]"
"Event: 9.476 Thread 0x00003e3f01312800 Exception <a 'java/lang/NoSuchMethodError'{0x00000007eb8b5508}: 'double java.lang.invoke.DirectMethodHandle$Holder.invokeStatic(java.lang.Object, java.lang.Object)'> (0x00000007eb8b5508) thrown at [src/hotspot/share/interpreter/linkResolver.cpp, line 772]"
"Event: 9.478 Thread 0x00003e3f01312800 Exception <a 'java/lang/NoSuchMethodError'{0x00000007eb8bdfe0}: 'void java.lang.invoke.DirectMethodHandle$Holder.invokeStatic(java.lang.Object, java.lang.Object, double)'> (0x00000007eb8bdfe0) thrown at [src/hotspot/share/interpreter/linkResolver.cpp, line 772]"
"Event: 9.479 Thread 0x00003e3f01312800 Exception <a 'java/lang/NoSuchMethodError'{0x00000007eb8c9058}: 'long java.lang.invoke.DirectMethodHandle$Holder.invokeVirtual(java.lang.Object, java.lang.Object)'> (0x00000007eb8c9058) thrown at [src/hotspot/share/interpreter/linkResolver.cpp, line 772]"
"Event: 9.492 Thread 0x00003e3f01312800 Exception <a 'java/lang/NoSuchMethodError'{0x00000007eb6bcf88}: 'java.lang.Object java.lang.invoke.DirectMethodHandle$Holder.invokeSpecial(java.lang.Object, java.lang.Object, java.lang.Object, java.lang.Object, java.lang.Object, java.lang.Object, java.lang.Object, java.lang.Object, java.lang.Object)'> (0x00000007eb6bcf88) thrown at [src/hotspot/share/interpreter/linkResolver.cpp, line 772]"
"Event: 9.492 Thread 0x00003e3f01312800 Exception <a 'java/lang/NoSuchMethodError'{0x00000007eb6c1428}: 'java.lang.Object java.lang.invoke.Invokers$Holder.linkToTargetMethod(java.lang.Object, java.lang.Object, java.lang.Object, java.lang.Object, java.lang.Object, java.lang.Object, java.lang.Object, java.lang.Object)'> (0x00000007eb6c1428) thrown at [src/hotspot/share/interpreter/linkResolver.cpp, line 772]"
"Event: 9.510 Thread 0x00003e3f01312800 Exception <a 'java/lang/NoSuchMethodError'{0x00000007eb46abb0}: 'double java.lang.invoke.DirectMethodHandle$Holder.invokeInterface(java.lang.Object, java.lang.Object)'> (0x00000007eb46abb0) thrown at [src/hotspot/share/interpreter/linkResolver.cpp, line 772]"
"Event: 9.510 Thread 0x00003e3f01312800 Exception <a 'java/lang/NoSuchMethodError'{0x00000007eb4709b8}: 'double java.lang.invoke.DirectMethodHandle$Holder.invokeSpecial(java.lang.Object, java.lang.Object, java.lang.Object)'> (0x00000007eb4709b8) thrown at [src/hotspot/share/interpreter/linkResolver.cpp, line 772]"
"Event: 9.639 Thread 0x00003e3f01312800 Exception <a 'java/io/FileNotFoundException'{0x00000007ea3c07a0}> (0x00000007ea3c07a0) thrown at [src/hotspot/share/prims/jni.cpp, line 616]"
"Event: 9.701 Thread 0x00003e3f01312800 Implicit null exception at 0x00003e3ef0329edd to 0x00003e3ef032a130"
"Event: 10.237 Thread 0x00003e3edd81d800 Exception <a 'java/lang/NoSuchMethodError'{0x00000007ff321008}: 'long java.lang.invoke.DirectMethodHandle$Holder.invokeStatic(java.lang.Object, long, long)'> (0x00000007ff321008) thrown at [src/hotspot/share/interpreter/linkResolver.cpp, line 772]"
"Event: 10.866 Thread 0x00003e3f01312800 Exception <a 'java/lang/ClassNotFoundException'{0x00000007f74d4550}: org/springframework/boot/loader/http/Handler> (0x00000007f74d4550) thrown at [src/hotspot/share/classfile/systemDictionary.cpp, line 231]"
"Event: 11.209 Thread 0x00003e3f01312800 Implicit null exception at 0x00003e3ef02e0cec to 0x00003e3ef02e0d98"
"Event: 11.373 Thread 0x00003e3f01312800 Exception <a 'java/lang/ClassNotFoundException'{0x00000007f3046220}: org/springframework/boot/loader/https/Handler> (0x00000007f3046220) thrown at [src/hotspot/share/classfile/systemDictionary.cpp, line 231]"
"Event: 11.602 Thread 0x00003e3f01312800 Exception <a 'java/io/FileNotFoundException'{0x00000007f0cc2fa8}> (0x00000007f0cc2fa8) thrown at [src/hotspot/share/prims/jni.cpp, line 616]"
"Event: 11.603 Thread 0x00003e3f01312800 Exception <a 'java/lang/ClassNotFoundException'{0x00000007f0cd03c8}: sun/misc/SharedSecrets> (0x00000007f0cd03c8) thrown at [src/hotspot/share/classfile/systemDictionary.cpp, line 231]"
"Event: 11.742 Thread 0x00003e3f01312800 Exception <a 'java/lang/UnsatisfiedLinkError'{0x00000007ef9569b0}: 'int io.grpc.netty.shaded.io.netty.channel.epoll.Native.offsetofEpollData()'> (0x00000007ef9569b0) thrown at [src/hotspot/share/prims/nativeLookup.cpp, line 528]"
"Event: 11.746 Thread 0x00003e3f01312800 Exception <a 'java/lang/reflect/InvocationTargetException'{0x00000007ef9c55a8}> (0x00000007ef9c55a8) thrown at [src/hotspot/share/runtime/reflection.cpp, line 1245]"
[...]
The crash happened outside the Java Virtual Machine in native code.
[...]
Uncaught signal: 6, pid=1, tid=2, fault_addr=0.
[...]
Container terminated on signal 6.
| 1.0 | JVM crash on version 2.5.3 and 2.5.4 - Hi guys,
I raised https://github.com/googleapis/java-spanner-jdbc/issues/657 and you very promptly got onto the issue and upgraded dependencies to resolve the vulnerability. Now there's a very weird thing happening.
Using google-cloud-spanner-jdbc 2.5.3 or 2.5.4 causes a big old JVM crash. 2.5.2 works fine.
#### Environment details
- Alpine 3.14.3
- Corretto JDK OpenJDK Runtime Environment Corretto-11.0.13.8.1 (build 11.0.13+8-LTS)
- Running containerised in GCP Cloud Run
- Error in google-cloud-spanner-jdbc 2.5.3 + 2.5.4
#### Stacktrace
- (Condensed because it's huge. Extracted from err file dump)
```
"Internal exceptions (20 events):"
"Event: 9.471 Thread 0x00003e3f01312800 Exception <a 'java/security/PrivilegedActionException'{0x00000007eb834d70}> (0x00000007eb834d70) thrown at [src/hotspot/share/prims/jvm.cpp, line 1304]"
"Event: 9.471 Thread 0x00003e3f01312800 Exception <a 'sun/nio/fs/UnixException'{0x00000007eb835ee0}> (0x00000007eb835ee0) thrown at [src/hotspot/share/prims/jni.cpp, line 616]"
"Event: 9.471 Thread 0x00003e3f01312800 Exception <a 'java/security/PrivilegedActionException'{0x00000007eb836ce8}> (0x00000007eb836ce8) thrown at [src/hotspot/share/prims/jvm.cpp, line 1304]"
"Event: 9.476 Thread 0x00003e3f01312800 Exception <a 'java/lang/NoSuchMethodError'{0x00000007eb8b5508}: 'double java.lang.invoke.DirectMethodHandle$Holder.invokeStatic(java.lang.Object, java.lang.Object)'> (0x00000007eb8b5508) thrown at [src/hotspot/share/interpreter/linkResolver.cpp, line 772]"
"Event: 9.478 Thread 0x00003e3f01312800 Exception <a 'java/lang/NoSuchMethodError'{0x00000007eb8bdfe0}: 'void java.lang.invoke.DirectMethodHandle$Holder.invokeStatic(java.lang.Object, java.lang.Object, double)'> (0x00000007eb8bdfe0) thrown at [src/hotspot/share/interpreter/linkResolver.cpp, line 772]"
"Event: 9.479 Thread 0x00003e3f01312800 Exception <a 'java/lang/NoSuchMethodError'{0x00000007eb8c9058}: 'long java.lang.invoke.DirectMethodHandle$Holder.invokeVirtual(java.lang.Object, java.lang.Object)'> (0x00000007eb8c9058) thrown at [src/hotspot/share/interpreter/linkResolver.cpp, line 772]"
"Event: 9.492 Thread 0x00003e3f01312800 Exception <a 'java/lang/NoSuchMethodError'{0x00000007eb6bcf88}: 'java.lang.Object java.lang.invoke.DirectMethodHandle$Holder.invokeSpecial(java.lang.Object, java.lang.Object, java.lang.Object, java.lang.Object, java.lang.Object, java.lang.Object, java.lang.Object, java.lang.Object, java.lang.Object)'> (0x00000007eb6bcf88) thrown at [src/hotspot/share/interpreter/linkResolver.cpp, line 772]"
"Event: 9.492 Thread 0x00003e3f01312800 Exception <a 'java/lang/NoSuchMethodError'{0x00000007eb6c1428}: 'java.lang.Object java.lang.invoke.Invokers$Holder.linkToTargetMethod(java.lang.Object, java.lang.Object, java.lang.Object, java.lang.Object, java.lang.Object, java.lang.Object, java.lang.Object, java.lang.Object)'> (0x00000007eb6c1428) thrown at [src/hotspot/share/interpreter/linkResolver.cpp, line 772]"
"Event: 9.510 Thread 0x00003e3f01312800 Exception <a 'java/lang/NoSuchMethodError'{0x00000007eb46abb0}: 'double java.lang.invoke.DirectMethodHandle$Holder.invokeInterface(java.lang.Object, java.lang.Object)'> (0x00000007eb46abb0) thrown at [src/hotspot/share/interpreter/linkResolver.cpp, line 772]"
"Event: 9.510 Thread 0x00003e3f01312800 Exception <a 'java/lang/NoSuchMethodError'{0x00000007eb4709b8}: 'double java.lang.invoke.DirectMethodHandle$Holder.invokeSpecial(java.lang.Object, java.lang.Object, java.lang.Object)'> (0x00000007eb4709b8) thrown at [src/hotspot/share/interpreter/linkResolver.cpp, line 772]"
"Event: 9.639 Thread 0x00003e3f01312800 Exception <a 'java/io/FileNotFoundException'{0x00000007ea3c07a0}> (0x00000007ea3c07a0) thrown at [src/hotspot/share/prims/jni.cpp, line 616]"
"Event: 9.701 Thread 0x00003e3f01312800 Implicit null exception at 0x00003e3ef0329edd to 0x00003e3ef032a130"
"Event: 10.237 Thread 0x00003e3edd81d800 Exception <a 'java/lang/NoSuchMethodError'{0x00000007ff321008}: 'long java.lang.invoke.DirectMethodHandle$Holder.invokeStatic(java.lang.Object, long, long)'> (0x00000007ff321008) thrown at [src/hotspot/share/interpreter/linkResolver.cpp, line 772]"
"Event: 10.866 Thread 0x00003e3f01312800 Exception <a 'java/lang/ClassNotFoundException'{0x00000007f74d4550}: org/springframework/boot/loader/http/Handler> (0x00000007f74d4550) thrown at [src/hotspot/share/classfile/systemDictionary.cpp, line 231]"
"Event: 11.209 Thread 0x00003e3f01312800 Implicit null exception at 0x00003e3ef02e0cec to 0x00003e3ef02e0d98"
"Event: 11.373 Thread 0x00003e3f01312800 Exception <a 'java/lang/ClassNotFoundException'{0x00000007f3046220}: org/springframework/boot/loader/https/Handler> (0x00000007f3046220) thrown at [src/hotspot/share/classfile/systemDictionary.cpp, line 231]"
"Event: 11.602 Thread 0x00003e3f01312800 Exception <a 'java/io/FileNotFoundException'{0x00000007f0cc2fa8}> (0x00000007f0cc2fa8) thrown at [src/hotspot/share/prims/jni.cpp, line 616]"
"Event: 11.603 Thread 0x00003e3f01312800 Exception <a 'java/lang/ClassNotFoundException'{0x00000007f0cd03c8}: sun/misc/SharedSecrets> (0x00000007f0cd03c8) thrown at [src/hotspot/share/classfile/systemDictionary.cpp, line 231]"
"Event: 11.742 Thread 0x00003e3f01312800 Exception <a 'java/lang/UnsatisfiedLinkError'{0x00000007ef9569b0}: 'int io.grpc.netty.shaded.io.netty.channel.epoll.Native.offsetofEpollData()'> (0x00000007ef9569b0) thrown at [src/hotspot/share/prims/nativeLookup.cpp, line 528]"
"Event: 11.746 Thread 0x00003e3f01312800 Exception <a 'java/lang/reflect/InvocationTargetException'{0x00000007ef9c55a8}> (0x00000007ef9c55a8) thrown at [src/hotspot/share/runtime/reflection.cpp, line 1245]"
[...]
The crash happened outside the Java Virtual Machine in native code.
[...]
Uncaught signal: 6, pid=1, tid=2, fault_addr=0.
[...]
Container terminated on signal 6.
| non_main | jvm crash on version and hi guys i raised and you very promptly got onto the issue and upgraded dependencies to resolve the vulnerability now there s a very weird thing happening using google cloud spanner jdbc or causes a big old jvm crash works fine environment details alpine corretto jdk openjdk runtime environment corretto build lts running containerised in gcp cloud run error in google cloud spanner jdbc stacktrace condensed because it s huge extracted from err file dump internal exceptions events event thread exception thrown at event thread exception thrown at event thread exception thrown at event thread exception thrown at event thread exception thrown at event thread exception thrown at event thread exception thrown at event thread exception thrown at event thread exception thrown at event thread exception thrown at event thread exception thrown at event thread implicit null exception at to event thread exception thrown at event thread exception thrown at event thread implicit null exception at to event thread exception thrown at event thread exception thrown at event thread exception thrown at event thread exception thrown at event thread exception thrown at the crash happened outside the java virtual machine in native code uncaught signal pid tid fault addr container terminated on signal | 0 |
347,060 | 10,424,163,939 | IssuesEvent | 2019-09-16 13:06:35 | AY1920S1-CS2103-T16-3/main | https://api.github.com/repos/AY1920S1-CS2103-T16-3/main | opened | Morph address book into base task manager | priority.High type.Task | * [ ] Rename classes, documentation, etc.
* [ ] Edit methods to fit a task manager | 1.0 | Morph address book into base task manager - * [ ] Rename classes, documentation, etc.
* [ ] Edit methods to fit a task manager | non_main | morph address book into base task manager rename classes documentation etc edit methods to fit a task manager | 0 |
5,683 | 29,924,449,412 | IssuesEvent | 2023-06-22 03:22:52 | spicetify/spicetify-themes | https://api.github.com/repos/spicetify/spicetify-themes | closed | [Flow] Theme overlapping and entirely incorrect | ☠️ unmaintained | **Describe the bug**
Flow theme does not fit the full screen and all buttons/screens are overlapping/incorrect
**To Reproduce**
Open Spotify with Flow theme
**Expected behavior**
Expected Spotify to be fullscreen without overlapping (Look like screenshots in read me)

- OS: Windows 10
- Spotify version 1.1.91.824.g07f1e963
- Spicetify version 2.12.0
- Flow
| True | [Flow] Theme overlapping and entirely incorrect - **Describe the bug**
Flow theme does not fit the full screen and all buttons/screens are overlapping/incorrect
**To Reproduce**
Open Spotify with Flow theme
**Expected behavior**
Expected Spotify to be fullscreen without overlapping (Look like screenshots in read me)

- OS: Windows 10
- Spotify version 1.1.91.824.g07f1e963
- Spicetify version 2.12.0
- Flow
| main | theme overlapping and entirely incorrect describe the bug flow theme does not fit the full screen and all buttons screens are overlapping incorrect to reproduce open spotify with flow theme expected behavior expected spotify to be fullscreen without overlapping look like screenshots in read me os windows spotify version spicetify version flow | 1 |
1,543 | 6,572,237,030 | IssuesEvent | 2017-09-11 00:26:27 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | Gitlab python binding | affects_2.3 feature_idea waiting_on_maintainer | ##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
gitlab_user
gitlab_project
gitlab_group
##### ANSIBLE VERSION
Latest
##### SUMMARY
The gitlab_x modules depend on the pyapi-gitlab library. pyapi-gitlab is not actively being maintained (the current maintainer is looking for new maintainers), and there are lots and lots of missing features. I believe it would make sense to move to https://github.com/gpocentek/python-gitlab instead, or maybe even implement the parts of the api that are used natively in the gitlab_x modules.
| True | Gitlab python binding - ##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
gitlab_user
gitlab_project
gitlab_group
##### ANSIBLE VERSION
Latest
##### SUMMARY
The gitlab_x modules depend on the pyapi-gitlab library. pyapi-gitlab is not actively being maintained (the current maintainer is looking for new maintainers), and there are lots and lots of missing features. I believe it would make sense to move to https://github.com/gpocentek/python-gitlab instead, or maybe even implement the parts of the api that are used natively in the gitlab_x modules.
| main | gitlab python binding issue type feature idea component name gitlab user gitlab project gitlab group ansible version latest summary the gitlab x modules depend on the pyapi gitlab library pyapi gitlab is not actively being maintained the current maintainer is looking for new maintainers and there are lots and lots of missing features i believe it would make sense to move to instead or maybe even implement the parts of the api that are used natively in the gitlab x modules | 1 |
51,587 | 7,717,527,209 | IssuesEvent | 2018-05-23 13:58:23 | telerik/kendo-ui-core | https://api.github.com/repos/telerik/kendo-ui-core | opened | Add resources for filtering and sequential loading of items | C: TreeView Documentation Kendo2 | 1. Filtering with loadOnDemand true
2. Sequential expanding of items
3. Asynchronous loading of a dataItem | 1.0 | Add resources for filtering and sequential loading of items - 1. Filtering with loadOnDemand true
2. Sequential expanding of items
3. Asynchronous loading of a dataItem | non_main | add resources for filtering and sequential loading of items filtering with loadondemand true sequential expanding of items asynchronous loading of a dataitem | 0 |
415,922 | 12,137,060,520 | IssuesEvent | 2020-04-23 15:12:36 | olros/picturerama | https://api.github.com/repos/olros/picturerama | closed | Make the stage keep its size from the previous scene | priority | In GitLab by @martsha on Apr 7, 2020, 19:53
null | 1.0 | Make the stage keep its size from the previous scene - In GitLab by @martsha on Apr 7, 2020, 19:53
null | non_main | make the stage keep its size from the previous scene in gitlab by martsha on apr null | 0 |
725 | 4,318,960,452 | IssuesEvent | 2016-07-24 11:06:54 | gogits/gogs | https://api.github.com/repos/gogits/gogs | closed | 500 error when creating a release with an invalid tag name | kind/bug status/assigned to maintainer status/needs feedback | - Gogs version: 0.9.13.0318
- Git version: 1.8.3.1
- Operating system: CentOS 7
- Database: MySQL (MariaDB)
- Can you reproduce the bug at http://try.gogs.io:
- [x] Yes (provide example URL): https://try.gogs.io/beta/test-repo/releases/new
- [ ] No
- [ ] Not relevant
- Log:
```
2016/05/09 12:02:14 [...ters/repo/release.go:195 NewReleasePost()] [E] CreateRelease: exit status 128 - fatal: '1.0 alpha' is not a valid tag name.
```
## Description
When creating a release and input an invalid tag name (e.g., `1.0 alpha`), a 500 server error will be shown. There should be a guide to what a valid tag name is and also a gentle error info after failing.
| True | 500 error when creating a release with an invalid tag name - - Gogs version: 0.9.13.0318
- Git version: 1.8.3.1
- Operating system: CentOS 7
- Database: MySQL (MariaDB)
- Can you reproduce the bug at http://try.gogs.io:
- [x] Yes (provide example URL): https://try.gogs.io/beta/test-repo/releases/new
- [ ] No
- [ ] Not relevant
- Log:
```
2016/05/09 12:02:14 [...ters/repo/release.go:195 NewReleasePost()] [E] CreateRelease: exit status 128 - fatal: '1.0 alpha' is not a valid tag name.
```
## Description
When creating a release and input an invalid tag name (e.g., `1.0 alpha`), a 500 server error will be shown. There should be a guide to what a valid tag name is and also a gentle error info after failing.
| main | error when creating a release with an invalid tag name gogs version git version operating system centos database mysql mariadb can you reproduce the bug at yes provide example url no not relevant log createrelease exit status fatal alpha is not a valid tag name description when creating a release and input an invalid tag name e g alpha a server error will be shown there should be a guide to what a valid tag name is and also a gentle error info after failing | 1 |
4,335 | 21,786,655,184 | IssuesEvent | 2022-05-14 08:29:54 | Numble-challenge-Team/client | https://api.github.com/repos/Numble-challenge-Team/client | closed | Apply eslint rules | maintain eslint | ### ISSUE
- Type: chore
- Page: -
### Changes
- Apply eslint rules to the Icon, Layout, and Navigation components
- Apply eslint rules to the pages index file
- Apply eslint rules to the my-video app page | True | Apply eslint rules - ### ISSUE
- Type: chore
- Page: -
### Changes
- Apply eslint rules to the Icon, Layout, and Navigation components
- Apply eslint rules to the pages index file
- Apply eslint rules to the my-video app page | main | apply eslint rules issue type chore page changes apply eslint rules to the icon layout and navigation components apply eslint rules to the pages index file apply eslint rules to the my video app page | 1
31,661 | 5,967,920,423 | IssuesEvent | 2017-05-30 16:58:40 | 10up/wp_mock | https://api.github.com/repos/10up/wp_mock | closed | Which branch should be used? | bug Documentation | The repository defaults to showing `dev` branch and it looks like that is the branch with active development. `master` hasn't been updated in a few years. Which branch do you recommend we install? | 1.0 | Which branch should be used? - The repository defaults to showing `dev` branch and it looks like that is the branch with active development. `master` hasn't been updated in a few years. Which branch do you recommend we install? | non_main | which branch should be used the repository defaults to showing dev branch and it looks like that is the branch with active development master hasn t been updated in a few years which branch do you recommend we install | 0 |
254,237 | 8,071,701,650 | IssuesEvent | 2018-08-06 13:57:54 | aiidateam/aiida_core | https://api.github.com/repos/aiidateam/aiida_core | opened | Global variable not defined in aiida.orm.data.remote._clean | priority/important type/bug | In `aiida.orm.data.remote._clean` at line 169, there is a call to the free function `clean_remote`, which is not defined. | 1.0 | Global variable not defined in aiida.orm.data.remote._clean - In `aiida.orm.data.remote._clean` at line 169, there is a call to the free function `clean_remote`, which is not defined. | non_main | global variable not defined in aiida orm data remote clean in aiida orm data remote clean at line there is a call to the free function clean remote which is not defined | 0 |
381,739 | 26,466,991,409 | IssuesEvent | 2023-01-17 01:25:53 | scprogramming/Olive | https://api.github.com/repos/scprogramming/Olive | closed | [Documentation] Login and Registration | In Progress High priority Documentation | I need to document these once I've got them up and running to my standards | 1.0 | [Documentation] Login and Registration - I need to document these once I've got them up and running to my standards | non_main | login and registration i need to document these once i ve got them up and running to my standards | 0 |
541 | 3,955,463,610 | IssuesEvent | 2016-04-29 21:01:28 | duckduckgo/zeroclickinfo-goodies | https://api.github.com/repos/duckduckgo/zeroclickinfo-goodies | closed | Improvement/suggestion for Curl cheat sheet | Maintainer Approved | This is for https://duck.co/ia/view/curl_cheat_sheet
The page looks fine to me, I just have a few small improvements/suggestions
I guess it would be better to
1. Have a -v (verbose) and --connect-timeout (or -m/--max-time) since they're used frequently
2. Instead of having https://www.cheatography.com/ankushagarwal11/cheat-sheets/curl-cheat-sheet/ as the reference, would it be a good idea to have a link that has all options? Something like http://linux.about.com/od/commands/l/blcmdl1_curl.htm ? | True | Improvement/suggestion for Curl cheat sheet - This is for https://duck.co/ia/view/curl_cheat_sheet
The page looks fine to me, I just have a few small improvements/suggestions
I guess it would be better to
1. Have a -v (verbose) and --connect-timeout (or -m/--max-time) since they're used frequently
2. Instead of having https://www.cheatography.com/ankushagarwal11/cheat-sheets/curl-cheat-sheet/ as the reference, would it be a good idea to have a link that has all options? Something like http://linux.about.com/od/commands/l/blcmdl1_curl.htm ? | main | improvement suggestion for curl cheat sheet this is for the page looks fine to me i just have few small improvements suggestions i guess it would be better to have a v verbose and connect timeout or m max time since they re used frequently instead of having as the reference would it be a good idea to have a link that has all options something like | 1 |
1,310 | 5,557,785,615 | IssuesEvent | 2017-03-24 13:07:54 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | route53_zone does not support split horizon setup | affects_2.0 aws bug_report cloud waiting_on_maintainer | ##### Issue Type:
- Bug Report
##### Plugin Name:
route53_zone
##### Ansible Version:
```
ansible 2.0.0.2
```
##### Ansible Configuration:
N/A
##### Environment:
Mac OS X against AWS api
##### Summary:
Split horizon dns setup fails in ansible.
##### Steps To Reproduce:
```
- name: "register new private zone for {{ domain }}"
route53_zone:
vpc_id: "{{ vpc.vpc_id }}"
vpc_region: "{{ ec2_region }}"
zone: "{{ domain }}"
state: present
register: priv_zone_out
- debug: var=priv_zone_out
- name: "register new zone for {{ domain }}"
route53_zone:
zone: "{{ domain }}"
state: present
register: pub_zone_out
- debug: var=pub_zone_out
```
<!-- You can also paste gist.github.com links for larger files. -->
##### Expected Results:
I expect two zones in AWS one public and one private.
##### Actual Results:
Ansible does not discern between public and private DNS zones if they have the same name. It creates one zone first, and on the next attempt it reuses the private one for the public one.
```
TASK [vpc : register new private zone for qa.tst.<****>] *****************
task path: /Users/olvesh/utvikling/vimond-ansible/_roles/vpc/tasks/eip.yml:39
ESTABLISH LOCAL CONNECTION FOR USER: olvesh
127.0.0.1 EXEC ( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1456736155.85-14866030392441 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1456736155.85-14866030392441 )" )
127.0.0.1 PUT /var/folders/cw/pnp93xgs3zq16021bb6zc_680000gn/T/tmpTA3526 TO /Users/olvesh/.ansible/tmp/ansible-tmp-1456736155.85-14866030392441/route53_zone
127.0.0.1 EXEC LANG=no_NO.UTF-8 LC_ALL=no_NO.UTF-8 LC_MESSAGES=no_NO.UTF-8 /Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python /Users/olvesh/.ansible/tmp/ansible-tmp-1456736155.85-14866030392441/route53_zone; rm -rf "/Users/olvesh/.ansible/tmp/ansible-tmp-1456736155.85-14866030392441/" > /dev/null 2>&1
changed: [localhost] => {"changed": true, "invocation": {"module_args": {"aws_access_key": null, "aws_secret_key": null, "comment": "", "ec2_url": null, "profile": null, "region": null, "security_token": null, "state": "present", "validate_certs": true, "vpc_id": "vpc-59ec7d30", "vpc_region": "eu-central-1", "zone": "qa.tst.<****>"}, "module_name": "route53_zone"}, "set": {"comment": "", "name": "qa.tst.<****>.", "private_zone": true, "vpc_id": "vpc-59ec7d30", "vpc_region": "eu-central-1", "zone_id": "Z3IM3APUOSPX29"}}
TASK [vpc : debug] *************************************************************
task path: /Users/olvesh/utvikling/vimond-ansible/_roles/vpc/tasks/eip.yml:47
ok: [localhost] => {
"priv_zone_out": {
"changed": true,
"set": {
"comment": "",
"name": "qa.tst.<****>.",
"private_zone": true,
"vpc_id": "vpc-****",
"vpc_region": "eu-central-1",
"zone_id": "Z3IM3APUOSPX29"
}
}
}
TASK [vpc : register new zone for qa.tst.<****>] *************************
task path: /Users/olvesh/utvikling/vimond-ansible/_roles/vpc/tasks/eip.yml:49
ESTABLISH LOCAL CONNECTION FOR USER: olvesh
127.0.0.1 EXEC ( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1456736158.72-146896956005879 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1456736158.72-146896956005879 )" )
127.0.0.1 PUT /var/folders/cw/pnp93xgs3zq16021bb6zc_680000gn/T/tmpwMfLtN TO /Users/olvesh/.ansible/tmp/ansible-tmp-1456736158.72-146896956005879/route53_zone
127.0.0.1 EXEC LANG=no_NO.UTF-8 LC_ALL=no_NO.UTF-8 LC_MESSAGES=no_NO.UTF-8 /Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python /Users/olvesh/.ansible/tmp/ansible-tmp-1456736158.72-146896956005879/route53_zone; rm -rf "/Users/olvesh/.ansible/tmp/ansible-tmp-1456736158.72-146896956005879/" > /dev/null 2>&1
ok: [localhost] => {"changed": false, "invocation": {"module_args": {"aws_access_key": null, "aws_secret_key": null, "comment": "", "ec2_url": null, "profile": null, "region": null, "security_token": null, "state": "present", "validate_certs": true, "vpc_id": null, "vpc_region": null, "zone": "qa.tst.<****>"}, "module_name": "route53_zone"}, "set": {"comment": "", "name": "qa.tst.<****>.", "private_zone": false, "vpc_id": null, "vpc_region": null, "zone_id": "Z3IM3APUOSPX29"}}
TASK [vpc : debug] *************************************************************
task path: /Users/olvesh/utvikling/vimond-ansible/_roles/vpc/tasks/eip.yml:55
ok: [localhost] => {
"pub_zone_out": {
"changed": false,
"set": {
"comment": "",
"name": "qa.tst.<****>.",
"private_zone": false,
"vpc_id": null,
"vpc_region": null,
"zone_id": "Z3IM3APUOSPX29"
}
}
}
```
| True | route53_zone does not support split horizon setup - ##### Issue Type:
- Bug Report
##### Plugin Name:
route53_zone
##### Ansible Version:
```
ansible 2.0.0.2
```
##### Ansible Configuration:
N/A
##### Environment:
Mac OS X against AWS api
##### Summary:
Split horizon dns setup fails in ansible.
##### Steps To Reproduce:
```
- name: "register new private zone for {{ domain }}"
route53_zone:
vpc_id: "{{ vpc.vpc_id }}"
vpc_region: "{{ ec2_region }}"
zone: "{{ domain }}"
state: present
register: priv_zone_out
- debug: var=priv_zone_out
- name: "register new zone for {{ domain }}"
route53_zone:
zone: "{{ domain }}"
state: present
register: pub_zone_out
- debug: var=pub_zone_out
```
<!-- You can also paste gist.github.com links for larger files. -->
##### Expected Results:
I expect two zones in AWS one public and one private.
##### Actual Results:
Ansible does not discern between public and private DNS zones if they have the same name. It creates one zone first, and on the next attempt it reuses the private one for the public one.
```
TASK [vpc : register new private zone for qa.tst.<****>] *****************
task path: /Users/olvesh/utvikling/vimond-ansible/_roles/vpc/tasks/eip.yml:39
ESTABLISH LOCAL CONNECTION FOR USER: olvesh
127.0.0.1 EXEC ( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1456736155.85-14866030392441 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1456736155.85-14866030392441 )" )
127.0.0.1 PUT /var/folders/cw/pnp93xgs3zq16021bb6zc_680000gn/T/tmpTA3526 TO /Users/olvesh/.ansible/tmp/ansible-tmp-1456736155.85-14866030392441/route53_zone
127.0.0.1 EXEC LANG=no_NO.UTF-8 LC_ALL=no_NO.UTF-8 LC_MESSAGES=no_NO.UTF-8 /Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python /Users/olvesh/.ansible/tmp/ansible-tmp-1456736155.85-14866030392441/route53_zone; rm -rf "/Users/olvesh/.ansible/tmp/ansible-tmp-1456736155.85-14866030392441/" > /dev/null 2>&1
changed: [localhost] => {"changed": true, "invocation": {"module_args": {"aws_access_key": null, "aws_secret_key": null, "comment": "", "ec2_url": null, "profile": null, "region": null, "security_token": null, "state": "present", "validate_certs": true, "vpc_id": "vpc-59ec7d30", "vpc_region": "eu-central-1", "zone": "qa.tst.<****>"}, "module_name": "route53_zone"}, "set": {"comment": "", "name": "qa.tst.<****>.", "private_zone": true, "vpc_id": "vpc-59ec7d30", "vpc_region": "eu-central-1", "zone_id": "Z3IM3APUOSPX29"}}
TASK [vpc : debug] *************************************************************
task path: /Users/olvesh/utvikling/vimond-ansible/_roles/vpc/tasks/eip.yml:47
ok: [localhost] => {
"priv_zone_out": {
"changed": true,
"set": {
"comment": "",
"name": "qa.tst.<****>.",
"private_zone": true,
"vpc_id": "vpc-****",
"vpc_region": "eu-central-1",
"zone_id": "Z3IM3APUOSPX29"
}
}
}
TASK [vpc : register new zone for qa.tst.<****>] *************************
task path: /Users/olvesh/utvikling/vimond-ansible/_roles/vpc/tasks/eip.yml:49
ESTABLISH LOCAL CONNECTION FOR USER: olvesh
127.0.0.1 EXEC ( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1456736158.72-146896956005879 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1456736158.72-146896956005879 )" )
127.0.0.1 PUT /var/folders/cw/pnp93xgs3zq16021bb6zc_680000gn/T/tmpwMfLtN TO /Users/olvesh/.ansible/tmp/ansible-tmp-1456736158.72-146896956005879/route53_zone
127.0.0.1 EXEC LANG=no_NO.UTF-8 LC_ALL=no_NO.UTF-8 LC_MESSAGES=no_NO.UTF-8 /Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python /Users/olvesh/.ansible/tmp/ansible-tmp-1456736158.72-146896956005879/route53_zone; rm -rf "/Users/olvesh/.ansible/tmp/ansible-tmp-1456736158.72-146896956005879/" > /dev/null 2>&1
ok: [localhost] => {"changed": false, "invocation": {"module_args": {"aws_access_key": null, "aws_secret_key": null, "comment": "", "ec2_url": null, "profile": null, "region": null, "security_token": null, "state": "present", "validate_certs": true, "vpc_id": null, "vpc_region": null, "zone": "qa.tst.<****>"}, "module_name": "route53_zone"}, "set": {"comment": "", "name": "qa.tst.<****>.", "private_zone": false, "vpc_id": null, "vpc_region": null, "zone_id": "Z3IM3APUOSPX29"}}
TASK [vpc : debug] *************************************************************
task path: /Users/olvesh/utvikling/vimond-ansible/_roles/vpc/tasks/eip.yml:55
ok: [localhost] => {
"pub_zone_out": {
"changed": false,
"set": {
"comment": "",
"name": "qa.tst.<****>.",
"private_zone": false,
"vpc_id": null,
"vpc_region": null,
"zone_id": "Z3IM3APUOSPX29"
}
}
}
```
| main | zone does not support split horizon setup issue type bug report plugin name zone ansible version ansible ansible configuration n a environment mac os x against aws api summary split horizon dns setup fails in ansible steps to reproduce name register new private zone for domain zone vpc id vpc vpc id vpc region region zone domain state present register priv zone out debug var priv zone out name register new zone for domain zone zone domain state present register pub zone out debug var pub zone out expected results i expect two zones in aws one public and one private actual results ansible does not discern between public and private dns zones if they have the same name it creates one first and next attempt it reuses the private one for the public one task task path users olvesh utvikling vimond ansible roles vpc tasks eip yml establish local connection for user olvesh exec umask mkdir p echo home ansible tmp ansible tmp echo echo home ansible tmp ansible tmp put var folders cw t to users olvesh ansible tmp ansible tmp zone exec lang no no utf lc all no no utf lc messages no no utf library frameworks python framework versions resources python app contents macos python users olvesh ansible tmp ansible tmp zone rm rf users olvesh ansible tmp ansible tmp dev null changed changed true invocation module args aws access key null aws secret key null comment url null profile null region null security token null state present validate certs true vpc id vpc vpc region eu central zone qa tst module name zone set comment name qa tst private zone true vpc id vpc vpc region eu central zone id task task path users olvesh utvikling vimond ansible roles vpc tasks eip yml ok priv zone out changed true set comment name qa tst private zone true vpc id vpc vpc region eu central zone id task task path users olvesh utvikling vimond ansible roles vpc tasks eip yml establish local connection for user olvesh exec umask mkdir p echo home ansible tmp ansible tmp echo echo home ansible 
tmp ansible tmp put var folders cw t tmpwmfltn to users olvesh ansible tmp ansible tmp zone exec lang no no utf lc all no no utf lc messages no no utf library frameworks python framework versions resources python app contents macos python users olvesh ansible tmp ansible tmp zone rm rf users olvesh ansible tmp ansible tmp dev null ok changed false invocation module args aws access key null aws secret key null comment url null profile null region null security token null state present validate certs true vpc id null vpc region null zone qa tst module name zone set comment name qa tst private zone false vpc id null vpc region null zone id task task path users olvesh utvikling vimond ansible roles vpc tasks eip yml ok pub zone out changed false set comment name qa tst private zone false vpc id null vpc region null zone id | 1 |
25,412 | 12,241,330,620 | IssuesEvent | 2020-05-05 03:38:48 | MicrosoftDocs/azure-docs | https://api.github.com/repos/MicrosoftDocs/azure-docs | closed | python message.dead_letter(description) does not add description to the dead lettered message | Pri2 cxp doc-enhancement service-bus-messaging/svc triaged | If you do: message.dead_letter(description="SETTING A reason")
That description cannot be found in the dead letter properties:
for message in messages: # pylint: disable=not-an-iterable
print(message)
print(message.header)
print(message.properties)
print(message.user_properties)
print(message.annotations)
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 47bc6b40-39cd-eb95-1911-ddff96dda210
* Version Independent ID: d91a6110-e9c0-3a34-a321-6850778baaef
* Content: [Quickstart: Use Azure Service Bus queues with Python](https://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-python-how-to-use-queues)
* Content Source: [articles/service-bus-messaging/service-bus-python-how-to-use-queues.md](https://github.com/Microsoft/azure-docs/blob/master/articles/service-bus-messaging/service-bus-python-how-to-use-queues.md)
* Service: **service-bus-messaging**
* GitHub Login: @axisc
* Microsoft Alias: **aschhab** | 1.0 | python message.dead_letter(description) does not add description to the dead lettered message - If you do: message.dead_letter(description="SETTING A reason")
That description cannot be found in the dead letter properties:
for message in messages: # pylint: disable=not-an-iterable
print(message)
print(message.header)
print(message.properties)
print(message.user_properties)
print(message.annotations)
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 47bc6b40-39cd-eb95-1911-ddff96dda210
* Version Independent ID: d91a6110-e9c0-3a34-a321-6850778baaef
* Content: [Quickstart: Use Azure Service Bus queues with Python](https://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-python-how-to-use-queues)
* Content Source: [articles/service-bus-messaging/service-bus-python-how-to-use-queues.md](https://github.com/Microsoft/azure-docs/blob/master/articles/service-bus-messaging/service-bus-python-how-to-use-queues.md)
* Service: **service-bus-messaging**
* GitHub Login: @axisc
* Microsoft Alias: **aschhab** | non_main | python message dead letter description does not add description to the dead lettered message if you do message dead letter description setting a reason that description cannot be found in the dead letter properties for message in messages pylint disable not an iterable print message print message header print message properties print message user properties print message annotations document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service service bus messaging github login axisc microsoft alias aschhab | 0 |
1,655 | 6,573,991,771 | IssuesEvent | 2017-09-11 10:59:41 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | authorized_key: pull keys from git server before the module is copied to the target machine | affects_2.2 feature_idea waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Feature Idea
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
module: authorized_key
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.2.0.0
config file = /home/username/.ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
None which affect module behaviour.
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
N/A
##### SUMMARY
<!--- Explain the problem briefly -->
In my company we are using a local git repository server (GitLab) and very few servers are able to access it. The majority of servers don't have network access to our local GitLab instance, since we use it exclusively for Ansible. So when I use the authorized_key module to deploy SSH keys and tell it to pull the keys from our GitLab instance (https://gitlab_server/{{ username }}.keys), the servers that can't reach GitLab cannot pull the keys. I understand that the module is copied to the target machine first and then executed, but it would be neat if there were a way to fetch the keys from the git server before the module is copied to the target machine. Sorry if this is too much to ask, and I know there are other ways to deploy SSH keys, but I find the ability to provide the keys from a URL very useful, and it is useless if target servers cannot access the git server to get the keys.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
Try to deploy the keys to a target that cannot access the git server.
<!--- Paste example playbooks or commands between quotes below -->
```
- name: "Deploy public ssh key for username"
authorized_key:
user: "username"
key: "https://gitlab_server/username.keys"
exclusive: yes
validate_certs: no
state: present
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
```
changed: [ansible_host]
```
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
Because the target server cannot access the local git server the following error appears.
```
fatal: [ansible_host]: FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_args": {
"exclusive": true,
"key": "https://gitlab_server/username.keys",
"key_options": null,
"manage_dir": true,
"path": null,
"state": "present",
"unique": false,
"user": "username",
"validate_certs": false
},
"module_name": "authorized_key"
},
"msg": "Error getting key from: https://gitlab_server/username.keys"
}
```
| True | authorized_key: pull keys from git server before the module is copied to the target machine - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Feature Idea
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
module: authorized_key
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.2.0.0
config file = /home/username/.ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
None which affect module behaviour.
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
N/A
##### SUMMARY
<!--- Explain the problem briefly -->
In my company we are using a local git repository server (GitLab) and very few servers are able to access it. The majority of servers don't have network access to our local GitLab instance, since we use it exclusively for Ansible. So when I use the authorized_key module to deploy SSH keys and tell it to pull the keys from our GitLab instance (https://gitlab_server/{{ username }}.keys), the servers that can't reach GitLab cannot pull the keys. I understand that the module is copied to the target machine first and then executed, but it would be neat if there were a way to fetch the keys from the git server before the module is copied to the target machine. Sorry if this is too much to ask, and I know there are other ways to deploy SSH keys, but I find the ability to provide the keys from a URL very useful, and it is useless if target servers cannot access the git server to get the keys.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
Try to deploy the keys to a target that cannot access the git server.
<!--- Paste example playbooks or commands between quotes below -->
```
- name: "Deploy public ssh key for username"
authorized_key:
user: "username"
key: "https://gitlab_server/username.keys"
exclusive: yes
validate_certs: no
state: present
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
```
changed: [ansible_host]
```
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
Because the target server cannot access the local git server the following error appears.
```
fatal: [ansible_host]: FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_args": {
"exclusive": true,
"key": "https://gitlab_server/username.keys",
"key_options": null,
"manage_dir": true,
"path": null,
"state": "present",
"unique": false,
"user": "username",
"validate_certs": false
},
"module_name": "authorized_key"
},
"msg": "Error getting key from: https://gitlab_server/username.keys"
}
```
| main | authorized key pull keys from git server before the module is copied to the target machine issue type feature idea component name module authorized key ansible version ansible config file home username ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables none which affect module behaviour os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific n a summary in my company we are using a local git repository server gitlab and very few servers are able to access it the majority of servers don t have network access to our local gitlab instance since we use it exclusively for ansible so when i use the authorized key module to deploy ssh keys and tell it to pull the keys from our gitlab instance username keys the servers that can t access our gitlab instance cannot pull the keys i understand that the module is copied to the target machine first and then executed but it would be neat if there could be a way to get the keys from the git server before the module is copied to the target machine sorry if this is to much to ask and i know there are other ways to deploy ssh keys but i find the ability to provide the keys from url very useful and it seems useless if target servers cannot access the git server to get the keys steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used try to deploy the keys to a target that cannot access the git server name deploy public ssh key for username authorized key user username key exclusive yes validate certs no state present expected results changed actual results because the target server cannot access the local git server the following error appears fatal failed changed false failed true invocation module args exclusive true key key options null 
manage dir true path null state present unique false user username validate certs false module name authorized key msg error getting key from | 1 |
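One workaround consistent with how Ansible works (offered here as a sketch, not as the module's documented behavior): lookup plugins execute on the control machine, so the key material can be fetched locally with the `url` lookup and passed to `authorized_key` as plain text. The host and username below are placeholders from the report, and the availability of options like `split_lines` depends on the Ansible version:

```yaml
- name: "Deploy public ssh key for username (keys fetched on the control node)"
  authorized_key:
    user: "username"
    # lookup() runs on the Ansible control machine, which can reach GitLab;
    # the target host only ever receives the resolved key text.
    key: "{{ lookup('url', 'https://gitlab_server/username.keys', split_lines=False, validate_certs=False) }}"
    exclusive: yes
    state: present
```

This keeps the convenient keys-from-URL workflow while removing the requirement that managed hosts have network access to the git server.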
7,114 | 6,776,390,782 | IssuesEvent | 2017-10-27 17:40:23 | servo/servo | https://api.github.com/repos/servo/servo | closed | servo-mac9 and servo-mac5 take 14 minutes longer to compile than other mac builders | A-infrastructure | They both consistently report 37 minutes for ./mach build, while other macs like servo-mac3 report 23 minutes. This tends to push our end-to-end built times to >1 hour when these machines are selected. I sshed in and didn't see any renegade processes that have caused similar slowdowns in the past. | 1.0 | servo-mac9 and servo-mac5 take 14 minutes longer to compile than other mac builders - They both consistently report 37 minutes for ./mach build, while other macs like servo-mac3 report 23 minutes. This tends to push our end-to-end built times to >1 hour when these machines are selected. I sshed in and didn't see any renegade processes that have caused similar slowdowns in the past. | non_main | servo and servo take minutes longer to compile than other mac builders they both consistently report minutes for mach build while other macs like servo report minutes this tends to push our end to end built times to hour when these machines are selected i sshed in and didn t see any renegade processes that have caused similar slowdowns in the past | 0 |
127,973 | 5,041,569,246 | IssuesEvent | 2016-12-19 10:47:58 | restlet/restlet-framework-java | https://api.github.com/repos/restlet/restlet-framework-java | closed | [GWT] in IE 10, POST/PUT request without body are received on server side with an "undefined" body | Priority: high State: new Type: bug | It works fine with Chrome and Firefox | 1.0 | [GWT] in IE 10, POST/PUT request without body are received on server side with an "undefined" body - It works fine with Chrome and Firefox | non_main | in ie post put request without body are received on server side with an undefined body it works fine with chrome and firefox | 0 |
496,047 | 14,293,017,046 | IssuesEvent | 2020-11-24 02:34:00 | internetarchive/openlibrary | https://api.github.com/repos/internetarchive/openlibrary | closed | Unable to login successfully during the local development environment setup | Lead: @cdrini Priority: 1 Theme: Development Type: Bug | <!-- What problem are we solving? What does the experience look like today? What are the symptoms? -->
At the time of login to OL interface an internal error is noticed.
### Evidence / Screenshot (if possible)


### Relevant url?
<!-- `https://openlibrary.org/...` -->
### Steps to Reproduce
<!-- What steps caused you to find the bug? -->
1. Run docker-compose up
2. Browse to localhost:8080
3. Try to Login
<!-- What actually happened after these steps? What did you expect to happen? -->
* Actual: Showed Internal Error
* Expected:
### Details
- **Logged in (Y/N)?** Y
- **Browser type/version?** Firefox/Chromium
- **Operating system?** Ubuntu 18.04
- **Environment (prod/dev/local)?** local
<!-- If not sure, put prod -->
### Proposal & Constraints
<!-- What is the proposed solution / implementation? Is there a precedent of this approach succeeding elsewhere? -->
### Related files
<!-- Files related to this issue; this is super useful for new contributors who might want to help! If you're not sure, leave this blank; a maintainer will add them. -->
### Stakeholders
<!-- @ tag stakeholders of this bug -->
| 1.0 | Unable to login successfully during the local development environment setup - <!-- What problem are we solving? What does the experience look like today? What are the symptoms? -->
At the time of login to OL interface an internal error is noticed.
### Evidence / Screenshot (if possible)


### Relevant url?
<!-- `https://openlibrary.org/...` -->
### Steps to Reproduce
<!-- What steps caused you to find the bug? -->
1. Run docker-compose up
2. Browse to localhost:8080
3. Try to Login
<!-- What actually happened after these steps? What did you expect to happen? -->
* Actual: Showed Internal Error
* Expected:
### Details
- **Logged in (Y/N)?** Y
- **Browser type/version?** Firefox/Chromium
- **Operating system?** Ubuntu 18.04
- **Environment (prod/dev/local)?** local
<!-- If not sure, put prod -->
### Proposal & Constraints
<!-- What is the proposed solution / implementation? Is there a precedent of this approach succeeding elsewhere? -->
### Related files
<!-- Files related to this issue; this is super useful for new contributors who might want to help! If you're not sure, leave this blank; a maintainer will add them. -->
### Stakeholders
<!-- @ tag stakeholders of this bug -->
| non_main | unable to login successfully during the local development environment setup at the time of login to ol interface an internal error is noticed evidence screenshot if possible relevant url steps to reproduce run docker compose up browse to localhost try to login actual showed internal error expected details logged in y n y browser type version firefox chromium operating system ubuntu environment prod dev local local proposal constraints related files stakeholders | 0 |
5,334 | 26,922,607,326 | IssuesEvent | 2023-02-07 11:33:30 | dbt-labs/docs.getdbt.com | https://api.github.com/repos/dbt-labs/docs.getdbt.com | opened | `vars` in `dbt_project.yml` are not Jinja-rendered | content improvement maintainer request | ### Contributions
- [X] I have read the contribution docs, and understand what's expected of me.
### Link to the page on docs.getdbt.com requiring updates
https://docs.getdbt.com/docs/build/project-variables#defining-variables-in-dbt_projectyml
### What part(s) of the page would you like to see updated?
`vars` can take static input only. The `vars` dictionary in `dbt_project.yml` is not Jinja rendered. As such, you **cannot** have code like:
```yml
vars:
my_var: |
{% if target.name == 'dev' %} something
{% elif env_var('other_input') %} something_else
{% endif %}
```
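By contrast, Jinja is rendered inside model `.sql` files, so conditional logic like the above can live in the model instead — a sketch (file path and values are illustrative):

```sql
-- models/example.sql — Jinja IS rendered here
select
  {% if target.name == 'dev' %} 'something'
  {% else %} 'something_else'
  {% endif %} as my_var
```

Alternatively, the value can be supplied per invocation with `dbt run --vars '{"my_var": "something"}'`.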
### Additional information
This is a frequently opened issue:
- https://github.com/dbt-labs/dbt-core/issues/3105
- https://github.com/dbt-labs/dbt-core/issues/6382
- https://github.com/dbt-labs/dbt-core/issues/6880
Lengthier discussion:
- https://github.com/dbt-labs/dbt-core/discussions/6170 | True |
| main | 1
3,540 | 13,932,592,824 | IssuesEvent | 2020-10-22 07:30:06 | pace/bricks | https://api.github.com/repos/pace/bricks | closed | objstore: move healthcheck registration into client creation | EST::Hours S::In Progress T::Maintainance | # Motivation
Do not register healthchecks if the package is simply being imported but the client is not necessarily used, i.e., move them out of the `init()` method into the client creation. | True |
| main | 1
1,797 | 6,575,903,229 | IssuesEvent | 2017-09-11 17:46:27 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | Add limit of number of backup files to file modules with backup option | affects_2.3 feature_idea waiting_on_maintainer | ##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
File modules with `backup` option: `copy`, `template`, `lineinfile`, `ini_file`, `replace`.
##### SUMMARY
If you are using the `backup` option for a long time, a large number of backup files piles up in the config directory:
```
service.conf
service.conf.2016-03-09@12:20:22
service.conf.2016-03-15@18:17:20~
service.conf.2016-03-21@17:59:52~
service.conf.2016-03-24@19:19:26~
...
tons and tons and tons of backup files here
...
```
In my use case backup files are used to be able to quickly revert manually if something goes wrong. So old files are not interesting, they just become obsolete garbage. It would be very convenient to have options `backup_max_age` and `backup_max_files`, that will automatically clean up old backup files based on their age (in days) or total number.
##### STEPS TO REPRODUCE
Something like that:
``` yaml
- name: install service config
template: src=service.cfg dest=/etc/service/service.cfg mode=0644 backup=yes backup_max_age=14
```
If service.cfg was changed - creates new backup file and clears up backup files older than 2 weeks.
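The proposed `backup_max_age` cleanup amounts to something like the following (a standalone sketch, not Ansible module code; the filename pattern is illustrative):

```python
import os
import time

def prune_backups(directory: str, prefix: str, max_age_days: int) -> list:
    # Remove "<prefix>.<timestamp>"-style backup files older than max_age_days,
    # leaving the live "<prefix>" file itself untouched.
    cutoff = time.time() - max_age_days * 86400
    removed = []
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if name.startswith(prefix + ".") and os.path.getmtime(path) < cutoff:
            os.remove(path)
            removed.append(name)
    return removed
```

A `backup_max_files` variant would instead sort the matching names by mtime and delete all but the newest N.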
| True |
| main | 1
83,329 | 24,041,192,440 | IssuesEvent | 2022-09-16 02:05:11 | moclojer/moclojer | https://api.github.com/repos/moclojer/moclojer | closed | clojure devcontainer support | documentation docker build | What is [devcontainer](https://code.visualstudio.com/docs/remote/containers)?
A way to keep the development environment inside the container; it is a specification that started in vscode and that other editors support. | 1.0 |
| non_main | 0
594,321 | 18,043,226,923 | IssuesEvent | 2021-09-18 12:15:27 | UofA-SPEAR/software | https://api.github.com/repos/UofA-SPEAR/software | closed | Create SMACH state to drive to a GPS waypoint | good first issue priority 2 | https://circ.cstag.ca/2021/rules/#autonomy-guidelines
Some competition tasks will involve navigating to GPS waypoints. The rover should have a SMACH state where it navigates toward a given GPS waypoint and stops when it has reached that location (within a configurable margin of error). | 1.0 |
| non_main | 0
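The row above describes a navigate-then-stop behaviour; the "configurable margin of error" check reduces to a great-circle distance test. A self-contained sketch of that check (plain Python, no ROS/SMACH imports; the SMACH state itself would return its "reached" outcome once `reached(...)` is true):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in metres between two WGS-84 lat/lon points.
    r = 6371000.0  # mean Earth radius in metres
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def reached(current, waypoint, tolerance_m=2.0):
    # True once the rover is within the configurable margin of error.
    return haversine_m(*current, *waypoint) <= tolerance_m
```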
216,594 | 24,281,584,131 | IssuesEvent | 2022-09-28 17:54:33 | liorzilberg/struts | https://api.github.com/repos/liorzilberg/struts | opened | CVE-2020-13959 (Medium) detected in velocity-tools-2.0.jar | security vulnerability | ## CVE-2020-13959 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>velocity-tools-2.0.jar</b></p></summary>
<p>VelocityTools is an integrated collection of Velocity subprojects
with the common goal of creating tools and infrastructure to speed and ease
development of both web and non-web applications using the Velocity template
engine.</p>
<p>Path to dependency file: /plugins/sitemesh/pom.xml</p>
<p>Path to vulnerable library: /.m2/repository/org/apache/velocity/velocity-tools/2.0/velocity-tools-2.0.jar,/.m2/repository/org/apache/velocity/velocity-tools/2.0/velocity-tools-2.0.jar,/.m2/repository/org/apache/velocity/velocity-tools/2.0/velocity-tools-2.0.jar,/.m2/repository/org/apache/velocity/velocity-tools/2.0/velocity-tools-2.0.jar,/.m2/repository/org/apache/velocity/velocity-tools/2.0/velocity-tools-2.0.jar,/.m2/repository/org/apache/velocity/velocity-tools/2.0/velocity-tools-2.0.jar</p>
<p>
Dependency Hierarchy:
- :x: **velocity-tools-2.0.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/liorzilberg/struts/commit/6950763af860884188f4080d19a18c5ede16cd74">6950763af860884188f4080d19a18c5ede16cd74</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The default error page for VelocityView in Apache Velocity Tools prior to 3.1 reflects back the vm file that was entered as part of the URL. An attacker can set an XSS payload file as this vm file in the URL which results in this payload being executed. XSS vulnerabilities allow attackers to execute arbitrary JavaScript in the context of the attacked website and the attacked user. This can be abused to steal session cookies, perform requests in the name of the victim or for phishing attacks.
<p>Publish Date: 2021-03-10
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-13959>CVE-2020-13959</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-fh63-4r66-jc7v">https://github.com/advisories/GHSA-fh63-4r66-jc7v</a></p>
<p>Release Date: 2021-03-10</p>
<p>Fix Resolution: org.apache.velocity.tools:velocity-tools-view:3.1</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
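The suggested fix above translates to a dependency change along these lines (a sketch using the coordinates from the advisory; adjust to the build in question):

```xml
<!-- replace org.apache.velocity:velocity-tools:2.0 with the fixed artifact -->
<dependency>
  <groupId>org.apache.velocity.tools</groupId>
  <artifactId>velocity-tools-view</artifactId>
  <version>3.1</version>
</dependency>
```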
| True |
| non_main | 0
776,768 | 27,264,635,511 | IssuesEvent | 2023-02-22 17:06:08 | ascheid/itsg33-pbmm-issue-gen | https://api.github.com/repos/ascheid/itsg33-pbmm-issue-gen | opened | MA-3: Maintenance Tools | Priority: P3 ITSG-33 Suggested Assignment: IT Operations Group Class: Operational Control: MA-3 | # Control Definition
(A) The organization approves, controls, and monitors information system maintenance tools.
# Class
Operational
# Supplemental Guidance
This control addresses security-related issues associated with maintenance tools used specifically for diagnostic and repair actions on organizational information systems. Maintenance tools can include hardware, software, and firmware items. Maintenance tools are potential vehicles for transporting malicious code, either intentionally or unintentionally, into a facility and subsequently into organizational information systems. Maintenance tools can include, for example, hardware/software diagnostic test equipment and hardware/software packet sniffers. This control does not cover hardware/software components that may support information system maintenance, yet are a part of the system, such as the software implementing “ping,” “ls,” “ipconfig,” or the hardware and software implementing the monitoring port of an Ethernet switch. Related controls: MA-2, MA-5, MP-6
# Suggested Assignment
IT Operations Group
| 1.0 |
| non_main | 0
5,803 | 30,743,622,135 | IssuesEvent | 2023-07-28 13:26:50 | camunda/zeebe | https://api.github.com/repos/camunda/zeebe | opened | Reduce size of distribution | kind/toil area/maintainability | **Description**
While I was working on the different Docker images, one thing notable is that the Zeebe layer/distribution is 174MB. Looking into it, it's almost entirely 3rd party dependencies. I've listed out the ones whose size are greater than 1MB:
- rocksdbjni 59MB
- grpc-xds 12MB
- grpc-netty-shaded 9.9MB
- scala-library 6MB
- zstdjni 5.5MB
- netty-tcnative-boringssl (1.1MB + 1.2MB + 1MB + 1.1MB + 1.0MB) 5.5MB
- conscrypt 4.5MB
- s3-2.20 3.3MB
- guava-jre 3.0MB
- proto-google-common-protos 2.0MB
- reactor-core 1.8MB
- log4j-core-2 1.8MB
- spring-boot-autoconfigure 1.8MB
- spring-core 1.8MB
- spring-web 1.8MB
- protobuf-java 1.7MB
- jackson-databind 1.6MB
- spring-boot 1.5MB
- kotlin-stdlib 1.5MB
- commons-compact 1.1MB
By specifying an architecture during build, you could easily cut down the size for RocksDB and netty-tcnative, both of which pull in multiple pre-compiled binaries for each architecture:
- rocksdbjni 59MB down to 19MB
- netty-tcnative 5.5MB down to 1.2MB
Then, since we already include Netty everywhere, we don't need the `grpc-netty-shaded` dependency. We can simply use `grpc-netty` and our existing Netty dependency. That's another 9.9MB knocked off.
It also seems possible we could exclude `conscrypt` if we're using `netty-tcnative`, so that would be another 4.5MB. But that would need to be verified.
So opportunities to reduce from 174MB down to at least 115 MB. Possibly 103MB if we can also drop the requirement on `grpc-xds` (xDS being a service mesh protocol, a feature we're not really using for the gateway).
No real urgency here, I think. The benefit is a smaller image - meaning faster to push and pull - and for the dependencies we drop, a slightly smaller CVE surface.
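One of the per-architecture trims described above can be sketched in Maven terms — netty-tcnative publishes OS/arch classifiers, so a single-platform build can pin one (the classifier choice and the idea of pinning are assumptions about the build setup, not Zeebe's actual POM):

```xml
<!-- pull only the linux-x86_64 native binary instead of every platform's -->
<dependency>
  <groupId>io.netty</groupId>
  <artifactId>netty-tcnative-boringssl-static</artifactId>
  <classifier>linux-x86_64</classifier>
</dependency>
```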
| True |
| main | 1
2,650 | 8,102,838,058 | IssuesEvent | 2018-08-13 04:48:56 | openshiftio/openshift.io | https://api.github.com/repos/openshiftio/openshift.io | closed | Jenkins is becoming Idle for pipeline build in OSIO launcher flow. | SEV2-high area/architecture/build priority/P4 sprint/next team/build-cd type/bug | Due to this Jenkins issue, no build is able to reach the finish line.
This is a critical issue from the build pipeline standpoint. Please check the below screenshot.

| 1.0 |
| non_main | 0
2,208 | 7,802,987,465 | IssuesEvent | 2018-06-10 18:35:44 | OpenLightingProject/ola | https://api.github.com/repos/OpenLightingProject/ola | closed | libftdi API update | Component-Plugin Language-C++ Maintainability OpSys-Linux | Hi,
The Debian maintainer of libftdi filed a [bug](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=810374) against ola. I tried the simple fix ("s/libftdi-dev/libftdi1-dev/" over debian/control), but that results in no FTDI plugin being compiled.
Someone will need to look at the changes that were made in the new FTDI library and update ola accordingly. Mean time, I'll have to still compile against the old library.
| True |
| main | 1
371,185 | 10,962,670,445 | IssuesEvent | 2019-11-27 17:46:59 | kubernetes/kubernetes | https://api.github.com/repos/kubernetes/kubernetes | closed | Kubectl version --server should return the server version | kind/feature priority/awaiting-more-evidence sig/cli | <!-- Please only use this template for submitting enhancement requests -->
**What would you like to be added**:
I would like for `kubectl version --server` to return the server version as `kubectl version --client` returns the client version.
**Why is this needed**:
It will make it easier to write automation scripts for checking the server version.
It will maintain consistency as there already is a `--client` flag that returns the client version. | 1.0 |
| non_main | 0
1,226 | 5,218,843,895 | IssuesEvent | 2017-01-26 17:27:01 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | apache2_module fails for PHP 5.6 even though it is already enabled | affects_2.2 bug_report waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
apache2_module
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.2.0.0
config file = /Users/nick/Workspace/-redacted-/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
hostfile & roles_path
##### OS / ENVIRONMENT
Running Ansible on macOS Sierra, target server is Ubuntu Xenial
##### SUMMARY
Enabling the Apache2 module "[php5.6](https://launchpad.net/~ondrej/+archive/ubuntu/php)" with apache2_module fails even though the module is already enabled.
This is the same problem as #5559 and #4744 but with a different package.
This module is called `php5.6` but identifies itself in `apache2ctl -M` as `php5_module`.
##### STEPS TO REPRODUCE
```
- name: Enable PHP 5.6
apache2_module: state=present name=php5.6
```
##### ACTUAL RESULTS
```
failed: [nicksherlock.com] (item=php5.6) => {
"failed": true,
"invocation": {
"module_args": {
"force": false,
"name": "php5.6",
"state": "present"
},
"module_name": "apache2_module"
},
"item": "php5.6",
"msg": "Failed to set module php5.6 to enabled: Considering dependency mpm_prefork for php5.6:\nConsidering conflict mpm_event for mpm_prefork:\nConsidering conflict mpm_worker for mpm_prefork:\nModule mpm_prefork already enabled\nConsidering conflict php5 for php5.6:\nModule php5.6 already enabled\n",
"rc": 0,
"stderr": "",
"stdout": "Considering dependency mpm_prefork for php5.6:\nConsidering conflict mpm_event for mpm_prefork:\nConsidering conflict mpm_worker for mpm_prefork:\nModule mpm_prefork already enabled\nConsidering conflict php5 for php5.6:\nModule php5.6 already enabled\n",
"stdout_lines": [
"Considering dependency mpm_prefork for php5.6:",
"Considering conflict mpm_event for mpm_prefork:",
"Considering conflict mpm_worker for mpm_prefork:",
"Module mpm_prefork already enabled",
"Considering conflict php5 for php5.6:",
"Module php5.6 already enabled"
]
}
```
Running it manually on the server gives:
```
# a2enmod php5.6
Considering dependency mpm_prefork for php5.6:
Considering conflict mpm_event for mpm_prefork:
Considering conflict mpm_worker for mpm_prefork:
Module mpm_prefork already enabled
Considering conflict php5 for php5.6:
Module php5.6 already enabled
# echo $?
0
```
This is php5.6.load:
```
# Conflicts: php5
# Depends: mpm_prefork
LoadModule php5_module /usr/lib/apache2/modules/libphp5.6.so
```
Note that manually running "a2enmod php5.6" on the server directly gives a 0 exit status to signal success, can't apache2_module just check that instead of doing parsing with a regex?
What if I wanted several sets of conf files in `mods-available` for the same module? (e.g. php-prod.load, php-dev.load both loading the same module, but with different config) Wouldn't that make it impossible for Ansible to manage those with apache2_module?
It just seems odd that Ansible requires that the module's binary name be the same as the name of its .load file. | True | main | 1
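The exit-status check the reporter suggests in the row above can be sketched as follows. This is a hypothetical illustration, not the actual apache2_module implementation; the `a2enmod` function here is a stub standing in for the real Debian command so the snippet is self-contained and runnable anywhere.

```shell
#!/bin/sh
# Stub standing in for Debian's real a2enmod; it mimics the behaviour
# observed in the report: "already enabled" still exits 0.
# Remove this stub on a real host.
a2enmod() { echo "Module $1 already enabled"; return 0; }

enable_module() {
    # Trust the command's exit status (0 = enabled, or already enabled)
    # instead of regex-matching module names against `apache2ctl -M`.
    if a2enmod "$1" > /dev/null 2>&1; then
        echo "enabled:$1"
    else
        echo "failed:$1" >&2
        return 1
    fi
}

enable_module "php5.6"    # prints enabled:php5.6
```

This sidesteps the mismatch between the .load file name (`php5.6`) and the module's self-reported name (`php5_module`), which is exactly what trips up the regex-based check.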
2,639 | 8,960,177,921 | IssuesEvent | 2019-01-28 04:11:54 | portage-brew/portage-brew-staging-and-evolution | https://api.github.com/repos/portage-brew/portage-brew-staging-and-evolution | closed | Compose a Formal Announcement for Upstream | Needs Discussion Needs Maintainer Feedback enhancement help wanted | We need to publicize ourselves somewhat better, but I'm having trouble thinking of something to use for that purpose that remains in the spirit of #11 at the moment due to…remaining distaste over how affairs were handled upstream that I'm still processing. I'm thus leaving this issue open for idea submissions, wording proposals, and rough drafts (either as comments here or PRs to close this issue.)
CC:
- @blogabe (Since I've seen you doing marketing and you might therefore have an opinion here.)
- @portage-brew/maintainers in general. | True | main | 1
311,695 | 26,806,045,735 | IssuesEvent | 2023-02-01 18:22:13 | art-here/art-here-backend | https://api.github.com/repos/art-here/art-here-backend | closed | Google login tests, additional member features | cleanup test feat | ## 🤷 Features to implement
Revise the Google login code.
Write Google login test code.
Implement additional member features.
## 🔨 Detailed tasks
- [x] Front-end GitHub Actions completed
- [x] Implement Google login tests
- [x] Revise the Google login code
## 📄 Notes
## ⏰ Estimated duration
3 days
| 1.0 | non_main | 0